  • Person (not face) detection: how do I get the person's position in the frame? When I use .rect() in the example, why are the returned x and y both 0?



    • The question is as stated in the title. Please help answer it.



    • Which code are you running?

      If code is involved, please post the error message and the full code as text. Please do not post screenshots of code.



    • This reply has been deleted!


    • @kidswong999

      # TensorFlow Lite Person Detection Example
      #
      # Google's Person Detection Model detects if a person is in view.
      #
      # In this example we slide the detector window over the image and get a list
      # of activations. Note that using a CNN with a sliding window is extremely compute
      # expensive, so for an exhaustive search do not expect the CNN to be real-time.
      
      import sensor, image, time, os, tf
      
      sensor.reset()                         # Reset and initialize the sensor.
      sensor.set_pixformat(sensor.GRAYSCALE) # Set pixel format to GRAYSCALE
      sensor.set_framesize(sensor.QVGA)      # Set frame size to QVGA (320x240)
      sensor.set_windowing((240, 240))       # Set 240x240 window.
      sensor.skip_frames(time=2000)          # Let the camera adjust.
      
      # Load the built-in person detection network (the network is in your OpenMV Cam's firmware).
      net = tf.load('person_detection')
      labels = ['unsure', 'person', 'no_person']
      
      clock = time.clock()
      while(True):
          clock.tick()
      
          img = sensor.snapshot()
      
          # net.classify() will run the network on an roi in the image (or on the whole image if the roi is not
          # specified). A classification score output vector will be generated for each location. At each scale the
          # detection window is moved around in the ROI using x_overlap (0-1) and y_overlap (0-1) as a guide.
          # If you set the overlap to 0.5 then each detection window will overlap the previous one by 50%. Note
          # the computational work load goes WAY up the more overlap. Finally, for multi-scale matching after
          # sliding the network around in the x/y dimensions the detection window will shrink by scale_mul (0-1)
          # down to min_scale (0-1). For example, if scale_mul is 0.5 the detection window will shrink by 50%.
          # Note that at a lower scale there's even more area to search if x_overlap and y_overlap are small...
      
          # default settings just do one detection... change them to search the image...
          for obj in net.classify(img, min_scale=1.0, scale_mul=0.5, x_overlap=0.0, y_overlap=0.0):
              print("**********\nDetections at [x=%d,y=%d,w=%d,h=%d]" % obj.rect())
              for i in range(len(obj.output())):
                  print("%s = %f" % (labels[i], obj.output()[i]))
              img.draw_rectangle(obj.rect())
              img.draw_string(obj.x()+3, obj.y()-1, labels[obj.output().index(max(obj.output()))], mono_space = False)
          print(clock.fps(), "fps")
      
      


    • net.classify() is a classifier; it can only tell whether a given region contains a person.



    • @kidswong999 So it can't draw a box around the person?



    • Correct, it cannot find the position.
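
    • Note: the default parameters in the example above run only one classification over the whole 240x240 window, which is why obj.rect() always reports the full frame (x=0, y=0). Below is a minimal sketch, assuming the same tf / net.classify() API as in the example and an arbitrary 0.7 confidence threshold, that lowers min_scale and raises the overlaps so classify() slides the detection window; obj.rect() then reports each scanned window's region, which only gives a coarse idea of where the person is, not a true bounding box.

      import sensor, image, time, tf

      sensor.reset()
      sensor.set_pixformat(sensor.GRAYSCALE)
      sensor.set_framesize(sensor.QVGA)
      sensor.set_windowing((240, 240))
      sensor.skip_frames(time=2000)

      net = tf.load('person_detection')         # built-in person detection network
      labels = ['unsure', 'person', 'no_person']
      PERSON = labels.index('person')

      clock = time.clock()
      while(True):
          clock.tick()
          img = sensor.snapshot()

          # min_scale < 1.0 and non-zero overlaps make classify() slide the
          # detection window over the image instead of classifying it just once.
          for obj in net.classify(img, min_scale=0.5, scale_mul=0.5,
                                  x_overlap=0.5, y_overlap=0.5):
              scores = obj.output()
              if scores[PERSON] > 0.7:           # arbitrary threshold, tune as needed
                  # obj.rect() is the scanned window's region, a coarse location only.
                  img.draw_rectangle(obj.rect())
          print(clock.fps(), "fps")

      This is much slower than a single classification and still cannot produce a tight box around the person; it only tells you which scan windows contain one.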