  • Can FOMO detection be done region by region? Why does looping over ROI regions for detection not work? There is no error, but the detection is not split by region. How can I improve this?



    • import sensor, image, time, os, tf, math, uos, gc, ustruct
      from pyb import UART,LED
      from image import SEARCH_EX, SEARCH_DS
      
      sensor.reset()                         # Reset and initialize the sensor.
      sensor.set_pixformat(sensor.GRAYSCALE)    # Set pixel format to GRAYSCALE
      sensor.set_framesize(sensor.QQVGA)      # Set frame size to QQVGA (160x120)
      sensor.set_windowing((240, 240))       # Set 240x240 window.
      sensor.skip_frames(time=2000)          # Let the camera adjust.
      sensor.set_contrast(1)
      sensor.set_gainceiling(16)
      
      net = None
      labels = None
      min_confidence = 0.5
      rois = [(0,30,80,80),(70,30,85,85)]
      
      try:
          # load the model, alloc the model file on the heap if we have at least 64K free after loading
          net = tf.load("trained.tflite", load_to_fb=uos.stat('trained.tflite')[6] > (gc.mem_free() - (64*1024)))
      except Exception as e:
          raise Exception('Failed to load "trained.tflite", did you copy the .tflite and labels.txt file onto the mass-storage device? (' + str(e) + ')')
      
      try:
          labels = [line.rstrip('\n') for line in open("labels.txt")]
      except Exception as e:
          raise Exception('Failed to load "labels.txt", did you copy the .tflite and labels.txt file onto the mass-storage device? (' + str(e) + ')')
      
      colors = [ # Add more colors if you are detecting more than 7 types of classes at once.
          (255,   0,   0),
          (  0, 255,   0),
          (255, 255,   0),
          (  0,   0, 255),
          (255,   0, 255),
          (  0, 255, 255),
          (255, 255, 255),
      ]
      
      #rois = [(0,20,190,200),(150,35,160,170)]
      for roi in rois:
        clock = time.clock()
      
        num = 0
        while(True):
            clock.tick()
      
            img = sensor.snapshot()
          
            # detect() returns all objects found in the image (split out per class already).
            # We skip class index 0, as that is the background, and then draw circles at the
            # centers of our objects.
      
            for i, detection_list in enumerate(net.detect(img, thresholds=[(math.ceil(min_confidence * 255), 255)])):
                if (i == 0): continue # background class
                if (len(detection_list) == 0): continue # no detections for this class?
              
                print(" %s " % labels[i])
                tem = labels[i]
                #print(tem)
                num = 1
             
                for d in detection_list:
                    [x, y, w, h] = d.rect()
                    center_x = math.floor(x + (w / 2))
                    center_y = math.floor(y + (h / 2))
                    print('x %d\ty %d' % (center_x, center_y))
                    img.draw_circle((center_x, center_y, 12), color=colors[i], thickness=2)
                 
            if (num != 0):break
      


    • Because your code is written incorrectly: the ROI is never passed to net.detect(), so every pass scans the whole frame, and the while(True) loop is nested inside the for roi in rois: loop, so the script stays stuck on one region until something is detected instead of checking every ROI on each frame. Put the snapshot loop on the outside, loop over rois for each frame, and pass roi=roi to net.detect().

      Working code:

      import sensor, image, time, os, tf, math, uos, gc, ustruct
      from pyb import UART,LED
      from image import SEARCH_EX, SEARCH_DS
      
      sensor.reset()                         # Reset and initialize the sensor.
      sensor.set_pixformat(sensor.GRAYSCALE)    # Set pixel format to GRAYSCALE
      sensor.set_framesize(sensor.QQVGA)      # Set frame size to QQVGA (160x120)
      sensor.set_windowing((240, 240))       # Set 240x240 window.
      sensor.skip_frames(time=2000)          # Let the camera adjust.
      sensor.set_contrast(1)
      sensor.set_gainceiling(16)
      
      net = None
      labels = None
      min_confidence = 0.5
      rois = [(0,30,80,80),(70,30,85,85)]
      
      try:
          # load the model, alloc the model file on the heap if we have at least 64K free after loading
          net = tf.load("trained.tflite", load_to_fb=uos.stat('trained.tflite')[6] > (gc.mem_free() - (64*1024)))
      except Exception as e:
          raise Exception('Failed to load "trained.tflite", did you copy the .tflite and labels.txt file onto the mass-storage device? (' + str(e) + ')')
      
      try:
          labels = [line.rstrip('\n') for line in open("labels.txt")]
      except Exception as e:
          raise Exception('Failed to load "labels.txt", did you copy the .tflite and labels.txt file onto the mass-storage device? (' + str(e) + ')')
      
      colors = [ # Add more colors if you are detecting more than 7 types of classes at once.
          (255,   0,   0),
          (  0, 255,   0),
          (255, 255,   0),
          (  0,   0, 255),
          (255,   0, 255),
          (  0, 255, 255),
          (255, 255, 255),
      ]
      
      
      num = 0
      while(True):
          
          img = sensor.snapshot()
          
          for roi in rois:
              clock = time.clock()
          
              for i, detection_list in enumerate(net.detect(img, roi=roi, thresholds=[(math.ceil(min_confidence * 255), 255)])):
                  if (i == 0): continue # background class
                  print(detection_list)
                  if (len(detection_list) == 0): continue # no detections for this class?
                  
                  print(" %s " % labels[i])
                  tem = labels[i]
                  #print(tem)
                  num = 1
                  
                  for d in detection_list:
                    [x, y, w, h] = d.rect()
                    center_x = math.floor(x + (w / 2))
                    center_y = math.floor(y + (h / 2))
                    print('x %d\ty %d' % (center_x, center_y))
                    img.draw_circle((center_x, center_y, 12), color=colors[i], thickness=2)
                 
              if (num != 0):break
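
      If you want to confirm that each ROI really is scanned separately, below is a minimal sketch of just the main loop. It assumes the same sensor / net / labels / colors / rois / min_confidence setup as in the working code above, additionally outlines each ROI with img.draw_rectangle(), and prints which ROI a detection came from; both drawing calls are standard OpenMV image methods. Treat it as a sketch, not a drop-in replacement.

      clock = time.clock()

      while(True):
          clock.tick()
          img = sensor.snapshot()

          for roi_index, roi in enumerate(rois):
              # Outline the region being scanned so it is visible in the frame buffer.
              img.draw_rectangle(roi, color=255, thickness=1)

              for i, detection_list in enumerate(net.detect(img, roi=roi, thresholds=[(math.ceil(min_confidence * 255), 255)])):
                  if (i == 0): continue                   # class 0 is the background
                  if (len(detection_list) == 0): continue # no detections for this class

                  for d in detection_list:
                      [x, y, w, h] = d.rect()
                      center_x = math.floor(x + (w / 2))
                      center_y = math.floor(y + (h / 2))
                      # Print which ROI the detection came from.
                      print("roi %d  %s  x %d  y %d" % (roi_index, labels[i], center_x, center_y))
                      img.draw_circle((center_x, center_y, 12), color=colors[i], thickness=2)
                      # Note: if the circles look offset, d.rect() may be relative to the ROI
                      # on your firmware; in that case add roi[0] / roi[1] to the center.

          print(clock.fps())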