• The OpenMV VSCode extension has been released; search for OpenMV in the extension marketplace to install it.
  • For hardware faults, such as a board that will not power on, the forum is unlikely to help; contact after-sales service for repair directly.
  • Before posting, make sure you have watched all of the video tutorials at https://singtown.com/learn/ and read all of the getting-started tutorials at http://book.openmv.cc/
  • Start a separate new thread for each new question.
  • A post needs a goal: what are you trying to do?
  • If code is involved, include the full error message and the complete code as text; please do not post screenshots of code.
  • Must-read: see the 玩转星瞳论坛 post to learn about image uploads, code formatting, and related issues.



    • Re: How do I add a delay function

      When the OpenMV detects a color blob, I need it to not send data again within one second. How do I do that? As soon as I add a delay function it just seems to go to sleep. How can I solve this? Thanks

      import sensor, image, pyb
      import time, os, tf, math, uos, gc
      from pyb import UART,LED
      
      LED(3).on()
      
      sensor.reset()                         # Reset and initialize the sensor.
      sensor.set_pixformat(sensor.RGB565)    # Set pixel format to RGB565 (or GRAYSCALE)
      sensor.set_framesize(sensor.QVGA)      # Set frame size to QVGA (320x240)
      sensor.set_windowing((240, 240))       # Set 240x240 window.
      sensor.skip_frames(30)          # Let the camera adjust.
      
      uart = UART(3, 115200)
      uart.init(115200, bits=8, parity=None, stop=1)
      def send_data(data1):
          global uart
          data = bytearray([0xb3,0xb3,data1,0x5b])
          uart.write(data)
      
      
      net = None
      labels = None
      min_confidence = 0.5
      
      try:
          # load the model, alloc the model file on the heap if we have at least 64K free after loading
          net = tf.load("trained.tflite", load_to_fb=uos.stat('trained.tflite')[6] > (gc.mem_free() - (64*1024)))
      except Exception as e:
          raise Exception('Failed to load "trained.tflite", did you copy the .tflite and labels.txt file onto the mass-storage device? (' + str(e) + ')')
      
      try:
          labels = [line.rstrip('\n') for line in open("labels.txt")]
      except Exception as e:
          raise Exception('Failed to load "labels.txt", did you copy the .tflite and labels.txt file onto the mass-storage device? (' + str(e) + ')')
      
      colors = [ # Add more colors if you are detecting more than 7 types of classes at once.
          (255,   0,   0),
          (  0, 255,   0),
          (255, 255,   0),
          (  0,   0, 255),
          (255,   0, 255),
          (  0, 255, 255),
          (255, 255, 255),
      ]
      
      clock = time.clock()
      while(True):
          clock.tick()
          row_data = 0 # 0 = walk normally, 1 = stop, 2 = wait
      
          img = sensor.snapshot()
      
          # detect() returns all objects found in the image (split out per class already)
          # we skip class index 0, as that is the background, and then draw circles of the center
          # of our objects
      
          for i, detection_list in enumerate(net.detect(img, thresholds=[(math.ceil(min_confidence * 255), 255)])):
              if (i == 0): continue # background class
              if (len(detection_list) == 0): continue # no detections for this class?
              if labels[i] == 's':
                  row_data = 1
                  print('s')
                  pyb.delay(1000)
                  send_data(row_data)
              if labels[i] == 'w':
                  row_data = 2
                  print('w')
                  pyb.delay(1000)
                  send_data(row_data)
              else:
                  print(0)
                  send_data(row_data)
          #print(row_data)
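
      As an aside on the question above: pyb.delay(1000) inside the loop halts everything, including sensor.snapshot(), which is why the camera seems to go to sleep. A minimal non-blocking sketch using time.ticks_ms()/time.ticks_diff() instead (it assumes the same sensor/UART setup and send_data() defined above; SEND_INTERVAL_MS and last_send_ms are names invented for this example, not from the original post):

      SEND_INTERVAL_MS = 1000            # minimum gap between UART sends, in ms
      last_send_ms = time.ticks_ms()     # time of the most recent send

      while(True):
          clock.tick()
          img = sensor.snapshot()        # the loop keeps grabbing frames, nothing sleeps
          row_data = 0
          # ... run net.detect(img, ...) here and set row_data as before ...
          if time.ticks_diff(time.ticks_ms(), last_send_ms) >= SEND_INTERVAL_MS:
              send_data(row_data)        # at most one packet per second
              last_send_ms = time.ticks_ms()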
      
      
      
      


    • Your logic has a problem: a single detection pass can return multiple target points.

      I've changed it a bit; roughly like this:

          img = sensor.snapshot()
          row_data = []
          for i, detection_list in enumerate(net.detect(img, thresholds=[(math.ceil(min_confidence * 255), 255)])):
              if (i == 0): continue # background class
              if (len(detection_list) == 0): continue # no detections for this class?
              if labels[i] == 's':
                  row_data.append(1)
                  print('s')
              elif labels[i] == 'w':
                  row_data.append(2)
                  print('w')
              else:
                  print(0)
          for d in row_data:
              send_data(d)   # send each detection, not the whole list
          pyb.delay(1000)
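
      One caveat with the snippet above: pyb.delay(1000) still blocks the whole loop for a full second, which is exactly the "goes to sleep" behaviour the question describes. A hedged alternative, reusing the ticks-based gate sketched after the first post (SEND_INTERVAL_MS and last_send_ms are invented names), is to guard the send instead of sleeping:

          # instead of pyb.delay(1000) after sending:
          if row_data and time.ticks_diff(time.ticks_ms(), last_send_ms) >= SEND_INTERVAL_MS:
              for d in row_data:
                  send_data(d)
              last_send_ms = time.ticks_ms()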