  • Problem with the recognition results produced by the image classification program



    • # Edge Impulse - OpenMV Image Classification Example (modified to add UART communication)
      
      import sensor, image, time, os, tf
      from pyb import UART
      
      sensor.reset()                         # Reset and initialize the sensor.
      sensor.set_pixformat(sensor.RGB565)    # Set pixel format to RGB565 (or GRAYSCALE)
      sensor.set_framesize(sensor.QVGA)      # Set frame size to QVGA (320x240)
      sensor.set_windowing((240, 240))       # Set 240x240 window.
      sensor.skip_frames(time=2000)          # Let the camera adjust.
      
      net = "trained.tflite"
      labels = [line.rstrip('\n') for line in open("labels.txt")]
      
      uart = UART(3, 115200)                 # UART 3 at 115200 baud, 8N1.
      uart.init(115200, bits=8, parity=None, stop=1)
      
      clock = time.clock()
      while(True):
          if uart.any():                     # Only classify after something arrives on the UART.
              a = uart.readline().strip()
              clock.tick()
              time.sleep(1000)               # Pause before the snapshot (time.sleep() takes seconds in MicroPython).
              img = sensor.snapshot()
              for obj in tf.classify(net, img, min_scale=1.0, scale_mul=0.8, x_overlap=0.5, y_overlap=0.5):
                  print("**********\nPredictions at [x=%d,y=%d,w=%d,h=%d]" % obj.rect())
                  img.draw_rectangle(obj.rect())
                  # This combines the labels and confidence values into a list of tuples.
                  predictions_list = list(zip(labels, obj.output()))
                  for i in range(len(predictions_list)):
                      print("%s = %f" % (predictions_list[i][0], predictions_list[i][1]))
                      # Search for the label with the highest confidence; zubie is its 1-based index.
                      max_size = predictions_list[0][1]
                      zubie = 0
                      for i in range(len(predictions_list)):
                          if predictions_list[i][1] >= max_size:
                              zubie = i + 1
                              max_size = predictions_list[i][1]
                          print(max_size)
                          print(zubie)
              if a == b'\x01':               # If the trigger byte 0x01 was received, reply with the class index.
                  data = bytearray([zubie])
                  uart.write(data)
                  print(zubie)
      

      Why do the classification probabilities differ so much after adding the UART communication statements? Without them the results are very accurate, but after adding them the output keeps coming out wrong.
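
      For comparison, here is a minimal sketch (not the poster's code) of a UART-triggered version that keeps the capture as close as possible to the plain example: the trigger byte is read, the snapshot is taken immediately without a long blocking sleep, the top label is picked once with max(), and only then is the reply written. The trained.tflite / labels.txt files, UART 3 and the 0x01 trigger byte are taken from the post above; everything else is an assumption.

      # Hedged sketch: UART-triggered classification, replying with the 1-based index
      # of the highest-confidence label.
      import sensor, time, tf
      from pyb import UART
      
      sensor.reset()
      sensor.set_pixformat(sensor.RGB565)
      sensor.set_framesize(sensor.QVGA)
      sensor.set_windowing((240, 240))
      sensor.skip_frames(time=2000)
      
      net = "trained.tflite"
      labels = [line.rstrip('\n') for line in open("labels.txt")]
      
      uart = UART(3, 115200)
      
      while True:
          if not uart.any():
              continue
          cmd = uart.read(1)                          # read exactly one trigger byte
          img = sensor.snapshot()                     # grab the frame right away
          for obj in tf.classify(net, img):
              scores = obj.output()
              best = max(range(len(scores)), key=lambda i: scores[i])
              print(labels[best], scores[best])
              if cmd == b'\x01':
                  uart.write(bytearray([best + 1]))   # 1-based class index, as in the post

      Computing the arg-max once per classification result, instead of inside the print loop, does not change the probabilities themselves, but it keeps the loop fast and makes the byte sent back deterministic.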

      # Edge Impulse - OpenMV Image Classification Example
      
      import sensor, image, time, os, tf
      
      sensor.reset()                         # Reset and initialize the sensor.
      sensor.set_pixformat(sensor.RGB565)    # Set pixel format to RGB565 (or GRAYSCALE)
      sensor.set_framesize(sensor.QVGA)      # Set frame size to QVGA (320x240)
      sensor.set_windowing((240, 240))       # Set 240x240 window.
      sensor.skip_frames(time=2000)          # Let the camera adjust.
      
      net = "trained.tflite"
      labels = [line.rstrip('\n') for line in open("labels.txt")]
      
      clock = time.clock()
      while(True):
          clock.tick()
      
          img = sensor.snapshot()
      
          # default settings just do one detection... change them to search the image...
          for obj in tf.classify(net, img, min_scale=1.0, scale_mul=0.8, x_overlap=0.5, y_overlap=0.5):
              print("**********\nPredictions at [x=%d,y=%d,w=%d,h=%d]" % obj.rect())
              img.draw_rectangle(obj.rect())
              # This combines the labels and confidence values into a list of tuples
              predictions_list = list(zip(labels, obj.output()))
      
              for i in range(len(predictions_list)):
                  print("%s = %f" % (predictions_list[i][0], predictions_list[i][1]))
      
          print(clock.fps(), "fps")
      
      


    • To clarify: it is this line, print("%s = %f" % (predictions_list[i][0], predictions_list[i][1])), whose output differs between the two scripts?
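
      That formatting line only prints whatever obj.output() returns, so any difference between the two scripts has to come from the frame handed to tf.classify(). A small diagnostic sketch (an assumption, not from the post) that prints the whole confidence vector on one line per frame makes the two runs easy to compare side by side:

      # Hedged diagnostic sketch: same capture settings as both scripts, but the full
      # confidence vector is printed on one line per frame for easy comparison.
      import sensor, time, tf
      
      sensor.reset()
      sensor.set_pixformat(sensor.RGB565)
      sensor.set_framesize(sensor.QVGA)
      sensor.set_windowing((240, 240))
      sensor.skip_frames(time=2000)
      
      net = "trained.tflite"
      labels = [line.rstrip('\n') for line in open("labels.txt")]
      
      while True:
          img = sensor.snapshot()
          for obj in tf.classify(net, img):
              scores = obj.output()
              print(", ".join("%s=%.3f" % (labels[i], scores[i]) for i in range(len(scores))))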