  • OpenMV4 can't run the neural network. I copied both the model and the label file to the TF card, but it reports "frame buffer memory insufficient" (帧缓冲内存不足), and lowering the image resolution doesn't help either!



    • # TensorFlow Lite Mobilenet V1 Example
      #
      # Google's Mobilenet V1 detects 1000 classes of objects
      #
      # WARNING: Mobilenet is trained on ImageNet and isn't meant to classify anything
      # in the real world. It's just designed to score well on the ImageNet dataset.
      # This example just shows off running mobilenet on the OpenMV Cam. However, the
      # default model is not really usable for anything. You have to use transfer
      # learning to apply the model to a target problem by re-training the model.
      #
      # NOTE: This example only works on the OpenMV Cam H7 Plus (that has SDRAM) and better!
      # To get the models please see the CNN Network library in OpenMV IDE under
      # Tools -> Machine Vision. The labels are there too.
      # You should insert a microSD card into your camera and copy-paste the mobilenet_labels.txt
      # file and your chosen model into the root folder for this script to work.
      #
      # In this example we slide the detector window over the image and get a list
      # of activations. Note that using a CNN with a sliding window is extremely
      # computationally expensive, so for an exhaustive search do not expect the
      # CNN to run in real-time.
      
      import sensor, image, time, os, tf
      
      sensor.reset()                         # Reset and initialize the sensor.
      sensor.set_pixformat(sensor.RGB565)    # Set pixel format to RGB565 (or GRAYSCALE)
      sensor.set_framesize(sensor.QVGA)      # Set frame size to QVGA (320x240)
      sensor.set_windowing((240, 240))       # Set 240x240 window.
      sensor.skip_frames(time=2000)          # Let the camera adjust.
      
      mobilenet_version = "1" # 1
      mobilenet_width = "0.5" # 1.0, 0.75, 0.50, 0.25
      mobilenet_resolution = "128" # 224, 192, 160, 128
      
      mobilenet = "mobilenet_v%s_%s_%s_quant.tflite" % (mobilenet_version, mobilenet_width, mobilenet_resolution)
      labels = [line.rstrip('\n') for line in open("mobilenet_labels.txt")]
      print(labels)
      clock = time.clock()
      while(True):
          clock.tick()
      
          img = sensor.snapshot()
      
          # net.classify() will run the network on an roi in the image (or on the whole image if the roi is not
          # specified). A classification score output vector will be generated for each location. At each scale the
          # detection window is moved around in the ROI using x_overlap (0-1) and y_overlap (0-1) as a guide.
          # If you set the overlap to 0.5 then each detection window will overlap the previous one by 50%. Note
          # the computational work load goes WAY up the more overlap. Finally, for multi-scale matching after
          # sliding the network around in the x/y dimensions the detection window will shrink by scale_mul (0-1)
          # down to min_scale (0-1). For example, if scale_mul is 0.5 the detection window will shrink by 50%.
          # Note that at a lower scale there's even more area to search if x_overlap and y_overlap are small...
      
          # Setting x_overlap=-1 forces the window to stay centered in the ROI in the x direction always. If
          # y_overlap is not -1 the method will search in all vertical positions.
      
          # Setting y_overlap=-1 forces the window to stay centered in the ROI in the y direction always. If
          # x_overlap is not -1 the method will search in all horizontal positions.
      
          # default settings just do one detection... change them to search the image...
          for obj in tf.classify(mobilenet, img, min_scale=1.0, scale_mul=0.5, x_overlap=-1, y_overlap=-1):
              print("**********\nTop 5 Detections at [x=%d,y=%d,w=%d,h=%d]" % obj.rect())
              img.draw_rectangle(obj.rect())
              # This combines the labels and confidence values into a list of tuples
              # and then sorts that list by the confidence values.
              sorted_list = sorted(zip(labels, obj.output()), key = lambda x: x[1], reverse = True)
              for i in range(5):
                  print("%s = %f" % (sorted_list[i][0], sorted_list[i][1]))
          print(clock.fps(), "fps")
      
      
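    • The "frame buffer memory insufficient" (帧缓冲内存不足) error above generally means the firmware could not allocate enough contiguous memory for the model and its tensor arena; on the plain OpenMV4 (H7, which has no SDRAM) MobileNet simply does not fit, regardless of the image resolution. On boards that do have SDRAM, some firmware releases can load the model into the frame buffer stack instead of the MicroPython heap. A minimal sketch, assuming a firmware build whose tf module exposes tf.load(..., load_to_fb=True) and tf.free_from_fb() (check the docs for your firmware version):

      import sensor, tf

      sensor.reset()
      sensor.set_pixformat(sensor.RGB565)
      sensor.set_framesize(sensor.QVGA)
      sensor.set_windowing((240, 240))
      sensor.skip_frames(time=2000)

      # Load the model into the frame buffer stack (SDRAM on the H7 Plus)
      # rather than the MicroPython heap. ASSUMPTION: this firmware's tf
      # module supports the load_to_fb keyword.
      net = tf.load("mobilenet_v1_0.5_128_quant.tflite", load_to_fb=True)

      img = sensor.snapshot()
      for obj in net.classify(img):   # classify the whole window once
          print(obj.output())

      tf.free_from_fb()               # release the frame buffer memory when done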


    • edge impulse can only be used on the OpenMV4 Plus; it won't work on the OpenMV4.
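    • For reference, a minimal sketch of what an Edge Impulse deployment on the OpenMV4 Plus typically looks like. ASSUMPTION: "trained.tflite" and "labels.txt" are the default file names in Edge Impulse's OpenMV library export; both files go in the root of the TF card.

      import sensor, image, time, os, tf

      sensor.reset()
      sensor.set_pixformat(sensor.RGB565)
      sensor.set_framesize(sensor.QVGA)
      sensor.set_windowing((240, 240))       # crop to the model's input size
      sensor.skip_frames(time=2000)

      net = "trained.tflite"                 # assumed default export name
      labels = [line.rstrip('\n') for line in open("labels.txt")]

      clock = time.clock()
      while(True):
          clock.tick()
          img = sensor.snapshot()
          # With min_scale=1.0 the network is run once over the whole window.
          for obj in tf.classify(net, img, min_scale=1.0, scale_mul=0.8,
                                 x_overlap=0.5, y_overlap=0.5):
              # Pair each label with its confidence value and print all of them.
              predictions_list = list(zip(labels, obj.output()))
              for i in range(len(predictions_list)):
                  print("%s = %f" % (predictions_list[i][0], predictions_list[i][1]))
          print(clock.fps(), "fps")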