
    iv2r (@iv2r)

    0 Reputation · 10 Posts · 90 Profile views · 0 Followers · 0 Following

    Posts made by iv2r

    • RE: Using the official example to save video to the OpenMV, writing always fails

      @iv2r (image attachment: 0_1678883920224_e1c4853c701314f4d86a6c9d72702ed.png)

      posted in OpenMV Cam
    • Using the official example to save video to the OpenMV, writing always fails
      # Video recording example
      #
      # Note: you will need an SD card to run this demo.
      #
      # You can use the OpenMV Cam to record MJPEG files. You can feed the recorder
      # object JPEG frames or RGB565/grayscale frames. Once you have finished
      # recording an MJPEG file, you can play it back with VLC. On Ubuntu the
      # built-in video player also works.
      import sensor, image, time, mjpeg, pyb
      
      RED_LED_PIN = 1
      BLUE_LED_PIN = 3
      
      sensor.reset() # Initialize the sensor
      
      sensor.set_pixformat(sensor.RGB565) # or sensor.GRAYSCALE
      # Set the pixel format: RGB565 color or GRAYSCALE
      
      sensor.set_framesize(sensor.QVGA) # or sensor.QQVGA (or others)
      # Set the frame size
      
      sensor.skip_frames(time = 2000) # Let the new settings take effect
      clock = time.clock() # Track the FPS
      
      pyb.LED(RED_LED_PIN).on()
      sensor.skip_frames(30) # Give the user time to get ready
      
      pyb.LED(RED_LED_PIN).off()
      pyb.LED(BLUE_LED_PIN).on()
      
      m = mjpeg.Mjpeg("example.mjpeg")
      # mjpeg.Mjpeg(filename, width=Auto, height=Auto) creates an mjpeg object;
      # filename is the path the MJPEG clip is saved to
      
      print("You're on camera!")
      for i in range(200):
          clock.tick()
          m.add_frame(sensor.snapshot())
          # mjpeg.add_frame(image, quality=50) adds an image to the MJPEG video;
          # quality is the compression quality.
          print(clock.fps())
      
      m.close(clock.fps())
      pyb.LED(BLUE_LED_PIN).off()
      print("Done! Reset the camera to see the saved recording.")![0_1678883849044_e1c4853c701314f4d86a6c9d72702ed.png](https://fcdn.singtown.com/c0e7f99a-0f65-48dd-82d6-5de5f967c161.png) 
      
      posted in OpenMV Cam
    • How can I improve the model's accuracy?

      How can I improve the accuracy of a trained model? By enlarging the dataset?

      posted in OpenMV Cam
    • We need a transfer rate slower than 9600

      Reply: how do I add a delay function?

      When the OpenMV detects a color blob, it needs to stop sending data for one second. Adding a delay function seems to make it freeze up completely. How can I solve this? Thanks.

      import sensor, image, pyb
      import time, os, tf, math, uos, gc
      from pyb import UART,LED
      
      LED(3).on()
      
      sensor.reset()                         # Reset and initialize the sensor.
      sensor.set_pixformat(sensor.RGB565)    # Set pixel format to RGB565 (or GRAYSCALE)
      sensor.set_framesize(sensor.QVGA)      # Set frame size to QVGA (320x240)
      sensor.set_windowing((240, 240))       # Set 240x240 window.
      sensor.skip_frames(30)          # Let the camera adjust.
      
      uart = UART(3, 115200)
      uart.init(115200, bits=8, parity=None, stop=1)
      def send_data(data1):
          global uart
          data = bytearray([0xb3,0xb3,data1,0x5b])
          uart.write(data)
      
      
      net = None
      labels = None
      min_confidence = 0.5
      
      try:
          # load the model, alloc the model file on the heap if we have at least 64K free after loading
          net = tf.load("trained.tflite", load_to_fb=uos.stat('trained.tflite')[6] > (gc.mem_free() - (64*1024)))
      except Exception as e:
          raise Exception('Failed to load "trained.tflite", did you copy the .tflite and labels.txt file onto the mass-storage device? (' + str(e) + ')')
      
      try:
          labels = [line.rstrip('\n') for line in open("labels.txt")]
      except Exception as e:
          raise Exception('Failed to load "labels.txt", did you copy the .tflite and labels.txt file onto the mass-storage device? (' + str(e) + ')')
      
      colors = [ # Add more colors if you are detecting more than 7 types of classes at once.
          (255,   0,   0),
          (  0, 255,   0),
          (255, 255,   0),
          (  0,   0, 255),
          (255,   0, 255),
          (  0, 255, 255),
          (255, 255, 255),
      ]
      
      clock = time.clock()
      while(True):
          clock.tick()
          row_data=0 # 0 = drive normally, 1 = stop, 2 = wait
      
          img = sensor.snapshot()
      
          # detect() returns all objects found in the image (already split out per class)
          # we skip class index 0, as that is the background, and then draw circles of the center
          # of our objects
      
          for i, detection_list in enumerate(net.detect(img, thresholds=[(math.ceil(min_confidence * 255), 255)])):
              if (i == 0): continue # background class
              if (len(detection_list) == 0): continue # no detections for this class?
              if labels[i] == 's':
                  row_data = 1
                  print('s')
                  pyb.delay(1000)
                  send_data(row_data)
              elif labels[i] == 'w':
                  row_data = 2
                  print('w')
                  pyb.delay(1000)
                  send_data(row_data)
              else:
                  print(0)
                  send_data(row_data)
          #print(row_data)
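
      The freeze comes from pyb.delay(1000) blocking the whole loop, so no frames are processed during that second. A minimal non-blocking sketch, assuming a 1-second gap between transmissions is what is wanted: remember when the last packet was sent with pyb.millis() and skip sending until 1000 ms have elapsed.

      import pyb
      from pyb import UART

      uart = UART(3, 115200)
      SEND_INTERVAL_MS = 1000                        # assumed 1-second gap between sends
      last_send = pyb.millis() - SEND_INTERVAL_MS    # allow the very first send immediately

      def send_data(data1):
          uart.write(bytearray([0xb3, 0xb3, data1, 0x5b]))

      def maybe_send(data):
          # Send only if SEND_INTERVAL_MS has passed since the last transmission;
          # otherwise return at once so the detection loop keeps grabbing frames.
          global last_send
          if pyb.elapsed_millis(last_send) >= SEND_INTERVAL_MS:
              send_data(data)
              last_send = pyb.millis()

      # In the detection loop, call maybe_send(row_data) instead of
      # pyb.delay(1000) followed by send_data(row_data).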
      
      
      
      
      posted in OpenMV Cam
    • For a model trained with Edge Impulse, the float firmware type cannot be exported. How should I solve this?

      (image attachment: 0_1678106350982_4JF89`D}7GJ0Q49H5HGXX}5.png)

      posted in OpenMV Cam
    • OpenMV4 and STM32

      After changing the fixed thresholds in the line-following program to adaptive thresholds, the car sometimes cannot make it around a turn; it seems not to react even when it sees the curve. Is the processing power insufficient? Thanks.
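
      A quick way to check whether processing power is the issue is to compare the frame rate of the fixed-threshold loop against the adaptive-threshold loop; if the FPS collapses once the histogram and filtering steps are added, the camera may simply not see enough frames per second to react to a curve in time. A rough diagnostic sketch (the frame size and the single adaptive step are assumptions, match them to your actual program):

      import sensor, time

      sensor.reset()
      sensor.set_pixformat(sensor.RGB565)
      sensor.set_framesize(sensor.QQVGA)        # assumed frame size
      sensor.skip_frames(time = 2000)

      clock = time.clock()
      while True:
          clock.tick()
          img = sensor.snapshot()
          # Adaptive-threshold step similar to the line-following program:
          t = img.get_histogram().get_threshold()
          img.binary([(0, t.l_value())])        # keep pixels darker than the Otsu split
          print("FPS:", clock.fps())            # compare against the fixed-threshold version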

      posted in OpenMV Cam
    • This program uses adaptive thresholding to follow a black line; why does it fail to find the black line (the screen is all white)?
      import sensor, image, math
      from pyb import UART,LED
      
      LED(3).on()
      
      uart = UART(3, 115200)
      uart.init(115200, bits=8, parity=None, stop=1)
      def send_data(data1,data2,data3):
          global uart
          data = bytearray([0xb3,0xb3,data1,data2,data3,0x5b])
          uart.write(data)
      
      
      ROIS = [
            (0, 0, 128, 40, 0.2),
            (0, 40, 128, 40, 0.4),
            (0, 80, 128, 40, 0.6)
           ]
      range_stop = [400,100,70]
      range_wait = 500
      
      weight_sum = 0
      for r in ROIS: weight_sum += r[4]
      
      sensor.reset()
      sensor.set_contrast(1)
      sensor.set_pixformat(sensor.RGB565)
      sensor.set_framesize(sensor.QQVGA2)
      sensor.set_brightness(10)
      sensor.skip_frames(30)
      sensor.set_auto_whitebal(False)
      sensor.set_vflip(False)
      sensor.set_hmirror(False)
      
      def find_max(blobs):
          max_size=[0,0]
          max_ID=[-1,-1]
          for i in range(len(blobs)):
              if blobs[i].pixels()>max_size[0]:
                  max_ID[1]=max_ID[0]
                  max_size[1]=max_size[0]
                  max_ID[0]=i
                  max_size[0]=blobs[i].pixels()
              elif blobs[i].pixels()>max_size[1]:
                  max_ID[1]=i
                  max_size[1]=blobs[i].pixels()
          return max_ID
      
      def degrees(radians):
          return (180 * radians) / math.pi
      
      #def stop1():
      
      
      def car_run():
          centroid_sum = [0,0]
          left_center=[-1,-1,-1]
          right_center=[-1,-1,-1]
          flag_cross=0
          flag_wait = [0,0]
          flag_stop = [0,0]
      
          for r in range(3):
              blobs = img.find_blobs([(0, l), (-128, a),(-128, b)], roi=ROIS[r][0:4],merge=True,area_threshold=200,margin=3,pixel_threshold=200)
              if blobs:
                  stop_flag = flag_stop[0]
                  max_ID=[-1,-1]
                  max_ID=find_max(blobs)
                  img.draw_rectangle(blobs[max_ID[0]].rect(),50)
                  img.draw_cross(blobs[max_ID[0]].cx(),blobs[max_ID[0]].cy(),50)
                  if max_ID[1]!=-1:
                      flag_cross=1 #有岔路口
                      img.draw_rectangle(blobs[max_ID[1]].rect(),50)
                      img.draw_cross(blobs[max_ID[1]].cx(),blobs[max_ID[1]].cy(),50)
                      if blobs[max_ID[0]].cx()<blobs[max_ID[1]].cx():
                          left_center[r]=blobs[max_ID[0]].cx()
                          right_center[r]=blobs[max_ID[1]].cx()
                      else:
                          left_center[r]=blobs[max_ID[1]].cx()
                          right_center[r]=blobs[max_ID[0]].cx()
                  else:
                      flag_cross=0
                      if blobs[max_ID[0]].pixels()>range_wait:
                          flag_wait[0]+=1
                      elif blobs[max_ID[0]].pixels()>range_stop[r]:
                          flag_stop[0]=r+1
                      left_center[r]=right_center[r]=blobs[max_ID[0]].cx()
                  centroid_sum[0] += left_center[r] * ROIS[r][4]
                  centroid_sum[1] += right_center[r] * ROIS[r][4]
                  if stop_flag == 3 and flag_stop[0] == 2:
                      flag_stop[1] = 1
                  else:
                       flag_stop[1] = 0
      
          center_pos =[0,0]
          center_pos[0] = (centroid_sum[0] / weight_sum)
          center_pos[1] = (centroid_sum[1] / weight_sum)
          deflection_angle = [0,0]
          deflection_angle[0] = -math.atan((center_pos[0]-64)/80)
          deflection_angle[1] = -math.atan((center_pos[1]-64)/80)
          deflection_angle[0] = math.degrees(deflection_angle[0])
          deflection_angle[1] = math.degrees(deflection_angle[1])
          if flag_wait[0] >= 2:
              flag_wait[1] = 1
          else:
              flag_wait[1] = 0
          if center_pos[0]==center_pos[1]==0:
              deflection_angle[1]=deflection_angle[0]=0
          A=[int(deflection_angle[0]),int(deflection_angle[1]),flag_stop[1],flag_wait[1]] # 1 = wait
          return A
      
      
      while(True):
          cross = 1
          flag = 1
          if(uart.any()):
              flag = uart.readchar()
              if flag == 1:
                  cross = 1
              if flag == 2:
                  cross = 1
              if flag == 3:
                  cross = 1
              if flag == 6:
                  cross = 1
              if flag == 7:
                  cross = 0
              if flag == 4:
                  cross = 1
          else:
              cross = 1
      
          img = sensor.snapshot().lens_corr(strength = 0.5, zoom = 1.0)
          img.bilateral(3,color_sigma=0.1,space_sigma=0.1) # Apply bilateral filtering
          img.gaussian(3, unsharp=True) # Unsharp masking (sharpens the image)
          histogram = img.get_histogram()
          THRESHOLD = histogram.get_threshold()
          l = THRESHOLD.l_value()
          a = THRESHOLD.a_value()
          b = THRESHOLD.b_value()
          print(THRESHOLD)
          img.binary([(0, l), (-128, a),(-128, b)])
      
          row_data=[0,0,0,0]
          row_data[0],row_data[1],row_data[2],row_data[3]=car_run()
          if cross == 0:
              if row_data[0] < -4:
                  send_data(-4,row_data[2],row_data[3])
              if -4<= row_data[0] < 0:
                  send_data(row_data[0],row_data[2],row_data[3])
              if 0 <= row_data[0] <= 3:
                  send_data(0,row_data[2],row_data[3])
              if row_data[0] >3:
                  send_data(row_data[0],row_data[2],row_data[3])
          if cross == 1:
              if row_data[1] < -5:
                  send_data(-5,row_data[2],row_data[3])
              if -5<= row_data[1] < 0:
                  send_data(row_data[1],row_data[2],row_data[3])
              if 0 <= row_data[1] <= 3:
                  send_data(0,row_data[2],row_data[3])
              if row_data[1] >3:
                  send_data(row_data[1],row_data[2],row_data[3])
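
      If the binary image comes out completely white, one likely reason is that the Otsu split from get_threshold() lands on the wrong side for this scene, or that the (0, l), (-128, a), (-128, b) tuples are each treated as a separate L-channel range (unspecified channels default to their full range), which may not be what was intended. A minimal sketch, assuming a dark line on a light floor, that thresholds on the L channel only and prints the computed split so it can be sanity-checked against the live image:

      import sensor

      sensor.reset()
      sensor.set_pixformat(sensor.RGB565)
      sensor.set_framesize(sensor.QQVGA2)
      sensor.skip_frames(time = 2000)

      while True:
          img = sensor.snapshot()
          t = img.get_histogram().get_threshold()
          print("Otsu L split:", t.l_value())
          # Keep only pixels darker than the split (the black line), full A/B range.
          # If this is still all white, the lighting or set_brightness(10) may be
          # washing the line out; try lowering the brightness or fixing the exposure.
          img.binary([(0, t.l_value(), -128, 127, -128, 127)])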
      
      posted in OpenMV Cam
    • What method should I use to recognize this kind of sign and stop the car while following the line? Thanks.

      (image attachment: 0_1677118956506_5C77141B-BB39-4C9D-9A45-2174D427C1AB.jpeg)

      posted in OpenMV Cam
    • This is a newly bought OpenMV. What is wrong with it? Thanks.

      (image attachment: 0_1667.png)

      posted in OpenMV Cam
    • Can the region shown on the LCD screen be set? How do I make it display the center region of the camera image?

      Can the region shown on the LCD screen be set? How do I make it display the center region of the camera image?
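
      The common OpenMV LCD shield is 128x160, and sensor.set_windowing() centers the window automatically when only a width and height are given, so one way to show just the middle of the camera image is to crop a centered 128x160 window before displaying it. A minimal sketch, assuming the standard lcd module and that shield:

      import sensor, lcd

      sensor.reset()
      sensor.set_pixformat(sensor.RGB565)
      sensor.set_framesize(sensor.QVGA)     # full 320x240 frame from the sensor
      sensor.set_windowing((128, 160))      # centered 128x160 crop of that frame
      sensor.skip_frames(time = 2000)

      lcd.init()                            # LCD shield resolution is 128x160
      while True:
          lcd.display(sensor.snapshot())    # only the centered crop is captured and shown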

      posted in OpenMV Cam