
    Posts by xhk5

    • RE: Too many images on Edge Impulse: it says training time exceeds 20 min, and the enterprise tier is too expensive. What should I do?

      If it can only train for 20 epochs, will the results be poor?

      Posted in OpenMV Cam
    • Too many images on Edge Impulse: it says training time exceeds 20 min, and the enterprise tier is too expensive. What should I do?

      (image: 0_1683811933954_1683811909235.jpg) Is this method no longer usable now?

      Posted in OpenMV Cam
    • RE: Could someone help? OpenMV recognition works fine on a PC, but when it is connected to an industrial PC and runs alongside other hardware, performance drops.

      @kidswong999 That is, recognition works well when the OpenMV is plugged into a PC, but once it is connected to the industrial PC the recognition no longer works well.

      Posted in OpenMV Cam
    • RE: Could someone help? OpenMV recognition works fine on a PC, but when it is connected to an industrial PC and runs alongside other hardware, performance drops.

      @xhk5 During neural-network inference, should the two sides of the image be cropped off the same way as when labeling in Edge Impulse?

      Posted in OpenMV Cam
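      On the cropping question above: if the Edge Impulse project was labeled on centered 240x240 crops of QVGA (320x240) frames, then `sensor.set_windowing((240, 240))` reproduces the same centered crop at inference time. The helper below is a hypothetical illustration (not an OpenMV API) of the window that centered crop implies.

      ```python
      def center_crop_window(frame_w, frame_h, win_w, win_h):
          """Return the (x, y, w, h) region a centered crop selects,
          which is what sensor.set_windowing((win_w, win_h)) does by default."""
          return ((frame_w - win_w) // 2, (frame_h - win_h) // 2, win_w, win_h)

      # QVGA is 320x240, so a 240x240 window drops 40 columns from each side:
      # center_crop_window(320, 240, 240, 240) -> (40, 0, 240, 240)
      ```

      If labeling and inference use the same window, the detector sees the same framing it was trained on.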
    • Could someone help? OpenMV recognition works fine on a PC, but when it is connected to an industrial PC and runs alongside other hardware, performance drops.

      What is going on here, and how do I fix it?

      Posted in OpenMV Cam
    • RE: Could someone tell me how to improve object-recognition accuracy at the code level? On the hardware side, over a thousand neural-network samples have been captured already.

      @kidswong999 Great, thank you!

      Posted in OpenMV Cam
    • Could someone tell me how to improve object-recognition accuracy at the code level? On the hardware side, over a thousand neural-network samples have been captured already.

      import sensor, image, time, os, tf, math, uos, gc, pyb
      from pyb import UART
      from pyb import LED

      red_led = LED(1)
      sensor.reset()
      sensor.set_pixformat(sensor.RGB565)
      sensor.set_framesize(sensor.QVGA)
      sensor.set_windowing((240, 240))
      sensor.skip_frames(time=2000)
      uart = UART(3, 9600)

      net = None
      labels = None
      min_confidence = 0.92

      try:
          net = tf.load("trained.tflite", load_to_fb=uos.stat('trained.tflite')[6] > (gc.mem_free() - (64*1024)))
      except Exception as e:
          raise Exception('Failed to load "trained.tflite", did you copy the .tflite and labels.txt file onto the mass-storage device? (' + str(e) + ')')

      try:
          labels = [line.rstrip('\n') for line in open("labels.txt")]
      except Exception as e:
          raise Exception('Failed to load "labels.txt", did you copy the .tflite and labels.txt file onto the mass-storage device? (' + str(e) + ')')

      colors = [
          (255,   0,   0),
          (  0, 255,   0),
          (255, 255,   0),
          (  0,   0, 255),
          (255,   0, 255),
          (  0, 255, 255),
          (255, 255, 255),
      ]

      # Per-label, per-zone vote counters (left / middle / right columns).
      r = 0
      a = 0
      count = 0
      b = 0
      t = 0
      flag = 0
      A = C = D = E = H = I = J = K = L = M = N = O = 0
      clock = time.clock()

      while(True):
          img = sensor.snapshot()
          if uart.any():
              red_led.on()
              time.sleep_ms(500)
              red_led.off()
              A = C = D = E = H = I = J = K = L = M = N = O = 0
              start = pyb.millis()
              count = 0
              m = uart.read(1).decode()
              if m != 'o':
                  a = int(m)

              while count <= 6:
                  detected = True
                  img = sensor.snapshot()
                  if pyb.millis() - start > 5000:
                      uart.write("d")
                      count = 7
                  for i, detection_list in enumerate(net.detect(img, thresholds=[(math.ceil(min_confidence * 255), 255)])):
                      if i == 0:
                          continue  # skip the background class
                      if len(detection_list) == 0:
                          continue  # no detections for this class

                      print("********** %s **********" % labels[i])
                      for d in detection_list:
                          [x, y, w, h] = d.rect()
                          center_x = math.floor(x + (w / 2))
                          center_y = math.floor(y + (h / 2))
                          print('x %d\ty %d' % (center_x, center_y))
                      # Uses the last detection's center for the zone vote.
                      if center_y > 100:
                          if labels[i] == "旺仔":
                              if center_x <= 60:
                                  A = A + 1
                              elif center_x > 60 and center_x <= 165:
                                  C = C + 1
                              else:
                                  D = D + 1
                          if labels[i] == "王老吉":
                              if center_x <= 60:
                                  E = E + 1
                              elif center_x > 60 and center_x <= 165:
                                  H = H + 1
                              else:
                                  I = I + 1
                          if labels[i] == "雪花":
                              if center_x <= 60:
                                  J = J + 1
                              elif center_x > 60 and center_x <= 165:
                                  K = K + 1
                              else:
                                  L = L + 1
                          if labels[i] == "AD":
                              if center_x <= 60:
                                  M = M + 1
                              elif center_x > 60 and center_x <= 165:
                                  N = N + 1
                              else:
                                  O = O + 1
                          count = count + 1
                  #if detected:
                  #    flag=1
                  #if flag==1:
                  #    uart.write("d")
                  #    flag=0

              counts = [A, C, D, E, H, I, J, K, L, M, N, O]  # renamed from "list" to avoid shadowing the builtin
              Z = max(counts)
              if Z <= 3 and Z > 0:
                  uart.write("d")
                  A = C = D = E = H = I = J = K = L = M = N = O = 0
              elif Z > 3:
                  if A == Z:
                      uart.write("A")
                  if C == Z:
                      uart.write("C")
                  if D == Z:
                      uart.write("D")
                  if E == Z:
                      uart.write("E")
                  if H == Z:
                      uart.write("H")
                  if I == Z:
                      uart.write("I")
                  if J == Z:
                      uart.write("J")
                  if K == Z:
                      uart.write("K")
                  if L == Z:
                      uart.write("L")
                  if M == Z:
                      uart.write("M")
                  if N == Z:
                      uart.write("N")
                  if O == Z:
                      uart.write("O")
                  A = C = D = E = H = I = J = K = L = M = N = O = 0
      Posted in OpenMV Cam
    • How do I output "null" when the given object is not detected? Also, how can OpenMV run two while loops at the same time with multithreading?
      # Edge Impulse - OpenMV Object Detection Example
      
      import sensor, image, time, os, tf, math, uos, gc
      
      sensor.reset()                         # Reset and initialize the sensor.
      sensor.set_pixformat(sensor.RGB565)    # Set pixel format to RGB565 (or GRAYSCALE)
      sensor.set_framesize(sensor.QVGA)      # Set frame size to QVGA (320x240)
      sensor.set_windowing((240, 240))       # Set 240x240 window.
      sensor.skip_frames(time=2000)          # Let the camera adjust.
      
      net = None
      labels = None
      min_confidence = 0.5
      
      try:
          # load the model, alloc the model file on the heap if we have at least 64K free after loading
          net = tf.load("trained.tflite", load_to_fb=uos.stat('trained.tflite')[6] > (gc.mem_free() - (64*1024)))
      except Exception as e:
          raise Exception('Failed to load "trained.tflite", did you copy the .tflite and labels.txt file onto the mass-storage device? (' + str(e) + ')')
      
      try:
          labels = [line.rstrip('\n') for line in open("labels.txt")]
      except Exception as e:
          raise Exception('Failed to load "labels.txt", did you copy the .tflite and labels.txt file onto the mass-storage device? (' + str(e) + ')')
      
      colors = [ # Add more colors if you are detecting more than 7 types of classes at once.
          (255,   0,   0),
          (  0, 255,   0),
          (255, 255,   0),
          (  0,   0, 255),
          (255,   0, 255),
          (  0, 255, 255),
          (255, 255, 255),
      ]
      
      clock = time.clock()
      while(True):
          clock.tick()
      
          img = sensor.snapshot()
      
          # detect() returns all objects found in the image (already split out per class).
          # We skip class index 0, the background, and then draw circles at the
          # centers of our objects.
      
          for i, detection_list in enumerate(net.detect(img, thresholds=[(math.ceil(min_confidence * 255), 255)])):
              if (i == 0): continue # background class
              if (len(detection_list) == 0): continue # no detections for this class?
      
              print("********** %s **********" % labels[i])
              for d in detection_list:
                  [x, y, w, h] = d.rect()
                  center_x = math.floor(x + (w / 2))
                  center_y = math.floor(y + (h / 2))
                  print('x %d\ty %d' % (center_x, center_y))
                  img.draw_circle((center_x, center_y, 12), color=colors[i], thickness=2)
      
          print(clock.fps(), "fps", end="\n\n")
      
      Posted in OpenMV Cam
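      On the "null" part of the question above: as far as I know, the stock OpenMV firmware does not expose threading, so two while loops are usually interleaved inside one loop rather than run in parallel. For the missing-object case, you can collect what was detected and fall back to "null" when nothing was. A minimal sketch (the helper name `summarize` is mine, not an OpenMV API; it takes the same nested-list shape that `net.detect()` returns, with index 0 as background):

      ```python
      def summarize(detection_lists, labels):
          """Return detected label names, or the string "null" if nothing was found.
          detection_lists mirrors net.detect() output: one list per class,
          with class index 0 (background) skipped."""
          found = []
          for i, dets in enumerate(detection_lists):
              if i == 0 or not dets:
                  continue
              found.extend([labels[i]] * len(dets))
          return found if found else "null"

      # In the camera loop (not run here):
      # result = summarize(net.detect(img, thresholds=[(128, 255)]), labels)
      # print(result)  # "null" when no object passes the threshold
      ```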
    • RE: Can OpenMV pick out the different one among three objects?

      @kidswong999 It's spot-the-difference: place three unknown objects at left, middle, and right, and find the one that differs.

      Posted in OpenMV Cam
    • RE: Can OpenMV pick out the different one among three objects?

      @kidswong999 But doesn't a neural network need the objects enrolled in advance? The three objects here are unknown.

      Posted in OpenMV Cam
    • Can OpenMV pick out the different one among three objects?

      If it's possible, how should it be implemented? 😘

      Posted in OpenMV Cam
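      Since the three objects are unknown, a neural network is not required: one classical approach is to compare simple color statistics of three fixed ROIs and pick the one farthest from the other two. The sketch below is an illustration under that assumption; the feature choice (mean LAB values per ROI) and the ROIs themselves would need tuning.

      ```python
      def odd_one_out(features):
          """features: three equal-length tuples (e.g. mean L, A, B per ROI).
          Returns the index whose summed distance to the other two is largest."""
          def dist(p, q):
              return sum((pi - qi) ** 2 for pi, qi in zip(p, q)) ** 0.5
          scores = [dist(features[0], features[1]) + dist(features[0], features[2]),
                    dist(features[1], features[0]) + dist(features[1], features[2]),
                    dist(features[2], features[0]) + dist(features[2], features[1])]
          return scores.index(max(scores))

      # On the camera, features could come from img.get_statistics(roi=(x, y, w, h))
      # for each of the left/middle/right regions, e.g.
      # f = (stats.l_mean(), stats.a_mean(), stats.b_mean())
      ```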
    • How can one script use multiple neural networks?

      For example, distinguishing traffic signs and blocks at the same time, but then labels.txt would clash.

      Posted in OpenMV Cam
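      One way around the labels.txt clash, sketched under the assumption that both models fit in RAM: rename each Edge Impulse export before copying it to the camera, so each model keeps its own model file and label file. All file names below (signs.tflite, blocks.tflite, signs_labels.txt, blocks_labels.txt) are hypothetical.

      ```python
      def load_labels(path):
          """Read one label per line, the format the Edge Impulse export uses."""
          return [line.rstrip('\n') for line in open(path)]

      # On the camera (MicroPython, not run here):
      # import tf, uos, gc
      # signs_net  = tf.load("signs.tflite",
      #                      load_to_fb=uos.stat("signs.tflite")[6] > (gc.mem_free() - (64*1024)))
      # blocks_net = tf.load("blocks.tflite",
      #                      load_to_fb=uos.stat("blocks.tflite")[6] > (gc.mem_free() - (64*1024)))
      # signs_labels  = load_labels("signs_labels.txt")
      # blocks_labels = load_labels("blocks_labels.txt")
      # Each frame can then be passed through signs_net.detect(img) and
      # blocks_net.detect(img) in turn, pairing each result with its own labels.
      ```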
    • Edge Impulse keeps exporting the previous library. What should I do?

      For example, I previously recognized signs and this time it's blocks, but after redoing everything on Edge Impulse and testing on the OpenMV, it still runs the old sign recognition.

      Posted in OpenMV Cam
    • RE: OpenMV occasionally sends question marks. Is it a hardware problem, or an unstable baud rate?

      (image: 0_1680100391758_1680100293312.jpg)

      Posted in OpenMV Cam
    • RE: OpenMV occasionally sends question marks. Is it a hardware problem, or an unstable baud rate?

      Most of the time it's fine, but sometimes letters fail to send and sometimes question marks come out, especially when communicating with the industrial PC.

      Posted in OpenMV Cam
    • RE: OpenMV occasionally sends question marks. Is it a hardware problem, or an unstable baud rate?

      @kidswong999
      # Single Color Code Tracking Example
      #
      # This example shows off single color code tracking using the OpenMV Cam.
      #
      # A color code is a blob composed of two or more colors. The example below will
      # only track colored objects which have both the colors below in them.

      import sensor, image, time, math
      from pyb import UART

      # Color Tracking Thresholds (L Min, L Max, A Min, A Max, B Min, B Max)
      # The below thresholds track in general red/green things. You may wish to tune them...
      # red
      thresholds1 = [(30, 100, 15, 127, 15, 127),
                     (30, 100, -64, -8, -32, 32)]
      # green
      thresholds2 = [(59, 37, -67, -21, 22, 49),
                     (59, 37, -127, -21, 10, 127)]
      # blue
      thresholds3 = [(39, 60, -18, -3, -49, -11),
                     (0, 100, -18, -3, -49, -11)]

      sensor.reset()
      sensor.set_pixformat(sensor.RGB565)
      sensor.set_framesize(sensor.QVGA)
      sensor.skip_frames(time=2000)
      sensor.set_auto_gain(False)      # must be turned off for color tracking
      sensor.set_auto_whitebal(False)  # must be turned off for color tracking
      clock = time.clock()
      uart = UART(3, 115200)

      # Only blobs with more pixels than "pixels_threshold" and more area than
      # "area_threshold" are returned by "find_blobs" below. Change them if you
      # change the camera resolution. "merge=True" must be set to merge
      # overlapping color blobs for color codes.
      a = 0
      d = 0
      b = 1
      t = 0

      while(True):
          img = sensor.snapshot().lens_corr(strength=1.8, zoom=1.0)
          if t == 0 and uart.any():
              m = uart.read(1).decode()
              if m != 'o':
                  a = int(m)
              clock.tick()

              for blob in img.find_blobs(thresholds1, pixels_threshold=10, area_threshold=10, merge=True):
                  if blob.code() == 3:  # r/g code == (1 << 1) | (1 << 0)
                      img.draw_rectangle(blob.rect())
                      img.draw_cross(blob.cx(), blob.cy())
                      # Note - the blob rotation is unique to 0-180 only.
                      img.draw_keypoints([(blob.cx(), blob.cy(), int(math.degrees(blob.rotation())))], size=20)
                      b = 1
                      print('lse')

              for blob in img.find_blobs(thresholds2, pixels_threshold=10, area_threshold=10, merge=True):
                  if blob.code() == 3:
                      img.draw_rectangle(blob.rect())
                      img.draw_cross(blob.cx(), blob.cy())
                      img.draw_keypoints([(blob.cx(), blob.cy(), int(math.degrees(blob.rotation())))], size=20)
                      b = 3
                      print('hse')

              for blob in img.find_blobs(thresholds3, pixels_threshold=10, area_threshold=10, merge=True):
                  if blob.code() == 3:
                      img.draw_rectangle(blob.rect())
                      img.draw_cross(blob.cx(), blob.cy())
                      img.draw_keypoints([(blob.cx(), blob.cy(), int(math.degrees(blob.rotation())))], size=20)
                      b = 5
                  print(b)

              if a == b or a == b + 1:
                  uart.write("d")
              elif a != b and b == 3:
                  uart.write("G")
                  uart.write("G")
                  n = 3
                  d = a
                  t = 1
              elif a != b and b == 5:
                  uart.write("B")
                  d = a
                  t = 1
              else:
                  uart.write("d")
              print(t)
              print(a)
              print(b)
              print(d)

          if t == 1 and uart.any():
              p = 0
              n = uart.read(1).decode()
              if n != 'o':
                  r = int(n)
              clock.tick()

              for blob in img.find_blobs(thresholds1, pixels_threshold=10, area_threshold=10, merge=True):
                  if blob.code() == 3:
                      img.draw_rectangle(blob.rect())
                      img.draw_cross(blob.cx(), blob.cy())
                      img.draw_keypoints([(blob.cx(), blob.cy(), int(math.degrees(blob.rotation())))], size=20)
                      p = 1

              for blob in img.find_blobs(thresholds2, pixels_threshold=10, area_threshold=10, merge=True):
                  if blob.code() == 3:
                      img.draw_rectangle(blob.rect())
                      img.draw_cross(blob.cx(), blob.cy())
                      img.draw_keypoints([(blob.cx(), blob.cy(), int(math.degrees(blob.rotation())))], size=20)
                      p = 3

              for blob in img.find_blobs(thresholds3, pixels_threshold=10, area_threshold=10, merge=True):
                  if blob.code() == 3:
                      img.draw_rectangle(blob.rect())
                      img.draw_cross(blob.cx(), blob.cy())
                      img.draw_keypoints([(blob.cx(), blob.cy(), int(math.degrees(blob.rotation())))], size=20)
                      p = 5

              if p == d or p == d - 1:
                  uart.write("F")
                  t = 0
              else:
                  uart.write("d")
              print(t)
              print(r)
              print(p)
              print(d)
      
      Posted in OpenMV Cam
    • RE: OpenMV occasionally sends question marks. Is it a hardware problem, or an unstable baud rate?

      @kidswong999 There's no error message; it just occasionally sends question marks.

      Posted in OpenMV Cam
    • RE: OpenMV occasionally sends question marks. Is it a hardware problem, or an unstable baud rate?

      It seems that sometimes it fails to send at all.

      Posted in OpenMV Cam
    • OpenMV occasionally sends question marks. Is it a hardware problem, or an unstable baud rate?

      Most of the time it doesn't happen.

      Posted in OpenMV Cam
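      The thread never pins down the cause of the stray question marks, but regardless of whether it is noise or a baud mismatch, a lightweight way to make corrupted bytes detectable is to frame each message with a checksum so the receiver can drop bad frames. The framing below ('$' prefix, one checksum byte, newline terminator) is an illustration of the technique, not any OpenMV protocol.

      ```python
      def frame(payload):
          """Wrap payload bytes as b'$' + payload + 1-byte checksum + b'\n'."""
          chk = sum(payload) & 0xFF
          return b'$' + payload + bytes([chk]) + b'\n'

      def unframe(msg):
          """Return the payload if the frame and checksum are intact, else None."""
          if len(msg) < 4 or msg[:1] != b'$' or msg[-1:] != b'\n':
              return None
          payload, chk = msg[1:-2], msg[-2]
          return payload if (sum(payload) & 0xFF) == chk else None

      # Sender side: uart.write(frame(b"A")); receiver side: unframe(uart.read(...))
      # A garbled byte makes the checksum fail, so the receiver can ignore the frame.
      ```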
    • How do I clear the receive buffer after uart.any()?

      Is it uart.clear()?

      Posted in OpenMV Cam
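      On clearing the buffer: I am not certain a `clear()` method exists on `pyb.UART` in all firmware versions, but reading everything that `uart.any()` reports pending has the same effect. A minimal sketch (the helper name `drain` is mine):

      ```python
      def drain(uart):
          """Read and discard all pending bytes so the next uart.any() starts clean.
          Returns how many bytes were discarded."""
          n = 0
          while uart.any():
              data = uart.read(uart.any())
              n += len(data)
          return n

      # On the camera (not run here):
      # from pyb import UART
      # uart = UART(3, 9600)
      # drain(uart)  # e.g. right before waiting for a fresh command
      ```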