Hello, I'd like to ask: did you ever solve the problem of the IDE showing a greenish image? I'm running into the same issue now. How should I fix it? Can it be fixed in software?
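One thing worth checking before blaming the IDE: on OpenMV boards a green-tinted preview can simply mean the sensor's auto gain / auto white balance has not settled yet, or was switched off too early. A minimal sketch, assuming a standard OpenMV RGB565 setup (the 2000 ms settle time is illustrative, not a confirmed fix from this thread):

import sensor, time

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time = 2000)   # give auto gain / auto white balance time to settle
# If your application needs them off (e.g. for color tracking), disable them only
# after the settle period above, so the locked gains are already reasonable:
# sensor.set_auto_gain(False)
# sensor.set_auto_whitebal(False)

clock = time.clock()
while True:
    clock.tick()
    img = sensor.snapshot()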
Posts by 袁苗basdglfw
"uncaught exception in Timer(2) interrupt handler": how do I solve this?
import sensor, image, time
import pyb
from pyb import Timer, UART

uart = UART(3, 115200, timeout_char = 1000)
uart.init(115200, bits = 8, parity = None, stop = 1)

uart_buf = bytearray([0x55, 0, 0, 0, 0, 0])

def tick(timer):
    print("callback")
    uart.write(uart_buf)
    print("callback done")

senddata = Timer(2, freq = 100)
senddata.callback(tick)

# Tracks a black line. Use [(128, 255)] for tracking a white line.
GRAYSCALE_THRESHOLD = [(0, 64)]
# Threshold: for a black line use GRAYSCALE_THRESHOLD = [(0, 64)];
# for a white line use GRAYSCALE_THRESHOLD = [(128, 255)].

# Each roi is (x, y, w, h). The line detection algorithm will try to find the
# centroid of the largest blob in each roi. The x position of the centroids
# will then be averaged with different weights where the most weight is assigned
# to the roi near the bottom of the image and less to the next roi and so on.
ROIS = [ # [ROI, weight]
        (0, 100, 160, 20, 0.7), # You'll need to tweak the weights for your app
        (0,  50, 160, 20, 0.3), # depending on how your robot is setup.
        (0,   0, 160, 20, 0.1)
       ]
# Each roi is a sampling region (x, y, w, h, weight): a rectangle with top-left
# corner (x, y), width w and height h; weight is that rectangle's weight.
# This example uses a QQVGA image (160x120), so the rois split the image into
# three horizontal strips. Tune the weights for your setup; the strip closest
# to the robot (the bottom one, (0, 100, 160, 20, 0.7)) gets the largest weight.

# Compute the weight divisor (we're computing this so you don't have to make weights add to 1).
weight_sum = 0 # initialize the weight sum
for r in ROIS: weight_sum += r[4] # r[4] is the roi weight.
# Sum the weights over the three rectangles above; r[4] is each rectangle's weight.

# Camera setup...
sensor.reset() # Initialize the camera sensor.
sensor.set_pixformat(sensor.GRAYSCALE) # use grayscale.
sensor.set_framesize(sensor.QQVGA) # use QQVGA for speed.
sensor.skip_frames(20) # Let new settings take effect.
sensor.set_auto_gain(False) # must be turned off for color tracking
sensor.set_auto_whitebal(False) # must be turned off for color tracking
# disable white balance

clock = time.clock() # Tracks FPS.

while(True):
    clock.tick() # Track elapsed milliseconds between snapshots().
    img = sensor.snapshot() # Take a picture and return the image.

    centroid_sum_x = 0
    centroid_sum_y = 0
    # Use blob finding to look for the line segment inside each of the three rois.
    for r in ROIS:
        blobs = img.find_blobs(GRAYSCALE_THRESHOLD, roi=r[0:4], merge=True) # r[0:4] is roi tuple.
        # Find the line in view; merge=True merges the detected regions into one.

        if blobs: # a line/blob was found in this region
            # Find the index of the blob with the most pixels.
            most_pixels = 0
            largest_blob = 0
            for i in range(len(blobs)):
                # The region may contain more than one blob (line segment);
                # keep the largest one as the target line for this region.
                if blobs[i].pixels() > most_pixels:
                    most_pixels = blobs[i].pixels()
                    # blobs[i].pixels() is this blob's pixel count; if it exceeds
                    # most_pixels, update most_pixels and largest_blob.
                    largest_blob = i

            # Draw a rect around the blob.
            img.draw_rectangle(blobs[largest_blob].rect())
            #img.draw_rectangle((0,0,30, 30))
            # Mark the largest blob in this region with a rectangle and a cross.
            img.draw_cross(blobs[largest_blob].cx(), blobs[largest_blob].cy())

            centroid_sum_x += blobs[largest_blob].cx() * r[4] # r[4] is the roi weight.
            centroid_sum_y += blobs[largest_blob].cy() * r[4]
            # Accumulate centroid_sum: each region's largest-blob center coordinate
            # multiplied by that region's weight.

    center_pos_x = int(centroid_sum_x / weight_sum) # Determine center of line.
    center_pos_y = int(centroid_sum_y / weight_sum)
    #print(center_pos_x, center_pos_x)
    # print(center_pos_x>>8, center_pos_x, center_pos_y>>8, center_pos_y) # intermediate formula
    #print(clock.fps()) # Note: Your OpenMV Cam runs about half as fast while
    # connected to your computer. The FPS should increase once disconnected.
    uart_buf = ([0x55, center_pos_x>>8, center_pos_x, center_pos_y>>8, center_pos_y, 0])
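On the Timer(2) exception itself: the callback of a pyb.Timer runs in interrupt context, where MicroPython cannot allocate on the heap, so calls like print() with formatted output can raise MemoryError, and any error raised there is reported as "uncaught exception in Timer(2) interrupt handler". Note also that the main loop above reassigns uart_buf to a plain list, while uart.write() expects a bytes-like buffer. A common workaround is to keep the callback minimal (just set a flag, or hand work off via micropython.schedule) and do the packing and uart.write() in the main loop. A minimal sketch of the flag approach, assuming the same UART(3)/Timer(2) wiring as above; this is not a confirmed fix for this exact error:

from pyb import Timer, UART

uart = UART(3, 115200, timeout_char = 1000)

send_flag = False

def tick(timer):
    # Hard IRQ context: no printing, no heap allocation, just set a flag.
    global send_flag
    send_flag = True

senddata = Timer(2, freq = 100)
senddata.callback(tick)

center_pos_x = 0
center_pos_y = 0

while True:
    # ... image processing updates center_pos_x / center_pos_y here ...
    if send_flag:
        send_flag = False
        # Pack into a bytes-like buffer; uart.write() does not accept a plain list.
        uart_buf = bytearray([0x55,
                              (center_pos_x >> 8) & 0xFF, center_pos_x & 0xFF,
                              (center_pos_y >> 8) & 0xFF, center_pos_y & 0xFF,
                              0])
        uart.write(uart_buf)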