A prompt box appears after the program runs for a while. Is this a problem with the code?
Most likely the sensor board is not seated firmly. Check whether the screws are tight; if that does not fix it, you can send the board back for inspection and repair.
OpenMV4 H7 cannot run the official face detection example
import sensor
import time
import image

# Reset sensor
sensor.reset()

# Sensor settings
sensor.set_contrast(3)
sensor.set_gainceiling(16)

# HQVGA and GRAYSCALE are the best for face tracking.
sensor.set_framesize(sensor.HQVGA)
sensor.set_pixformat(sensor.GRAYSCALE)

# Load Haar Cascade
# By default this will use all stages; lower stages is faster but less accurate.
face_cascade = image.HaarCascade("/rom/haarcascade_frontalface.cascade", stages=25)
print(face_cascade)

# FPS clock
clock = time.clock()

while True:
    clock.tick()

    # Capture snapshot
    img = sensor.snapshot()

    # Find objects.
    # Note: Lower scale factor scales-down the image more and detects smaller objects.
    # Higher threshold results in a higher detection rate, with more false positives.
    objects = img.find_features(face_cascade, threshold=0.75, scale_factor=1.25)

    # Draw objects
    for r in objects:
        img.draw_rectangle(r)

    # Print FPS.
    # Note: Actual FPS is higher, streaming the FB makes it slower.
    print(clock.fps())
This line:
face_cascade = image.HaarCascade("/rom/haarcascade_frontalface.cascade", stages=25)
raises OSError: [Errno 19] ENODEV.
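A hedged note on the error: errno 19 is ENODEV ("no such device"), which here most likely means the "/rom/..." path does not exist on the installed firmware's filesystem. On firmware without a /rom filesystem, the built-in cascade is loaded by name instead, as a later example in this thread does with image.HaarCascade("frontalface", stages=25). A quick desktop-Python check of what errno 19 means:

```python
import errno

# OSError [Errno 19] is ENODEV ("No such device"): the path being opened
# refers to a device/filesystem that does not exist on this system.
print(errno.ENODEV)         # 19
print(errno.errorcode[19])  # 'ENODEV'
```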
The serial assistant gets no data back
Data comes back when using SingTown's serial assistant, but other serial assistants receive nothing.
import sensor, image, time, pyb

led = pyb.LED(3)
led.on()

sensor.reset()                      # Initialize the camera sensor.
sensor.set_pixformat(sensor.RGB565) # use RGB565.
sensor.set_framesize(sensor.QQVGA)  # use QQVGA for speed.
sensor.skip_frames(10)              # Let new settings take effect.
sensor.set_auto_whitebal(False)     # turn this off.
clock = time.clock()                # Tracks FPS.

while(True):
    clock.tick()            # Track elapsed milliseconds between snapshots().
    img = sensor.snapshot() # Take a picture and return the image.
    a = img.get_pixel(75, 60)
    print("A", a, "B")
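One likely cause, offered as an assumption: print() output goes over the OpenMV's USB virtual COM port, which is what the IDE (and SingTown's tool) listens on. A serial assistant wired to the board's UART pins sees nothing unless the data is written with pyb.UART, as the UART example later in this thread does. A small desktop-Python sketch of the framed message that uart.write() would carry; frame_pixel is a hypothetical helper:

```python
# Build the "A ... B" framed message for the pixel tuple, as bytes,
# ready to be written to a pyb.UART instead of print().
def frame_pixel(rgb):
    return ("A {} B\n".format(rgb)).encode()

print(frame_pixel((120, 80, 60)))  # b'A (120, 80, 60) B\n'
```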
Why does `from pyb import CAN` report an error?
import sensor, image, time
from pyb import CAN

sensor.reset()                      # Reset and initialize the sensor.
sensor.set_pixformat(sensor.RGB565) # Set pixel format to RGB565 (or GRAYSCALE)
sensor.set_framesize(sensor.QVGA)   # Set frame size to QVGA (320x240)
sensor.skip_frames(time=2000)       # Wait for settings take effect.
clock = time.clock()                # Create a clock object to track the FPS.

while(True):
    clock.tick()            # Update the FPS clock.
    img = sensor.snapshot() # Take a picture and return the image.
    print(clock.fps())      # Note: OpenMV Cam runs about half as fast when connected
                            # to the IDE. The FPS should increase once disconnected.
Why does OpenMV print the frame rate on its own? And why does it sometimes not run the program when running standalone?
import sensor, image, time, math
import car
from pid import PID
from pyb import UART
from pyb import LED

sensor.reset()                      # Initialize the camera sensor.
sensor.set_pixformat(sensor.RGB565) # use RGB565.
sensor.set_framesize(sensor.QVGA)   # use QVGA.
sensor.skip_frames(10)              # Let new settings take effect.
sensor.set_auto_whitebal(True)      # leave auto white balance on.
clock = time.clock()                # Tracks FPS.

uart = UART(3, 9600, timeout_char=1000)
uart.init(9600, bits=8, parity=None, stop=1, timeout_char=1000) # UART init settings

while(True):
    clock.tick() # Track elapsed milliseconds between snapshots().
    print("haha")
Why does OpenMV recognize a QR code as a face??
import sensor, image, time

sensor.reset()                         # Initialize the camera sensor.
sensor.set_pixformat(sensor.GRAYSCALE) # Haar cascades and find_qrcodes both work on grayscale.
sensor.set_framesize(sensor.QQVGA)     # use QQVGA for speed.
sensor.skip_frames(10)                 # Let new settings take effect.
sensor.set_auto_whitebal(False)        # turn this off.
clock = time.clock()                   # Tracks FPS.

face_cascade = image.HaarCascade("frontalface", stages=25)

while(True):
    clock.tick()            # Track elapsed milliseconds between snapshots().
    img = sensor.snapshot() # Take a picture and return the image.
    # Reuse the same frame; the keyword is scale_factor, not scale.
    objects = img.find_features(face_cascade, threshold=0.75, scale_factor=1.35)
    if objects:
        print("Face detected")
    for code in img.find_qrcodes():
        message = code.payload()
        print(message)
Please take a look.
How do I binarize only a single image (one frame)?
That is because the IDE did not receive any data; the code itself captures the frame normally.
If you want to display the result, you can do this:
import sensor, image, time

sensor.reset()                      # Reset and initialize the sensor.
sensor.set_pixformat(sensor.RGB565) # Set pixel format to RGB565 (or GRAYSCALE)
sensor.set_framesize(sensor.QVGA)   # Set frame size to QVGA (320x240)
sensor.skip_frames(time=2000)       # Wait for settings take effect.

# Grab one frame and binarize it; don't snapshot again afterwards,
# or the binarized frame buffer would be overwritten.
img = sensor.snapshot()
threshold = (0, 100, -128, 127, 20, 50)
img.binary([threshold])

while True:
    time.sleep(1)
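The (0, 100, -128, 127, 20, 50) tuple is a LAB color threshold: img.binary() turns a pixel white when its L, A, and B values each fall inside the corresponding (min, max) pair, and black otherwise. A plain-Python sketch of that membership test (illustrative only, not the OpenMV implementation):

```python
def in_lab_threshold(l, a, b, thr):
    # thr is (l_min, l_max, a_min, a_max, b_min, b_max), as passed to img.binary().
    l_min, l_max, a_min, a_max, b_min, b_max = thr
    return l_min <= l <= l_max and a_min <= a <= a_max and b_min <= b <= b_max

thr = (0, 100, -128, 127, 20, 50)
print(in_lab_threshold(50, 0, 30, thr))  # True: B=30 lies inside (20, 50)
print(in_lab_threshold(50, 0, 60, thr))  # False: B=60 lies outside (20, 50)
```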
After running the code the white LED keeps flashing, the display freezes, and then the OpenMV can no longer be recognized.
# This code runs on OpenMV4 H7 or OpenMV4 H7 Plus
import sensor, image, time, os, tf

sensor.reset()                         # Reset and initialize the sensor.
sensor.set_pixformat(sensor.GRAYSCALE) # Set pixel format to GRAYSCALE
sensor.set_framesize(sensor.QVGA)      # Set frame size to QVGA (320x240)
sensor.set_windowing((240, 240))       # Set 240x240 window.
sensor.skip_frames(time=2000)          # Let the camera adjust.

# Load the model once; reloading it from disk on every frame is slow
# and can exhaust memory.
model = tf.load("trained.tflite")

clock = time.clock()

while(True):
    clock.tick()
    img = sensor.snapshot().binary([(0, 64)])
    for obj in tf.classify(model, img, min_scale=1.0, scale_mul=0.5, x_overlap=0.0, y_overlap=0.0):
        output = obj.output()
        number = output.index(max(output))
        print(number)
    print(clock.fps(), "fps")
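The classification result above is read by taking the argmax of the output vector: output.index(max(output)) returns the index of the highest class score. A plain-Python illustration with a hypothetical score vector standing in for obj.output():

```python
# Hypothetical class scores, as obj.output() would return them.
output = [0.05, 0.10, 0.70, 0.15]

# Argmax: index of the highest score is the predicted class/digit.
number = output.index(max(output))
print(number)  # 2
```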
How do I get the binary data stream of an image?
I want to turn the image into a base64 stream. After the sensor grabs a frame, can I base64-encode the image directly?
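The raw frame buffer is uncompressed, so the usual pattern (stated as an assumption, not verified on every firmware) is to JPEG-compress the frame first with img.compress(quality=...) and then base64-encode the compressed bytes; on OpenMV's MicroPython the encoder is ubinascii.b2a_base64. The encoding step itself, shown in desktop Python with placeholder bytes standing in for the JPEG buffer:

```python
import base64

# Placeholder bytes standing in for the JPEG buffer that
# img.compress(quality=90) would produce on the OpenMV.
jpeg_bytes = b"\xff\xd8\xff\xe0...fake...\xff\xd9"

b64 = base64.b64encode(jpeg_bytes)
print(b64.decode("ascii"))

# The base64 stream decodes back to the original bytes.
assert base64.b64decode(b64) == jpeg_bytes
```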