What does the face_tracking example in the tutorial actually do? The example just keeps printing FPS values. How can I make it find a moving face more reliably?
-
import sensor, time, image
# Reset sensor
sensor.reset()
sensor.set_contrast(3)
sensor.set_gainceiling(16)
sensor.set_framesize(sensor.VGA)
sensor.set_windowing((320, 240))
sensor.set_pixformat(sensor.GRAYSCALE)

# Skip a few frames to let the image stabilize
sensor.skip_frames(time = 2000)
# Load the Haar cascade
# By default this uses all stages; fewer stages is faster but less accurate.
face_cascade = image.HaarCascade("frontalface", stages=25)
print(face_cascade)

# First set of keypoints
kpts1 = None
# Find a face!
while (kpts1 == None):
    img = sensor.snapshot()
    img.draw_string(0, 0, "Looking for a face...")
    # Find faces
    objects = img.find_features(face_cascade, threshold=0.5, scale=1.25)
    if objects:
        # Expand the ROI by 31 pixels in every direction
        face = (objects[0][0]-31, objects[0][1]-31, objects[0][2]+31*2, objects[0][3]+31*2)
        # Extract keypoints using the detected face area as the ROI
        kpts1 = img.find_keypoints(threshold=10, scale_factor=1.1, max_keypoints=100, roi=face)
        # Draw a rectangle around the first face
        img.draw_rectangle(objects[0])

# Draw the keypoints
print(kpts1)
img.draw_keypoints(kpts1, size=24)
img = sensor.snapshot()
time.sleep(2000)

# FPS clock
clock = time.clock()
while (True):
    clock.tick()
    img = sensor.snapshot()
    # Extract keypoints from the whole frame
    kpts2 = img.find_keypoints(threshold=10, scale_factor=1.1, max_keypoints=100, normalized=True)

    if (kpts2):
        # Match the first set of keypoints against the second set
        c = image.match_descriptor(kpts1, kpts2, threshold=85)
        match = c[6]  # c[6] contains the number of matches.
        if (match > 5):
            img.draw_rectangle(c[2:6])
            img.draw_cross(c[0], c[1], size=10)
            print(kpts2, "matched:%d dt:%d" % (match, c[7]))

    # Draw FPS
    img.draw_string(0, 0, "FPS:%.2f" % (clock.fps()))
    print(clock.fps())
Does this example just keep printing the FPS value once it has found a face?
-
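For reference, here is a minimal sketch of one way to make re-acquisition of a moving face more robust: fall back to the Haar detector and re-learn the reference keypoints whenever descriptor matching fails for several consecutive frames. Only the sensor/image calls already used in the example above are assumed; the learn_face() helper, the lost_frames counter, and the 10-frame limit are illustrative assumptions, not part of the tutorial.

import sensor, time, image

sensor.reset()
sensor.set_contrast(3)
sensor.set_gainceiling(16)
sensor.set_framesize(sensor.VGA)
sensor.set_windowing((320, 240))
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.skip_frames(time = 2000)

face_cascade = image.HaarCascade("frontalface", stages=25)

def learn_face():
    # Block until a face is detected, then return keypoints learned from it.
    while True:
        img = sensor.snapshot()
        objects = img.find_features(face_cascade, threshold=0.5, scale=1.25)
        if objects:
            face = (objects[0][0]-31, objects[0][1]-31,
                    objects[0][2]+31*2, objects[0][3]+31*2)
            kpts = img.find_keypoints(threshold=10, scale_factor=1.1,
                                      max_keypoints=100, roi=face)
            if kpts:
                return kpts

kpts1 = learn_face()
lost_frames = 0   # consecutive frames with too few matches (assumed counter)
clock = time.clock()

while True:
    clock.tick()
    img = sensor.snapshot()
    kpts2 = img.find_keypoints(threshold=10, scale_factor=1.1,
                               max_keypoints=100, normalized=True)
    match = 0
    if kpts2:
        c = image.match_descriptor(kpts1, kpts2, threshold=85)
        match = c[6]   # number of matched keypoints
        if match > 5:
            img.draw_rectangle(c[2:6])
            img.draw_cross(c[0], c[1], size=10)

    if match > 5:
        lost_frames = 0
    else:
        lost_frames += 1
        # Assumed policy: after ~10 bad frames in a row, re-run the Haar
        # detector and re-learn the reference keypoints instead of keeping
        # a stale set from the old pose.
        if lost_frames > 10:
            kpts1 = learn_face()
            lost_frames = 0

    print(clock.fps())

The trade-off is a brief pause while the detector re-runs; lowering the stages argument makes that re-detection faster at the cost of accuracy.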