One piece of code does grayscale face detection, and another does mask detection. How can I combine the two into one?
-
Both pieces of code run successfully on their own. I want the combined program to draw a rectangle around the face and also print the mask-wearing probability in the terminal. Any help would be greatly appreciated.
-
@fggx

# Edge Impulse - OpenMV Image Classification Example
import sensor, image, time, os, tf, uos, gc

sensor.reset()                         # Reset and initialize the sensor.
sensor.set_pixformat(sensor.RGB565)    # Set pixel format to RGB565 (or GRAYSCALE)
sensor.set_framesize(sensor.QVGA)      # Set frame size to QVGA (320x240)
sensor.set_windowing((240, 240))       # Set 240x240 window.
sensor.skip_frames(time=2000)          # Let the camera adjust.

net = None
labels = None

try:
    # Load the model; allocate the model file on the heap if we have at
    # least 64K free after loading.
    net = tf.load("trained.tflite", load_to_fb=uos.stat('trained.tflite')[6] > (gc.mem_free() - (64*1024)))
except Exception as e:
    print(e)
    raise Exception('Failed to load "trained.tflite", did you copy the .tflite and labels.txt file onto the mass-storage device? (' + str(e) + ')')

try:
    labels = [line.rstrip('\n') for line in open("labels.txt")]
except Exception as e:
    raise Exception('Failed to load "labels.txt", did you copy the .tflite and labels.txt file onto the mass-storage device? (' + str(e) + ')')

clock = time.clock()

while(True):
    clock.tick()
    img = sensor.snapshot()
    # Default settings just do one detection... change them to search the image.
    for obj in net.classify(img, min_scale=1.0, scale_mul=0.8, x_overlap=0.5, y_overlap=0.5):
        print("**********\nPredictions at [x=%d,y=%d,w=%d,h=%d]" % obj.rect())
        img.draw_rectangle(obj.rect())
        # This combines the labels and confidence values into a list of tuples.
        predictions_list = list(zip(labels, obj.output()))
        for i in range(len(predictions_list)):
            print("%s = %f" % (predictions_list[i][0], predictions_list[i][1]))
    print(clock.fps(), "fps")
That one is the mask detection code.
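The loop above pairs each class name from labels.txt with its confidence via zip(). A minimal pure-Python sketch of that pairing, with made-up labels and scores standing in for labels.txt and obj.output() on the camera:

```python
# Hypothetical label list and model output, standing in for labels.txt
# and obj.output() on the OpenMV board.
labels = ["mask", "no_mask"]
output = [0.91, 0.09]

# The same pairing the classify loop performs.
predictions_list = list(zip(labels, output))
for name, score in predictions_list:
    print("%s = %f" % (name, score))

# Picking the most likely class from the pairs.
best = max(predictions_list, key=lambda p: p[1])
print("top prediction:", best[0])
```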
# Import the relevant libraries.
import sensor, image, time

# Initialize the camera.
sensor.reset()
# Set the camera contrast to 1.
sensor.set_contrast(1)
# Set the camera gain ceiling to 16.
sensor.set_gainceiling(16)
# Set the captured frame size.
sensor.set_framesize(sensor.HQVGA)
# Set the capture format: grayscale.
sensor.set_pixformat(sensor.GRAYSCALE)

# Load the Haar Cascade model.
# 25 stages are used by default; fewer stages run faster but lower the detection rate.
face_cascade = image.HaarCascade("frontalface", stages=25)
print(face_cascade)

# Create a clock to measure the camera's frames per second (FPS).
clock = time.clock()

while(True):
    # Update the FPS clock.
    clock.tick()
    # Take a snapshot and return the image as img.
    img = sensor.snapshot()
    # Find face objects.
    # threshold and scale_factor control detection speed vs. accuracy.
    objects = img.find_features(face_cascade, threshold=0.75, scale_factor=1.25)
    # Draw a rectangle around each face.
    for r in objects:
        img.draw_rectangle(r)
    # Print the FPS over the serial port.
    # print(clock.fps())
That one is the grayscale face detection code.
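To see why scale_factor trades speed for accuracy: the detector sweeps its base Haar window over the image repeatedly, growing the window by scale_factor each pass. A hypothetical pure-Python sketch of how many scale passes that produces (assuming a 24x24 base window, which is typical for frontal-face cascades):

```python
# Hypothetical sketch: starting from the cascade's base window, each pass
# multiplies the window size by scale_factor until it outgrows the frame.
def window_scales(frame_h, base=24, scale_factor=1.25):
    sizes = []
    w = float(base)
    while w <= frame_h:
        sizes.append(round(w))
        w *= scale_factor
    return sizes

# HQVGA frames are 240x160. A larger scale_factor means fewer passes
# (faster but coarser); a smaller one means more passes (slower, more thorough).
print(window_scales(160, scale_factor=1.25))
print(window_scales(160, scale_factor=1.5))
```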
-
This is the mask detection code:

# Edge Impulse - OpenMV Image Classification Example
import sensor, image, time, os, tf, uos, gc

sensor.reset()                         # Reset and initialize the sensor.
sensor.set_pixformat(sensor.RGB565)    # Set pixel format to RGB565 (or GRAYSCALE)
sensor.set_framesize(sensor.QVGA)      # Set frame size to QVGA (320x240)
sensor.set_windowing((240, 240))       # Set 240x240 window.
sensor.skip_frames(time=2000)          # Let the camera adjust.

net = None
labels = None

try:
    # Load the model; allocate the model file on the heap if we have at
    # least 64K free after loading.
    net = tf.load("trained.tflite", load_to_fb=uos.stat('trained.tflite')[6] > (gc.mem_free() - (64*1024)))
except Exception as e:
    print(e)
    raise Exception('Failed to load "trained.tflite", did you copy the .tflite and labels.txt file onto the mass-storage device? (' + str(e) + ')')

try:
    labels = [line.rstrip('\n') for line in open("labels.txt")]
except Exception as e:
    raise Exception('Failed to load "labels.txt", did you copy the .tflite and labels.txt file onto the mass-storage device? (' + str(e) + ')')

clock = time.clock()

while(True):
    clock.tick()
    img = sensor.snapshot()
    # Default settings just do one detection... change them to search the image.
    for obj in net.classify(img, min_scale=1.0, scale_mul=0.8, x_overlap=0.5, y_overlap=0.5):
        print("**********\nPredictions at [x=%d,y=%d,w=%d,h=%d]" % obj.rect())
        img.draw_rectangle(obj.rect())
        # This combines the labels and confidence values into a list of tuples.
        predictions_list = list(zip(labels, obj.output()))
        for i in range(len(predictions_list)):
            print("%s = %f" % (predictions_list[i][0], predictions_list[i][1]))
    print(clock.fps(), "fps")

This is the face detection code:

import sensor, image, time

sensor.reset()
sensor.set_contrast(1)
sensor.set_gainceiling(16)
sensor.set_framesize(sensor.HQVGA)
sensor.set_pixformat(sensor.GRAYSCALE)

face_cascade = image.HaarCascade("frontalface", stages=25)
print(face_cascade)

clock = time.clock()

while(True):
    clock.tick()
    img = sensor.snapshot()
    objects = img.find_features(face_cascade, threshold=0.75, scale_factor=1.25)
    for r in objects:
        img.draw_rectangle(r)
    print(clock.fps())
-
You could simply add a "no person" class to the neural network's classes.
If you want to combine the two programs, you can refer to https://singtown.com/learn/50029/
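The overall shape of the combination is: detect face rectangles with the Haar cascade, then run the classifier on each rectangle's region. A hypothetical pure-Python sketch with stub functions standing in for the OpenMV calls (find_features and net.classify):

```python
# Stub standing in for img.find_features(face_cascade, ...):
# pretend the cascade found one face rectangle (x, y, w, h).
def find_faces(frame):
    return [(60, 40, 80, 80)]

# Stub standing in for net.classify(img, roi=rect):
# pretend the network returns [p_mask, p_no_mask] for that region.
def classify_roi(frame, rect):
    return [0.85, 0.15]

# The combined pipeline: one classification per detected face.
def annotate(frame):
    results = []
    for r in find_faces(frame):
        p_mask = classify_roi(frame, r)[0]
        results.append((r, "mask yes" if p_mask > 0.5 else "mask no"))
    return results

print(annotate(frame=None))
```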
-
Face detection + mask classification:
This uses Edge Impulse and a Haar cascade. You need to copy the two generated files (trained.tflite and labels.txt) onto the board's USB drive; then you can copy and run the code directly. The code below runs successfully: it draws a rectangle around the face, shows "Mask" or "No Mask", and also displays the result on the LCD. Feel free to use it as a reference, and leave a comment if you have questions.
import sensor, image, time, os, tf, uos, gc, lcd

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.set_windowing((240, 240))
sensor.skip_frames(time=2000)

face_cascade = image.HaarCascade("frontalface", stages=25)
net = tf.load("trained.tflite", load_to_fb=uos.stat('trained.tflite')[6] > (gc.mem_free() - (64*1024)))
labels = [line.rstrip('\n') for line in open("labels.txt")]

clock = time.clock()
lcd.init()
lcd.clear()

while True:
    clock.tick()
    img = sensor.snapshot()
    # Find faces with the Haar cascade.
    objects = img.find_features(face_cascade, threshold=0.75, scale_factor=1.25)
    for r in objects:
        # Classify just the face region via roi, so the full frame stays
        # intact for the LCD (img.crop() would modify the frame in place).
        mask = net.classify(img, roi=r)[0].output()[0]
        if mask > 0.1:
            img.draw_rectangle(r, color=(0, 255, 0))
            img.draw_string(r[0], r[1], "Mask", color=(0, 255, 0))
        else:
            img.draw_rectangle(r, color=(255, 0, 0))
            img.draw_string(r[0], r[1], "No Mask", color=(255, 0, 0))
    lcd.display(img)
    print(clock.fps())
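One caveat with the code above: output()[0] is the confidence of whichever class comes first in labels.txt, so the meaning of the 0.1 threshold depends on that file's order. A small sketch (with hypothetical labels and scores) of looking the score up by class name instead of hard-coding the index:

```python
labels = ["mask", "no_mask"]   # hypothetical: order comes from labels.txt
output = [0.07, 0.93]          # hypothetical network output for one face

# Safer than output[0]: find the "mask" score by label name.
mask_score = output[labels.index("mask")]
print("mask probability:", mask_score)
```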