一、What is MediaPipe?
MediaPipe is an open-source, cross-platform framework from Google that packages ready-to-use machine-learning solutions such as hand tracking, face mesh, and pose estimation; see the MediaPipe official website for the full documentation.
二、Usage steps
1. Import the libraries
The code is as follows:
import cv2
from mediapipe import solutions
import time
2. Main code
The code is as follows:
cap = cv2.VideoCapture(0)          # open the default camera
mpHands = solutions.hands
hands = mpHands.Hands()            # hand detector with default parameters
mpDraw = solutions.drawing_utils
pTime = 0                          # timestamp of the previous frame, used for FPS
while True:
    success, img = cap.read()
    if not success:                # skip empty frames instead of crashing
        continue
    imgRGB = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)   # MediaPipe expects RGB input
    results = hands.process(imgRGB)
    if results.multi_hand_landmarks:
        # Draw the 21 keypoints and their connections for every detected hand
        for handLms in results.multi_hand_landmarks:
            mpDraw.draw_landmarks(img, handLms, mpHands.HAND_CONNECTIONS)
    # Compute and display the frame rate
    cTime = time.time()
    fps = 1 / (cTime - pTime)
    pTime = cTime
    cv2.putText(img, str(int(fps)), (25, 50), cv2.FONT_HERSHEY_PLAIN, 2, (255, 0, 0), 3)
    cv2.imshow("Image", img)
    cv2.waitKey(1)
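The landmarks returned by hands.process() use normalized coordinates, so each keypoint can also be read out directly. Below is a minimal sketch that would slot into the drawing block of the loop above; the pixel conversion and the choice of index 8 (the index fingertip in MediaPipe's 21-point hand model) are my own illustration, not part of the original code:

# Sketch: reading individual keypoints from the detection results.
# Landmark x/y values are normalized to [0, 1], so multiply by the frame
# size to obtain pixel coordinates. Index 8 is the index fingertip.
if results.multi_hand_landmarks:
    h, w, _ = img.shape
    for handLms in results.multi_hand_landmarks:
        for idx, lm in enumerate(handLms.landmark):
            cx, cy = int(lm.x * w), int(lm.y * h)
            if idx == 8:
                cv2.circle(img, (cx, cy), 10, (0, 255, 0), cv2.FILLED)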
3. Recognition result
Running the script opens a window that shows the camera feed with the 21 hand keypoints, their connections, and the current FPS drawn on top.
That is all for this part. This article only gives a brief introduction to MediaPipe; the library ships many more solutions for image recognition and related tasks.
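As one example of those other solutions, the same process()/draw_landmarks() pattern works for pose estimation. The sketch below is only meant to show that pattern; the camera index, window name, and confidence threshold are arbitrary choices of mine:

# Sketch: the same pattern applied to MediaPipe Pose.
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose
mp_draw = mp.solutions.drawing_utils

cap = cv2.VideoCapture(0)
with mp_pose.Pose(min_detection_confidence=0.5) as pose:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            mp_draw.draw_landmarks(frame, results.pose_landmarks,
                                   mp_pose.POSE_CONNECTIONS)
        cv2.imshow("Pose", frame)
        if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
            break
cap.release()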
Supplement:
Below is face mesh detection based on MediaPipe.
1. Install the mediapipe library:
pip install mediapipe
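To confirm that the installation worked (and to see which version you got, since some constant names changed between releases), a quick check from Python; this assumes your build exposes __version__, which pip installs normally do:

# Quick sanity check after installation.
import mediapipe as mp
print(mp.__version__)          # installed version, e.g. 0.8.x
print(mp.solutions.face_mesh)  # should import without errors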
2. Complete code:
import cv2
import mediapipe as mp
import time

mp_drawing = mp.solutions.drawing_utils
mp_face_mesh = mp.solutions.face_mesh

drawing_spec = mp_drawing.DrawingSpec(thickness=1, circle_radius=1)
cap = cv2.VideoCapture("3.mp4")
with mp_face_mesh.FaceMesh(
        min_detection_confidence=0.5,
        min_tracking_confidence=0.5) as face_mesh:
    while cap.isOpened():
        success, image = cap.read()
        if not success:
            print("Ignoring empty camera frame.")
            # If loading a video, use 'break' instead of 'continue'.
            continue
        # Flip the image horizontally for a later selfie-view display, and
        # convert the BGR image to RGB.
        image = cv2.cvtColor(cv2.flip(image, 1), cv2.COLOR_BGR2RGB)
        # To improve performance, optionally mark the image as not writeable
        # to pass by reference.
        image.flags.writeable = False
        results = face_mesh.process(image)
        time.sleep(0.02)  # slow the video playback down slightly
        # Draw the face mesh annotations on the image.
        image.flags.writeable = True
        image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)
        if results.multi_face_landmarks:
            for face_landmarks in results.multi_face_landmarks:
                mp_drawing.draw_landmarks(
                    image=image,
                    landmark_list=face_landmarks,
                    connections=mp_face_mesh.FACE_CONNECTIONS,
                    landmark_drawing_spec=drawing_spec,
                    connection_drawing_spec=drawing_spec)
        cv2.imshow('MediaPipe FaceMesh', image)
        if cv2.waitKey(5) & 0xFF == 27:  # press Esc to quit
            break
cap.release()
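Note that newer mediapipe releases removed mp_face_mesh.FACE_CONNECTIONS and replaced it with separate connection sets. If your version raises an AttributeError on that constant, the sketch below shows the adjusted drawing call; everything else in the loop stays the same:

# Sketch for newer mediapipe releases where FACE_CONNECTIONS no longer exists;
# FACEMESH_CONTOURS draws only the outlines, FACEMESH_TESSELATION the full mesh.
mp_drawing.draw_landmarks(
    image=image,
    landmark_list=face_landmarks,
    connections=mp_face_mesh.FACEMESH_CONTOURS,  # or mp_face_mesh.FACEMESH_TESSELATION
    landmark_drawing_spec=drawing_spec,
    connection_drawing_spec=drawing_spec)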
This concludes this article on implementing hand keypoint detection (gesture recognition) with Python, MediaPipe, and OpenCV.