Learning Objectives
In this chapter, you will learn about the Meanshift and Camshift algorithms used to track objects in video.
Meanshift
The intuition behind Meanshift is simple. Suppose you have a set of points (it could be a pixel distribution, such as a histogram back projection). You are given a small window (possibly a circle) and you have to move that window to the area of maximum pixel density (or the maximum number of points), as shown in the image below:
The initial window is shown as the blue circle named "C1". Its original center is marked by the blue rectangle named "C1_o". However, if you compute the centroid of the points inside that window, you get the point "C1_r" (marked by the small blue circle), which is the real centroid of the window. Clearly the two do not coincide. So move the window so that the circle of the new window is centered on the previous centroid, then find the centroid again. Most likely it still will not match, so move the window again and keep iterating until the center of the window and its centroid fall on the same location (or within some small desired error). What you finally obtain is a window covering the region of maximum pixel distribution, shown as the green circle named "C2". As you can see in the image, it contains the maximum number of points. The whole process is demonstrated on a static image below:
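The same iteration can be written out directly for a plain 2D point set. The following is a minimal NumPy sketch (not part of the original example; the point cloud, window radius, and starting center are made up for illustration) that repeatedly re-centers a circular window on the centroid of the points inside it until the shift becomes negligible:

import numpy as np

rng = np.random.default_rng(0)
# a made-up cloud of 2D points concentrated around (7, 3)
points = rng.normal(loc=(7.0, 3.0), scale=1.0, size=(500, 2))

center = np.array([3.0, 0.0])   # initial window center, off to one side of the cloud
radius = 5.0                    # window radius
for _ in range(100):
    # points that currently fall inside the circular window
    inside = points[np.linalg.norm(points - center, axis=1) < radius]
    if len(inside) == 0:
        break
    new_center = inside.mean(axis=0)          # centroid of those points
    if np.linalg.norm(new_center - center) < 1e-3:
        break                                 # window center and centroid coincide
    center = new_center                       # shift the window onto the centroid
print(center)  # ends up near (7, 3), the densest region

This is exactly what Meanshift does on a back-projection image, where each pixel's value plays the role of a point's weight.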
So we normally pass the histogram back-projected image and the initial target location. When the object moves, that movement is obviously reflected in the histogram back-projected image. The Meanshift algorithm then simply moves the window to the new location of maximum density.
Meanshift in OpenCV
To use Meanshift in OpenCV, we first need to set up the target and find its histogram, so that the target can be back-projected onto each frame to compute the mean shift. We also need to provide the initial location of the window. For the histogram, only the hue (Hue) channel is considered here. In addition, to avoid false values due to low light, low-light values are discarded with the cv2.inRange() function. Three frames from the video used are shown below:
import cv2
import numpy as np

video_file = 'slow_traffic_small.mp4'
cap = cv2.VideoCapture(video_file)

# take first frame of the video
ret, frame = cap.read()

# setup initial location of window
x, y, w, h = 300, 200, 100, 50  # simply hardcoded the values
track_window = (x, y, w, h)

# set up the ROI for tracking
roi = frame[y:y+h, x:x+w]
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv_roi, np.array((0., 60., 32.)), np.array((180., 255., 255.)))
roi_hist = cv2.calcHist([hsv_roi], [0], mask, [180], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)

# setup the termination criteria: either 10 iterations or move by at least 1 pt
term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

while True:
    ret, frame = cap.read()
    if ret:
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        dst = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
        # apply meanshift to get the new location
        ret, track_window = cv2.meanShift(dst, track_window, term_crit)
        # draw it on the image
        x, y, w, h = track_window
        img2 = cv2.rectangle(frame, (x, y), (x+w, y+h), 255, 2)
        cv2.imshow('img2', img2)
        cv2.waitKey(0)  # waits for a key press on every frame; use cv2.waitKey(30) for continuous playback
    else:
        break

cap.release()
cv2.destroyAllWindows()
Camshift
Did you closely watch the last result? There is a problem: the window always has the same size, whether the car is very far from or very close to the camera. That is not good; the window needs to adapt to the size and rotation of the target. The solution is called CAMshift (Continuously Adaptive Meanshift), published by Gary Bradski in 1998 (see Additional Resources below). It first applies Meanshift. Once Meanshift converges, it updates the size of the window as s = 2 × sqrt(M00/256) and also computes the orientation of the best-fitting ellipse. It then applies Meanshift again with the newly scaled search window and the previous window location, and the process continues until the required accuracy is reached.
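As a rough sketch of that size update (the helper function below is illustrative only, not OpenCV API; it assumes dst is the 8-bit back projection and track_window is the (x, y, w, h) tuple used above), the zeroth image moment of the back projection inside the current window gives the new window scale:

import cv2
import numpy as np

def camshift_window_scale(dst, track_window):
    # dst: 8-bit histogram back projection; track_window: (x, y, w, h)
    x, y, w, h = track_window
    m = cv2.moments(dst[y:y+h, x:x+w])
    # the zeroth moment M00 is the total probability mass inside the window;
    # CAMshift rescales the window side as s = 2 * sqrt(M00 / 256)
    return 2.0 * np.sqrt(m['m00'] / 256.0)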
Camshift in OpenCV
It is almost the same as Meanshift, but it returns a rotated rectangle (which is our result) and the box parameters (which are passed as the search window in the next iteration).
import cv2
import numpy as np

video_file = 'slow_traffic_small.mp4'
cap = cv2.VideoCapture(video_file)

# take first frame of the video
ret, frame = cap.read()

# setup initial location of window
x, y, w, h = 300, 200, 100, 50  # simply hardcoded the values
track_window = (x, y, w, h)

# set up the ROI for tracking
roi = frame[y:y+h, x:x+w]
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv_roi, np.array((0., 60., 32.)), np.array((180., 255., 255.)))
roi_hist = cv2.calcHist([hsv_roi], [0], mask, [180], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)

# setup the termination criteria: either 10 iterations or move by at least 1 pt
term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

while True:
    ret, frame = cap.read()
    if ret:
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        dst = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
        # apply camshift to get the new location
        ret, track_window = cv2.CamShift(dst, track_window, term_crit)
        # draw the rotated rectangle on the image
        pts = cv2.boxPoints(ret)  # the four corner points of the rotated box
        pts = np.intp(pts)  # np.int0 in older code; that alias was removed in NumPy 2.0
        img2 = cv2.polylines(frame, [pts], True, 255, 2)
        cv2.imshow('img2', img2)
        cv2.waitKey(0)  # waits for a key press on every frame; use cv2.waitKey(30) for continuous playback
    else:
        break

cap.release()
cv2.destroyAllWindows()
The results on three frames are shown below:
Additional Resources
- French Wikipedia page on Camshift
- Bradski, G.R., "Real time face and object tracking as a component of a perceptual user interface," Proceedings of the Fourth IEEE Workshop on Applications of Computer Vision (WACV '98), pp. 214-219, 19-21 Oct 1998