Reposted from: http://blog.topspeedsnail.com/archives/10797
This post uses OpenCV to detect moving objects (in English: Motion Detection). It has very wide applications, commonly in video surveillance (when a moving object appears in the camera's view, the camera automatically captures and saves images/video), traffic-flow monitoring, and so on.
I like listening to music while sitting on the toilet, and I wondered whether there was a way to start playing music automatically whenever I do. Smart toilets can take a hike (few of them can play music anyway). Then I remembered my idle Raspberry Pi: use OpenCV + Raspberry Pi to do Motion Detection, and start playing music as soon as something moving (a person) is detected. Yep, I'm quite lazy. Besides, having a camera pointed at you on the toilet is a bit awkward anyway.
There are many ways to implement Motion Detection; the method I use is Background subtraction (see OpenCV's tutorial_py_bg_subtraction).
The basic idea of Background subtraction: first take a static background image (containing none of the moving objects to detect), then compare the surveillance frame (containing the moving object) against the background image and find the regions that differ; those regions are the objects to detect. Real environments are much more complicated: we also have to account for lighting changes, shadows, reflections, and other factors that alter the background.
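The core of this comparison can be sketched without a camera at all: take the absolute difference of two grayscale frames, apply a binary threshold, and count the changed pixels. A minimal NumPy sketch (the array values here are made up for illustration, mirroring what cv2.absdiff + cv2.threshold do below):

```python
import numpy as np

# Two fake 8-bit grayscale "frames": an empty background, and the same
# scene after a bright 3x3 "object" has appeared.
background = np.zeros((10, 10), dtype=np.uint8)
current = background.copy()
current[2:5, 2:5] = 200  # the moving object

# Absolute difference (widen to int16 to avoid uint8 wrap-around),
# then a binary threshold at 25, like the OpenCV code further down.
delta = np.abs(current.astype(np.int16) - background.astype(np.int16)).astype(np.uint8)
motion_mask = (delta > 25).astype(np.uint8) * 255

moving_pixels = int((motion_mask == 255).sum())
print(moving_pixels)  # → 9, the 3x3 object
```

Anything that survives the threshold is "motion"; the real code then filters these regions by contour area so that sensor noise does not count.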
Environment for the code in this post: Ubuntu + OpenCV 3.1; with minor changes it also runs on a Raspberry Pi.
import cv2
import time

camera = cv2.VideoCapture(0)
if not camera.isOpened():
    print('Please connect a camera first')
    exit()

fps = 5  # frame rate
pre_frame = None  # always use the previous frame as the background (no need to model the environment)
play_music = False

while True:
    start = time.time()
    res, cur_frame = camera.read()
    if not res:
        break

    # throttle the loop to roughly `fps` frames per second
    end = time.time()
    seconds = end - start
    if seconds < 1.0 / fps:
        time.sleep(1.0 / fps - seconds)

    """
    cv2.imshow('img', cur_frame)
    key = cv2.waitKey(30) & 0xff
    if key == 27:
        break
    """

    gray_img = cv2.cvtColor(cur_frame, cv2.COLOR_BGR2GRAY)
    gray_img = cv2.resize(gray_img, (500, 500))
    gray_img = cv2.GaussianBlur(gray_img, (21, 21), 0)

    if pre_frame is None:
        pre_frame = gray_img
    else:
        img_delta = cv2.absdiff(pre_frame, gray_img)
        thresh = cv2.threshold(img_delta, 25, 255, cv2.THRESH_BINARY)[1]
        thresh = cv2.dilate(thresh, None, iterations=2)
        # OpenCV 3.x: findContours returns (image, contours, hierarchy)
        image, contours, hierarchy = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            if cv2.contourArea(c) < 1000:  # sensitivity threshold
                continue
            else:
                # print(cv2.contourArea(c))
                print("The current frame differs from the previous one -- something is moving!")
                play_music = True
                break
        pre_frame = gray_img

camera.release()
cv2.destroyAllWindows()
The slightest movement in front of the camera and it detects me. If I stay still in front of the camera, there is no significant change between the previous frame and the current one, so it decides nothing in the scene is moving.
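That behavior follows directly from using the previous frame as the background: anything that stops moving blends into the background after one frame. If you instead keep the first frame as a fixed background (the "static background image" described at the start), a stationary intruder keeps showing up as a difference. A sketch of that variant under simplified conditions (the function name and test arrays are mine, not from the original code):

```python
import numpy as np

def detect_against_fixed_background(frames, thresh=25):
    """Compare every frame against the FIRST frame instead of the previous one.

    For each later frame, report whether any pixel differs from the fixed
    background by more than `thresh` -- so an object that enters the scene
    and then stands perfectly still stays detected.
    """
    background = frames[0].astype(np.int16)
    results = []
    for frame in frames[1:]:
        delta = np.abs(frame.astype(np.int16) - background)
        results.append(bool((delta > thresh).any()))
    return results

# An object appears in frame 1 and then stays still in frame 2:
empty = np.zeros((10, 10), dtype=np.uint8)
scene = empty.copy()
scene[4:6, 4:6] = 200
print(detect_against_fixed_background([empty, scene, scene]))  # → [True, True]
```

Previous-frame differencing would report [True, False] on the same sequence. The trade-off: a fixed background is sensitive to gradual lighting changes, which is exactly why the post's code sidesteps the problem by comparing adjacent frames.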
Now we just need to spawn a thread to play music and we're done.
import os
import subprocess
import random
import cv2
import time

camera = cv2.VideoCapture(0)
if not camera.isOpened():
    print('Please connect a camera first')
    exit()

fps = 5  # frame rate
pre_frame = None  # always use the previous frame as the background

mp3_path = '/root/Music'
mp3_filenames = []
for mp3 in os.listdir(mp3_path):
    if mp3.endswith('.mp3'):
        mp3_filenames.append(mp3)

while True:
    start = time.time()
    res, cur_frame = camera.read()
    if not res:
        break

    end = time.time()
    seconds = end - start
    if seconds < 1.0 / fps:
        time.sleep(1.0 / fps - seconds)

    """
    cv2.imshow('img', cur_frame)
    key = cv2.waitKey(30) & 0xff
    if key == 27:
        break
    """

    gray_img = cv2.cvtColor(cur_frame, cv2.COLOR_BGR2GRAY)
    gray_img = cv2.resize(gray_img, (500, 500))
    gray_img = cv2.GaussianBlur(gray_img, (21, 21), 0)

    if pre_frame is None:
        pre_frame = gray_img
    else:
        img_delta = cv2.absdiff(pre_frame, gray_img)
        thresh = cv2.threshold(img_delta, 25, 255, cv2.THRESH_BINARY)[1]
        thresh = cv2.dilate(thresh, None, iterations=2)
        # OpenCV 3.x: findContours returns (image, contours, hierarchy)
        image, contours, hierarchy = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            if cv2.contourArea(c) < 1000:  # sensitivity threshold
                continue
            else:
                mp3_file = mp3_path + '/' + random.choice(mp3_filenames)
                print("playing", mp3_file)
                # blocks here until the track finishes playing
                p = subprocess.Popen('mplayer ' + mp3_file, stdin=None, stdout=None, shell=True)
                p.wait()
                break
        pre_frame = gray_img

camera.release()
cv2.destroyAllWindows()
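Strictly speaking, the listing above blocks on p.wait() until the track finishes, so detection pauses while music plays. To actually spawn a thread for playback, as the text suggests, you can launch the player in a daemon thread and let the loop keep running. A sketch (the helper name is mine; the command is passed as an argv list so it is not tied to mplayer and avoids shell-quoting issues with spaces in filenames):

```python
import subprocess
import sys
import threading

def play_in_background(command):
    """Run a player command in a daemon thread and return the Thread.

    `command` is an argv list, e.g. ['mplayer', '/root/Music/track.mp3'].
    The daemon flag means a still-playing track won't keep the process
    alive after the detection loop exits.
    """
    t = threading.Thread(target=subprocess.call, args=(command,), daemon=True)
    t.start()
    return t

# Demo with a harmless stand-in command instead of mplayer:
t = play_in_background([sys.executable, '-c', 'pass'])
t.join()  # in the real loop you would NOT join; just keep detecting
```

In the detection loop you would replace the Popen/p.wait() pair with a single play_in_background(['mplayer', mp3_file]) call, perhaps guarded by a flag so overlapping detections don't start several tracks at once.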
On the Raspberry Pi, the cv2.VideoCapture part can be swapped for the picamera module (assuming the Pi camera module and the picamera package are installed); the per-frame processing stays the same:

from picamera import PiCamera
from picamera.array import PiRGBArray

camera = PiCamera()
camera.resolution = (500, 500)
camera.framerate = 5
rawCapture = PiRGBArray(camera)

for f in camera.capture_continuous(rawCapture, format="bgr", use_video_port=True):
    cur_frame = f.array
    # ... same motion-detection processing as above ...
    rawCapture.truncate(0)  # clear the stream before the next frame