Table of Contents
- 1. Sparse optical flow tracking: the Lucas-Kanade method
  - Principle
  - Code example
- 2. Dense optical flow tracking: the Farneback algorithm
1. Sparse optical flow tracking: the Lucas-Kanade method
Principle
Assumptions
- Brightness constancy. Pixels belonging to a target keep the same appearance as they move from frame to frame; for grayscale images this means a pixel's intensity does not change while it is being tracked.
- Temporal persistence, or "small motion". The motion in the image changes slowly over time; in practice the time step between frames is small relative to the motion in the scene, so the displacement of a target between consecutive frames is small.
- Spatial coherence. Neighboring points on the same surface in a scene move in a similar way, and their projections onto the image plane remain close together.
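Taken together, these assumptions give the standard Lucas-Kanade derivation, sketched here for reference. Brightness constancy plus small motion yields the optical flow constraint via a first-order Taylor expansion, and spatial coherence turns it into an overdetermined system solved by least squares over a window:

```latex
% Brightness constancy + small motion (first-order Taylor expansion):
I(x+\Delta x,\; y+\Delta y,\; t+\Delta t) \approx I(x,y,t)
\quad\Rightarrow\quad
I_x u + I_y v + I_t = 0,
\qquad u = \tfrac{\Delta x}{\Delta t},\quad v = \tfrac{\Delta y}{\Delta t}

% Spatial coherence: all n pixels p_1,\dots,p_n in the window
% share the same (u, v), giving the least-squares system A\,v = b:
\begin{bmatrix} I_x(p_1) & I_y(p_1) \\ \vdots & \vdots \\ I_x(p_n) & I_y(p_n) \end{bmatrix}
\begin{bmatrix} u \\ v \end{bmatrix}
= -\begin{bmatrix} I_t(p_1) \\ \vdots \\ I_t(p_n) \end{bmatrix},
\qquad
\begin{bmatrix} u \\ v \end{bmatrix} = (A^{\mathsf T} A)^{-1} A^{\mathsf T} b
```

The `minEigThreshold` parameter of the OpenCV API below refers to the smaller eigenvalue of this $2\times 2$ matrix $A^{\mathsf T} A$: when it is too small the system is ill-conditioned (e.g. on a textureless patch) and the feature is filtered out.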
void calcOpticalFlowPyrLK(InputArray prevImg,
                          InputArray nextImg,
                          InputArray prevPts,
                          InputOutputArray nextPts,
                          OutputArray status,
                          OutputArray err,
                          Size winSize = Size(21, 21),
                          int maxLevel = 3,
                          TermCriteria criteria = TermCriteria(TermCriteria::COUNT + TermCriteria::EPS, 30, 0.01),
                          int flags = 0,
                          double minEigThreshold = 1e-4)
Parameter details:
- First parameter: the first 8-bit input image, or a pyramid constructed from it.
- Second parameter: the second input image or pyramid, of the same size and type as prevImg.
- Third parameter: the input vector of 2D points for which the flow is to be computed; point coordinates must be single-precision floating point.
- Fourth parameter: the output vector of 2D points containing the calculated new positions of the input features in the second image.
- Fifth parameter: the output status vector; each element is set to 1 if the flow for the corresponding feature was found, 0 otherwise.
- Sixth parameter: the output vector of errors; each element is set to the error measure for the corresponding feature.
- Seventh parameter: the size of the search window at each pyramid level.
- Eighth parameter: the 0-based maximal pyramid level number.
- Ninth parameter: the termination criteria of the iterative search algorithm (maximum iteration count and/or minimum displacement epsilon).
- Tenth parameter: operation flags, such as OPTFLOW_USE_INITIAL_FLOW and OPTFLOW_LK_GET_MIN_EIGENVALS.
- Eleventh parameter: the minimum eigenvalue of the 2x2 normal matrix of the optical flow equations; features whose minimum eigenvalue falls below this threshold are filtered out.
程式示例
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

using namespace cv;
using namespace std;

// Point selected by the user; set by the mouse callback.
// (The declarations, mouse callback, and parameter values below were elided
// in the original listing and are reconstructed here so the example is
// self-contained; the exact values are illustrative.)
Point2f currentPoint;
bool pointTrackingFlag = false;

// Mouse callback: a left click marks a new point to track
static void onMouse(int event, int x, int y, int, void*)
{
    if(event == EVENT_LBUTTONDOWN)
    {
        currentPoint = Point2f((float)x, (float)y);
        pointTrackingFlag = true;
    }
}

int main(int argc, char* argv[])
{
    // Open the default camera
    VideoCapture cap(0);
    if(!cap.isOpened())
    {
        cerr << "Unable to open the camera" << endl;
        return -1;
    }

    const string windowName = "Lucas-Kanade Tracker";
    namedWindow(windowName, WINDOW_AUTOSIZE);
    setMouseCallback(windowName, onMouse, 0);

    // Tracking state: trackingPoints[0] holds the previous frame's points,
    // trackingPoints[1] the current frame's points
    vector<Point2f> trackingPoints[2];
    Mat frame, image, prevGrayImage, curGrayImage;

    const float scalingFactor = 0.75f;
    const size_t maxNumPoints = 75;
    const Size windowSize(25, 25);
    const TermCriteria terminationCriteria(TermCriteria::COUNT + TermCriteria::EPS, 10, 0.02);

    // Iterate until the user hits the Esc key
    while(true)
    {
        // Capture the current frame
        cap >> frame;
        // Check if the frame is empty
        if(frame.empty())
            break;
        // Resize the frame
        resize(frame, frame, Size(), scalingFactor, scalingFactor, INTER_AREA);
        // Copy the input frame
        frame.copyTo(image);
        // Convert the image to grayscale
        cvtColor(image, curGrayImage, COLOR_BGR2GRAY);
        // Check if there are points to track
        if(!trackingPoints[0].empty())
        {
            // Status vector to indicate whether the flow for the corresponding features has been found
            vector<uchar> statusVector;
            // Error vector to indicate the error for the corresponding feature
            vector<float> errorVector;
            // If the previous image is empty, initialize it with the current one
            if(prevGrayImage.empty())
            {
                curGrayImage.copyTo(prevGrayImage);
            }
            // Calculate the optical flow using the pyramidal Lucas-Kanade algorithm
            calcOpticalFlowPyrLK(prevGrayImage, curGrayImage, trackingPoints[0], trackingPoints[1],
                                 statusVector, errorVector, windowSize, 3, terminationCriteria, 0, 0.001);
            size_t count = 0;
            // Minimum distance between any two tracking points
            int minDist = 7;
            for(size_t i = 0; i < trackingPoints[1].size(); i++)
            {
                if(pointTrackingFlag)
                {
                    // If the new point is within 'minDist' distance from an existing point, it will not be tracked
                    if(norm(currentPoint - trackingPoints[1][i]) <= minDist)
                    {
                        pointTrackingFlag = false;
                        continue;
                    }
                }
                // Check if the status vector is good
                if(!statusVector[i])
                    continue;
                trackingPoints[1][count++] = trackingPoints[1][i];
                // Draw a filled circle for each of the tracking points
                int radius = 8;
                int thickness = 2;
                int lineType = 8;
                circle(image, trackingPoints[1][i], radius, Scalar(0, 255, 0), thickness, lineType);
            }
            trackingPoints[1].resize(count);
        }
        // Refine the location of the newly clicked feature point
        if(pointTrackingFlag && trackingPoints[1].size() < maxNumPoints)
        {
            vector<Point2f> tempPoints;
            tempPoints.push_back(currentPoint);
            // Refine the corner location to subpixel accuracy.
            // Here 'pixel' refers to an image patch of size 'windowSize', not the actual image pixel
            cornerSubPix(curGrayImage, tempPoints, windowSize, Size(-1, -1), terminationCriteria);
            trackingPoints[1].push_back(tempPoints[0]);
            pointTrackingFlag = false;
        }
        // Display the image with the tracking points
        imshow(windowName, image);
        // Check if the user pressed the Esc key
        char ch = (char)waitKey(10);
        if(ch == 27)
            break;
        // Swap the 'points' vectors to update 'previous' to 'current'
        std::swap(trackingPoints[1], trackingPoints[0]);
        // Swap the images to update previous image to current image
        cv::swap(prevGrayImage, curGrayImage);
    }
    return 0;
}