
OpenCV from Beginner to Mastery: Corner and Feature Point Detection and Matching Algorithms

Harris corners

  • A corner can be the intersection point of two edges;
  • A corner is a feature point whose neighborhood has two dominant directions;
  • More strictly, the local neighborhood of a corner contains boundaries of two different regions running in different directions; in other words, a corner is where multiple contour lines meet.
  • In the region around a corner pixel, the image changes strongly in both gradient direction and gradient magnitude.
  • It is a pixel at a local maximum of the first derivative (i.e. the intensity gradient);
  • It is the intersection of two or more edges;
  • It is a point where both the gradient magnitude and the rate of change of gradient direction are high;
  • At a corner the first derivative is maximal and the second derivative is zero, indicating the direction in which the object boundary changes discontinuously.
  • Shifting a window around a corner in any direction causes large changes in both the direction and magnitude of the local gradients.
  • Harris corners are rotation invariant, but not scale invariant.

Algorithm steps

- Compute the gradients in the x and y directions and build the matrix M
- Compute the eigenvalues, determinant, and trace of M
- Classify each pixel using the relationship between the eigenvalues and a threshold, as in the formulas below
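With image gradients $I_x$, $I_y$ and window weights $w(x, y)$ over a window $W$, the standard Harris definitions are:

$$
M = \sum_{(x,y)\in W} w(x,y)
\begin{bmatrix}
I_x^2 & I_x I_y \\
I_x I_y & I_y^2
\end{bmatrix},
\qquad
R = \det(M) - k\,\big(\operatorname{trace}(M)\big)^2
  = \lambda_1\lambda_2 - k\,(\lambda_1 + \lambda_2)^2
$$

A large positive $R$ indicates a corner, a strongly negative $R$ an edge (one eigenvalue dominates the other), and $|R| \approx 0$ a flat region; the example below keeps pixels whose response exceeds 1% of the maximum.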

Function API

CV_EXPORTS_W void cornerHarris( InputArray src, OutputArray dst, int blockSize,
                                int ksize, double k,
                                int borderType = BORDER_DEFAULT );      

src: input image, single-channel, 8-bit or floating-point

dst: output of the Harris detector (the response map), type CV_32FC1, same size as src

blockSize: neighborhood (window) size

ksize: aperture size of the Sobel operator used for the gradients

k: the Harris coefficient k in the response formula

borderType: pixel extrapolation method (defaults to BORDER_DEFAULT)

import cv2
import numpy as np

img = cv2.imread("./images/32.jpg")

# cornerHarris expects a single-channel float32 image
img2gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
img2gray = np.float32(img2gray)

# Per-pixel Harris response map
dst = cv2.cornerHarris(img2gray, blockSize=2, ksize=3, k=0.02)

# Dilate the response so the marked corners are easier to see
dst = cv2.dilate(dst, cv2.getStructuringElement(cv2.MORPH_RECT, ksize=(8, 8)))

# Mark pixels whose response exceeds 1% of the maximum response in red
img[dst > 0.01 * dst.max()] = [0, 0, 255]

cv2.imshow("dst", img)
cv2.waitKey(0)

Shi-Tomasi algorithm

  • Principle
  • In Harris corner detection, each window's score is computed from the determinant and trace of the matrix M: R = det(M) - k · trace(M)².
  • The stability of the Harris detector therefore depends on k, which is an empirical value that is hard to set optimally.

Shi and Tomasi observed that corner stability actually depends on the smaller eigenvalue of M, so they use the smaller eigenvalue directly as the score, which removes the need to tune k. The score formula becomes:
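$$
R = \min(\lambda_1, \lambda_2)
$$

A window is accepted as a corner when $R$ is above a threshold. In OpenCV this detector is exposed as cv2.goodFeaturesToTrack, used in the example below.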

import cv2
import numpy as np

img = cv2.imread("./images/32.jpg")

img2gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
img2gray = np.float32(img2gray)

# Shi-Tomasi corners: at most 100 corners, quality level 0.01,
# minimum distance of 10 px between accepted corners
corners = cv2.goodFeaturesToTrack(img2gray, 100, 0.01, 10)
corners = corners.astype(int)

# Draw a small circle at each detected corner
for i in corners:
    x, y = i.ravel()
    cv2.circle(img, (int(x), int(y)), radius=3, color=(255, 0, 0), thickness=3)

cv2.imshow("dst", img)
cv2.waitKey(0)

FAST algorithm

  • FAST compares each pixel with the 16 pixels on a circle of radius 3 around it; a threshold decides whether enough contiguous circle pixels are all brighter or all darker than the center, in which case the pixel is a corner (see the sketch below).
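A minimal NumPy sketch of that segment test, written only to illustrate the idea (the is_fast_corner helper and its parameters are illustrative, not OpenCV's implementation, which is heavily optimized and adds non-maximum suppression):

import numpy as np

# Illustrative segment test, not OpenCV's implementation.
# Offsets (dx, dy) of the 16 pixels on the radius-3 Bresenham circle used by FAST
CIRCLE = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
          (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

def is_fast_corner(gray, x, y, threshold=35, n=9):
    # (x, y) is a corner if at least n contiguous circle pixels are all brighter
    # than center + threshold, or all darker than center - threshold
    center = int(gray[y, x])
    ring = [int(gray[y + dy, x + dx]) for dx, dy in CIRCLE]
    for flags in ([v > center + threshold for v in ring],
                  [v < center - threshold for v in ring]):
        run = 0
        for f in flags + flags:   # doubled so runs that wrap around the circle are counted
            run = run + 1 if f else 0
            if run >= n:
                return True
    return False

# Tiny demo on a synthetic patch: a bright quadrant whose corner sits at (3, 3)
patch = np.zeros((7, 7), dtype=np.uint8)
patch[3:, 3:] = 200
print(is_fast_corner(patch, 3, 3))   # True

In practice the OpenCV detector is used directly, as in the examples that follow: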
import cv2

src = cv2.imread("./images/33.jpg")
grayImg = cv2.cvtColor(src, cv2.COLOR_BGR2GRAY)

# FAST detector with intensity threshold 35; non-maximum suppression is on by default
fast = cv2.FastFeatureDetector_create(threshold=35)
kp = fast.detect(grayImg, None)
img2 = cv2.drawKeypoints(src, kp, None, (0, 0, 255), cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)

print('Threshold: ', fast.getThreshold())
print('nonmaxSuppression: ', fast.getNonmaxSuppression())
print('neighborhood: ', fast.getType())
print('Total Keypoints with nonmaxSuppression: ', len(kp))

cv2.imshow('fast_true', img2)

cv2.waitKey()
  • fast_true: result with non-maximum suppression enabled
  • fast_false: result with non-maximum suppression disabled (many more, clustered keypoints)
import cv2

src = cv2.imread("./images/33.jpg")
grayImg = cv2.cvtColor(src, cv2.COLOR_BGR2GRAY)

fast = cv2.FastFeatureDetector_create(threshold=35)

# First pass: non-maximum suppression enabled (the default)
kp = fast.detect(grayImg, None)
img2 = cv2.drawKeypoints(src, kp, None, (0, 0, 255), cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)

print('Threshold: ', fast.getThreshold())
print('nonmaxSuppression: ', fast.getNonmaxSuppression())
print('neighborhood: ', fast.getType())
print('Total Keypoints with nonmaxSuppression: ', len(kp))

cv2.imshow('fast_true', img2)

# Second pass: non-maximum suppression disabled
fast.setNonmaxSuppression(False)
kp = fast.detect(grayImg, None)

print('Total Keypoints without nonmaxSuppression: ', len(kp))

img3 = cv2.drawKeypoints(src, kp, None, (0, 0, 255), cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)

cv2.imshow('fast_false', img3)

cv2.waitKey()

ORB algorithm

  • Commonly used; slightly weaker than SIFT, but fast
  • Its detector is built on FAST, so keypoint detection is very fast
import cv2

img = cv2.imread("./images/33.jpg")

img2gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

orb = cv2.ORB_create()
# Detect keypoints first, then compute their binary descriptors
kp = orb.detect(img2gray, None)
kp, des = orb.compute(img2gray, kp)

img2 = cv2.drawKeypoints(img, kp, None, color=(0, 0, 255), flags=0)

cv2.imshow("dst", img2)
cv2.waitKey(0)
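One detail that matters for the matching examples later: ORB descriptors are binary, 256 bits packed into 32 bytes per keypoint, which is why they are compared with Hamming distance rather than Euclidean distance. A quick check, using the same image as above:

import cv2

# Same image as the ORB example above
img2gray = cv2.cvtColor(cv2.imread("./images/33.jpg"), cv2.COLOR_BGR2GRAY)
kp, des = cv2.ORB_create().detectAndCompute(img2gray, None)
print(des.shape, des.dtype)   # (number_of_keypoints, 32), uint8: 32 bytes = 256 bits each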

SIFT algorithm

  • Adds scale-space matching, solving the scale-invariance problem
  • Historically a patented algorithm, so use older OpenCV builds with caution; the patent has since expired and SIFT ships in the main module as cv2.SIFT_create, as used below
import cv2

src = cv2.imread("./images/33.jpg")
grayImg = cv2.cvtColor(src, cv2.COLOR_BGR2GRAY)

sift = cv2.SIFT_create()

# Detect scale-space keypoints (use detectAndCompute to also get the 128-dim descriptors)
kp = sift.detect(grayImg, None)

img = cv2.drawKeypoints(src, kp, None, color=(0, 0, 255))

cv2.imshow("img", img)
cv2.waitKey()

SURF algorithm

  • Faster than SIFT
  • Still under patent protection: it needs an opencv-contrib build with the non-free modules enabled (recompile with OPENCV_ENABLE_NONFREE), or an OpenCV version that still ships it
import cv2

src = cv2.imread("./images/33.jpg")
grayImg = cv2.cvtColor(src, cv2.COLOR_BGR2GRAY)

# Requires an opencv-contrib build with the non-free modules enabled
surf = cv2.xfeatures2d.SURF_create()

kp = surf.detect(grayImg, None)

img = cv2.drawKeypoints(src, kp, None, color=(0, 0, 255))

cv2.imshow("img", img)
cv2.waitKey()

Matching algorithms
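The first example matches ORB keypoints between two images with a brute-force matcher: Hamming distance suits ORB's binary descriptors, crossCheck=True keeps only matches that are mutual nearest neighbours, and the ten matches with the smallest distance are drawn.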

import cv2

img1 = cv2.imread("./images/34.jpg")
grayImage1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
img2 = cv2.imread("./images/33.jpg")
grayImage2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)

orb = cv2.ORB_create()

kp1, des1 = orb.detectAndCompute(grayImage1, None)
kp2, des2 = orb.detectAndCompute(grayImage2, None)

# Brute-force matcher: Hamming distance for ORB's binary descriptors,
# crossCheck keeps only mutual nearest-neighbour matches
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = bf.match(des1, des2)
matches = sorted(matches, key=lambda x: x.distance)

# Draw the 10 matches with the smallest descriptor distance
img3 = cv2.drawMatches(img1, kp1, img2, kp2, matches[:10], None, flags=2)

cv2.imshow("img", img3)
cv2.waitKey(0)

![Matching result](https://img-blog.csdnimg.cn/img_convert/1f7e2c34445cc8de306e9c4db05ac3d7.png)

  • KD-tree (FLANN-based) matching with SIFT descriptors and Lowe's ratio test
import cv2

img1 = cv2.imread('./images/34.jpg')
grayImg1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
img2 = cv2.imread('./images/33.jpg')
grayImg2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)

# SIFT lives in the main module in OpenCV >= 4.4 (older contrib builds: cv2.xfeatures2d.SIFT_create)
detector = cv2.SIFT_create()

kp1, des1 = detector.detectAndCompute(grayImg1, None)
kp2, des2 = detector.detectAndCompute(grayImg2, None)

# FLANN-based matcher (KD-tree index for float descriptors); ask for the 2 nearest neighbours per query
matcher = cv2.DescriptorMatcher_create(cv2.DescriptorMatcher_FLANNBASED)
matches = matcher.knnMatch(des1, des2, k=2)
matchesMask = [[0, 0] for i in range(len(matches))]

# Lowe's ratio test: keep a match only if it is clearly better than the second-best candidate
for i, (m, n) in enumerate(matches):
    if m.distance < 0.7 * n.distance:
        matchesMask[i] = [1, 0]

draw_params = dict(matchColor=(0, 255, 0), singlePointColor=(255, 0, 0), matchesMask=matchesMask, flags=0)
img3 = cv2.drawMatchesKnn(img1, kp1, img2, kp2, matches, None, **draw_params)

cv2.imshow("img", img3)
cv2.waitKey(0)
