
OpenCV 3.2 Stereo Camera Calibration and SGBM Algorithm Verification

The goal of stereo calibration is to obtain the rotation matrix R and translation vector T between the two cameras, together with each camera's rectification rotation matrix Rl/Rr, projection matrix Pl/Pr, and the reprojection matrix Q (disparity-to-depth mapping matrix). After stereo matching (BM, SGBM, GC, etc.) produces a disparity map, real-world coordinates can then be computed from it using Q.

As with the previous post, I later summarized the theory behind this part as well; see my reading notes. I have not yet fully understood the stereo matching part, though, and will fill that in later.

—— Jacob楊幫幫 11/2/2018

(Figure: depth map; the larger the gray value, the farther the point)

1. Calibrating Each Camera Individually

See my other post: opencv3.3 單目攝像頭的标定與矯正 - 簡書. Running stereo calibration after each camera's intrinsic matrix and distortion coefficients have been obtained individually gives much better results. It is also possible to calibrate the stereo pair directly, which is not covered here.

//intrinsic and distortion matrices from monocular calibration
cv::Mat cameraMatrixL = (cv::Mat_<double>(3, 3) <<
                         570.853,0,163.936,
                         0,565.62,142.756,
                         0,0,1);

cv::Mat distCoeffL = (cv::Mat_<double>(5, 1) <<-0.1464597668354846, -6.154543533838482,
                      -0.002589887217588616, 0.005985159261180101, 58.40123386205326);


cv::Mat cameraMatrixR = (cv::Mat_<double>(3, 3) <<
                         568.373,0,158.748,
                         0,562.243,114.268,
                         0,0,1);

cv::Mat distCoeffR = (cv::Mat_<double>(5, 1) << -0.2883413485650786, -1.10075802161073,
                      -0.00209556234492967, 0.007351217947355803, 6.544712063275942);
           

2. Performing Stereo Calibration

There is an official sample for this, but perhaps I was using it incorrectly, as the results were unsatisfactory, and I also wanted to implement it myself. The code below mainly draws on the official sample and various blog posts.

  • Find the chessboard corners and compute the corresponding object points
        isFindL = cv::findChessboardCorners(imageL, boardSize, imageCornersL);
        isFindR = cv::findChessboardCorners(imageR, boardSize, imageCornersR);
        //if all corners were found in both images, this image pair is usable
        if (isFindL == true && isFindR == true)
        {
            /*
            Size(5,5): half of the search window size
            Size(-1,-1): half of the dead zone size
            TermCriteria(CV_TERMCRIT_EPS | CV_TERMCRIT_ITER, 20, 0.1): iteration termination criteria
            */
            cv::cornerSubPix(imageL, imageCornersL, cv::Size(5, 5), cv::Size(-1, -1), 
            cv::TermCriteria(CV_TERMCRIT_EPS | CV_TERMCRIT_ITER, 20, 0.1));
            cv::drawChessboardCorners(imageL, boardSize, imageCornersL, isFindL);
          //  cv::imshow("chessboardL", imageL);
            imagePointL.push_back(imageCornersL);

            cv::cornerSubPix(imageR, imageCornersR, cv::Size(5, 5), cv::Size(-1, -1),
            cv::TermCriteria(CV_TERMCRIT_EPS | CV_TERMCRIT_ITER, 20, 0.1));
            cv::drawChessboardCorners(imageR, boardSize, imageCornersR, isFindR);
          //  cv::imshow("chessboardR", imageR);
            imagePointR.push_back(imageCornersR);

            goodFrameCount++;
            std::cout << "The image" << goodFrameCount << " is good" << std::endl;
        }
        else
        {
            std::cout << "The image is bad please try again" << std::endl;
            std::cout <<"left image " << isFindL << std::endl;
            std::cout <<"right image " << isFindR << std::endl;
        }

    }
    calRealPoint(objRealPoint, boardWidth, boardHeight, goodFrameCount, squareSize);
    std::cout << "calculate success" << std::endl;
           

This is similar to monocular camera calibration, so I will not go into detail.

  • Call stereoCalibrate to obtain the R and T matrices
double rms = cv::stereoCalibrate(objRealPoint, imagePointL, imagePointR,
                                    cameraMatrixL, distCoeffL,
                                    cameraMatrixR, distCoeffR,
                                    cv::Size(imageWidth, imageHeight),
                                    R, T, E, F,cv::CALIB_USE_INTRINSIC_GUESS,
                                    cv::TermCriteria(cv::TermCriteria::COUNT
                                    + cv::TermCriteria::EPS, 100, 1e-5));
std::cout << "Stereo Calibration done with RMS error = " << rms << std::endl;
           

It is worth looking at the function prototype:

double stereoCalibrate( InputArrayOfArrays objectPoints,
                        InputArrayOfArrays imagePoints1, 
                        InputArrayOfArrays imagePoints2,
                        InputOutputArray cameraMatrix1, InputOutputArray distCoeffs1,
                        InputOutputArray cameraMatrix2, InputOutputArray distCoeffs2,
                        Size imageSize, InputOutputArray R,
                        InputOutputArray T, OutputArray E, OutputArray F,
                        OutputArray perViewErrors, int flags = CALIB_FIX_INTRINSIC,
                        TermCriteria criteria = TermCriteria(TermCriteria::COUNT+
                        TermCriteria::EPS, 30, 1e-6) );
           

The parameters E and F are the essential matrix and the fundamental matrix; they are not used in this example, and empty matrices (cv::Mat()) can be passed in. criteria is the termination criterion, for which the default is usually fine. The flags parameter is the important one; the source code documents the options as follows:

§ CV_CALIB_FIX_INTRINSIC: the intrinsic and distortion matrices are kept fixed, so only the R, T, E, and F matrices are estimated.

§ CV_CALIB_USE_INTRINSIC_GUESS: initial values for the intrinsic and distortion matrices are supplied by the user and refined during the iterations.

§ CV_CALIB_FIX_PRINCIPAL_POINT: the principal points are kept fixed during the optimization.

§ CV_CALIB_FIX_FOCAL_LENGTH: the focal lengths are not changed during the iterations.

§ CV_CALIB_FIX_ASPECT_RATIO: keep the ratio of fx to fy fixed.

§ CV_CALIB_SAME_FOCAL_LENGTH: enforce the same focal length for both cameras.

§ CV_CALIB_ZERO_TANGENT_DIST: set each camera's tangential distortion coefficients to zero and keep them fixed.

§ CV_CALIB_FIX_K1,...,CV_CALIB_FIX_K6: the corresponding radial distortion coefficient is not changed during the optimization. If CV_CALIB_USE_INTRINSIC_GUESS is set, the coefficients from the supplied distCoeffs matrix are used; otherwise they are set to zero.

§ CV_CALIB_RATIONAL_MODEL: enable the coefficients k4, k5 and k6. If this flag is not set, the function computes and returns only 5 distortion coefficients.

Since monocular calibration has already been done, CV_CALIB_USE_INTRINSIC_GUESS is the right choice here.

  • Stereo rectification of the calibrated cameras
/*
    Stereo rectification requires the two images to be coplanar and row-aligned, which makes stereo matching much more reliable.
    The way to make the two images coplanar is to project both camera images onto a common imaging plane, which requires a rotation matrix R taking each image plane to that common plane.
    stereoRectify computes exactly these rotations from the image planes to the common imaging plane: Rl and Rr, the rectification rotations that row-align the left and right camera planes.
    After rotating the left image by Rl and the right image by Rr, the two images are coplanar and row-aligned.
    Pl and Pr are the projection matrices of the two cameras, which map 3D coordinates to 2D image coordinates: P*[X Y Z 1]' = [x y w]
    Q is the reprojection matrix, which maps a point on the 2D image plane to a 3D point: Q*[x y d 1] = [X Y Z W], where d is the disparity between the left and right images.
    */
    cv::stereoRectify(cameraMatrixL, distCoeffL, cameraMatrixR, distCoeffR, imageSize, R, T, Rl, Rr, Pl, Pr, Q,
    cv::CALIB_ZERO_DISPARITY, -1, imageSize, &validROIL, &validROIR);

    /*
    Use the R and P computed by stereoRectify to build the remapping tables mapx, mapy.
    These tables are then passed to remap() to rectify the images so that they become coplanar and row-aligned.
    The newCameraMatrix parameter of initUndistortRectifyMap() is the rectified camera matrix. In OpenCV the rectified camera matrix Mrect is returned inside the projection matrix P,
    so we pass the projection matrix P here and the function reads the rectified camera matrix out of it.
    */
    cv::Size newSize(static_cast<int>(imageL.cols*1.2), static_cast<int>(imageL.rows*1.2));

    cv::initUndistortRectifyMap(cameraMatrixL, distCoeffL, Rl, Pl, newSize,
                                CV_32FC1, mapLx, mapLy);
    cv::initUndistortRectifyMap(cameraMatrixR, distCoeffR, Rr, Pr, newSize,
                                CV_32FC1, mapRx, mapRy);
    //rectify using the remapping tables
    cv::Mat rectifyImageL, rectifyImageR;
    cv::remap(imageL, rectifyImageL, mapLx, mapLy, cv::INTER_LINEAR);
    cv::remap(imageR, rectifyImageR, mapRx, mapRy, cv::INTER_LINEAR);
           

The prototype of stereoRectify is:

void stereoRectify( InputArray cameraMatrix1, InputArray distCoeffs1,
                    InputArray cameraMatrix2, InputArray distCoeffs2,
                    Size imageSize, InputArray R, InputArray T,
                    OutputArray R1, OutputArray R2,
                    OutputArray P1, OutputArray P2,
                    OutputArray Q, int flags = CALIB_ZERO_DISPARITY,
                    double alpha = -1, Size newImageSize = Size(),
                    CV_OUT Rect* validPixROI1 = 0, CV_OUT Rect* validPixROI2 = 0 );
           

R1 and R2 are the rectification rotation matrices of the left and right cameras, and P1 and P2 are their projection matrices.

alpha is a scaling parameter. If it is negative or absent, the function performs its default scaling. If it is 0, only the valid part of the rectified images is shown (no black borders); if it is 1, the whole original image is kept. Any value between 0 and 1 is allowed.

flags can be zero or CV_CALIB_ZERO_DISPARITY. If CV_CALIB_ZERO_DISPARITY is set, the function makes the principal points of the two rectified images have the same pixel coordinates; otherwise it shifts the images horizontally or vertically to maximize the useful image area.

newImageSize is the size of the rectified images, usually the same as the original image size.

validPixROI1 and validPixROI2 are optional output rectangles of valid pixels in the rectified images; they are not used here.

The other functions were already used in monocular calibration and are not described again. drawMatches can be used to display the rectified stereo pair:

//display the rectified images
    for(int i=20; i<rectifyImageL.rows; i+=20)
    {
        cv::line(rectifyImageL,cv::Point(0,i),cv::Point(rectifyImageL.cols,i),
                 cv::Scalar(255,255,255));
        cv::line(rectifyImageR,cv::Point(0,i),cv::Point(rectifyImageL.cols,i),
                 cv::Scalar(255,255,255));
    }
    cv::Mat imageMatches;
    cv::drawMatches(rectifyImageL, std::vector<cv::KeyPoint>(),  // 1st image
        rectifyImageR, std::vector<cv::KeyPoint>(),              // 2nd image
        std::vector<cv::DMatch>(),
        imageMatches,                       // the image produced
        cv::Scalar(255, 255, 255),
        cv::Scalar(255, 255, 255),
        std::vector<char>(),
        2);

    cv::imshow("imageMatches", imageMatches);
           

3. Verifying the Results with the SGBM Algorithm

SGBM is a stereo matching algorithm with moderate accuracy and speed, and is commonly used in practice.

cv::Ptr<cv::StereoSGBM> sgbm = cv::StereoSGBM::create(0,16,3);
    int sgbmWinSize = 3;
    int cn = imageL.channels();
    int numberOfDisparities = ((imageSize.width/8) + 15) & -16;

    sgbm->setPreFilterCap(63);
    sgbm->setBlockSize(sgbmWinSize);
    sgbm->setP1(8*cn*sgbmWinSize*sgbmWinSize);
    sgbm->setP2(32*cn*sgbmWinSize*sgbmWinSize);
    sgbm->setMinDisparity(0);
    sgbm->setNumDisparities(numberOfDisparities);
    sgbm->setUniquenessRatio(10);
    sgbm->setSpeckleWindowSize(50);
    sgbm->setSpeckleRange(32);
    sgbm->setDisp12MaxDiff(1);
    sgbm->setMode(cv::StereoSGBM::MODE_SGBM);

    cv::Mat disp, disp8;

    sgbm->compute(rectifyImageL, rectifyImageR, disp);
    disp.convertTo(disp8, CV_8U, 255/(numberOfDisparities*16.));
    cv::imshow("disparity8", disp8);
           

Converting disp in this way yields a disparity map that can be displayed.

cv::reprojectImageTo3D(disp, xyz, Q, true);
    xyz = xyz * 16; // xyz = [X/W Y/W Z/W]; multiply by 16 to get real-world coordinates
    cv::setMouseCallback("disparity8", onMouse, 0);

    void onMouse(int event, int x, int y,int,void*)
    {
        cv::Point origin;
        switch (event)
        {
            case cv::EVENT_LBUTTONDOWN:   //left mouse button pressed
                origin = cv::Point(x, y);
                std::cout << origin << " in world coordinate is: " << 
                             xyz.at<cv::Vec3f>(origin)<< std::endl;
                break;
        }
    }
           

With the callback in place, clicking on the image window prints the real-world coordinates of the clicked point.

4. Results

The rectified stereo pair is shown below:


The disparity map looks like this:


The real-world coordinates printed for clicked points are shown below. The two pomelos are about 60 cm away, so the result is fairly accurate.


5. Lessons Learned

  • Measure the size of the checkerboard squares accurately; it has a large effect on the reprojection matrix Q, and the accuracy of the results largely depends on it
  • Do not cut corners on stereo calibration either; use at least 15 image pairs (30 images)
  • Make good use of the official samples; the OpenCV maintainers have done a solid job here

The complete code is as follows:

/*
 *  m_stereo_match.cpp
 *  Created by 楊幫傑 on 7/30/18.
 *  Right to use this code in any way you want without warranty,
 *  support or any guarantee of it working
 *  E-mail: [email protected]
 */

#include <iostream>
#include <iomanip>
#include <vector>
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/features2d.hpp>
#include <opencv2/calib3d.hpp>

#define LEFT_CAM_PICS_FILE_PATH   "F:/QtProjects/stero/stereo4/left"  //image paths
#define RIGHT_CAM_PICS_FILE_PATH   "F:/QtProjects/stero/stereo4/right"
#define PICS_NUMBER 34 //number of calibration images

const int imageWidth = 320;                             //camera resolution
const int imageHeight = 240;
const int boardWidth = 8;                               //number of inner corners horizontally
const int boardHeight = 6;                              //number of inner corners vertically
const float squareSize = 1.33;                              //size of a checkerboard square, in cm
const cv::Size imageSize = cv::Size(imageWidth, imageHeight);
const cv::Size boardSize = cv::Size(boardWidth, boardHeight);   //total inner corners of the board

cv::Mat Rl, Rr, Pl, Pr, Q;                                  //rectification rotations R, projection matrices P, reprojection matrix Q
cv::Mat mapLx, mapLy, mapRx, mapRy;                         //remapping tables
cv::Rect validROIL, validROIR;                              //valid regions after cropping
cv::Mat xyz;


int goodFrameCount=0;      //number of usable image pairs

cv::Mat R, T, E, F;                                                  //R rotation, T translation, E essential matrix, F fundamental matrix
std::vector<cv::Mat> rvecs;                                        //rotation vectors
std::vector<cv::Mat> tvecs;                                        //translation vectors
std::vector<std::vector<cv::Point2f>> imagePointL;                    //corner coordinates from all left-camera images
std::vector<std::vector<cv::Point2f>> imagePointR;                    //corner coordinates from all right-camera images
std::vector<std::vector<cv::Point3f>> objRealPoint;                   //physical coordinates of the corners for each image


//intrinsic and distortion matrices from monocular calibration
cv::Mat cameraMatrixL = (cv::Mat_<double>(3, 3) <<
                         570.853,0,163.936,
                         0,565.62,142.756,
                         0,0,1);

cv::Mat distCoeffL = (cv::Mat_<double>(5, 1) <<-0.1464597668354846, -6.154543533838482,
                      -0.002589887217588616, 0.005985159261180101, 58.40123386205326);


cv::Mat cameraMatrixR = (cv::Mat_<double>(3, 3) <<
                         568.373,0,158.748,
                         0,562.243,114.268,
                         0,0,1);

cv::Mat distCoeffR = (cv::Mat_<double>(5, 1) << -0.2883413485650786, -1.10075802161073,
                      -0.00209556234492967, 0.007351217947355803, 6.544712063275942);


/*compute the physical coordinates of the corners on the checkerboard*/
void calRealPoint(std::vector<std::vector<cv::Point3f>>& obj, int boardwidth, int boardheight, int imgNumber, float squaresize)
{

    std::vector<cv::Point3f> imgpoint;
    for (float rowIndex = 0.; rowIndex < boardheight; rowIndex++)
    {
        for (float colIndex = 0.; colIndex < boardwidth; colIndex++)
        {
            imgpoint.push_back(cv::Point3f(rowIndex * squaresize, colIndex * squaresize, 0));
        }
    }

    for (float imgIndex = 0.; imgIndex < imgNumber; imgIndex++)
    {
        obj.push_back(imgpoint);
    }

}

void onMouse(int event, int x, int y,int,void*)
{
    cv::Point origin;
    switch (event)
    {
        case cv::EVENT_LBUTTONDOWN:   //left mouse button pressed
            origin = cv::Point(x, y);
            std::cout << origin << " in world coordinate is: " << xyz.at<cv::Vec3f>(origin)<< std::endl;
            break;
    }
}

int main()
{
    std::vector<std::string> filelistL;
    std::vector<std::string> filelistR;

    //build the list of left-camera image files
    for (int i=1; i<=PICS_NUMBER/2; i++) {

        std::stringstream str;
        str << LEFT_CAM_PICS_FILE_PATH << std::setw(2) << std::setfill('0') << i << ".png";
        std::cout << str.str() << std::endl;

        filelistL.push_back(str.str());

    }

    //build the list of right-camera image files
    for (int i=1; i<=PICS_NUMBER/2; i++) {

        std::stringstream str;
        str << RIGHT_CAM_PICS_FILE_PATH << std::setw(2) << std::setfill('0') << i << ".png";
        std::cout << str.str() << std::endl;

        filelistR.push_back(str.str());

    }

    cv::Mat imageL;
    cv::Mat imageR;

    //iterate over every image pair; only pairs where both boards are found are kept
    //(indexing by goodFrameCount here would re-read the same bad pair forever)
    for (int frameIndex = 0; frameIndex < PICS_NUMBER/2; frameIndex++)
    {

        std::vector<cv::Point2f> imageCornersL;
        std::vector<cv::Point2f> imageCornersR;

        /*read the left image*/
        imageL = cv::imread(filelistL[frameIndex], 0);

        /*read the right image*/
        imageR = cv::imread(filelistR[frameIndex], 0);

        bool isFindL, isFindR;

        isFindL = cv::findChessboardCorners(imageL, boardSize, imageCornersL);
        isFindR = cv::findChessboardCorners(imageR, boardSize, imageCornersR);
        if (isFindL == true && isFindR == true)  //if all corners were found in both images, this image pair is usable
        {
            /*
            Size(5,5): half of the search window size
            Size(-1,-1): half of the dead zone size
            TermCriteria(CV_TERMCRIT_EPS | CV_TERMCRIT_ITER, 20, 0.1): iteration termination criteria
            */
            cv::cornerSubPix(imageL, imageCornersL, cv::Size(5, 5), cv::Size(-1, -1), cv::TermCriteria(CV_TERMCRIT_EPS | CV_TERMCRIT_ITER, 20, 0.1));
            cv::drawChessboardCorners(imageL, boardSize, imageCornersL, isFindL);
          //  cv::imshow("chessboardL", imageL);
            imagePointL.push_back(imageCornersL);

            cv::cornerSubPix(imageR, imageCornersR, cv::Size(5, 5), cv::Size(-1, -1), cv::TermCriteria(CV_TERMCRIT_EPS | CV_TERMCRIT_ITER, 20, 0.1));
            cv::drawChessboardCorners(imageR, boardSize, imageCornersR, isFindR);
          //  cv::imshow("chessboardR", imageR);
            imagePointR.push_back(imageCornersR);

            goodFrameCount++;
            std::cout << "The image" << goodFrameCount << " is good" << std::endl;
        }
        else
        {
            std::cout << "The image is bad please try again" << std::endl;
            std::cout <<"left image " << isFindL << std::endl;
            std::cout <<"right image " << isFindR << std::endl;
        }

    }
    calRealPoint(objRealPoint, boardWidth, boardHeight, goodFrameCount, squareSize);
    std::cout << "calculate success" << std::endl;

    double rms = cv::stereoCalibrate(objRealPoint, imagePointL, imagePointR,
                                    cameraMatrixL, distCoeffL,
                                    cameraMatrixR, distCoeffR,
                                    cv::Size(imageWidth, imageHeight),
                                    R, T, E, F,cv::CALIB_USE_INTRINSIC_GUESS,
                                    cv::TermCriteria(cv::TermCriteria::COUNT
                                    + cv::TermCriteria::EPS, 100, 1e-5));

    std::cout << "Stereo Calibration done with RMS error = " << rms << std::endl;

    /*
    Stereo rectification requires the two images to be coplanar and row-aligned, which makes stereo matching much more reliable.
    The way to make the two images coplanar is to project both camera images onto a common imaging plane, which requires a rotation matrix R taking each image plane to that common plane.
    stereoRectify computes exactly these rotations from the image planes to the common imaging plane: Rl and Rr, the rectification rotations that row-align the left and right camera planes.
    After rotating the left image by Rl and the right image by Rr, the two images are coplanar and row-aligned.
    Pl and Pr are the projection matrices of the two cameras, which map 3D coordinates to 2D image coordinates: P*[X Y Z 1]' = [x y w]
    Q is the reprojection matrix, which maps a point on the 2D image plane to a 3D point: Q*[x y d 1] = [X Y Z W], where d is the disparity between the left and right images.
    */
    //rectify the calibrated cameras
    cv::stereoRectify(cameraMatrixL, distCoeffL, cameraMatrixR, distCoeffR, imageSize, R, T, Rl, Rr, Pl, Pr, Q,
    cv::CALIB_ZERO_DISPARITY, -1, imageSize, &validROIL, &validROIR);

    /*
    Use the R and P computed by stereoRectify to build the remapping tables mapx, mapy.
    These tables are then passed to remap() to rectify the images so that they become coplanar and row-aligned.
    The newCameraMatrix parameter of initUndistortRectifyMap() is the rectified camera matrix. In OpenCV the rectified camera matrix Mrect is returned inside the projection matrix P,
    so we pass the projection matrix P here and the function reads the rectified camera matrix out of it.
    */
    //camera rectification maps
    cv::Size newSize(static_cast<int>(imageL.cols*1.2), static_cast<int>(imageL.rows*1.2));

    cv::initUndistortRectifyMap(cameraMatrixL, distCoeffL, Rl, Pl, newSize,
                                CV_32FC1, mapLx, mapLy);
    cv::initUndistortRectifyMap(cameraMatrixR, distCoeffR, Rr, Pr, newSize,
                                CV_32FC1, mapRx, mapRy);


    std::cout << "---------------cameraMatrixL & distCoeffL ----------------- "   << std::endl;
    std::cout << "cameraMatrixL" << std::endl  << cameraMatrixL <<std::endl;
    std::cout << "distCoeffL"  << std::endl << distCoeffL <<std::endl;

    std::cout << "---------------cameraMatrixR & distCoeffR ----------------- "   << std::endl;
    std::cout << "cameraMatrixR" << std::endl << cameraMatrixR << std::endl;
    std::cout << "distCoeffR  " << std::endl << distCoeffR << std::endl;

    std::cout << "---------------R & T ----------------- "   << std::endl;
    std::cout << "R " << std::endl << R <<std::endl;
    std::cout << "T " << std::endl << T <<std::endl;

    std::cout << "---------------Pl & Pr ----------------- "   << std::endl;
    std::cout << "Pl " << std::endl << Pl <<std::endl;
    std::cout << "Pr " << std::endl << Pr <<std::endl;

    std::cout << "---------------Rl & Rr ----------------- "   << std::endl;
    std::cout << "Rl " << std::endl << Rl <<std::endl;
    std::cout << "Rr " << std::endl << Rr <<std::endl;

    std::cout << "---------------  Q ----------------- "   << std::endl;
    std::cout << "Q " << std::endl << Q <<std::endl;


    /*************Verify the results with the SGBM algorithm**************/
#if 0
    imageL = cv::imread(filelistL[13],0);
    imageR = cv::imread(filelistR[13],0);
#else
    imageL = cv::imread("F:\\QtProjects\\stero\\youzi60l.png",0);
    imageR = cv::imread("F:\\QtProjects\\stero\\youzi60r.png",0);
#endif

    cv::Mat rectifyImageL, rectifyImageR;
    cv::remap(imageL, rectifyImageL, mapLx, mapLy, cv::INTER_LINEAR);
    cv::remap(imageR, rectifyImageR, mapRx, mapRy, cv::INTER_LINEAR);


   // cv::imshow("rectifyImageL", rectifyImageL);
   // cv::imshow("rectifyImageR", rectifyImageR);


    cv::Ptr<cv::StereoSGBM> sgbm = cv::StereoSGBM::create(0,16,3);
    int sgbmWinSize = 3;
    int cn = imageL.channels();
    int numberOfDisparities = ((imageSize.width/8) + 15) & -16;

    sgbm->setPreFilterCap(63);
    sgbm->setBlockSize(sgbmWinSize);
    sgbm->setP1(8*cn*sgbmWinSize*sgbmWinSize);
    sgbm->setP2(32*cn*sgbmWinSize*sgbmWinSize);
    sgbm->setMinDisparity(0);
    sgbm->setNumDisparities(numberOfDisparities);
    sgbm->setUniquenessRatio(10);
    sgbm->setSpeckleWindowSize(50);
    sgbm->setSpeckleRange(32);
    sgbm->setDisp12MaxDiff(1);
    sgbm->setMode(cv::StereoSGBM::MODE_SGBM);

    cv::Mat disp, disp8;

    sgbm->compute(rectifyImageL, rectifyImageR, disp);
    disp.convertTo(disp8, CV_8U, 255/(numberOfDisparities*16.));

    cv::imshow("disparity8", disp8);


    cv::reprojectImageTo3D(disp, xyz, Q, true);
    xyz = xyz * 16; // xyz = [X/W Y/W Z/W]; multiply by 16 to get real-world coordinates
    cv::setMouseCallback("disparity8", onMouse, 0);

    //display the rectified images
    for(int i=20; i<rectifyImageL.rows; i+=20)
    {
        cv::line(rectifyImageL,cv::Point(0,i),cv::Point(rectifyImageL.cols,i),cv::Scalar(255,255,255));
        cv::line(rectifyImageR,cv::Point(0,i),cv::Point(rectifyImageL.cols,i),cv::Scalar(255,255,255));
    }
    cv::Mat imageMatches;
    cv::drawMatches(rectifyImageL, std::vector<cv::KeyPoint>(),  // 1st image
        rectifyImageR, std::vector<cv::KeyPoint>(),              // 2nd image
        std::vector<cv::DMatch>(),
        imageMatches,                       // the image produced
        cv::Scalar(255, 255, 255),
        cv::Scalar(255, 255, 255),
        std::vector<char>(),
        2);

    cv::imshow("imageMatches", imageMatches);


    cv::waitKey();
    return 0;
}

           
