
[3D Vision Innovative Applications Competition Series] 3D Reconstruction of Livestock Body Surface Information, with Detailed Code

Author: Lu Jie

Editor: 3D Vision Developer Community


Table of Contents

  • Introduction
  • 1. Environment Setup
    • Building OpenCV on the Arm Platform
    • OpenCV + OpenNI2 Configuration
  • 2. Data Acquisition
    • Capturing the Depth Map
    • Capturing the Color Image
    • Capturing the Colored Point Cloud
  • 3. Data Registration
    • Registration Code
  • 4. Pose Normalization
    • Software-Based Method (Developed In-House)
  • 5. 3D Reconstruction
    • Data Preparation
    • Initialization
    • Non-Rigid Registration
    • Evaluation of Results
  • 6. Demo

Introduction

This article is one of the entries in the "3D Vision Innovative Applications (3D Reconstruction) Competition" series. The work was completed by a team from China Agricultural University; team members: Lu Jie, Su Yang, Du Ao. The original text runs to about 2054 characters, roughly a 5-minute read, and is intended as a learning reference for developers.

1. Environment Setup

Orbbec OpenNI SDK Installation (Linux)

Orbbec provides an OpenNI2 SDK for the Zora P1 development board and its depth cameras, along with installation documents for the Windows/Linux/Android platforms. Configure OpenNI2 according to the official documentation. Note that although the documentation says the driver is built into the Linux platform and needs no extra installation, the README file still deserves a careful read:

OpenNI
------
Website: http://structure.io/openni
Building Prerequisites
======================
Linux
-----
- GCC 4.x
    From: http://gcc.gnu.org/releases.html
    Or via apt:
    sudo apt-get install g++
- Python 2.6+/3.x
    From: http://www.python.org/download/
    Or via apt:
    sudo apt-get install python
- LibUSB 1.0.x
    From: http://sourceforge.net/projects/libusb/files/libusb-1.0/
    Or via apt:
    sudo apt-get install libusb-1.0-0-dev
- LibUDEV
    sudo apt-get install libudev-dev
- JDK 6.0
    From: http://www.oracle.com/technetwork/java/javase/downloads/jdk-6u32-downloads-1594644.html
    Or via apt:
    sudo apt-get install openjdk-6-jdk
- FreeGLUT3
    From: http://freeglut.sourceforge.net/index.php#download
    Or via apt:
    sudo apt-get install freeglut3-dev
- Doxygen
    From: http://www.stack.nl/~dimitri/doxygen/download.html#latestsrc
    Or via apt:
    sudo apt-get install doxygen
- GraphViz
    From: http://www.graphviz.org/Download_linux_ubuntu.php
    Or via apt:
    sudo apt-get install graphviz
           

With the dependencies above installed, running the sample code still requires installing the matching USB driver. The install script copies the USB driver files from the same directory into the system directories and automatically generates an environment file that records the OpenNI paths. Enter the OpenNI directory of the installation package and run:

$ ./install.sh
           

Finally, find the NiViewer executable among the installed files; if it will not run, check its permissions with chmod.

sudo ./NiViewer
           
(Figure: NiViewer running result)

With the steps above, the OpenNI environment is fully configured.

Building OpenCV on the Arm Platform

When acquiring data with the development board, OpenNI2 is typically used to grab the data streams, and OpenCV converts those streams into RGB images and depth images.

The source files are available from the official OpenCV site; this article uses OpenCV 3.4.3. Build instructions for OpenCV are easy to find online. Once the build is done, configure the Orbbec OpenNI2 SDK and OpenCV together.

OpenCV + OpenNI2 Configuration

We write the build following the official sample code. Since the official samples are built from Makefiles, OpenCV has to be wired into the Makefile as well, following the usual Makefile rules:

#Includes
CFLAGS = ${shell pkg-config opencv --cflags}
#lib
LDFLAGS = ${shell pkg-config opencv --libs}
           

Here pkg-config needs an opencv.pc file configured on the system; its contents can be found among the installed files. If it is missing, it was probably not generated by default (versions from OpenCV 4 onward do not generate it by default; the OPENCV_GENERATE_PKGCONFIG option enables it) and must be set up at build time.

Once the OpenCV environment checks out, the integration is developed inside the sample code shipped with Orbbec's OpenNI2 files, located at:

/OpenNI/Samples/
           

We take the sample code in the SimpleViewer folder as the starting point. In the CommonCppMakefile file, find where the CFLAGS and LDFLAGS variables are set and append:

#Includes
CFLAGS += ${shell pkg-config opencv --cflags}
#lib
LDFLAGS += ${shell pkg-config opencv --libs}
           

In addition, the paths to the OpenNI headers and to the Redist directory must be added to the Makefile; these are exactly the contents of the environment file generated by running install.sh during the OpenNI setup, which lives in the same directory. After that the build goes through, and OpenCV can be used to grab and save the data.

2. Data Acquisition

Based on the environment configured above, we adapt the data-acquisition code. Since the official OpenNI2 code is managed with Makefiles, the OpenCV configuration is managed through the Makefiles as well. This article uses the files under the official SimpleViewer sample folder as the base and adjusts their Makefile.

First, modify the Makefile; lines 3 and 4 below are the additions, giving the paths to the official OpenNI2 files:

include CommonDefs.mak
BIN_DIR = Bin
OPENNI2_INCLUDE=/home/orbbec/Desktop/OpenNI-Linux-Arm64-2.3.0.65/OpenNI-Linux-Arm64-2.3.0.65/include/
OPENNI2_REDIST=/home/orbbec/Desktop/OpenNI-Linux-Arm64-2.3.0.65/OpenNI-Linux-Arm64-2.3.0.65/Redist/
INC_DIRS = \
    ../../Include \
    ../../ThirdParty/GL/ \
    ../Common
SRC_FILES = *.cpp
ifeq ("$(OSTYPE)","Darwin")
    CFLAGS += -DMACOS
    LDFLAGS += -framework OpenGL -framework GLUT
else
    CFLAGS += -DUNIX -DGLX_GLXEXT_LEGACY
    USED_LIBS += glut GL
endif
USED_LIBS += OpenNI2
EXE_NAME = SimpleViewer
CFLAGS += -Wall
ifndef OPENNI2_INCLUDE
    $(error OPENNI2_INCLUDE is not defined. Please define it or 'source' the OpenNIDevEnvironment file from the installation)
else ifndef OPENNI2_REDIST
    $(error OPENNI2_REDIST is not defined. Please define it or 'source' the OpenNIDevEnvironment file from the installation)
endif
INC_DIRS += $(OPENNI2_INCLUDE)
include CommonCppMakefile
.PHONY: copy-redist
copy-redist:
    cp -R $(OPENNI2_REDIST)/* $(OUT_DIR)
$(OUTPUT_FILE): copy-redist
           

Next, modify CommonCppMakefile; the two lines between the comment banners below are the additions, which pull in the OpenCV build flags. As described in the environment setup, once OpenCV's .pc file is configured under pkgconfig, calling pkg-config returns the paths of the OpenCV installation:

# take this file's dir
COMMON_CPP_MAKE_FILE_DIR = $(dir $(lastword $(MAKEFILE_LIST)))
include $(COMMON_CPP_MAKE_FILE_DIR)CommonDefs.mak
# define a function to figure .o file for each source file (placed under intermediate directory)
SRC_TO_OBJ = $(addprefix ./$(INT_DIR)/,$(addsuffix .o,$(notdir $(basename $1))))
# create a list of all object files
OBJ_FILES = $(call SRC_TO_OBJ,$(SRC_FILES_LIST))
# define a function to translate any source file to its dependency file (note that the way we create
# dep files, as a side affect of compilation, always puts the files in the INT_DIR with suffix .d)
SRC_TO_DEP = $(addprefix ./$(INT_DIR)/,$(addsuffix .d,$(notdir $(basename $1))))
# create a list of all dependency files
DEP_FILES = $(call SRC_TO_DEP,$(SRC_FILES_LIST))
# older version of gcc doesn't support the '=' symbol in include dirs, so we replace it ourselves with sysroot
INC_DIRS_FROM_SYSROOT = $(patsubst =/%,$(TARGET_SYS_ROOT)/%,$(INC_DIRS))
# append the -I switch to each include directory
INC_DIRS_OPTION = $(foreach dir,$(INC_DIRS_FROM_SYSROOT),-I$(dir))
# append the -L switch to each library directory
LIB_DIRS_OPTION = $(foreach dir,$(LIB_DIRS),-L$(dir)) -L$(OUT_DIR)
# append the -l switch to each library used
USED_LIBS_OPTION = $(foreach lib,$(USED_LIBS),-l$(lib))
# append the -D switch to each define
DEFINES_OPTION = $(foreach def,$(DEFINES),-D$(def))
# tell compiler to use the target system root
ifdef TARGET_SYS_ROOT
    CFLAGS += --sysroot=$(TARGET_SYS_ROOT)
    LDFLAGS += --sysroot=$(TARGET_SYS_ROOT)
endif
# set Debug / Release flags
ifeq "$(CFG)" "Debug"
    CFLAGS += -O0 -g
endif
ifeq "$(CFG)" "Release"
    CFLAGS += -O2 -DNDEBUG
endif
CFLAGS += $(INC_DIRS_OPTION) $(DEFINES_OPTION)
CFLAGS += -fPIC -fvisibility=hidden
ifneq "$(ALLOW_WARNINGS)" "1"
    CFLAGS += -Werror
ifeq ("$(OSTYPE)","Darwin")
    CFLAGS += -Wno-deprecated-declarations -Wno-unused-private-field -Wno-unused-const-variable
endif
endif
LDFLAGS += $(LIB_DIRS_OPTION) $(USED_LIBS_OPTION)
##############this is our method######################
CFLAGS += ${shell pkg-config opencv --cflags}
LDFLAGS += ${shell pkg-config opencv --libs}
#####################################
# some lib / exe specifics
ifneq "$(LIB_NAME)" ""
    OUTPUT_NAME = lib$(LIB_NAME).so
    ifneq ("$(OSTYPE)","Darwin")
        LDFLAGS += -Wl,--no-undefined
        OUTPUT_NAME = lib$(LIB_NAME).so
        OUTPUT_COMMAND = $(CXX) -o $(OUTPUT_FILE) $(OBJ_FILES) $(LDFLAGS) -shared
    else
        LDFLAGS += -undefined error
        OUTPUT_NAME = lib$(LIB_NAME).dylib
        OUTPUT_COMMAND = $(CXX) -o $(OUTPUT_FILE) $(OBJ_FILES) $(LDFLAGS) -dynamiclib -headerpad_max_install_names -install_name $(OUTPUT_NAME)
    endif
endif
ifneq "$(EXE_NAME)" ""
    OUTPUT_NAME = $(EXE_NAME)
    # We want the executables to look for the .so's locally first:
    LDFLAGS += -Wl,-rpath ./
    OUTPUT_COMMAND = $(CXX) -o $(OUTPUT_FILE) $(OBJ_FILES) $(LDFLAGS)
endif
ifneq "$(SLIB_NAME)" ""
    OUTPUT_NAME = lib$(SLIB_NAME).a
    ifneq ("$(OSTYPE)","Darwin")
        OUTPUT_COMMAND = $(AR) -cq $(OUTPUT_FILE) $(OBJ_FILES)
    else
        OUTPUT_COMMAND = libtool -static -o $(OUTPUT_FILE) $(OBJ_FILES)
    endif
endif
define CREATE_SRC_TARGETS
# create a target for the object file (the CXX command creates both an .o file
# and a .d file)
ifneq ("$(OSTYPE)","Darwin")
$(call SRC_TO_OBJ,$1) : $1 | $(INT_DIR)
    $(CXX) -MD -MP -MT "$(call SRC_TO_DEP,$1) $$@" -c $(CFLAGS) -o $$@ $$<
else
$(call SRC_TO_OBJ,$1) : $1 | $(INT_DIR)
    $(CXX) -x c++ -c $(CFLAGS) -o $$@ $$<
endif
endef
#############################################################################
# Targets
#############################################################################
.PHONY: clean-objs clean-defs
include $(COMMON_CPP_MAKE_FILE_DIR)CommonTargets.mak
# create targets for each source file
$(foreach src,$(SRC_FILES_LIST),$(eval $(call CREATE_SRC_TARGETS,$(src))))
# include all dependency files (we don't need them the first time, so we can use -include)
-include $(DEP_FILES)
$(OUTPUT_FILE): $(OBJ_FILES)
    $(OUTPUT_COMMAND)
clean-objs:
    rm -rf $(OBJ_FILES)
clean-defs:
    rm -rf $(DEP_FILES)
clean: clean-objs clean-defs
           

With that, the Makefile plumbing based on the official SimpleViewer folder is complete. What remains is to write the main.cpp in that folder, combining OpenNI and OpenCV to acquire the data.

Capturing the Depth Map

Capturing the depth map mainly uses the OpenNI interfaces to read the data stream, then converts that stream into a depth image through the OpenCV interfaces.

#include <OpenNI.h>
#include "Viewer.h"
#include <stdio.h>
#include <iostream>
#include <opencv2/opencv.hpp>   // umbrella header: pulls in core, highgui and imgproc
#include "OniSampleUtilities.h" // from Samples/Common; defines wasKeyboardHit()
using namespace std;
using namespace openni;
#define SAMPLE_READ_WAIT_TIMEOUT 2000 //2000ms
int main(int argc, char** argv)
{
    openni::Status rc = openni::STATUS_OK;
    openni::Status rc_d = openni::STATUS_OK;
    openni::Device device;
    openni::VideoStream depth; // only a depth stream is used in this sample
    const char* deviceURI = openni::ANY_DEVICE;
    rc = openni::OpenNI::initialize();
    printf("After initialization:\n%s\n", openni::OpenNI::getExtendedError());
    rc = device.open(deviceURI);
    if (rc != openni::STATUS_OK)
    {
        printf("SimpleViewer: Device open failed:\n%s\n", openni::OpenNI::getExtendedError());
        openni::OpenNI::shutdown();
        return 1;
    }
    rc = depth.create(device, openni::SENSOR_DEPTH);
    if (rc == openni::STATUS_OK)
        rc = depth.start();
    openni::VideoFrameRef frameDepth;
    openni::DepthPixel* pDepth;
    cout<<"stream start, press any key to stop..."<<endl;
    VideoStream* streams[] = {&depth}; // no color stream is ever created, so wait on depth only
    while (!wasKeyboardHit())
    {
        int depthstream = -1;
        rc = OpenNI::waitForAnyStream(streams, 1, &depthstream, SAMPLE_READ_WAIT_TIMEOUT);
        if (rc != STATUS_OK)
        {
            printf("Wait failed! (timeout is %d ms)\n%s\n", SAMPLE_READ_WAIT_TIMEOUT, OpenNI::getExtendedError());
            continue;
        }
        cout<<"waitForAnyStream finished, press any key to stop..."<<endl;
        //read a depth frame
        rc_d = depth.readFrame(&frameDepth);
        cout<<"read Frame finished, press any key to stop..."<<endl;
        if (rc != STATUS_OK || rc_d != STATUS_OK)
        {
            printf("Read failed!\n%s\n", OpenNI::getExtendedError());
            continue;
        }
        int middleIndex = (frameDepth.getHeight() + 1)*frameDepth.getWidth() / 2;
        pDepth = (DepthPixel*)frameDepth.getData();
        cout << pDepth[middleIndex] << endl;
    }
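    // convert the last captured depth frame to a scaled 8-bit image and save it as a PNG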
    const cv::Mat mImageDepth( frameDepth.getHeight(), frameDepth.getWidth(), CV_16UC1, (void*)frameDepth.getData());
    cv::Mat mScaledDepth;
    int iMaxDepth = depth.getMaxPixelValue();
    mImageDepth.convertTo( mScaledDepth, CV_8U, 255.0 / iMaxDepth );
    cv::imwrite( "/home/orbbec/Desktop/createDepthImg/Bin/Arm64-Release/DepthImage.png", mScaledDepth );
}
           

Capturing the Color Image

The color image cannot be captured through OpenNI's readFrame() interface; OpenCV's VideoCapture interface is used instead. (The CV_CAP_PROP_* and CV_FOURCC names below are the OpenCV 3.x constants; OpenCV 4 renames them to cv::CAP_PROP_* and cv::VideoWriter::fourcc.)

#include <opencv2/opencv.hpp>
using namespace cv;
int main()
{
    VideoCapture videoCapture;
    videoCapture.open(cv::CAP_ANY);
    if (!videoCapture.isOpened())
    {
        printf("open UVC color camera failed.\n");
        return -1;
    }
    //set codec format
    videoCapture.set(CV_CAP_PROP_FOURCC, CV_FOURCC('M', 'J', 'P', 'G'));
    //set resolution
    videoCapture.set(CV_CAP_PROP_FRAME_WIDTH, 640);
    videoCapture.set(CV_CAP_PROP_FRAME_HEIGHT, 480);
    for(int i=0;i<10;i++)
    {
        Mat frame; 
        videoCapture >> frame; 
        Vec3b &bgr = frame.at<Vec3b>(frame.rows/2, frame.cols/2);
        //print the r g b value of the middle pixel of the frame
        printf("r = %02d, g = %02d, b = %02d\n",
            bgr[2] & 0xff,
            bgr[1] & 0xff,
            bgr[0] & 0xff);
        waitKey(30); //delay 30ms
        imwrite("/home/orbbec/Desktop/saveRGBImg/Bin/Arm64-Release/saved_img.png",frame);
    }
    return 0;
}
           

Capturing the Colored Point Cloud

The color Mat obtained from OpenCV's VideoCapture is passed in as an argument, and at write-time the colors are stored together with the coordinates in the ply file:

#include <iostream>
#include <sstream>
#include "OpenNI.h"    
#include "OniSampleUtilities.h"
#include <opencv2/opencv.hpp>
using namespace openni;
using namespace cv;
#define MIN_DISTANCE 20       // unit: millimeters
#define MAX_DISTANCE 4000     // unit: millimeters
#define RESOULTION_X 640.0    // resolution at calibration (x)
#define RESOULTION_Y 480.0    // resolution at calibration (y)
#define MAX_FRAME_COUNT 50
typedef struct xnIntrinsic_Params
{
    xnIntrinsic_Params() :
        c_x(320.0), c_y(240.0), f_x(480.0), f_y(480.0)
    {}
    xnIntrinsic_Params(float c_x_, float c_y_, float f_x_, float f_y_) :
        c_x(c_x_), c_y(c_y_), f_x(f_x_),f_y(f_y_)
    {}
    float c_x; // principal point x coordinate
    float c_y; // principal point y coordinate
    float f_x; // normalized focal length on the u axis
    float f_y; // normalized focal length on the v axis
}xIntrinsic_Params;
xIntrinsic_Params g_IntrinsicParam; // global variable that stores the camera intrinsics
void getCameraParams(openni::Device& Device, xIntrinsic_Params& IrParam)
{
    OBCameraParams cameraParam;
    int dataSize = sizeof(cameraParam);
    memset(&cameraParam, 0, sizeof(cameraParam));
    openni::Status rc = Device.getProperty(openni::OBEXTENSION_ID_CAM_PARAMS, (uint8_t *)&cameraParam, &dataSize);
    if (rc != openni::STATUS_OK)
    {
        std::cout << "Error:" << openni::OpenNI::getExtendedError() << std::endl;
        return;
    }
    // copy the depth-camera intrinsics into the output parameter; in the Orbbec
    // sample code the layout of l_intr_p is {f_x, f_y, c_x, c_y}
    IrParam.f_x = cameraParam.l_intr_p[0];
    IrParam.f_y = cameraParam.l_intr_p[1];
    IrParam.c_x = cameraParam.l_intr_p[2];
    IrParam.c_y = cameraParam.l_intr_p[3];
    std::cout << "IrParam.f_x = " << IrParam.f_x << std::endl;
    std::cout << "IrParam.f_y = " << IrParam.f_y << std::endl;
    std::cout << "IrParam.c_x = " << IrParam.c_x << std::endl;
    std::cout << "IrParam.c_y = " << IrParam.c_y << std::endl;
}
void convertDepthToPointCloud(const uint16_t *pDepth,  Mat color, int width, int height,const char *ply_filename)
{
    if (NULL == pDepth)
    {
        printf("depth frame is NULL!");
        return;
    }
    FILE *fp = fopen(ply_filename, "w");
    if (NULL == fp)
    {
        printf("failed to open %s for writing!\n", ply_filename);
        return;
    }
    int valid_count = 0;
    uint16_t max_depth = MAX_DISTANCE;
    uint16_t min_depth = MIN_DISTANCE;
    for (int v = 0; v < height; ++v)
    {
        for (int u = 0; u < width; ++u)
        {
            uint16_t depth = pDepth[v * width + u];
            if (depth <= 0 || depth < min_depth || depth > max_depth)
                continue;

            valid_count += 1;
        }
    }
    fprintf(fp, "ply\n");
    fprintf(fp, "format ascii 1.0\n");
    fprintf(fp, "element vertex %d\n", valid_count);
    fprintf(fp, "property float x\n");
    fprintf(fp, "property float y\n");
    fprintf(fp, "property float z\n");
    fprintf(fp, "property uchar red\n");
    fprintf(fp, "property uchar green\n");
    fprintf(fp, "property uchar blue\n");
    fprintf(fp, "end_header\n");
    float world_x, world_y, world_z;
    for (int v = 0; v < height; ++v)
    {
        for (int u = 0; u < width; ++u)
        {
            uint16_t depth = pDepth[v * width + u];
            Vec3b &bgr = color.at<Vec3b>(v, u);
            int color_b= bgr[0] & 0xff;
            int color_g= bgr[1] & 0xff;
            int color_r= bgr[2] & 0xff;
            if (depth <= 0 || depth < min_depth || depth > max_depth)
                continue;
            float fdx = g_IntrinsicParam.f_x * ((float)(width) / RESOULTION_X);
            float fdy = g_IntrinsicParam.f_y * ((float)(height) / RESOULTION_Y);
            float u0 = g_IntrinsicParam.c_x * ((float)(width)/ RESOULTION_X);
            float v0 = g_IntrinsicParam.c_y * ((float)(height) / RESOULTION_Y);
            float tx = (u - u0) / fdx;
            float ty = (v - v0) / fdy;
            world_x = depth * tx;
            world_y = depth * ty;
            world_z = depth;
            fprintf(fp, "%f %f %f %d %d %d\n", world_x, world_y, world_z, color_r,color_g,color_b);
        }
    }
    fclose(fp);
}
int g_imageCount = 0;
void analyzeFrame(const VideoFrameRef& frame,  Mat color)
{
    DepthPixel* pDepth;

    // build the point cloud file name
    char plyFileName[256] = "";
    g_imageCount++;
    if (MAX_FRAME_COUNT < g_imageCount)
    {
        return;
    }
    std::stringstream filename;
    filename << "pointcloud_";
    filename << g_imageCount;
    filename << ".ply";
    filename >> plyFileName;
    int middleIndex = (frame.getHeight() + 1)*frame.getWidth() / 2;
    switch (frame.getVideoMode().getPixelFormat())
    {
    case PIXEL_FORMAT_DEPTH_1_MM:
        pDepth = (DepthPixel*)frame.getData();
        printf("[%08llu] %8d\n", (long long)frame.getTimestamp(),pDepth[middleIndex]);
        convertDepthToPointCloud(pDepth, color, frame.getWidth(), frame.getHeight(), plyFileName);
        break;
    default:
        printf("Unknown format\n");
    }
}
int main(int argc, char* argv[])
{
    //initialize openNI sdk
    Status rc = OpenNI::initialize();
    if (rc != STATUS_OK)
    {
        printf("Initialize failed\n%s\n", OpenNI::getExtendedError());
        return 1;
    }
    //open deivce
    Device device;
    rc = device.open(ANY_DEVICE);
    if (rc != STATUS_OK)
    {
        printf("Couldn't open device\n%s\n", OpenNI::getExtendedError());
        return 2;
    }
    // read the depth camera intrinsics from the device into the global variable
    // (otherwise the defaults set in the xnIntrinsic_Params constructor are used)
    getCameraParams(device, g_IntrinsicParam);
    VideoCapture videoCapture;
    videoCapture.open(cv::CAP_ANY);
    if (!videoCapture.isOpened())
    {
        printf("open UVC color camera failed.\n");
        return 3;
    }
    //set codec format
    videoCapture.set(CV_CAP_PROP_FOURCC, CV_FOURCC('M', 'J', 'P', 'G'));
    //set resolution
    videoCapture.set(CV_CAP_PROP_FRAME_WIDTH, 640);
    videoCapture.set(CV_CAP_PROP_FRAME_HEIGHT, 480);
    VideoStream depth;
    //create depth stream
    if (device.getSensorInfo(SENSOR_DEPTH) != NULL)
    {
        rc = depth.create(device, SENSOR_DEPTH);
        if (rc != STATUS_OK)
        {
            printf("Couldn't create depth stream\n%s\n", OpenNI::getExtendedError());
        }
    }
    rc = depth.start();
    if (rc != STATUS_OK)
    {
        printf("Couldn't start the depth stream\n%s\n", OpenNI::getExtendedError());
    }
    openni::VideoFrameRef frameDepth;
    for(int i=0;i<10;i++)
    {
        Mat color_frame; 
        videoCapture >> color_frame; 
        Vec3b &bgr = color_frame.at<Vec3b>(color_frame.rows/2, color_frame.cols/2);
        //print the r g b value of the middle pixel of the frame
        printf("r = %02d, g = %02d, b = %02d\n",
            bgr[2] & 0xff,
            bgr[1] & 0xff,
            bgr[0] & 0xff);
        rc = depth.readFrame(&frameDepth);
        if (rc != openni::STATUS_OK)
            continue; // skip this iteration if the depth read failed
        analyzeFrame(frameDepth, color_frame);
        waitKey(30); //delay 30ms
    }
    depth.stop();
    depth.destroy();
    //close device
    device.close();
    //shutdown OpenNI
    OpenNI::shutdown();
    return 0;
}
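
For reference, the back-projection in convertDepthToPointCloud is the standard pinhole camera model: with the intrinsics $(f_x, f_y, c_x, c_y)$ rescaled from the calibration resolution to the current stream resolution, each pixel $(u, v)$ with depth value $d$ (in millimeters) maps to the 3D point

$$x = d\,\frac{u - c_x}{f_x}, \qquad y = d\,\frac{v - c_y}{f_y}, \qquad z = d,$$

which is exactly what the tx and ty terms in the inner loop compute.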
           

3. Data Registration

With the data-acquisition groundwork above, data were captured on the Zora P1 development board as the basis for the 3D reconstruction. This article captures only point clouds of the animal's two sides, which keeps the data requirements modest and the data easy to acquire.

Registration Code

Once both sides of the animal have been captured, the data need to be registered. The registration here relies on tie points between the two side clouds: the corresponding points are picked manually in a point cloud tool, and the rigid transformation computed from these correspondences performs the registration. The registration code is as follows:

#include <iostream>
#include <string>
#include <pcl/io/ply_io.h>
#include <pcl/registration/icp.h>
#include <pcl/visualization/pcl_visualizer.h>
#include <pcl/console/time.h>   // TicToc
#include <pcl/common/common.h>  
#include <pcl/common/angles.h>  
#include <pcl/common/transforms.h>  
#include <pcl/point_cloud.h>  
#include <pcl/point_types.h>  
#include <pcl/io/pcd_io.h>  
#include <pcl/registration/transformation_estimation_svd.h>  
typedef pcl::PointXYZRGB PointT;
typedef pcl::PointCloud<PointT> PointCloudT;
using namespace pcl;
int main(int argc,char* argv[])
{
    // The point clouds we will be using
    PointCloudT::Ptr cloud_in(new PointCloudT);  // Original point cloud
    PointCloudT::Ptr cloud_in2(new PointCloudT);  // Transformed point cloud
    PointCloudT::Ptr cloud_icp(new PointCloudT);  // ICP output point cloud
    PointCloudT::Ptr corrspond_in(new PointCloudT);  // Original point cloud
    PointCloudT::Ptr corrspond_out(new PointCloudT);  // Transformed point cloud
    PointCloudT::Ptr show_transformed_clouds(new PointCloudT);
    // Checking program arguments
    if (argc < 3)
    {
        printf("Usage :\n");
        printf("\t\t%s side1.ply side2.ply\n", argv[0]);
        PCL_ERROR("Provide two ply files.\n");
        return (-1);
    }
    pcl::console::TicToc time;
    time.tic();
    if (pcl::io::loadPLYFile(argv[1], *cloud_in) < 0)
    {
        PCL_ERROR("Error loading cloud %s.\n", argv[1]);
        return (-1);
    }
    if (pcl::io::loadPLYFile(argv[2], *cloud_in2) < 0)
    {
        PCL_ERROR("Error loading cloud %s.\n", argv[2]);
        return (-1);
    }
    std::cout << "\nLoaded file " << argv[1] << " (" << cloud_in->size() << " points) in " << time.toc() << " ms\n" << std::endl;
    Eigen::Matrix4d transformation_matrix = Eigen::Matrix4d::Identity();
    PointT point_in_1(-0.201973, -0.326478, 1.578000, 0, 0, 0);
    PointT point_in_2(-0.328673, -0.325839, 1.616000, 0, 0, 0);
    PointT point_in_3(-0.297260, -0.269961, 1.730000, 0, 0, 0);
    PointT point_in_4(-0.169498, -0.276550, 1.696000, 0, 0, 0);
    corrspond_in->push_back(point_in_1);
    corrspond_in->push_back(point_in_2);
    corrspond_in->push_back(point_in_3);
    corrspond_in->push_back(point_in_4);
    PointT point_in_5(0.165862, -0.074352, 1.631000, 0, 0, 0);
    PointT point_in_6(0.290257, -0.068296, 1.623000, 0, 0, 0);
    PointT point_in_7(0.298880, -0.133429, 1.522000, 0, 0, 0);
    PointT point_in_8(0.175316, -0.138128, 1.515000, 0, 0, 0);
    corrspond_out->push_back(point_in_5);
    corrspond_out->push_back(point_in_6);
    corrspond_out->push_back(point_in_7);
    corrspond_out->push_back(point_in_8);
    std::cout << corrspond_in->size() << std::endl;
    std::cout << corrspond_out->size() << std::endl;
    // estimate the rigid transformation between the two corresponding point sets with SVD
    pcl::registration::TransformationEstimationSVD<pcl::PointXYZRGB, pcl::PointXYZRGB> TESVD;
    pcl::registration::TransformationEstimationSVD<pcl::PointXYZRGB, pcl::PointXYZRGB>::Matrix4 transformation2;
    TESVD.estimateRigidTransformation(*corrspond_in, *corrspond_out, transformation2);
    std::cout << transformation2 << std::endl;
    pcl::transformPointCloud(*cloud_in, *show_transformed_clouds, transformation2);
    *show_transformed_clouds += *cloud_in2;
    pcl::PCDWriter writer;
    writer.write("registered.pcd", *show_transformed_clouds, false);

    return (0);
}
The CMakeLists.txt used to build it is as follows:
cmake_minimum_required(VERSION 2.6 FATAL_ERROR)
project(transform_twoside)
find_package(PCL 1.5 REQUIRED)
include_directories(${PCL_INCLUDE_DIRS})
link_directories(${PCL_LIBRARY_DIRS})
add_definitions(${PCL_DEFINITIONS})
add_executable (transform_twoside transform_twoside.cpp)
target_link_libraries (transform_twoside ${PCL_LIBRARIES})
           

The raw data captured:


Data after registration:


4. Pose Normalization

In this method the animal must first be segmented out of the scene in the registered two-side point cloud, and every animal must be placed in the same coordinate system with a consistent orientation. The data therefore need pose normalization before the 3D reconstruction; the pose normalization method is given below.

Software-Based Method (Developed In-House)

The software implementation is described in the following papers:

@article{GUO201760,
title = "LSSA_CAU: An interactive 3d point clouds analysis software for body measurement of livestock with similar forms of cows or pigs",
journal = "Computers and Electronics in Agriculture",
volume = "138",
pages = "60 - 68",
year = "2017",
issn = "0168-1699",
doi = "https://doi.org/10.1016/j.compag.2017.04.014",
url = "http://www.sciencedirect.com/science/article/pii/S0168169917301746",
}
@article{GUO201959,
title = "A bilateral symmetry based pose normalization framework applied to livestock body measurement in point clouds",
journal = "Computers and Electronics in Agriculture",
volume = "160",
pages = "59 - 70",
year = "2019",
issn = "0168-1699",
doi = "https://doi.org/10.1016/j.compag.2019.03.010",
url = "http://www.sciencedirect.com/science/article/pii/S0168169918314893",
}
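
The software implements the bilateral-symmetry-based normalization framework described in these papers. Purely as an illustration of what pose normalization does (a minimal sketch, not the published algorithm), a rough PCA alignment that moves the animal's centroid to the origin and rotates its principal axes onto the coordinate axes could be written with PCL as follows:

#include <Eigen/Eigenvalues>
#include <pcl/common/centroid.h>
#include <pcl/common/transforms.h>
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>

int main(int argc, char* argv[])
{
    pcl::PointCloud<pcl::PointXYZRGB>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZRGB>);
    if (argc < 3 || pcl::io::loadPCDFile(argv[1], *cloud) < 0)
        return -1;
    // centroid and 3x3 covariance of the cloud
    Eigen::Vector4f centroid;
    Eigen::Matrix3f covariance;
    pcl::computeMeanAndCovarianceMatrix(*cloud, covariance, centroid);
    // eigenvectors of the covariance are the principal axes of the body
    Eigen::SelfAdjointEigenSolver<Eigen::Matrix3f> solver(covariance);
    Eigen::Matrix3f axes = solver.eigenvectors(); // columns, in ascending eigenvalue order
    // move the centroid to the origin and rotate the principal axes onto the
    // coordinate axes; eigenvector signs are arbitrary, so a consistent flip
    // (which the papers derive from the body's symmetry plane) is still required
    Eigen::Matrix4f T = Eigen::Matrix4f::Identity();
    T.block<3,3>(0,0) = axes.transpose();
    T.block<3,1>(0,3) = -axes.transpose() * centroid.head<3>();
    pcl::PointCloud<pcl::PointXYZRGB>::Ptr aligned(new pcl::PointCloud<pcl::PointXYZRGB>);
    pcl::transformPointCloud(*cloud, *aligned, T);
    return pcl::io::savePCDFile(argv[2], *aligned) == 0 ? 0 : -1;
}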
           

The two-side point cloud data are processed with this software; the concrete steps are shown in the video:


Processing result:


5. 3D Reconstruction

This article follows the method of Leonid Pishchulin et al., "Building Statistical Shape Spaces for 3D Human Modeling", and uses template-based non-rigid registration for the 3D reconstruction. The method consists of two steps: initialization and non-rigid registration. Building on the original algorithm, this project modifies its parameters and model to transfer it from human bodies to livestock while reaching a reasonable accuracy.

Data Preparation

Template data: a mesh of an ordinary pig

Scan data: the normalized pig point cloud obtained in Part 4


In addition, feature points are selected to improve the registration. Based on the body measurements used for pigs in agriculture, 37 corresponding feature points were selected on the template data and the scan data.

Initialization

For the non-rigid registration to succeed, the template data must be pre-aligned with the scan data. Here the corresponding feature point sets are used with the unit quaternion method proposed by Horn et al. to compute a rigid registration between template and scan.
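
As a minimal sketch of this pre-alignment step (assuming the 37 template/scan landmarks are already loaded as two ordered 3xN matrices; Eigen's umeyama solver is used here, which in the no-scaling case returns the same least-squares rigid transform as Horn's closed-form quaternion solution):

#include <Eigen/Dense>
#include <Eigen/Geometry>

// src: 3xN landmark coordinates on the template; dst: the corresponding 3xN
// landmarks on the scan, in the same order. Returns the 4x4 rigid transform
// that maps template coordinates into the scan's frame.
Eigen::Matrix4d alignLandmarks(const Eigen::Matrix3Xd& src,
                               const Eigen::Matrix3Xd& dst)
{
    // with_scaling=false restricts the result to a pure rotation + translation,
    // i.e. the least-squares rigid transform over the correspondences
    return Eigen::umeyama(src, dst, /*with_scaling=*/false);
}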

Non-Rigid Registration

During the deformation, every point of the template carries its own 4x4 transformation matrix. The objective is to find this set of matrices such that the template vertices move as closely as possible onto their corresponding points in the scan, fitting the scan better and better. Corresponding points are found with a k-nearest-neighbor search, subject to two gating conditions: distance < 50 mm and angle between normals < 60°.
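
A minimal sketch of this gated correspondence search (an illustration using PCL's k-d tree; it assumes both template and scan carry per-point normals and millimeter coordinates, and the real solver additionally optimizes the per-vertex 4x4 matrices against these correspondences):

#include <cmath>
#include <utility>
#include <vector>
#include <pcl/kdtree/kdtree_flann.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

// For every template vertex, find its nearest scan point and keep the pair
// only if it passes both gates: distance < 50 mm and normal angle < 60 degrees.
std::vector<std::pair<int,int>> gatedCorrespondences(
    const pcl::PointCloud<pcl::PointNormal>::Ptr& templ,
    const pcl::PointCloud<pcl::PointNormal>::Ptr& scan)
{
    pcl::KdTreeFLANN<pcl::PointNormal> tree;
    tree.setInputCloud(scan);
    const float max_dist2 = 50.0f * 50.0f;                // squared 50 mm gate
    const double min_cos = std::cos(60.0 * M_PI / 180.0); // 60 degree gate
    std::vector<std::pair<int,int>> pairs;
    std::vector<int> idx(1);
    std::vector<float> dist2(1);
    for (int i = 0; i < (int)templ->size(); ++i)
    {
        if (tree.nearestKSearch(templ->points[i], 1, idx, dist2) != 1)
            continue;
        if (dist2[0] > max_dist2)
            continue;                                     // distance gate
        const Eigen::Vector3f n1 = templ->points[i].getNormalVector3fMap().normalized();
        const Eigen::Vector3f n2 = scan->points[idx[0]].getNormalVector3fMap().normalized();
        if (n1.dot(n2) < min_cos)
            continue;                                     // normal-angle gate
        pairs.emplace_back(i, idx[0]);
    }
    return pairs;
}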

【3D視覺創新應用競賽作品系列​】牲畜體表資訊的三維重建,代碼詳盡導語一、環境配置二、資料擷取三、資料配準四、姿勢歸一化五、三維重建六、Demo示範
【3D視覺創新應用競賽作品系列​】牲畜體表資訊的三維重建,代碼詳盡導語一、環境配置二、資料擷取三、資料配準四、姿勢歸一化五、三維重建六、Demo示範

Evaluation of Results

The non-rigid registration result is compared against the original point cloud.

(1) Quantitative evaluation

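As a simple stand-in for such a quantitative comparison (an illustration, not necessarily the exact measure used here), the mean nearest-neighbor distance from the fitted template to the scan can be computed as follows:

#include <cmath>
#include <vector>
#include <pcl/kdtree/kdtree_flann.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

// mean distance from every fitted-template point to its nearest scan point,
// a simple cloud-to-cloud deviation measure (units follow the input clouds)
double meanNearestNeighborDistance(
    const pcl::PointCloud<pcl::PointXYZ>::Ptr& fitted,
    const pcl::PointCloud<pcl::PointXYZ>::Ptr& scan)
{
    pcl::KdTreeFLANN<pcl::PointXYZ> tree;
    tree.setInputCloud(scan);
    std::vector<int> idx(1);
    std::vector<float> dist2(1);
    double sum = 0.0;
    for (const auto& p : fitted->points)
        if (tree.nearestKSearch(p, 1, idx, dist2) == 1)
            sum += std::sqrt(dist2[0]);
    return fitted->empty() ? 0.0 : sum / static_cast<double>(fitted->size());
}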

(2) Qualitative evaluation


6. Demo

The demo video can be found at the link below:

3D Reconstruction of Livestock Body Surface Information

For more details on this work, see:

[3D Vision Innovative Applications Competition Series] 3D Reconstruction of Livestock Body Surface Information (original article)

Copyright notice: this article was written by a contributing author of the Orbbec 3D Vision Developer Community and published with authorization; it may not be reproduced without permission. It is shared for academic purposes only, and the copyright remains with the original author. If any content infringes your rights, please contact us to have it removed.

The 3D Vision Developer Community is a sharing and exchange platform that Orbbec built for developers, aiming to open 3D vision technology up to everyone. The platform offers free courses in 3D vision, exclusive Orbbec resources, and professional technical support. Click to join the 3D Vision Developer Community and discuss with fellow developers~

You can also follow the official WeChat account, 3D視覺開發者社群 (3D Vision Developer Community), for more useful content.
