
hdl_localization: a first read-through

  • Installation
  • Experimental results
  • The hdl_localization package
    • Overview
    • launch
    • apps (node implementations)
      • globalmap_server_nodelet
        • `globalmap_server_nodelet::onInit()`
        • `globalmap_server_nodelet::initialize_params()`
      • hdl_localization_nodelet
        • `hdl_localization_nodelet::onInit()`
        • `hdl_localization_nodelet::initialize_params()`
        • `HdlLocalizationNodelet::globalmap_callback`
        • `HdlLocalizationNodelet::imu_callback`
        • `HdlLocalizationNodelet::points_callback`
        • `downsample(const pcl::PointCloud::ConstPtr& cloud)`
        • `publish_odometry`
        • `matrix2transform`
    • include (state estimator and UKF)
      • hdl_localization/pose_estimator.hpp
        • `PoseEstimator` (constructor)
        • `pose_estimator->predict` (prediction)
        • `pose_estimator->correct` (correction)
      • hdl_localization/pose_system.hpp
        • `f` (system state-transition equation)
        • `h` (observation equation)
      • kkl/unscented_kalman_filter.hpp
        • `UnscentedKalmanFilterX`[^2]
        • `ukf->predict`
        • `ukf->correct`
        • `computeSigmaPoints`
        • `ensurePositiveFinite` (effectively unused)
    • Summary

Installation

Installation is simple and the environment setup is painless. First, the repository: koide3/hdl_localization.

For the prerequisites I simply used apt-get:

# install PCL
sudo apt-get install libpcl-dev

# install the other dependencies (replace "kinetic" in the command with your own ROS distro, four occurrences in total; I forget how to do the automatic substitution...)
sudo apt-get install ros-kinetic-geodesy ros-kinetic-pcl-ros ros-kinetic-nmea-msgs ros-kinetic-libg2o

Once the environment is set up, just follow the README.

Create a workspace → git clone hdl_localization → git clone ndt_omp → build[^1], as sketched below.
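A minimal sketch of those steps using the standard catkin workflow (the workspace path and repository URLs here are my assumptions; adjust them to your own setup):

# create the workspace and clone the two packages
mkdir -p ~/catkin_ws/src && cd ~/catkin_ws/src
git clone https://github.com/koide3/hdl_localization.git
git clone https://github.com/koide3/ndt_omp.git

# build and source the workspace
cd ~/catkin_ws
catkin_make
source devel/setup.bash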

Experimental results

Overall it works very well. I have run it on the official dataset as well as on bag files recorded earlier in our lab, and the localization quality is quite good. It also works without an IMU.

I will add a demo run later.

The hdl_localization package

Overview

The package is organized around nodelets (my first time using them; a quick search makes them look quite fancy). It contains several folders: apps holds the two nodelet classes, i.e. the actual implementation; include holds the state estimator and the unscented Kalman filter; launch needs no explanation; rviz holds the rviz configuration; data holds the example point-cloud map used for localization.

launch

The launch file defines a few parameters and uses nodelet to run three nodes:

velodyne_nodelet_manager

globalmap_server_nodelet

hdl_localization_nodelet

If you only replay recorded data (simulation), you can add the following before the argument declarations:

<param name="use_sim_time" value="true"/>
           

apps (node implementations)

This folder contains only two cpp files, each deriving directly from the nodelet base class, so the amount of code is small.

globalmap_server_nodelet

GlobalmapServerNodelet inherits from nodelet::Nodelet.

On the ROS side it declares three node handles, one publisher, one timer, and a variable holding the global map.

ros::NodeHandle nh;
  ros::NodeHandle mt_nh;
  ros::NodeHandle private_nh;

  ros::Publisher globalmap_pub;

  ros::WallTimer globalmap_pub_timer;
  pcl::PointCloud<PointT>::Ptr globalmap;
           

globalmap_server_nodelet::onInit()

This overrides the nodelet initialization function and uses a timer to trigger the publish callback.

void onInit() override {
  // get the three node handles
    nh = getNodeHandle();
    mt_nh = getMTNodeHandle();
    private_nh = getPrivateNodeHandle();

    initialize_params();

    // publish globalmap with "latched" publisher
    globalmap_pub = nh.advertise<sensor_msgs::PointCloud2>("/globalmap", 5, true);
    globalmap_pub_timer = nh.createWallTimer(ros::WallDuration(0.05), &GlobalmapServerNodelet::pub_once_cb, this, true, true); // one-shot timer: fires once, 0.05 s after startup
  }
           

globalmap_server_nodelet::initialize_params()

In initialize_params(), the map is read from a pcd file and downsampled; the final globalmap is the downsampled map.

void initialize_params() {
    // read globalmap from a pcd file
    std::string globalmap_pcd = private_nh.param<std::string>("globalmap_pcd", "");
    globalmap.reset(new pcl::PointCloud<PointT>());
    pcl::io::loadPCDFile(globalmap_pcd, *globalmap);
    globalmap->header.frame_id = "map";

	//TODO: in my runs execution never enters this block; my guess is that there is no .utm file (a geo-reference file, similar to latitude/longitude coordinates)
    std::ifstream utm_file(globalmap_pcd + ".utm");
    if (utm_file.is_open() && private_nh.param<bool>("convert_utm_to_local", true)) {
        std::cout << "now utf_file is open" << std::endl;
      double utm_easting;
      double utm_northing;
      double altitude;
      utm_file >> utm_easting >> utm_northing >> altitude;
      for(auto& pt : globalmap->points) {
        pt.getVector3fMap() -= Eigen::Vector3f(utm_easting, utm_northing, altitude);
      }
      ROS_INFO_STREAM("Global map offset by UTM reference coordinates (x = " 
                      << utm_easting << ", y = " << utm_northing << ") and altitude (z = " << altitude << ")");
    }
    //endTODO

    // downsample globalmap
    double downsample_resolution = private_nh.param<double>("downsample_resolution", 0.1);
    boost::shared_ptr<pcl::VoxelGrid<PointT>> voxelgrid(new pcl::VoxelGrid<PointT>());
    voxelgrid->setLeafSize(downsample_resolution, downsample_resolution, downsample_resolution);
    voxelgrid->setInputCloud(globalmap);

    pcl::PointCloud<PointT>::Ptr filtered(new pcl::PointCloud<PointT>());
    voxelgrid->filter(*filtered);

    globalmap = filtered;
  }
           

The map is then published by pub_once_cb. Since the timer created in onInit() is a one-shot (the fourth argument of createWallTimer) and the publisher is latched, the map is effectively published once, 0.05 s after startup, rather than every 0.05 s; late subscribers still receive it through the latched publisher.

void pub_once_cb(const ros::WallTimerEvent& event) {
    globalmap_pub.publish(globalmap);
  }
           

That is all for this nodelet.

hdl_localization_nodelet

HdlLocalizationNodelet also inherits from nodelet::Nodelet. This time, let's start with the initialization.

hdl_localization_nodelet::onInit()

void onInit() override {
    // the three node handles again
    nh = getNodeHandle();
    mt_nh = getMTNodeHandle();
    private_nh = getPrivateNodeHandle();
	// processing_time is a boost::circular_buffer<double>; look it up if you are curious, I had not used it before either
    processing_time.resize(16);
    // parameters again
    initialize_params();
    
    // the default is base_link, but the launch file overrides it (to velodyne); several parameters are overridden in the launch file, so they are not repeated here one by one, comparing the two is straightforward
    odom_child_frame_id = private_nh.param<std::string>("odom_child_frame_id", "base_link");
    // whether to use the IMU
    use_imu = private_nh.param<bool>("use_imu", true);
    // whether the IMU is mounted inverted
    invert_imu = private_nh.param<bool>("invert_imu", false); 
    if(use_imu) {// if the IMU is used, subscribe to it
      NODELET_INFO("enable imu-based prediction");
      imu_sub = mt_nh.subscribe("/gpsimu_driver/imu_data", 256, &HdlLocalizationNodelet::imu_callback, this);
    }
    // subscriptions for the point cloud, the global map, and the initial pose; initialpose_sub is only used for setting the pose from rviz
    points_sub = mt_nh.subscribe("/velodyne_points", 5, &HdlLocalizationNodelet::points_callback, this);
    globalmap_sub = nh.subscribe("/globalmap", 1, &HdlLocalizationNodelet::globalmap_callback, this);
    initialpose_sub = nh.subscribe("/initialpose", 8, &HdlLocalizationNodelet::initialpose_callback, this);
	// publishers for the odometry and the aligned point cloud
    pose_pub = nh.advertise<nav_msgs::Odometry>("/odom", 5, false);
    aligned_pub = nh.advertise<sensor_msgs::PointCloud2>("/aligned_points", 5, false);
  }
           

hdl_localization_nodelet::initialize_params()

Parameter initialization.

void initialize_params() {
    // initialize scan matching method
    double downsample_resolution = private_nh.param<double>("downsample_resolution", 0.1);
    std::string ndt_neighbor_search_method = private_nh.param<std::string>("ndt_neighbor_search_method", "DIRECT7");

    double ndt_resolution = private_nh.param<double>("ndt_resolution", 1.0);
    boost::shared_ptr<pcl::VoxelGrid<PointT>> voxelgrid(new pcl::VoxelGrid<PointT>());
    voxelgrid->setLeafSize(downsample_resolution, downsample_resolution, downsample_resolution);
    downsample_filter = voxelgrid;
	// create the NDT and GICP registration objects
    pclomp::NormalDistributionsTransform<PointT, PointT>::Ptr ndt(new pclomp::NormalDistributionsTransform<PointT, PointT>());
    pclomp::GeneralizedIterativeClosestPoint<PointT, PointT>::Ptr gicp(new pclomp::GeneralizedIterativeClosestPoint<PointT, PointT>());
	// NDT parameters and neighbor search method; the default is DIRECT7, and the author suggests trying DIRECT1 if the results are poor
    ndt->setTransformationEpsilon(0.01);
    ndt->setResolution(ndt_resolution);
    if(ndt_neighbor_search_method == "DIRECT1") {
      NODELET_INFO("search_method DIRECT1 is selected");
      ndt->setNeighborhoodSearchMethod(pclomp::DIRECT1);
      registration = ndt;
    } else if(ndt_neighbor_search_method == "DIRECT7") {
      NODELET_INFO("search_method DIRECT7 is selected");
      ndt->setNeighborhoodSearchMethod(pclomp::DIRECT7);
      registration = ndt;
    } else if(ndt_neighbor_search_method == "GICP_OMP"){
      NODELET_INFO("search_method GICP_OMP is selected");
      registration = gicp;
    } else {
      if(ndt_neighbor_search_method == "KDTREE") {
        NODELET_INFO("search_method KDTREE is selected");
      } else {
        NODELET_WARN("invalid search method was given");
        NODELET_WARN("default method is selected (KDTREE)");
      }
      ndt->setNeighborhoodSearchMethod(pclomp::KDTREE);
      registration = ndt;
    }
    
    // initialize pose estimator
    // used to set the starting pose
    if(private_nh.param<bool>("specify_init_pose", true)) {
      NODELET_INFO("initialize pose estimator with specified parameters!!");
      pose_estimator.reset(new hdl_localization::PoseEstimator(registration,
        ros::Time::now(),
        Eigen::Vector3f(private_nh.param<double>("init_pos_x", 0.0), private_nh.param<double>("init_pos_y", 0.0), private_nh.param<double>("init_pos_z", 0.0)),
        Eigen::Quaternionf(private_nh.param<double>("init_ori_w", 1.0), private_nh.param<double>("init_ori_x", 0.0), private_nh.param<double>("init_ori_y", 0.0), private_nh.param<double>("init_ori_z", 0.0)),
        private_nh.param<double>("cool_time_duration", 0.5)
      ));
    }
  }
           

----------- Next, the callback handlers -----------

The callbacks actually used are

HdlLocalizationNodelet::imu_callback

HdlLocalizationNodelet::points_callback

HdlLocalizationNodelet::globalmap_callback

which subscribe to the three topics "/gpsimu_driver/imu_data", "/velodyne_points", and "/globalmap" respectively.

Let's start with HdlLocalizationNodelet::globalmap_callback, which receives the global map and converts the ROS message into a PCL point cloud.

HdlLocalizationNodelet::globalmap_callback

void globalmap_callback(const sensor_msgs::PointCloud2ConstPtr& points_msg) {
    NODELET_INFO("globalmap received!");
    pcl::PointCloud<PointT>::Ptr cloud(new pcl::PointCloud<PointT>());
    pcl::fromROSMsg(*points_msg, *cloud);
    globalmap = cloud;
	// the received global map (published by globalmap_server_nodelet) becomes the target cloud for registration
    registration->setInputTarget(globalmap);
  }
           

HdlLocalizationNodelet::imu_callback

Receives IMU messages and pushes them into imu_data; they are consumed in HdlLocalizationNodelet::points_callback.

void imu_callback(const sensor_msgs::ImuConstPtr& imu_msg) {
    std::lock_guard<std::mutex> lock(imu_data_mutex);
    imu_data.push_back(imu_msg);
  }
           

HdlLocalizationNodelet::points_callback

Point cloud in, pose out.

void points_callback(const sensor_msgs::PointCloud2ConstPtr& points_msg) {
  	// lock the estimator
    std::lock_guard<std::mutex> estimator_lock(pose_estimator_mutex);
    if(!pose_estimator) {// wait until the pose estimator has been initialized
      NODELET_ERROR("waiting for initial pose input!!");
      return;
    }

    if(!globalmap) {// wait for the global map
      NODELET_ERROR("globalmap has not been received!!");
      return;
    }
	// convert the ROS message into a PCL point cloud
    const auto& stamp = points_msg->header.stamp;
    pcl::PointCloud<PointT>::Ptr pcl_cloud(new pcl::PointCloud<PointT>());
    pcl::fromROSMsg(*points_msg, *pcl_cloud);
	// sanity check
    if(pcl_cloud->empty()) {
      NODELET_ERROR("cloud is empty!!");
      return;
    }
	// transform the cloud into the odom_child_frame_id frame
	// note: this tf is broadcast by this node itself (in publish_odometry); TODO: this chicken-and-egg aspect still needs a closer look
    // transform pointcloud into odom_child_frame_id  
    pcl::PointCloud<PointT>::Ptr cloud(new pcl::PointCloud<PointT>());  
    if(!pcl_ros::transformPointCloud(odom_child_frame_id, *pcl_cloud, *cloud, this->tf_listener)) {
        NODELET_ERROR("point cloud cannot be transformed into target frame!!");
        return;
    } 
	// downsample the cloud using the helper defined later in this file
    auto filtered = downsample(cloud);

    // predict
    if(!use_imu) {// without an IMU, predict with zero acceleration and angular velocity
      pose_estimator->predict(stamp, Eigen::Vector3f::Zero(), Eigen::Vector3f::Zero());
    } else {
      std::lock_guard<std::mutex> lock(imu_data_mutex);
      auto imu_iter = imu_data.begin();
      // iterate over the buffered IMU data
      for(imu_iter; imu_iter != imu_data.end(); imu_iter++) {
        // stop once the IMU sample is newer than the scan; the IMU drives the prediction, the scan provides the observation
        if(stamp < (*imu_iter)->header.stamp) {
          break;
        }
        // read linear acceleration and angular velocity; flip the gyro sign if the IMU is inverted
        const auto& acc = (*imu_iter)->linear_acceleration;
        const auto& gyro = (*imu_iter)->angular_velocity;
        double gyro_sign = invert_imu ? -1.0 : 1.0;
        // predict the pose from the IMU data via pose_estimator->predict, which in turn calls the UKF (covered later)
        pose_estimator->predict((*imu_iter)->header.stamp, Eigen::Vector3f(acc.x, acc.y, acc.z), gyro_sign * Eigen::Vector3f(gyro.x, gyro.y, gyro.z));
      }
      // drop the IMU data that has been consumed
      imu_data.erase(imu_data.begin(), imu_iter);
    }

    // correct
    auto t1 = ros::WallTime::now();
    // correction step: register the scan via pose_estimator (PCL/ndt_omp), then feed the result to the UKF as the observation
    auto aligned = pose_estimator->correct(filtered);
    auto t2 = ros::WallTime::now();

    processing_time.push_back((t2 - t1).toSec());
    double avg_processing_time = std::accumulate(processing_time.begin(), processing_time.end(), 0.0) / processing_time.size();
    // NODELET_INFO_STREAM("processing_time: " << avg_processing_time * 1000.0 << "[msec]");
    
	// publish the aligned cloud only if someone subscribes
    if(aligned_pub.getNumSubscribers()) {
      aligned->header.frame_id = "map";
      aligned->header.stamp = cloud->header.stamp;
      aligned_pub.publish(aligned);
    }
	// publish the odometry: stamped with the current scan time, with the UKF-corrected pose; this also broadcasts the tf from map to odom_child_frame_id
    publish_odometry(points_msg->header.stamp, pose_estimator->matrix());
  }
           

---------- The main flow ends here; below are the remaining utility functions ----------

downsample(const pcl::PointCloud<PointT>::ConstPtr& cloud)

Downsamples the current scan.

pcl::PointCloud<PointT>::ConstPtr downsample(const pcl::PointCloud<PointT>::ConstPtr& cloud) const {
  	// downsample_filter was created in initialize_params() as a 0.1 x 0.1 x 0.1 voxel grid
    if(!downsample_filter) {
      return cloud;
    }
    pcl::PointCloud<PointT>::Ptr filtered(new pcl::PointCloud<PointT>());
    downsample_filter->setInputCloud(cloud);
    downsample_filter->filter(*filtered);
    filtered->header = cloud->header;
    return filtered;
  }
           

publish_odometry

Publishes the odometry tf and message. The inputs are the current scan timestamp and the pose matrix from pose_estimator. It also uses the function matrix2transform, which converts an Eigen matrix into a tf transform (extracting translation and rotation).

void publish_odometry(const ros::Time& stamp, const Eigen::Matrix4f& pose) {
    // broadcast the transform over tf
    geometry_msgs::TransformStamped odom_trans = matrix2transform(stamp, pose, "map", odom_child_frame_id);
    pose_broadcaster.sendTransform(odom_trans);

    // publish the transform
    nav_msgs::Odometry odom;
    odom.header.stamp = stamp;
    odom.header.frame_id = "map";

    odom.pose.pose.position.x = pose(0, 3);
    odom.pose.pose.position.y = pose(1, 3);
    odom.pose.pose.position.z = pose(2, 3);
    odom.pose.pose.orientation = odom_trans.transform.rotation;

    odom.child_frame_id = odom_child_frame_id;
    odom.twist.twist.linear.x = 0.0;
    odom.twist.twist.linear.y = 0.0;
    odom.twist.twist.angular.z = 0.0;

    pose_pub.publish(odom);
  }
           

matrix2transform

matrix2transform(const ros::Time& stamp, const Eigen::Matrix4f& pose, const std::string& frame_id, const std::string& child_frame_id)

Converts an Eigen matrix to geometry_msgs::TransformStamped.

geometry_msgs::TransformStamped matrix2transform(const ros::Time& stamp, const Eigen::Matrix4f& pose, const std::string& frame_id, const std::string& child_frame_id) {
    Eigen::Quaternionf quat(pose.block<3, 3>(0, 0));
    quat.normalize();
    geometry_msgs::Quaternion odom_quat;
    odom_quat.w = quat.w();
    odom_quat.x = quat.x();
    odom_quat.y = quat.y();
    odom_quat.z = quat.z();

    geometry_msgs::TransformStamped odom_trans;
    odom_trans.header.stamp = stamp;
    odom_trans.header.frame_id = frame_id;
    odom_trans.child_frame_id = child_frame_id;

    odom_trans.transform.translation.x = pose(0, 3);
    odom_trans.transform.translation.y = pose(1, 3);
    odom_trans.transform.translation.z = pose(2, 3);
    odom_trans.transform.rotation = odom_quat;

    return odom_trans;
  }
           

include (state estimator and UKF)

Those are the two cpp files in apps. As shown above, points_callback uses pose_estimator to estimate the pose, and that class in turn uses a UKF to solve for it. Both are implemented in the include folder.

hdl_localization/pose_estimator.hpp

This file defines the class PoseEstimator.

PoseEstimator (constructor)

Start with the constructor. After setting up the noise matrices and the initial state, the key step is handing everything over to the UKF.

PoseEstimator(pcl::Registration<PointT, PointT>::Ptr& registration, const ros::Time& stamp, const Eigen::Vector3f& pos, const Eigen::Quaternionf& quat, double cool_time_duration = 1.0)
    : init_stamp(stamp),
      registration(registration),
      cool_time_duration(cool_time_duration)
  {
  	// start from an identity matrix, then scale the blocks to obtain the process noise
    process_noise = Eigen::MatrixXf::Identity(16, 16);
    process_noise.middleRows(0, 3) *= 1.0;
    process_noise.middleRows(3, 3) *= 1.0;
    process_noise.middleRows(6, 4) *= 0.5;
    process_noise.middleRows(10, 3) *= 1e-6;
    process_noise.middleRows(13, 3) *= 1e-6;
	// measurement noise, also built from an identity matrix
    Eigen::MatrixXf measurement_noise = Eigen::MatrixXf::Identity(7, 7);
    measurement_noise.middleRows(0, 3) *= 0.01;
    measurement_noise.middleRows(3, 4) *= 0.001;
	// initial mean of the state vector
    Eigen::VectorXf mean(16);
    mean.middleRows(0, 3) = pos;
    mean.middleRows(3, 3).setZero();
    mean.middleRows(6, 4) = Eigen::Vector4f(quat.w(), quat.x(), quat.y(), quat.z());
    mean.middleRows(10, 3).setZero();
    mean.middleRows(13, 3).setZero();
	// initial covariance
    Eigen::MatrixXf cov = Eigen::MatrixXf::Identity(16, 16) * 0.01;
    
	// instantiate the PoseSystem model
    PoseSystem system;
    // initialize the UKF: 16 state dims, 6 control dims, 7 measurement dims
    ukf.reset(new kkl::alg::UnscentedKalmanFilterX<float, PoseSystem>(system, 16, 6, 7, process_noise, measurement_noise, mean, cov));
  }
           

pose_estimator->predict (prediction)

pose_estimator->predict, which is called from hdl_localization_nodelet, is also implemented in this file.

void predict(const ros::Time& stamp, const Eigen::Vector3f& acc, const Eigen::Vector3f& gyro) {
  	// if we are still within the cool-down period after initialization, or prev_stamp (time of the last update) is zero (never updated), or prev_stamp equals the current stamp: just update prev_stamp and return
    if((stamp - init_stamp).toSec() < cool_time_duration || prev_stamp.is_zero() || prev_stamp == stamp) {
      prev_stamp = stamp;
      return;
    }
	// normal path: compute dt and update prev_stamp
    double dt = (stamp - prev_stamp).toSec();
    prev_stamp = stamp;
	// set the process noise (scaled by dt) and the time step for the UKF
    ukf->setProcessNoiseCov(process_noise * dt);
    ukf->system.dt = dt;
	// build the control vector from the IMU data
    Eigen::VectorXf control(6);
    control.head<3>() = acc;
    control.tail<3>() = gyro;
	// run the UKF prediction step
    ukf->predict(control);
  }
           

pose_estimator->correct (correction)

pcl::PointCloud<PointT>::Ptr correct(const pcl::PointCloud<PointT>::ConstPtr& cloud) {
  	// build the initial guess from the current estimate (starting from an identity matrix)
    Eigen::Matrix4f init_guess = Eigen::Matrix4f::Identity();
    init_guess.block<3, 3>(0, 0) = quat().toRotationMatrix();
    init_guess.block<3, 1>(0, 3) = pos();
	// register the scan against the global map (NDT or GICP)
    pcl::PointCloud<PointT>::Ptr aligned(new pcl::PointCloud<PointT>());
    registration->setInputSource(cloud);
    registration->align(*aligned, init_guess);
	// read back the registration result
    Eigen::Matrix4f trans = registration->getFinalTransformation();
    Eigen::Vector3f p = trans.block<3, 1>(0, 3);
    Eigen::Quaternionf q(trans.block<3, 3>(0, 0));

    if(quat().coeffs().dot(q.coeffs()) < 0.0f) {
      q.coeffs() *= -1.0f;
    }
	// pack the result into the observation vector
    Eigen::VectorXf observation(7);
    observation.middleRows(0, 3) = p;
    observation.middleRows(3, 4) = Eigen::Vector4f(q.w(), q.x(), q.y(), q.z());
	// UKF correction step
    ukf->correct(observation);
    return aligned;
  }
           

---------- A few more simple functions follow without further explanation; they are easy to understand ----------

/* getters */
  Eigen::Vector3f pos() const {
    return Eigen::Vector3f(ukf->mean[0], ukf->mean[1], ukf->mean[2]);
  }

  Eigen::Vector3f vel() const {
    return Eigen::Vector3f(ukf->mean[3], ukf->mean[4], ukf->mean[5]);
  }

  Eigen::Quaternionf quat() const {
    return Eigen::Quaternionf(ukf->mean[6], ukf->mean[7], ukf->mean[8], ukf->mean[9]).normalized();
  }

  Eigen::Matrix4f matrix() const {
    Eigen::Matrix4f m = Eigen::Matrix4f::Identity();
    m.block<3, 3>(0, 0) = quat().toRotationMatrix();
    m.block<3, 1>(0, 3) = pos();
    return m;
  }
           

hdl_localization/pose_system.hpp

This file implements the class PoseSystem, i.e. the functions f (the system state-transition model) and h (the observation model) that are plugged into the UKF.

The state has 16 dimensions: position (3), velocity (3), quaternion (4), accelerometer bias (3), and gyroscope bias (3). In addition there are 6 control dimensions: acceleration (3) and angular velocity (3).

| State variable | Expression |
| --- | --- |
| Position | $pt = [px, \quad py, \quad pz]^{T}$ |
| Velocity | $vt = [vx, \quad vy, \quad vz]^{T}$ |
| Quaternion | $qt = [qw, \quad qx, \quad qy, \quad qz]^{T}$ |
| Accelerometer bias | $acc\_bias = [acc\_bias_x, \quad acc\_bias_y, \quad acc\_bias_z]^{T}$ |
| Gyroscope bias | $gyro\_bias = [gyro\_bias_x, \quad gyro\_bias_y, \quad gyro\_bias_z]^{T}$ |

| Control variable | Expression |
| --- | --- |
| Acceleration | $raw\_acc = [raw\_acc_x, \quad raw\_acc_y, \quad raw\_acc_z]^{T}$ |
| Angular velocity | $raw\_gyro = [raw\_gyro_x, \quad raw\_gyro_y, \quad raw\_gyro_z]^{T}$ |

f (system state-transition equation)

VectorXt f(const VectorXt& state, const VectorXt& control) const {
    VectorXt next_state(16);

    Vector3t pt = state.middleRows(0, 3);  // position
    Vector3t vt = state.middleRows(3, 3);  // velocity
    Quaterniont qt(state[6], state[7], state[8], state[9]);
    qt.normalize(); // normalize the quaternion

    Vector3t acc_bias = state.middleRows(10, 3);  // accelerometer bias
    Vector3t gyro_bias = state.middleRows(13, 3); // gyroscope bias

    Vector3t raw_acc = control.middleRows(0, 3);  // raw acceleration (control input)
    Vector3t raw_gyro = control.middleRows(3, 3);  // raw angular velocity (control input)
	
	// next state
    // position: integrate the velocity
    next_state.middleRows(0, 3) = pt + vt * dt;					//

    // velocity: the bias-corrected, rotated acceleration is computed but not applied; the author considered the accelerometer too noisy to contribute to accuracy
    Vector3t g(0.0f, 0.0f, -9.80665f);
    Vector3t acc_ = raw_acc - acc_bias;
    Vector3t acc = qt * acc_;
    next_state.middleRows(3, 3) = vt; // + (acc - g) * dt;		// acceleration didn't contribute to accuracy due to large noise

    // orientation: build the incremental rotation from the bias-corrected gyro as a quaternion, normalize it, and compose it with the current orientation
    Vector3t gyro = raw_gyro - gyro_bias;
    Quaterniont dq(1, gyro[0] * dt / 2, gyro[1] * dt / 2, gyro[2] * dt / 2);
    dq.normalize();
    Quaterniont qt_ = (qt * dq).normalized();
    next_state.middleRows(6, 4) << qt_.w(), qt_.x(), qt_.y(), qt_.z();
	// the biases are copied over unchanged (assumed constant)
    next_state.middleRows(10, 3) = state.middleRows(10, 3);		// constant bias on acceleration
    next_state.middleRows(13, 3) = state.middleRows(13, 3);		// constant bias on angular velocity

    return next_state;
  }
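Written out as equations, this is simply a restatement of the code above, with $\Delta t$ the time step, $\omega$ the bias-corrected angular velocity, and $\otimes$ quaternion multiplication:

$$p_{t+1} = p_t + v_t\,\Delta t, \qquad v_{t+1} = v_t$$

$$\omega = raw\_gyro - gyro\_bias, \qquad \delta q \propto [\,1,\ \omega_x \Delta t/2,\ \omega_y \Delta t/2,\ \omega_z \Delta t/2\,], \qquad q_{t+1} = q_t \otimes \delta q$$

$$acc\_bias_{t+1} = acc\_bias_t, \qquad gyro\_bias_{t+1} = gyro\_bias_t$$

(The acceleration term for the velocity update is commented out in the code, hence $v_{t+1} = v_t$.)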
           

h (observation equation)

The observation equation simply returns (part of) the input state as the observation. During the correction step, the inputs are the sigma points of the extended state space that includes the error variances, i.e. ext_sigma_points.

// observation equation
  VectorXt h(const VectorXt& state) const {
    VectorXt observation(7);
    observation.middleRows(0, 3) = state.middleRows(0, 3);
    observation.middleRows(3, 4) = state.middleRows(6, 4).normalized();

    return observation;
  }
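In other words, the observation model just extracts the position and the normalized quaternion from the state:

$$h(x) = [\,p_x,\ p_y,\ p_z,\ q_w,\ q_x,\ q_y,\ q_z\,]^{T}$$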
           

kkl/unscented_kalman_filter.hpp

The main functions in this file are five: the constructor, predict, correct, computeSigmaPoints, and ensurePositiveFinite, which is meant to keep the covariance matrix positive definite (I am not entirely sure about this one, and it is effectively unused).

UnscentedKalmanFilterX[^2]

First, the constructor. It takes the system model to be estimated, the state dimension, the input dimension, the measurement dimension, the two noise matrices, and the initial mean and covariance, and performs the initialization.

UnscentedKalmanFilterX(const System& system, int state_dim, int input_dim, int measurement_dim, const MatrixXt& process_noise, const MatrixXt& measurement_noise, const VectorXt& mean, const MatrixXt& cov)
    : state_dim(state_dim),
    input_dim(input_dim),
    measurement_dim(measurement_dim),
    N(state_dim),
    M(input_dim),
    K(measurement_dim),
    S(2 * state_dim + 1),
    mean(mean),
    cov(cov),
    system(system),
    process_noise(process_noise),
    measurement_noise(measurement_noise),
    lambda(1),
    normal_dist(0.0, 1.0)
  {
  	// allocate the weight and sigma-point buffers
    weights.resize(S, 1);
    sigma_points.resize(S, N);
    ext_weights.resize(2 * (N + K) + 1, 1);
    ext_sigma_points.resize(2 * (N + K) + 1, N + K);
    expected_measurements.resize(2 * (N + K) + 1, K);

    // initialize weights for unscented filter
    weights[0] = lambda / (N + lambda);
    for (int i = 1; i < 2 * N + 1; i++) {
      weights[i] = 1 / (2 * (N + lambda));
    }

    // weights for extended state space which includes error variances
    ext_weights[0] = lambda / (N + K + lambda);
    for (int i = 1; i < 2 * (N + K) + 1; i++) {
      ext_weights[i] = 1 / (2 * (N + K + lambda));
    }
  }
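The weights follow the standard unscented-transform form with scaling parameter $\lambda = 1$; $N$ is the state dimension (for the extended state space used in correct, $N$ is replaced by $N+K$):

$$w_0 = \frac{\lambda}{N+\lambda}, \qquad w_i = \frac{1}{2(N+\lambda)}, \quad i = 1,\dots,2N$$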
           

ukf->predict

Called via pose_estimator->predict.

void predict(const VectorXt& control) {

    // calculate the UKF sigma points
    ensurePositiveFinite(cov);
    computeSigmaPoints(mean, cov, sigma_points);
    // propagate each sigma point through the state-transition function f defined in PoseSystem
    for (int i = 0; i < S; i++) {
      sigma_points.row(i) = system.f(sigma_points.row(i), control);
    }
    
	/*---- at this point sigma_points holds the system states propagated to the current time step ----*/
	
	// process noise, i.e. the matrix R of the UKF
    const auto& R = process_noise;
    
    // unscented transform: accumulate the predicted mean and covariance, starting from zero
    VectorXt mean_pred(mean.size());
    MatrixXt cov_pred(cov.rows(), cov.cols());
    mean_pred.setZero();
    cov_pred.setZero();
    // weighted mean of the sigma points = predicted state
    for (int i = 0; i < S; i++) {
      mean_pred += weights[i] * sigma_points.row(i);
    }
    // predicted covariance from the spread of the sigma points
    for (int i = 0; i < S; i++) {
      VectorXt diff = sigma_points.row(i).transpose() - mean_pred;
      cov_pred += weights[i] * diff * diff.transpose();
    }
    // add the process noise R (its base value is set in pose_estimator)
    cov_pred += R;
	// commit the predicted mean and covariance
    mean = mean_pred;
    cov = cov_pred;
  }
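Condensed into equations, the prediction step is (sigma points $\chi_i$, control $u$, process noise $R$):

$$\chi_i \leftarrow f(\chi_i, u), \qquad \bar\mu = \sum_{i=0}^{2N} w_i\,\chi_i, \qquad \bar\Sigma = \sum_{i=0}^{2N} w_i\,(\chi_i - \bar\mu)(\chi_i - \bar\mu)^{T} + R$$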
           

ukf->correct

Called via pose_estimator->correct.

void correct(const VectorXt& measurement) {
  	// N: state dimension, K: measurement dimension
    // create extended state space which includes error variances
    VectorXt ext_mean_pred = VectorXt::Zero(N + K, 1);
    MatrixXt ext_cov_pred = MatrixXt::Zero(N + K, N + K);
    // first N entries: the predicted state mean
    ext_mean_pred.topLeftCorner(N, 1) = VectorXt(mean);
    // top-left N x N block: the predicted state covariance
    ext_cov_pred.topLeftCorner(N, N) = MatrixXt(cov);
    // bottom-right K x K block: the measurement noise passed in from pose_estimator (0.01 for position, 0.001 for the quaternion)
    ext_cov_pred.bottomRightCorner(K, K) = measurement_noise;
    
  /*---- after this, the extended mean holds mean in its first N entries, and the extended covariance holds cov in its top-left N x N block and the measurement noise in its bottom-right K x K block ----*/
  
	// (would) enforce positive definiteness, then compute the sigma points
    ensurePositiveFinite(ext_cov_pred);
    // sigma points of the extended state space
    computeSigmaPoints(ext_mean_pred, ext_cov_pred, ext_sigma_points);

    // unscented transform
    // each expected measurement is h() applied to the state part (first N entries) of a sigma point, plus the noise part (last K entries) of that sigma point
    // ext_sigma_points is a (2 * (N + K) + 1, N + K) matrix, expected_measurements a (2 * (N + K) + 1, K) matrix
    expected_measurements.setZero();
    for (int i = 0; i < ext_sigma_points.rows(); i++) {
      expected_measurements.row(i) = system.h(ext_sigma_points.row(i).transpose().topLeftCorner(N, 1));
      expected_measurements.row(i) += VectorXt(ext_sigma_points.row(i).transpose().bottomRightCorner(K, 1));
    }
    
	// weighted mean and covariance of the expected measurements, analogous to predict
    VectorXt expected_measurement_mean = VectorXt::Zero(K);
    for (int i = 0; i < ext_sigma_points.rows(); i++) {
      expected_measurement_mean += ext_weights[i] * expected_measurements.row(i);
    }
    MatrixXt expected_measurement_cov = MatrixXt::Zero(K, K);
    for (int i = 0; i < ext_sigma_points.rows(); i++) {
      VectorXt diff = expected_measurements.row(i).transpose() - expected_measurement_mean;
      expected_measurement_cov += ext_weights[i] * diff * diff.transpose();
    }

    // calculated transformed covariance
    // cross-covariance between the extended state and the measurement, used to compute the Kalman gain
    MatrixXt sigma = MatrixXt::Zero(N + K, K);
    for (int i = 0; i < ext_sigma_points.rows(); i++) {
      auto diffA = (ext_sigma_points.row(i).transpose() - ext_mean_pred);
      auto diffB = (expected_measurements.row(i).transpose() - expected_measurement_mean);
      sigma += ext_weights[i] * (diffA * diffB.transpose());
    }

    kalman_gain = sigma * expected_measurement_cov.inverse();
    const auto& K = kalman_gain;

	// final update: correct the extended mean and covariance, then keep only the state part
    VectorXt ext_mean = ext_mean_pred + K * (measurement - expected_measurement_mean);
    MatrixXt ext_cov = ext_cov_pred - K * expected_measurement_cov * K.transpose();

    mean = ext_mean.topLeftCorner(N, 1);
    cov = ext_cov.topLeftCorner(N, N);
  }
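This is the standard UKF measurement update, written compactly below; note that in the code the measurement noise enters through the extended sigma points rather than as an explicit additive term, so this is a condensed restatement rather than a line-by-line match. Here $z$ is the scan-matching observation, $\hat z_i$ is $h$ of the state part of a sigma point plus its noise part, and $K$ below denotes the Kalman gain (the code also aliases kalman_gain as K):

$$\hat z = \sum_i w_i\,\hat z_i, \qquad S = \sum_i w_i\,(\hat z_i - \hat z)(\hat z_i - \hat z)^{T}, \qquad \Sigma_{xz} = \sum_i w_i\,(\chi_i - \bar\mu)(\hat z_i - \hat z)^{T}$$

$$K = \Sigma_{xz}\,S^{-1}, \qquad \mu = \bar\mu + K\,(z - \hat z), \qquad \Sigma = \bar\Sigma - K\,S\,K^{T}$$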
           

computeSigmaPoints

Computes the sigma points from mean and cov. The idea is to take the Cholesky decomposition of the (scaled) covariance and offset mean by the columns of the lower-triangular factor L, which yields the set of sigma_points.

void computeSigmaPoints(const VectorXt& mean, const MatrixXt& cov, MatrixXt& sigma_points) {
    const int n = mean.size();
    assert(cov.rows() == n && cov.cols() == n);

	// Cholesky decomposition A = L L^* = U^* U; L is the lower-triangular factor
    Eigen::LLT<MatrixXt> llt;
    llt.compute((n + lambda) * cov);
    MatrixXt l = llt.matrixL();
    
    // mean is a column vector; Eigen handles the transpose when it is assigned to a row
    sigma_points.row(0) = mean;
    for (int i = 0; i < n; i++) {
      sigma_points.row(1 + i * 2) = mean + l.col(i); // odd rows 1, 3, 5, ...
      sigma_points.row(1 + i * 2 + 1) = mean - l.col(i); // even rows 2, 4, 6, ...
    }
  }
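So, with the scaled Cholesky factor $L$ satisfying $(n+\lambda)\,\Sigma = L L^{T}$, the sigma points are

$$\chi_0 = \mu, \qquad \chi_{2i+1} = \mu + L_{(:,i)}, \qquad \chi_{2i+2} = \mu - L_{(:,i)}, \quad i = 0,\dots,n-1$$

where $L_{(:,i)}$ denotes the i-th column of $L$.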
           

ensurePositiveFinite (effectively unused)

Meant to keep the covariance matrix positive definite, but it returns immediately, so it is effectively unused.

void ensurePositiveFinite(MatrixXt& cov) {
    return;
    // everything below is dead code because of the early return above
    const double eps = 1e-9;

    Eigen::EigenSolver<MatrixXt> solver(cov);
    MatrixXt D = solver.pseudoEigenvalueMatrix(); // eigenvalues
    MatrixXt V = solver.pseudoEigenvectors(); // eigenvectors
    for (int i = 0; i < D.rows(); i++) {
      if (D(i, i) < eps) {
        D(i, i) = eps;
      }
    }
    cov = V * D * V.inverse();
  }
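For reference, if it were active, the dead code below the return would clamp small eigenvalues of the covariance, i.e. (with $V$, $D$ the pseudo eigenvector and eigenvalue matrices and $\epsilon = 10^{-9}$):

$$\Sigma \leftarrow V\,\max(D, \epsilon I)\,V^{-1}$$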
           

Summary

To be honest, by this point I have already lost track of the overall framework, so I will go through it once more and add a recap after that.

  1. You may run into

    No rule to make target '/usr/lib/x86_64-linux-gnu/libproj.so'

    Running the following command fixes it:

    sudo ln -s /usr/lib/x86_64-linux-gnu/libproj.so.9 /usr/lib/x86_64-linux-gnu/libproj.so

    ↩︎
  2. For the UKF I followed this blog post: https://blog.csdn.net/l2014010671/article/details/93305871 ↩︎
