
Co-Fusion: Real-time Segmentation, Tracking and Fusion of Multiple Objects

This paper is really just several other papers stitched together.

The fusion part comes from

ElasticFusion: Dense SLAM without a pose graph

That paper implements both local and global loop closure.

Co-Fusion then runs two kinds of segmentation. The first is MOTION SEGMENTATION, which operates on superpixels; the superpixels are computed with

gSLICr: SLIC superpixels at over 250Hz
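gSLICr is a CUDA implementation of SLIC. As a rough illustration of what SLIC itself computes, here is a toy CPU sketch for a grayscale image; the function name and the simplifications (single channel, global nearest-centre search) are mine, not gSLICr's API:

```python
import numpy as np

def mini_slic(img, n_segments=4, compactness=10.0, n_iters=5):
    """Toy SLIC on a grayscale image: k-means over (intensity, x, y),
    with the spatial terms scaled by compactness/S as in the SLIC paper.
    gSLICr does the same clustering on the GPU and restricts each pixel's
    search to nearby cluster centres for speed."""
    h, w = img.shape
    S = np.sqrt(h * w / n_segments)            # expected superpixel size
    ys, xs = np.mgrid[0:h, 0:w]
    scale = compactness / S
    feats = np.stack([img.ravel().astype(float),
                      xs.ravel() * scale,
                      ys.ravel() * scale], axis=1)
    # initialise cluster centres on a regular grid
    side = int(round(np.sqrt(n_segments)))
    cy = np.linspace(0, h - 1, side + 2)[1:-1]
    cx = np.linspace(0, w - 1, side + 2)[1:-1]
    centres = np.array([[float(img[int(y), int(x)]), x * scale, y * scale]
                        for y in cy for x in cx])
    for _ in range(n_iters):
        d2 = ((feats[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(axis=1)
        for k in range(len(centres)):          # recompute centres
            if (labels == k).any():
                centres[k] = feats[labels == k].mean(axis=0)
    return labels.reshape(h, w)
```

The `compactness` knob trades colour coherence against spatial regularity, exactly the trade-off the real implementation exposes.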

Once the superpixels are computed, the segmentation itself follows

Efficient Inference in Fully Connected CRFs with Gaussian Edge Potentials

That method needs a unary potential and a pairwise potential to be supplied; the concrete choices are given in Section VI of the Co-Fusion paper.
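Concretely, that CRF minimises an energy of unary terms plus Gaussian pairwise terms via mean-field inference. Below is a brute-force O(N²) sketch of the mean-field update with a single Gaussian kernel and Potts label compatibility; this is a simplification of the paper's weighted mixture of kernels, and not Co-Fusion's actual potentials:

```python
import numpy as np

def mean_field_crf(unary, feats, w=2.0, theta=1.0, n_iters=5):
    """Naive O(N^2) mean-field inference for a fully connected CRF with
    one Gaussian pairwise kernel and a Potts label-compatibility term.
    unary: (N, L) negative log unary potentials; feats: (N, D) features."""
    d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
    K = w * np.exp(-d2 / (2 * theta ** 2))     # Gaussian edge potentials
    np.fill_diagonal(K, 0.0)                   # no self-interaction
    Q = np.exp(-unary)
    Q /= Q.sum(1, keepdims=True)               # initialise from the unaries
    for _ in range(n_iters):
        msg = K @ Q                            # messages from all other sites
        # Potts penalty: mass that neighbours assign to *other* labels
        penalty = msg.sum(1, keepdims=True) - msg
        Q = np.exp(-unary - penalty)
        Q /= Q.sum(1, keepdims=True)
    return Q
```

The paper's contribution is making this message passing fast with high-dimensional filtering; the sketch above is only the brute-force version of the same update.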

The second kind is OBJECT INSTANCE SEGMENTATION, which is done with deep learning and draws mainly on two papers. One is

Learning to Segment Object Candidates

(DeepMask), and the other is

Learning to Refine Object Segments

(SharpMask), an improved version of the first.
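The improvement in Learning to Refine Object Segments (SharpMask) is a top-down refinement pathway: the coarse mask from the bottom-up DeepMask-style pass is repeatedly fused with skip features from earlier layers and upsampled back towards input resolution. A toy sketch of that structure, where a fixed blend stands in for the learned convolutional refinement module:

```python
import numpy as np

def upsample2x(m):
    """Nearest-neighbour 2x upsampling."""
    return m.repeat(2, axis=0).repeat(2, axis=1)

def refine(mask, skip_feat, alpha=0.5):
    """One SharpMask-style refinement step: fuse the coarse mask with a
    same-resolution skip feature map, then upsample. The real module
    learns this fusion; the fixed blend here is a stand-in."""
    return upsample2x(alpha * mask + (1 - alpha) * skip_feat)

# coarse 4x4 mask from the bottom-up pass, plus per-level skip features
mask = np.ones((4, 4))
skips = [np.full((4, 4), 0.5), np.full((8, 8), 0.5), np.full((16, 16), 0.5)]
for s in skips:                     # top-down refinement pathway
    mask = refine(mask, s)          # resolution doubles at every step
```

Each pass through `refine` doubles the mask's resolution, which is how SharpMask recovers the object boundaries that DeepMask's coarse output loses.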

The structure of the code is shown in a diagram:

[Figure: Co-Fusion code-structure diagram]
