DL Series / DeepLabv3: Introduction to the DeepLab v3 and DeepLab v3+ Algorithms (Paper Overview), Architecture Explained, and Application Examples: An Illustrated Guide
Contents
Introduction to the DeepLab v3 and DeepLab v3+ Algorithms (Paper Overview)
DeepLab v3
DeepLab v3+
0. Experimental Results
DeepLab v3 Architecture Explained
DeepLab v3 Application Examples
Related Articles
DL Series / DeepLabv1: Introduction to the DeepLabv1 Algorithm (Paper Overview), Architecture Explained, and Application Examples: An Illustrated Guide
DL Series / DeepLabv1: DeepLabv1 Architecture Explained
DL Series / DeepLabv2: Introduction to the DeepLab v2 Algorithm (Paper Overview), Architecture Explained, and Application Examples: An Illustrated Guide
DL Series / DeepLabv2: DeepLab v2 Architecture Explained
DL Series / DeepLabv3: DeepLab v3 and DeepLab v3+ Architectures Explained
Abstract
In this work, we revisit atrous convolution, a powerful tool to explicitly adjust filter’s field-of-view as well as control the resolution of feature responses computed by Deep Convolutional Neural Networks, in the application of semantic image segmentation. To handle the problem of segmenting objects at multiple scales, we design modules which employ atrous convolution in cascade or in parallel to capture multi-scale context by adopting multiple atrous rates. Furthermore, we propose to augment our previously proposed Atrous Spatial Pyramid Pooling module, which probes convolutional features at multiple scales, with image-level features encoding global context and further boost performance. We also elaborate on implementation details and share our experience on training our system. The proposed ‘DeepLabv3’ system significantly improves over our previous DeepLab versions without DenseCRF post-processing and attains comparable performance with other state-of-art models on the PASCAL VOC 2012 semantic image segmentation benchmark.
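The abstract's central tool, atrous convolution, spaces the kernel taps `rate` samples apart, enlarging the filter's field-of-view without adding parameters or reducing resolution. As a minimal illustration (a naive 1-D NumPy sketch, not the paper's TensorFlow implementation):

```python
import numpy as np

def atrous_conv1d(signal, kernel, rate):
    """Naive 1-D atrous (dilated) convolution: kernel taps are spaced
    `rate` samples apart, widening the field-of-view with the same
    number of weights. 'valid' padding, stride 1."""
    k = len(kernel)
    span = (k - 1) * rate + 1          # effective span of the dilated kernel
    out = np.zeros(len(signal) - span + 1)
    for i in range(len(out)):
        for j in range(k):
            out[i] += signal[i + j * rate] * kernel[j]
    return out

x = np.arange(8, dtype=float)          # [0, 1, ..., 7]
w = np.array([1.0, 1.0, 1.0])          # the same 3 weights in both cases

print(atrous_conv1d(x, w, rate=1))     # ordinary convolution, span 3
print(atrous_conv1d(x, w, rate=2))     # rate 2: span 5, no extra weights
```

With `rate=1` this reduces to a standard convolution; with `rate=2` each output sums taps two samples apart, so the same three weights cover a span of five samples.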
Conclusion
Our proposed model “DeepLabv3” employs atrous convolution with upsampled filters to extract dense feature maps and to capture long range context. Specifically, to encode multi-scale information, our proposed cascaded module gradually doubles the atrous rates while our proposed atrous spatial pyramid pooling module augmented with image-level features probes the features with filters at multiple sampling rates and effective field-of-views. Our experimental results show that the proposed model significantly improves over previous DeepLab versions and achieves comparable performance with other state-of-art models on the PASCAL VOC 2012 semantic image segmentation benchmark.
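The "effective field-of-views" mentioned above follow directly from the atrous rate: a k x k filter with rate r spans k + (k - 1)(r - 1) pixels. A short worked example (the rates 6, 12, 18 are the ASPP branch rates the paper uses at output stride 16):

```python
def effective_kernel_size(k, rate):
    """Effective span of a k x k atrous filter: the (rate - 1) zeros
    inserted between taps widen it to k + (k - 1) * (rate - 1)."""
    return k + (k - 1) * (rate - 1)

# ASPP-style parallel branches on a 3x3 kernel, rates as in the paper
# for output stride 16:
for r in (1, 6, 12, 18):
    print(r, effective_kernel_size(3, r))   # 3, 13, 25, 37
```

So the four parallel 3x3 branches see contexts of roughly 3, 13, 25, and 37 pixels, which is how the module captures multi-scale context with identical parameter counts per branch.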
論文
Liang-Chieh Chen, George Papandreou, Florian Schroff, Hartwig Adam.
Rethinking Atrous Convolution for Semantic Image Segmentation. arXiv preprint, 2017.
https://arxiv.org/abs/1706.05587
Spatial pyramid pooling module or encode-decoder structure are used in deep neural networks for semantic segmentation task. The former networks are able to encode multi-scale contextual information by probing the incoming features with filters or pooling operations at multiple rates and multiple effective fields-of-view, while the latter networks can capture sharper object boundaries by gradually recovering the spatial information. In this work, we propose to combine the advantages from both methods. Specifically, our proposed model, DeepLabv3+, extends DeepLabv3 by adding a simple yet effective decoder module to refine the segmentation results especially along object boundaries. We further explore the Xception model and apply the depthwise separable convolution to both Atrous Spatial Pyramid Pooling and decoder modules, resulting in a faster and stronger encoder-decoder network. We demonstrate the effectiveness of the proposed model on the PASCAL VOC 2012 semantic image segmentation dataset and achieve a performance of 89% on the test set without any post-processing. Our paper is accompanied with a publicly available reference implementation of the proposed models in Tensorflow.
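A key reason the Xception-based encoder-decoder above is "faster and stronger" is the depthwise separable convolution, which factors a standard convolution into a per-channel spatial filter plus a 1x1 pointwise mix. A parameter-count sketch (illustrative channel sizes, not taken from the paper):

```python
def conv_params(c_in, c_out, k):
    """Parameter count of a standard k x k convolution (no bias)."""
    return c_in * c_out * k * k

def separable_params(c_in, c_out, k):
    """Depthwise separable convolution: a depthwise k x k filter per
    input channel, followed by a 1x1 pointwise convolution."""
    return c_in * k * k + c_in * c_out

standard = conv_params(256, 256, 3)        # 589,824 parameters
separable = separable_params(256, 256, 3)  # 2,304 + 65,536 = 67,840
print(standard, separable, round(standard / separable, 1))
```

For this 256-to-256-channel 3x3 layer the separable form uses roughly 8.7x fewer parameters, and the same factorization applies inside both the ASPP and decoder modules.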
Our proposed model “DeepLabv3+” employs the encoder-decoder structure where DeepLabv3 is used to encode the rich contextual information and a simple yet effective decoder module is adopted to recover the object boundaries. One could also apply the atrous convolution to extract the encoder features at an arbitrary resolution, depending on the available computation resources. We also explore the Xception model and atrous separable convolution to make the proposed model faster and stronger. Finally, our experimental results show that the proposed model sets a new state-of-the-art performance on the PASCAL VOC 2012 semantic image segmentation benchmark.
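The "arbitrary resolution" of the encoder features is controlled by the output stride, and the DeepLabv3+ decoder recovers boundaries through two 4x upsampling stages. A size-only walk through the pipeline (hypothetical 512x512 input; only the spatial shapes are sketched here, following the paper's output stride 16 setting):

```python
def spatial_size(input_size, output_stride):
    """Spatial resolution of features for a given output stride."""
    return input_size // output_stride

inp = 512                          # hypothetical input resolution
enc = spatial_size(inp, 16)        # DeepLabv3 encoder output, stride 16
low = spatial_size(inp, 4)         # low-level encoder features, stride 4
dec = enc * 4                      # decoder upsamples x4, fuses with low-level
out = dec * 4                      # final x4 upsample back to input size
print(enc, low, dec, out)          # 32 128 128 512
```

A smaller output stride (e.g. 8) would give sharper encoder features at the cost of more computation, which is the resolution/compute trade-off the conclusion alludes to.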
Liang-Chieh Chen, Yukun Zhu, George Papandreou, Florian Schroff, Hartwig Adam.
Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation. Feb. 2018.
https://arxiv.org/abs/1802.02611v1
1. Performance on the PASCAL VOC 2012 test set
[Figure: PASCAL VOC 2012 test-set results, DeepLab v3 (left) vs. DeepLab v3+ (right); image not available]
2. DeepLabv3+ on PASCAL VOC 2012
Visualization results on the PASCAL VOC 2012 val set
To be updated…