Semantic Segmentation Study Notes 3: Running a Semantic Segmentation Program
- 1. Installing dependencies on Linux
  - 1) Create the open-mmlab environment
  - 2) Install PyTorch
  - 3) Install MMCV
  - 4) Install MMSegmentation
- 2. Installing dependencies on Windows
- 3. Testing whether the installation succeeded
We will set up and run the classic open-mmlab MMSegmentation framework.
Source code: https://github.com/open-mmlab/mmsegmentation
Installation guide: https://github.com/open-mmlab/mmsegmentation/blob/master/README_zh-CN.md
1. Installing dependencies on Linux

First, check the CUDA version on the device:

cat /usr/local/cuda/version.txt

or

deepstream-app --version-all

On my Jetson Xavier this reports:

deepstream-app version 5.0.0
CUDA Runtime Version: 10.2
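If /usr/local/cuda/version.txt does not exist (newer CUDA releases no longer ship that file), the compiler reports the same information, assuming the CUDA toolkit's bin directory is on the PATH:

nvcc --version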
1) Create the open-mmlab environment

I skip this step here.

2) Install PyTorch

See Semantic Segmentation Study Notes 2: Installing PyTorch on Jetson Xavier.
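Before installing MMCV, it is worth confirming that this PyTorch build actually sees the GPU. A minimal sanity-check sketch (the values in the comments are only what I would expect on this CUDA 10.2 / PyTorch 1.6.0 setup):

import torch

print(torch.__version__)           # expected: 1.6.0 on this setup
print(torch.version.cuda)          # should report 10.2, matching the system CUDA
print(torch.cuda.is_available())   # True if the Jetson GPU is usable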
3) Install MMCV (the cu102/torch1.6.0 index below matches the CUDA 10.2 runtime checked above):

pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu102/torch1.6.0/index.html

4) Install MMSegmentation:

pip install mmsegmentation
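To check that mmcv-full was installed with working CUDA ops and that mmsegmentation imports cleanly, a short sketch like the following can be used (the printed versions depend on what pip resolved):

import mmcv
import mmseg
from mmcv.ops import get_compiling_cuda_version, get_compiler_version

print(mmcv.__version__)               # mmcv-full version
print(mmseg.__version__)              # mmsegmentation version
print(get_compiling_cuda_version())   # should match the CUDA runtime, e.g. 10.2
print(get_compiler_version())         # compiler used to build the CUDA ops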
3. Testing whether the installation succeeded

Download the required model:

https://github.com/open-mmlab/mmsegmentation/tree/master/configs

Download the config and checkpoint files that the source code below refers to.
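The config file already comes with the cloned repository; the checkpoint has to be fetched from the link listed in the PSPNet table under configs/pspnet. Roughly like this (the URL below only follows the usual download.openmmlab.com naming pattern, so verify it against the link actually shown in the table):

mkdir -p checkpoints
# checkpoint link as listed in the PSPNet model-zoo table (verify before use)
wget -P checkpoints/ \
    https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r50-d8_512x1024_40k_cityscapes/pspnet_r50-d8_512x1024_40k_cityscapes_20200605_003338-2966598c.pth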
from mmseg.apis import inference_segmentor, init_segmentor
import mmcv
config_file = 'configs/pspnet/pspnet_r50-d8_512x1024_40k_cityscapes.py'
checkpoint_file = 'checkpoints/pspnet_r50-d8_512x1024_40k_cityscapes_20200605_003338-2966598c.pth'
# build the model from a config file and a checkpoint file
model = init_segmentor(config_file, checkpoint_file, device='cuda:0')
# test a single image and show the results
img = 'test.jpg' # or img = mmcv.imread(img), which will only load it once
result = inference_segmentor(model, img)
# visualize the results in a new window
model.show_result(img, result, show=True)
# or save the visualization results to image files
# you can change the opacity of the painted segmentation map in (0, 1].
model.show_result(img, result, out_file='result.jpg', opacity=0.5)
# test a video and show the results
video = mmcv.VideoReader('video.mp4')
for frame in video:
    result = inference_segmentor(model, frame)
    model.show_result(frame, result, wait_time=1)
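For the single-image call above, inference_segmentor returns a list with one H x W array of per-pixel class indices, so the prediction can also be inspected directly. A small sketch (assuming the PSPNet/Cityscapes model loaded above; model.CLASSES is filled in by init_segmentor from the checkpoint metadata):

import numpy as np

seg_map = result[0]  # per-pixel class indices for the test image
labels, counts = np.unique(seg_map, return_counts=True)
for label, count in zip(labels, counts):
    print(model.CLASSES[label], int(count))  # pixel count for each predicted class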
Alternatively, run the demo script shipped with the repository from the mmsegmentation root directory:

python demo/image_demo.py demo/demo.jpg configs/pspnet/pspnet_r50-d8_512x1024_40k_cityscapes.py \
    checkpoints/pspnet_r50-d8_512x1024_40k_cityscapes_20200605_003338-2966598c.pth --device cuda:0 --palette cityscapes