
PyTorch Mask R-CNN (seg): Foreground Segmentation

I have recently been studying image segmentation algorithms and wanted to see how Mask R-CNN performs.

For a detailed theoretical treatment of Mask R-CNN, see the original paper at https://arxiv.org/abs/1703.06870; there are also plenty of write-ups online. This post mainly follows the official PyTorch training tutorial and records how to train a Mask R-CNN model on your own dataset, in the hope that it is helpful to interested readers.

The official PyTorch tutorial (Object Detection Finetuning Tutorial):

https://github.com/pytorch/tutorials/blob/master/_static/torchvision_finetuning_instance_segmentation.ipynb

or:

https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html

Note that this requires torchvision version 0.3 or later.
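A quick way to confirm the installed versions (a minimal check, not part of the original tutorial):

import torch
import torchvision

print(torch.__version__)
print(torchvision.__version__)  # should be >= 0.3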

Contents

Preparation

Dataset

Defining the Model

Training the Model

1. Preparation

2. Data augmentation / transforms

3. Training

Testing the Model

Preparation

Install the COCO API; we mainly use its IoU computation routines to evaluate the model's performance.

git clone https://github.com/cocodataset/cocoapi.git
cd cocoapi/PythonAPI
python setup.py build_ext install
           

The API installation is also covered in another post:

https://blog.csdn.net/u013685264/article/details/100331064
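As a quick sanity check that the install worked, the snippet below (a sketch, not from the original post) uses pycocotools.mask.iou to compute the IoU of two boxes given in [x, y, width, height] format:

import numpy as np
from pycocotools import mask as mask_util

boxes_dt = np.array([[0., 0., 10., 10.]])     # detection box, xywh
boxes_gt = np.array([[5., 5., 10., 10.]])     # ground-truth box, xywh
iou = mask_util.iou(boxes_dt, boxes_gt, [0])  # [0] marks the gt box as not-crowd
print(iou)  # about 0.143: intersection 25, union 175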

Dataset

This tutorial uses the Penn-Fudan pedestrian detection and segmentation dataset to train a Mask R-CNN instance segmentation model. The Penn-Fudan dataset contains 170 images with 345 pedestrian instances. The scenes are mostly campus and urban streets, and every image contains at least one pedestrian. A detailed description and the download link are here:

https://www.cis.upenn.edu/~jshi/ped_html/

# download the Penn-Fudan dataset
wget https://www.cis.upenn.edu/~jshi/ped_html/PennFudanPed.zip
# extract it into the current directory
unzip PennFudanPed.zip
           

The directory structure after extraction is as follows:

(Figure: the extracted PennFudanPed/ directory tree, with the PNGImages/ and PedMasks/ subfolders used below.)
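You can also verify the layout directly (a small check, assuming the zip was extracted into the working directory):

import os

print(sorted(os.listdir('PennFudanPed')))                 # expect 'PNGImages' and 'PedMasks' among the entries
print(sorted(os.listdir('PennFudanPed/PNGImages'))[:3])   # the first few FudanPed*.png images
print(sorted(os.listdir('PennFudanPed/PedMasks'))[:3])    # the matching *_mask.png files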

First, let's look at an image and its mask from the Penn-Fudan dataset:

from PIL import Image

Image.open('PennFudanPed/PNGImages/FudanPed00001.png')

mask = Image.open('PennFudanPed/PedMasks/FudanPed00001_mask.png')

mask.putpalette([
    0, 0, 0, # black background
    255, 0, 0, # index 1 is red
    255, 255, 0, # index 2 is yellow
    255, 153, 0, # index 3 is orange
])

mask
           
(Figures: the sample image and its instance mask, with each pedestrian shown in a different color.)

Every image has a corresponding mask annotation, in which different colors mark different instances. Before training the model, we need to write the dataset loading interface.

import os
import torch
import numpy as np
import torch.utils.data
from PIL import Image


class PennFudanDataset(torch.utils.data.Dataset):
    def __init__(self, root, transforms=None):
        self.root = root
        self.transforms = transforms
        # load all image files, sorting them to ensure that they are aligned
        self.imgs = list(sorted(os.listdir(os.path.join(root, "PNGImages"))))
        self.masks = list(sorted(os.listdir(os.path.join(root, "PedMasks"))))

    def __getitem__(self, idx):
        # load images and masks
        img_path = os.path.join(self.root, "PNGImages", self.imgs[idx])
        mask_path = os.path.join(self.root, "PedMasks", self.masks[idx])
        img = Image.open(img_path).convert("RGB")
        # note that we haven't converted the mask to RGB,
        # because each color corresponds to a different instance with 0 being background
        mask = Image.open(mask_path)

        mask = np.array(mask)
        # instances are encoded as different colors
        obj_ids = np.unique(mask)
        # first id is the background, so remove it
        obj_ids = obj_ids[1:]

        # split the color-encoded mask into a set of binary masks
        masks = mask == obj_ids[:, None, None]

        # get bounding box coordinates for each mask
        num_objs = len(obj_ids)
        boxes = []
        for i in range(num_objs):
            pos = np.where(masks[i])
            xmin = np.min(pos[1])
            xmax = np.max(pos[1])
            ymin = np.min(pos[0])
            ymax = np.max(pos[0])
            boxes.append([xmin, ymin, xmax, ymax])

        boxes = torch.as_tensor(boxes, dtype=torch.float32)
        # there is only one class
        labels = torch.ones((num_objs,), dtype=torch.int64)
        masks = torch.as_tensor(masks, dtype=torch.uint8)

        image_id = torch.tensor([idx])
        area = (boxes[:, 3] - boxes[:, 1]) * (boxes[:, 2] - boxes[:, 0])
        # suppose all instances are not crowd
        iscrowd = torch.zeros((num_objs,), dtype=torch.int64)

        target = {}
        target["boxes"] = boxes
        target["labels"] = labels
        target["masks"] = masks
        target["image_id"] = image_id
        target["area"] = area
        target["iscrowd"] = iscrowd

        if self.transforms is not None:
            img, target = self.transforms(img, target)

        return img, target

    def __len__(self):
        return len(self.imgs)
           

Let's check the internal structure of the dataset returned by the interface above:

dataset = PennFudanDataset('PennFudanPed/')

dataset[0]
           

As you can see, the dataset returns a PIL.Image and a dictionary containing fields such as boxes, labels and masks, all of which the network needs during training.
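For example, a quick look at the shapes (a minimal sketch using the dataset constructed above, without transforms):

img, target = dataset[0]
print(img.size)                  # PIL image size (W, H)
print(target['boxes'].shape)     # [num_instances, 4]
print(target['masks'].shape)     # [num_instances, H, W]
print(target['labels'])          # all ones: only the 'person' class
print(target['image_id'], target['area'], target['iscrowd'])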

Defining the Model

Mask R-CNN is built on top of Faster R-CNN. Faster R-CNN predicts candidate object boxes and classification scores for an image, and Mask R-CNN adds an extra branch on top of that to predict a segmentation mask for each instance.

There are two ways to modify a model from the torchvision model zoo for our purposes. The first is to start from a pre-trained model, replace its final layer, and finetune. The second is to replace the model's backbone as needed, for example swapping ResNet for MobileNet.

1. Finetuning a pre-trained model

Scenario: start from a model pre-trained on COCO and finetune it for a task with a given set of classes.

import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# load a model pre-trained on COCO
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)

# replace the classifier with a new one, that has num_classes which is user-defined
num_classes = 2  # 1 class (person) + background

# get number of input features for the classifier
in_features = model.roi_heads.box_predictor.cls_score.in_features

# replace the pre-trained head with a new one
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
           

2. Replacing the model's backbone

Scenario: replace the model's backbone. For example, the default backbone (ResNet-50) may have too many parameters for some deployments, so you may want to swap in a lighter network such as MobileNet.

import torchvision
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.rpn import AnchorGenerator

# load a pre-trained model for classification and return only the features
backbone = torchvision.models.mobilenet_v2(pretrained=True).features

# FasterRCNN needs to know the number of output channels in a backbone. 
# For mobilenet_v2, it's 1280. So we need to add it here
backbone.out_channels = 1280

# let's make the RPN generate 5 x 3 anchors per spatial
# location, with 5 different sizes and 3 different aspect
# ratios. We have a Tuple[Tuple[int]] because each feature
# map could potentially have different sizes and aspect ratios 
anchor_generator = AnchorGenerator(sizes=((32, 64, 128, 256, 512),),
                                   aspect_ratios=((0.5, 1.0, 2.0),))

# let's define what are the feature maps that we will use to perform the region of 
# interest cropping, as well as the size of the crop after rescaling.
# if your backbone returns a Tensor, featmap_names is expected to
# be [0]. More generally, the backbone should return an OrderedDict[Tensor], 
# and in featmap_names you can choose which feature maps to use.
roi_pooler = torchvision.ops.MultiScaleRoIAlign(featmap_names=[0],
                                                output_size=7,
                                                sampling_ratio=2)

# put the pieces together inside a FasterRCNN model
model = FasterRCNN(backbone,
                   num_classes=2,
                   rpn_anchor_generator=anchor_generator,
                   box_roi_pool=roi_pooler)
           

3. Defining the Mask R-CNN model

Back to the task at hand: the goal of this post is to train a Mask R-CNN instance segmentation model on the PennFudan dataset, which is the first case above. torchvision.models.detection provides the official network definitions and interfaces, which can be used directly.

import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

      
def get_instance_segmentation_model(num_classes):
    # load an instance segmentation model pre-trained on COCO
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)

    # get the number of input features for the classifier
    in_features = model.roi_heads.box_predictor.cls_score.in_features

    # replace the pre-trained head with a new one
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

    # now get the number of input features for the mask classifier
    in_features_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
    hidden_layer = 256

    # and replace the mask predictor with a new one
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_features_mask,
                                                       hidden_layer,
                                                       num_classes)

    return model
           

With that, the model is defined; next we can train and test it on the PennFudan dataset.
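As an optional sanity check (a sketch, not part of the original post), you can run a dummy forward pass in eval mode to confirm the modified heads accept two classes:

import torch

model = get_instance_segmentation_model(num_classes=2)
model.eval()
with torch.no_grad():
    dummy = [torch.rand(3, 300, 400)]   # a list of C x H x W image tensors
    out = model(dummy)
print(out[0].keys())                    # boxes, labels, scores, masks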

Training the Model

1. Preparation

PyTorch's references/detection/ directory contains ready-made helper functions for training and evaluating detection models. We need references/detection/engine.py, references/detection/utils.py, and references/detection/transforms.py, along with coco_eval.py and coco_utils.py, which engine.py depends on. First, copy these files over:

# Download TorchVision repo to use some files from references/detection
git clone https://github.com/pytorch/vision.git
cd vision
git checkout v0.4.0

cp references/detection/utils.py ../
cp references/detection/transforms.py ../
cp references/detection/coco_eval.py ../
cp references/detection/engine.py ../
cp references/detection/coco_utils.py ../
           

2. Data augmentation / transforms

Before the images are fed into the network, they are randomly flipped horizontally for data augmentation. Note that there is no need to apply mean/std normalization or resize the images here, because the Mask R-CNN model handles normalization and rescaling internally.

import utils
import transforms as T
from engine import train_one_epoch, evaluate


def get_transform(train):
    transforms = []
    # converts the image, a PIL image, into a PyTorch Tensor
    transforms.append(T.ToTensor())
    if train:
        # during training, randomly flip the training images
        # and ground-truth for data augmentation
        transforms.append(T.RandomHorizontalFlip(0.5))

    return T.Compose(transforms)
           

3. Training

At this point the dataset, the model, and the data augmentation are all in place. Once the model is initialized and the optimizer and learning-rate schedule are chosen, training can begin. Here the model is trained for 10 epochs, and after every epoch its performance is evaluated on the test set.

# use the PennFudan dataset and defined transformations
dataset = PennFudanDataset('PennFudanPed', get_transform(train=True))
dataset_test = PennFudanDataset('PennFudanPed', get_transform(train=False))

# split the dataset in train and test set
torch.manual_seed(1)
indices = torch.randperm(len(dataset)).tolist()
dataset = torch.utils.data.Subset(dataset, indices[:-50])
dataset_test = torch.utils.data.Subset(dataset_test, indices[-50:])

# define training and validation data loaders
data_loader = torch.utils.data.DataLoader(
    dataset, batch_size=2, shuffle=True, num_workers=4,
    collate_fn=utils.collate_fn)

data_loader_test = torch.utils.data.DataLoader(
    dataset_test, batch_size=1, shuffle=False, num_workers=4,
    collate_fn=utils.collate_fn)

device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')

# the dataset has two classes only - background and person
num_classes = 2

# get the model using the helper function
model = get_instance_segmentation_model(num_classes)
# move model to the right device
model.to(device)

# construct an optimizer
params = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(params, lr=0.005,
                            momentum=0.9, weight_decay=0.0005)

# the learning rate scheduler decreases the learning rate by 10x every 3 epochs
lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer,
                                               step_size=3,
                                               gamma=0.1)

# training
num_epochs = 10
for epoch in range(num_epochs):
    # train for one epoch, printing every 10 iterations
    train_one_epoch(model, optimizer, data_loader, device, epoch, print_freq=10)

    # update the learning rate
    lr_scheduler.step()

    # evaluate on the test dataset
    evaluate(model, data_loader_test, device=device)
           

Testing the Model

Now that the model is trained, let's check what it predicts on a test image.

# pick one image from the test set
img, _ = dataset_test[0]

# put the model in evaluation mode
model.eval()
with torch.no_grad():
    prediction = model([img.to(device)])
           

The prediction output here contains the boxes, labels, masks and scores predicted for the image.
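A quick way to inspect what came back (prediction is a list with one dict per input image):

pred = prediction[0]
print(pred.keys())               # boxes, labels, scores, masks
print(pred['boxes'].shape)       # [N, 4] in (xmin, ymin, xmax, ymax)
print(pred['scores'][:5])        # per-detection confidence, sorted in descending order
print(pred['masks'].shape)       # [N, 1, H, W], soft masks with values in [0, 1]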


Next, visualize the test image and the corresponding prediction to see how it looks.

Image.fromarray(img.mul(255).permute(1, 2, 0).byte().numpy())

Image.fromarray(prediction[0]['masks'][0, 0].mul(255).byte().cpu().numpy())
           
(Figures: the test image and the predicted mask for the highest-scoring instance.)

As you can see, the segmentation result is quite good. With that, training your own Mask R-CNN model is complete.
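The snippet above only visualizes the first (highest-scoring) mask. If the image contains several pedestrians, a sketch like the following (assuming a 0.5 score threshold, which is not from the original post) overlays all kept instances in one binary image:

import numpy as np
from PIL import Image

pred = prediction[0]
keep = pred['scores'] > 0.5                  # drop low-confidence detections
masks = pred['masks'][keep, 0] > 0.5         # binarize each soft mask -> [K, H, W]
combined = masks.any(dim=0).cpu().numpy()    # union of all kept instances
Image.fromarray((combined * 255).astype(np.uint8))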

Bug Fix

If, while evaluating the model, you run into ValueError: Does not understand character buffer dtype format string ('?'):

File "build/bdist.linux-x86_64/egg/pycocotools/mask.py", line 82, in encode
  File "pycocotools/_mask.pyx", line 137, in pycocotools._mask.encode
ValueError: Does not understand character buffer dtype format string ('?')
           

it can be fixed by adding dtype=np.uint8 to the mask_util.encode line in coco_eval.py:

In coco_eval.py:

rles = [
    mask_util.encode(np.array(mask[0, :, :, np.newaxis], dtype=np.uint8, order="F"))[0]
    for mask in masks
]
           
