
Learning PyTorch from the Official Guide: Transfer Learning Tutorial

In this tutorial, you will learn how to train a neural network using transfer learning. You can read more about transfer learning in the cs231n notes.

To quote those notes: in practice, very few people train an entire convolutional network from scratch, because it is relatively rare to have a dataset of sufficient size. Instead, it is common to pretrain a ConvNet on a very large dataset (e.g. ImageNet, which contains 1.2 million images in 1000 categories), and then use that ConvNet either as an initialization or as a fixed feature extractor for the task of interest.

These are the two major transfer learning scenarios:

  • Finetuning the ConvNet: instead of random initialization, we initialize the network with a pretrained one, such as a network trained on the ImageNet 1000-class dataset. The rest of the training proceeds as usual.
  • ConvNet as a fixed feature extractor: here, we freeze the weights of the whole network except the final fully connected layer. That last layer is replaced with a new one with random weights, and only this layer is trained. (A minimal sketch of both set-ups follows this list.)
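As a quick orientation, here is a minimal sketch of both set-ups, assuming torchvision's ResNet-18 (the same model used throughout this tutorial):

import torch.nn as nn
from torchvision import models

# 1) Finetuning: start from pretrained weights; every parameter stays trainable.
finetune_net = models.resnet18(pretrained=True)
finetune_net.fc = nn.Linear(finetune_net.fc.in_features, 2)  # 2 output classes

# 2) Fixed feature extractor: freeze everything, then replace the final layer;
#    parameters of the new layer have requires_grad=True by default.
feature_net = models.resnet18(pretrained=True)
for param in feature_net.parameters():
    param.requires_grad = False
feature_net.fc = nn.Linear(feature_net.fc.in_features, 2)

Both variants are worked through in full in sections 3 and 4 below.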
# License: BSD
# Author: Sasank Chilamkurthy

from __future__ import print_function, division

import torch
import torch.nn as nn
import torch.optim as optim
from torch.optim import lr_scheduler
import numpy as np
import torchvision
from torchvision import datasets, models, transforms
import matplotlib.pyplot as plt
import time
import os
import copy

plt.ion()   # interactive mode           

1. Loading the Data

我們将使用torchvision和torch.utils.data包來加載資料。

The problem we are solving today is training a model to classify ants and bees.

There are about 120 training images each for ants and bees, and 75 validation images for each class. Usually, this is a very small dataset to generalize on if training from scratch. Since we are using transfer learning, however, we should be able to generalize reasonably well.

This dataset is a very small subset of ImageNet.

### Downloading the Image Data

import os
import os.path
import errno

url = 'https://download.pytorch.org/tutorial/hymenoptera_data.zip'
filename = 'hymenoptera_data.zip'

def download(root):
    '''
    Download the zip archive of ant and bee images used for training
    and validation, and extract it with the zipfile module.
    '''
    root = os.path.expanduser(root)
    import zipfile

    # download the image archive to the given directory
    download_url(url, root, filename)

    # remember the current working directory
    cwd = os.getcwd()
    path = os.path.join(root, filename)
    zip_file = zipfile.ZipFile(path, "r")
    # extract the archive
    zip_file.extractall(root)
    zip_file.close()
    # restore the original working directory
    os.chdir(cwd)

def download_url(url, root, filename):
    from six.moves import urllib
    root = os.path.expanduser(root)
    fpath = os.path.join(root, filename)

    try:
        os.makedirs(root)
    except OSError as e:
        if e.errno == errno.EEXIST:
            pass
        else:
            raise

    # download the file, unless it is already present
    if os.path.isfile(fpath):
        print('Using downloaded file: ' + fpath)
    else:
        try:
            print('Downloading ' + url + ' to ' + fpath)
            urllib.request.urlretrieve(url, fpath)
        except Exception:
            if url[:5] == 'https':
                url = url.replace('https:', 'http:')
                print('Failed download. Trying https -> http instead.'
                      ' Downloading ' + url + ' to ' + fpath)
                urllib.request.urlretrieve(url, fpath)
            else:
                raise

download('./root')           
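Note that ImageFolder (used below) expects one sub-directory per class, which is exactly how the archive extracts:

./root/hymenoptera_data/
    train/
        ants/
        bees/
    val/
        ants/
        bees/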
# Data augmentation and normalization for training
# Just normalization for validation
data_transforms = {
    'train': transforms.Compose([
        transforms.RandomResizedCrop(224),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ]),
    'val': transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ]),
}

data_dir = './root/hymenoptera_data'
image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x),
                                          data_transforms[x])
                  for x in ['train', 'val']}
dataloaders = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=4,
                                             shuffle=True, num_workers=4)
              for x in ['train', 'val']}
dataset_sizes = {x: len(image_datasets[x]) for x in ['train', 'val']}
class_names = image_datasets['train'].classes

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")           
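As an optional sanity check (not part of the original tutorial), you can print a summary of what was just loaded; the exact counts depend on the archive, but there should be roughly 240 training and 150 validation images:

print(dataset_sizes)   # roughly {'train': 244, 'val': 153}
print(class_names)     # ['ants', 'bees']
print(device)          # cuda:0 if a GPU is available, otherwise cpu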

2. Training the Model

Now, let's write a general function to train a model. Here, we will illustrate:

  • Scheduling the learning rate
  • Saving the best model

In the function below, the scheduler parameter is an LR scheduler object from torch.optim.lr_scheduler.

def train_model(model, criterion, optimizer, scheduler, num_epochs=25):
    since = time.time()

    best_model_wts = copy.deepcopy(model.state_dict())
    best_acc = 0.0

    for epoch in range(num_epochs):
        print('Epoch {}/{}'.format(epoch, num_epochs - 1))
        print('-' * 10)

        # Each epoch has a training and validation phase
        for phase in ['train', 'val']:
            if phase == 'train':
                model.train()  # Set model to training mode
            else:
                model.eval()   # Set model to evaluate mode

            running_loss = 0.0
            running_corrects = 0

            # Iterate over data.
            for inputs, labels in dataloaders[phase]:
                inputs = inputs.to(device)
                labels = labels.to(device)

                # zero the parameter gradients
                optimizer.zero_grad()

                # forward
                # track history only if in train
                with torch.set_grad_enabled(phase == 'train'):
                    outputs = model(inputs)
                    _, preds = torch.max(outputs, 1)
                    loss = criterion(outputs, labels)

                    # backward + optimize only if in training phase
                    if phase == 'train':
                        loss.backward()
                        optimizer.step()

                # statistics
                running_loss += loss.item() * inputs.size(0)
                running_corrects += torch.sum(preds == labels.data)

            # step the scheduler once per epoch, after the optimizer updates
            if phase == 'train':
                scheduler.step()

            epoch_loss = running_loss / dataset_sizes[phase]
            epoch_acc = running_corrects.double() / dataset_sizes[phase]

            print('{} Loss: {:.4f} Acc: {:.4f}'.format(
                phase, epoch_loss, epoch_acc))

            # deep copy the model
            if phase == 'val' and epoch_acc > best_acc:
                best_acc = epoch_acc
                best_model_wts = copy.deepcopy(model.state_dict())

        print()

    time_elapsed = time.time() - since
    print('Training complete in {:.0f}m {:.0f}s'.format(
        time_elapsed // 60, time_elapsed % 60))
    print('Best val Acc: {:.4f}'.format(best_acc))

    # load best model weights
    model.load_state_dict(best_model_wts)
    return model           

1. Displaying a Few Images

Let's display a few training images so as to understand the data augmentation.

def imshow(inp, title=None):
    """Imshow for Tensor."""
    inp = inp.numpy().transpose((1, 2, 0))
    mean = np.array([0.485, 0.456, 0.406])
    std = np.array([0.229, 0.224, 0.225])
    inp = std * inp + mean
    inp = np.clip(inp, 0, 1)
    plt.imshow(inp)
    if title is not None:
        plt.title(title)
    plt.pause(0.001)  # pause a bit so that plots are updated


# Get a batch of training data
inputs, classes = next(iter(dataloaders['train']))

# Make a grid from batch
out = torchvision.utils.make_grid(inputs)

imshow(out, title=[class_names[x] for x in classes])           

2. Displaying the Model's Predictions

Let's write a generic function that runs the model on a few images and displays the predictions.

def visualize_model(model, num_images=6):
    was_training = model.training
    model.eval()
    images_so_far = 0
    fig = plt.figure()

    with torch.no_grad():
        for i, (inputs, labels) in enumerate(dataloaders['val']):
            inputs = inputs.to(device)
            labels = labels.to(device)

            outputs = model(inputs)
            _, preds = torch.max(outputs, 1)

            for j in range(inputs.size()[0]):
                images_so_far += 1
                ax = plt.subplot(num_images//2, 2, images_so_far)
                ax.axis('off')
                ax.set_title('predicted: {}'.format(class_names[preds[j]]))
                imshow(inputs.cpu().data[j])

                if images_so_far == num_images:
                    model.train(mode=was_training)
                    return
        model.train(mode=was_training)           

3. Finetuning the ConvNet

Load a pretrained network and reset the final fully connected layer.

model_ft = models.resnet18(pretrained=True)
num_ftrs = model_ft.fc.in_features
model_ft.fc = nn.Linear(num_ftrs, 2)

model_ft = model_ft.to(device)

criterion = nn.CrossEntropyLoss()

# Observe that all parameters are being optimized
optimizer_ft = optim.SGD(model_ft.parameters(), lr=0.001, momentum=0.9)

# Decay LR by a factor of 0.1 every 7 epochs
exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1)
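To see concretely what this schedule does, here is a small standalone sketch (illustration only, not part of the tutorial; get_last_lr() requires PyTorch >= 1.4):

import torch
import torch.optim as optim
from torch.optim import lr_scheduler

# A dummy parameter, just to be able to construct an optimizer for the demo.
dummy = torch.zeros(1, requires_grad=True)
demo_optimizer = optim.SGD([dummy], lr=0.001, momentum=0.9)
demo_scheduler = lr_scheduler.StepLR(demo_optimizer, step_size=7, gamma=0.1)

for epoch in range(25):
    demo_optimizer.step()    # one (dummy) optimization step per epoch
    demo_scheduler.step()    # the decay kicks in every 7 epochs
    if (epoch + 1) % 7 == 0:
        # prints [0.0001], then [1e-05], then [1e-06]
        print(epoch, demo_scheduler.get_last_lr())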
           

Training and Evaluating

On CPU, this will take around 20-30 minutes. On GPU, it takes less than a minute (those are the official figures; I had no environment to verify them).

model_ft = train_model(model_ft, criterion, optimizer_ft, exp_lr_scheduler,
                       num_epochs=25)
visualize_model(model_ft)           

4. ConvNet as a Fixed Feature Extractor


Here, we freeze the weights of the whole network except the final layer. To freeze the parameters, we set requires_grad = False, so that the gradients are not computed in backward().

You can read more about this here: http://t.cn/EafTQ8T

model_conv = torchvision.models.resnet18(pretrained=True)
for param in model_conv.parameters():
    param.requires_grad = False

# Parameters of newly constructed modules have requires_grad=True by default
num_ftrs = model_conv.fc.in_features
model_conv.fc = nn.Linear(num_ftrs, 2)

model_conv = model_conv.to(device)

criterion = nn.CrossEntropyLoss()

# Observe that only the parameters of the final layer are being optimized, as
# opposed to before.
optimizer_conv = optim.SGD(model_conv.fc.parameters(), lr=0.001, momentum=0.9)

# Decay LR by a factor of 0.1 every 7 epochs
exp_lr_scheduler = lr_scheduler.StepLR(optimizer_conv, step_size=7, gamma=0.1)           
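An optional check (not in the original tutorial): list which parameters are still trainable; only the new final layer should show up.

trainable = [name for name, p in model_conv.named_parameters() if p.requires_grad]
print(trainable)  # expected: ['fc.weight', 'fc.bias']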

On CPU, this will take about half the time of the previous scenario, which is expected, since gradients do not need to be computed for most of the network. The forward pass still has to be computed, though.

model_conv = train_model(model_conv, criterion, optimizer_conv,
                         exp_lr_scheduler, num_epochs=25)           
visualize_model(model_conv)

plt.ioff()
plt.show()           

