
Using Optimizers in PyTorch

Lecture P24 of the bilibili course "PyTorch深度學習快速入門教程(絕對通俗易懂!)【小土堆】" covers how to use optimizers for neural networks.

First, an annotated minimal example of using an optimizer:

for input, target in dataset:
    optimizer.zero_grad()
    # Clear the gradients that the previous loss.backward() left on every
    # parameter, so the previous iteration does not affect this one
    output = model(input)
    # Forward pass: feed the input through the model to get an output
    loss = loss_fn(output, target)
    # Compare the output with the true target to compute the loss (error)
    loss.backward()
    # Backpropagate the error to get a gradient for every parameter to be updated
    optimizer.step()
    # Update each parameter using its gradient; after this step the
    # parameters (e.g. the convolution kernels) have been adjusted
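The loop above can be exercised end to end on a toy problem. The following sketch is my own addition (not from the tutorial): it fits y = 2x with a single linear layer and records the loss at every step, so you can see that zero_grad / backward / step is all the optimizer needs per iteration.

```python
import torch
from torch import nn

torch.manual_seed(0)

# Toy setup (assumed for illustration): learn y = 2x with one linear layer
model = nn.Linear(1, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

x = torch.tensor([[1.0], [2.0], [3.0]])
y = 2 * x

losses = []
for step in range(50):
    optimizer.zero_grad()      # clear gradients from the previous step
    output = model(x)          # forward pass
    loss = loss_fn(output, y)  # error against the target
    loss.backward()            # gradients for every parameter
    optimizer.step()           # apply the update
    losses.append(loss.item())

print(losses[0], losses[-1])
```

After 50 steps the last loss should be well below the first, confirming that the parameters are actually being adjusted each iteration.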

Now apply gradient-based optimization to the network model from the previous lecture; annotated code below:

import torch
import torchvision.datasets
from torch import nn
from torch.nn import Conv2d, MaxPool2d, Flatten, Linear, Sequential
from torch.utils.data import DataLoader


dataset = torchvision.datasets.CIFAR10("data", train=False, transform=torchvision.transforms.ToTensor(),
                                       download=True)

dataloader = DataLoader(dataset, batch_size=1)


class Tudui(nn.Module):
    def __init__(self):
        super(Tudui, self).__init__()
        # self.conv1 = Conv2d(3, 32, 5, padding=2)
        # # The first three args are in_channels, out_channels, kernel_size;
        # # padding=2 keeps the 32x32 input at 32x32 (derived from the Conv2d
        # # output-size formula)
        # self.maxpool1 = MaxPool2d(2)
        # self.conv2 = Conv2d(32, 32, 5, padding=2)
        # self.maxpool2 = MaxPool2d(2)
        # self.conv3 = Conv2d(32, 64, 5, padding=2)
        # self.maxpool3 = MaxPool2d(2)
        # self.flatten = Flatten()
        # self.linear1 = Linear(1024, 64)
        # self.linear2 = Linear(64, 10)

        self.model1 = Sequential(
            Conv2d(3, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 64, 5, padding=2),
            MaxPool2d(2),
            Flatten(),
            Linear(1024, 64),
            Linear(64, 10)
        )

    def forward(self, x):
        # x = self.conv1(x)
        # x = self.maxpool1(x)
        # x = self.conv2(x)
        # x = self.maxpool2(x)
        # x = self.conv3(x)
        # x = self.maxpool3(x)
        # x = self.flatten(x)
        # x = self.linear1(x)
        # x = self.linear2(x)
        x = self.model1(x)
        return x


loss = nn.CrossEntropyLoss()
tudui = Tudui()

optim = torch.optim.SGD(tudui.parameters(), lr=0.01)
# Set up the optimizer (stochastic gradient descent, learning rate 0.01)

# Add an outer loop so the whole dataset is learned multiple times: 20 epochs
for epoch in range(20):
    running_loss = 0.0
    for data in dataloader:
        imgs, targets = data
        outputs = tudui(imgs)
        result_loss = loss(outputs, targets)
        # The loss between the network's output and the true targets
        optim.zero_grad()
        # Reset all gradients to zero
        result_loss.backward()
        optim.step()
        running_loss = running_loss + result_loss.item()
        # Accumulate the total error over the epoch; .item() takes the plain
        # number so the computation graph is not kept alive

    print(running_loss)

The result (shown as a screenshot in the original post): the printed running_loss shrinks as training proceeds.

The result is optimized a little on every one of the 20 passes; the code above is annotated in detail, and it helps to read this together with Lecture 23.
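One more note (my own addition, not from the tutorial): every optimizer in torch.optim shares the same interface, so swapping SGD for another one, such as Adam, only changes the construction line; zero_grad / backward / step in the loop stay identical. A minimal sketch with an assumed toy model:

```python
import torch
from torch import nn

torch.manual_seed(0)

# Assumed toy model; the point here is only the optimizer interface
model = nn.Linear(8, 4)

# Either of these drops into the same training loop unchanged
optim_sgd = torch.optim.SGD(model.parameters(), lr=0.01)
optim_adam = torch.optim.Adam(model.parameters(), lr=0.001)

x = torch.randn(2, 8)
before = model.weight.clone()

optim_adam.zero_grad()
loss = model(x).sum()      # stand-in loss, just to produce gradients
loss.backward()
optim_adam.step()          # same call as with SGD

changed = not torch.equal(before, model.weight)
print(changed)
```

Because the interface is shared, you can experiment with the different optimizers in torch.optim without touching the rest of the training code.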
