
【PyTorch】Loss Functions and Backpropagation (5)

Contents

1. Loss Functions

2. Backpropagation

3. Optimizers

How to Adjust the Learning Rate

1. Loss Functions

https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html#torch.nn.CrossEntropyLoss

L1Loss and MSELoss

Just pay attention to the input shape and output shape of the loss function.

import torch
from torch.nn import L1Loss
from torch import nn

inputs = torch.tensor([1, 2, 3], dtype=torch.float32) # must be float to compute the loss
targets = torch.tensor([1, 2, 5], dtype=torch.float32)
print("inputs:{}".format(inputs)) # inputs:tensor([1., 2., 3.])
print("targets:{}".format(targets)) # targets:tensor([1., 2., 5.])


inputs = torch.reshape(inputs, (1, 1, 1, 3))
targets = torch.reshape(targets, (1, 1, 1, 3))
print("inputs:{}".format(inputs)) # inputs:tensor([[[[1., 2., 3.]]]])
print("targets:{}".format(targets)) # targets:tensor([[[[1., 2., 5.]]]])


loss = L1Loss(reduction='sum')
result = loss(inputs, targets)

loss_mse = nn.MSELoss()
result_mse = loss_mse(inputs, targets)

print(result) # tensor(2.)
print(result_mse) # tensor(1.3333)
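
Checking the printed values by hand (a small sketch, not from the original code): with reduction='sum', L1Loss adds up the absolute element-wise differences, while MSELoss with its default reduction='mean' averages the squared differences.

# L1Loss with reduction='sum': |1-1| + |2-2| + |3-5| = 2
print(abs(1 - 1) + abs(2 - 2) + abs(3 - 5))          # 2
# MSELoss with reduction='mean': (0 + 0 + 4) / 3 ≈ 1.3333
print(((1 - 1)**2 + (2 - 2)**2 + (3 - 5)**2) / 3)    # 1.3333...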
           

For classification losses, the layer input gains an extra dimension C, which represents the number of classes.


Computing the cross-entropy. For a single sample (without class weights), nn.CrossEntropyLoss computes loss(x, class) = -x[class] + log(Σ_j exp(x[j])).

x = torch.tensor([0.1, 0.2, 0.3])
y = torch.tensor([1])
x = torch.reshape(x, (1, 3)) # (N,C)
print("x:{}".format(x)) # x:tensor([[0.1000, 0.2000, 0.3000]])
print("y:{}".format(y)) # y:tensor([1])

loss_cross = nn.CrossEntropyLoss()
result_cross = loss_cross(x, y)
print(result_cross) # tensor(1.1019)
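
As a quick sanity check (a small sketch, not part of the original code), the printed value can be reproduced by applying the formula above directly:

import torch

x = torch.tensor([0.1, 0.2, 0.3])
# cross-entropy for target class 1: -x[1] + log(sum(exp(x)))
manual = -x[1] + torch.log(torch.exp(x).sum())
print(manual)  # ≈ tensor(1.1019), matching nn.CrossEntropyLoss above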
           

The complete code is as follows:

import torchvision
from torch import nn
from torch.nn import Sequential, Conv2d, MaxPool2d, Flatten, Linear
from torch.utils.data import DataLoader

dataset = torchvision.datasets.CIFAR10("../data", train=False, transform=torchvision.transforms.ToTensor(),
                                       download=True)

dataloader = DataLoader(dataset, batch_size=1)

class Tudui(nn.Module):
    def __init__(self):
        super(Tudui, self).__init__()
        self.model1 = Sequential(
            Conv2d(3, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 64, 5, padding=2),
            MaxPool2d(2),
            Flatten(),
            Linear(1024, 64),
            Linear(64, 10)
        )

    def forward(self, x):
        x = self.model1(x)
        return x

# define the cross-entropy loss
loss = nn.CrossEntropyLoss()
tudui = Tudui()
for data in dataloader:
    imgs, targets = data
    outputs = tudui(imgs)
    result_loss = loss(outputs, targets)
    print(result_loss) 
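
With batch_size=1, outputs has shape (1, 10) and targets has shape (1,), matching the (N, C) / (N,) convention above. Since the network is untrained, its predictions are roughly uniform over the 10 classes, so each printed loss should sit near -log(1/10) ≈ 2.3 (a quick back-of-the-envelope check, not from the original post):

import math

# a roughly uniform prediction over 10 classes gives a cross-entropy of about -log(1/10)
print(-math.log(1 / 10))  # ≈ 2.3026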
           

2. Backpropagation

# define the cross-entropy loss
loss = nn.CrossEntropyLoss()
tudui = Tudui()
for data in dataloader:
    imgs, targets = data
    outputs = tudui(imgs)
    result_loss = loss(outputs, targets)
    print(result_loss) 
    
    # backpropagation is performed on the loss
    result_loss.backward()
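
Calling backward() fills in the .grad attribute of every trainable parameter; note that PyTorch accumulates gradients across calls, which is why training loops zero them before each update (see the optimizer section below). A minimal check (the index model1[0] refers to the first Conv2d in the Sequential defined above):

# after result_loss.backward(), the network's parameters hold gradients
first_conv = tudui.model1[0]                  # Conv2d(3, 32, 5, padding=2)
print(first_conv.weight.grad is not None)     # True
print(first_conv.weight.grad.shape)           # torch.Size([32, 3, 5, 5])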
           

3. Optimizers

https://pytorch.org/docs/stable/optim.html

Their purpose is to reduce the loss: the optimizer uses the gradients obtained by backpropagating the loss to update the parameters, so that the loss decreases.

  • step() method: uses grad to update the parameters.
for input, target in dataset:
    optimizer.zero_grad() # zero out the gradients from the previous step
    output = model(input)
    loss = loss_fn(output, target)
    loss.backward()
    optimizer.step() # update the parameters
           

The complete code:

# -*- coding: utf-8 -*-
import torch
import torchvision
from torch import nn
from torch.nn import Sequential, Conv2d, MaxPool2d, Flatten, Linear
from torch.optim.lr_scheduler import StepLR
from torch.utils.data import DataLoader

dataset = torchvision.datasets.CIFAR10("../data", train=False, transform=torchvision.transforms.ToTensor(),
                                       download=True)

dataloader = DataLoader(dataset, batch_size=1)

class Tudui(nn.Module):
    def __init__(self):
        super(Tudui, self).__init__()
        self.model1 = Sequential(
            Conv2d(3, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 64, 5, padding=2),
            MaxPool2d(2),
            Flatten(),
            Linear(1024, 64),
            Linear(64, 10)
        )

    def forward(self, x):
        x = self.model1(x)
        return x


loss = nn.CrossEntropyLoss()
# build the network; its parameters are accessed via tudui.parameters()
tudui = Tudui()

# create/define an optimizer
optim = torch.optim.SGD(tudui.parameters(), lr=0.01)
scheduler = StepLR(optim, step_size=5, gamma=0.1) # adjusts the learning rate
 
for epoch in range(20):
    running_loss = 0.0
    # without the outer loop, the model would only see the data once
    for data in dataloader:
        imgs, targets = data
        outputs = tudui(imgs)
        result_loss = loss(outputs, targets)

        # zero the gradients of the network parameters
        optim.zero_grad()

        # backpropagation: compute the gradient at each node
        result_loss.backward()
 
        # call the optimizer to update the parameters
        optim.step()
        # per epoch, sum the loss over every image in the dataset
        running_loss = running_loss + result_loss.item()
    # step the scheduler once per epoch to adjust the learning rate
    scheduler.step()
    print(running_loss) # the loss keeps decreasing
           

Summary


How to Adjust the Learning Rate

Use a relatively large learning rate at the beginning so the model makes progress quickly, then a smaller one later to search for a better result.

scheduler = StepLR(optim, step_size=5, gamma=0.1) # adjusts the learning rate
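
As a rough illustration of this strategy (a minimal sketch reusing the same StepLR settings as above, with a dummy parameter only to construct an optimizer): step_size=5 and gamma=0.1 mean the learning rate is multiplied by 0.1 every 5 calls to scheduler.step(), i.e. every 5 epochs when it is stepped once per epoch.

import torch
from torch.optim.lr_scheduler import StepLR

param = torch.nn.Parameter(torch.zeros(1))           # dummy parameter, only to construct an optimizer
optim = torch.optim.SGD([param], lr=0.01)
scheduler = StepLR(optim, step_size=5, gamma=0.1)

for epoch in range(15):
    # ... one epoch of training would go here ...
    scheduler.step()
    print(epoch, scheduler.get_last_lr())  # lr drops from 0.01 to 0.001 to 0.0001, once every 5 epochs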