A model is a collection of connected layers that processes inputs to produce outputs. You can use the nn package to define a model: it provides a set of modules for common deep learning layers. A module (layer) in nn receives an input tensor, computes an output tensor, and holds learnable weights. In PyTorch, we can define a model in two ways: with nn.Sequential or with nn.Module.
Defining a linear layer
Let's create a linear layer and print its output size:
from torch import nn
import torch
# input tensor dimension 64*1000
input_tensor = torch.randn(64, 1000)
# linear layer with 1000 inputs and 100 outputs
linear_layer = nn.Linear(1000, 100)
# output of the linear layer
output = linear_layer(input_tensor)
print(output.size())
# torch.Size([64, 100])
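A layer also holds its weights and bias as learnable parameters; note that nn.Linear stores the weight with shape (out_features, in_features):
print(linear_layer.weight.shape)
# torch.Size([100, 1000])
print(linear_layer.bias.shape)
# torch.Size([100])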
Defining a model with nn.Sequential
We can use nn.Sequential to build a deep learning model by arranging layers in order.
- Implement the model with nn.Sequential:
from torch import nn
# define a two-layer model
model = nn.Sequential(
    nn.Linear(4, 5),
    nn.ReLU(),
    nn.Linear(5, 1),
)
print(model)
# Sequential(
# (0): Linear(in_features=4, out_features=5, bias=True)
# (1): ReLU()
# (2): Linear(in_features=5, out_features=1, bias=True)
# )
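The resulting model can be called like a function on a batch of inputs; a quick sanity check with random data:
x = torch.randn(8, 4)
print(model(x).shape)
# torch.Size([8, 1])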
Defining a model with nn.Module
In PyTorch, a model can also be created by subclassing nn.Module. The layers are defined in the class's __init__ method, and the input is then applied to those layers in the forward method. This approach is more flexible for building custom models.
1. First, implement the skeleton of the class:
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()

    def forward(self, x):
        pass
2. Define the __init__ function:
def __init__(self):
    super(Net, self).__init__()
    # two convolutional layers
    self.conv1 = nn.Conv2d(1, 20, 5, 1)
    self.conv2 = nn.Conv2d(20, 50, 5, 1)
    # two fully connected layers; after two conv+pool stages a 28x28 input
    # is reduced to a 4x4 feature map with 50 channels, hence 4*4*50 inputs
    self.fc1 = nn.Linear(4*4*50, 500)
    self.fc2 = nn.Linear(500, 10)
3. Then, define the forward function:
def forward(self, x):
    x = F.relu(self.conv1(x))
    x = F.max_pool2d(x, 2, 2)
    x = F.relu(self.conv2(x))
    x = F.max_pool2d(x, 2, 2)
    # flatten the feature maps before the fully connected layers
    x = x.view(-1, 4*4*50)
    x = F.relu(self.fc1(x))
    x = self.fc2(x)
    return F.log_softmax(x, dim=1)
4. Then, attach these two functions to the class as its __init__ and forward methods (assigning a function to a class like this makes it an ordinary method; in a script you would normally define them inside the class body):
Net.__init__ = __init__
Net.forward = forward
5. Next, create a Net object and print the model:
model = Net()
print(model)
# Net(
# (conv1): Conv2d(1, 20, kernel_size=(5, 5), stride=(1, 1))
# (conv2): Conv2d(20, 50, kernel_size=(5, 5), stride=(1, 1))
# (fc1): Linear(in_features=800, out_features=500, bias=True)
# (fc2): Linear(in_features=500, out_features=10, bias=True)
# )
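This network expects single-channel 28x28 inputs (MNIST-sized images, which matches the 4*4*50 flatten size); a quick sanity check with a random batch:
x = torch.randn(16, 1, 28, 28)
print(model(x).shape)
# torch.Size([16, 10])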
Moving the model to a GPU device
A model is a collection of parameters, and by default a model is built on the CPU:
- Get the model's device:
print(next(model.parameters()).device)
# cpu
- Then, move the model to a CUDA device:
device = torch.device("cuda:0")
model.to(device)
print(next(model.parameters()).device)
# cuda:0
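If the code may also run on a machine without a GPU, a common pattern is to fall back to the CPU:
# select cuda if available, otherwise stay on the cpu
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model.to(device)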
Printing the model summary
By printing a model summary, we can see the output shape and the number of parameters of each layer.
- Install the torchsummary package:
pip install torchsummary
- Use torchsummary to get the model summary:
from torchsummary import summary
summary(model, input_size=(1, 28, 28))
Defining the loss function and optimizer
The loss function measures the distance between the model output and the targets; it is also called the objective function, cost function, or criterion. For classification problems, cross-entropy loss is typically used.
During training, an optimizer is used to update the model parameters (also called weights). PyTorch's optim package provides various optimization algorithms, including SGD and its variants such as Adam and RMSprop.
Defining the loss function
- First, define the negative log-likelihood loss:
from torch import nn
loss_func = nn.NLLLoss(reduction="sum")
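Because the model's forward pass ends with log_softmax, NLLLoss on its output is equivalent to cross-entropy on the raw logits; a quick check (the shapes below are arbitrary):
import torch.nn.functional as F
# raw scores for 4 samples and 10 classes, with ground-truth class indices
logits = torch.randn(4, 10)
targets = torch.tensor([1, 0, 4, 9])
ce = nn.CrossEntropyLoss()(logits, targets)
nll = nn.NLLLoss()(F.log_softmax(logits, dim=1), targets)
print(torch.allclose(ce, nll))
# True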
- Test the loss function on a mini-batch of data:
# train_dl comes from PyTorch Basics (2): Data Loading and Preprocessing
for xb, yb in train_dl:
    # move batch to cuda device
    xb = xb.type(torch.float).to(device)
    yb = yb.to(device)
    # get model output
    out = model(xb)
    # calculate loss value
    loss = loss_func(out, yb)
    print(loss.item())
    break
# 72.04580688476562
- Compute the gradients of the model parameters:
# compute gradients
loss.backward()
Defining the optimizer
- Define an Adam optimizer:
from torch import optim
opt = optim.Adam(model.parameters(), lr=1e-4)
- Set the gradients to zero:
# set gradients to zero
opt.zero_grad()
- Update the model parameters:
# update model parameters
opt.step()
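Putting these pieces together, one optimization step runs in this order (xb, yb, model, and loss_func as in the mini-batch test above):
opt.zero_grad()            # clear gradients from the previous step
out = model(xb)            # forward pass
loss = loss_func(out, yb)  # compute the loss
loss.backward()            # backpropagate to compute gradients
opt.step()                 # update the model parameters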
Training and evaluation
- A helper function to compute the loss for each mini-batch:
def loss_batch(loss_func, xb, yb, yb_h, opt=None):
    # obtain loss
    loss = loss_func(yb_h, yb)
    # obtain performance metric
    metric_b = metrics_batch(yb, yb_h)
    if opt is not None:
        loss.backward()
        opt.step()
        opt.zero_grad()
    return loss.item(), metric_b
- A helper function to compute the accuracy for each mini-batch:
def metrics_batch(target, output):
    # obtain output class
    pred = output.argmax(dim=1, keepdim=True)
    # compare output class with target class
    corrects = pred.eq(target.view_as(pred)).sum().item()
    return corrects
- A helper function to compute the loss and accuracy over the whole dataset:
def loss_epoch(model, loss_func, dataset_dl, opt=None):
    loss = 0.0
    metric = 0.0
    len_data = len(dataset_dl.dataset)
    for xb, yb in dataset_dl:
        xb = xb.type(torch.float).to(device)
        yb = yb.to(device)
        # obtain model output
        yb_h = model(xb)
        loss_b, metric_b = loss_batch(loss_func, xb, yb, yb_h, opt)
        loss += loss_b
        if metric_b is not None:
            metric += metric_b
    loss /= len_data
    metric /= len_data
    return loss, metric
- Finally, define the train_val function:
def train_val(epochs, model, loss_func, opt, train_dl, val_dl):
    for epoch in range(epochs):
        # training mode: enables behaviors such as dropout and batch-norm updates
        model.train()
        train_loss, train_metric = loss_epoch(model, loss_func, train_dl, opt)
        # evaluation mode, with gradient tracking disabled
        model.eval()
        with torch.no_grad():
            val_loss, val_metric = loss_epoch(model, loss_func, val_dl)
        accuracy = 100 * val_metric
        print("epoch: %d, train loss: %.6f, val loss: %.6f, accuracy: %.2f" % (epoch, train_loss, val_loss, accuracy))
- Train the model:
# call train_val function
num_epochs = 5
train_val(num_epochs, model, loss_func, opt, train_dl, val_dl)
Each epoch prints the training loss, the validation loss, and the validation accuracy.
Saving and loading the model
Method 1:
- First, save the model parameters (the state_dict) to a file:
# define path2weights
path2weights="./models/weights.pt"
# store state_dict to file
torch.save(model.state_dict(), path2weights)
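Note that torch.save does not create missing directories, so make sure ./models exists first:
import os
os.makedirs("./models", exist_ok=True)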
- Before loading the model parameters, create a model instance:
# define model: weights are randomly initiated
_model = Net()
- Load the model parameters from the file; a minimal sketch using torch.load:
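# load the saved state_dict (path2weights as defined above)
weights = torch.load(path2weights)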
- Set the loaded parameters into the model:
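# copy the loaded parameters into the model instance
_model.load_state_dict(weights)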
Method 2:
- First, save the entire model to a file:
# define a path2model
path2model = "./models/model.pt"
# store model and weights into a file
torch.save(model, path2model)
- Load the model; a minimal sketch (the Net class definition must still be available when loading a whole model):
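# load the whole model, including its parameters
_model = torch.load(path2model)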
An overview of loading in PyTorch with torch.load:
# Load all tensors onto the CPU
>>> torch.load('tensors.pt', map_location=torch.device('cpu'))
# Load all tensors onto the CPU, using a function
>>> torch.load('tensors.pt', map_location=lambda storage, loc: storage)
# Load all tensors onto GPU 1
>>> torch.load('tensors.pt', map_location=lambda storage, loc: storage.cuda(1))
# Map tensors from GPU 1 to GPU 0
>>> torch.load('tensors.pt', map_location={'cuda:1':'cuda:0'})
# Load tensor from io.BytesIO object
>>> with open('tensor.pt', 'rb') as f:
...     buffer = io.BytesIO(f.read())
>>> torch.load(buffer)
# Load a module with 'ascii' encoding for unpickling
>>> torch.load('module.pt', encoding='ascii')
Choosing the num_workers value automatically based on the operating system; a minimal sketch (the non-Windows worker count below is an arbitrary choice):
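import os

# assumption: multi-process DataLoader workers are often problematic on
# Windows, so fall back to 0 there; 4 is an arbitrary default elsewhere
num_workers = 0 if os.name == "nt" else 4
# e.g. DataLoader(train_ds, batch_size=32, num_workers=num_workers)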
Note: the CUDA Version reported by nvidia-smi is the CUDA driver version, i.e. the highest CUDA version supported by the installed GPU driver. The CUDA toolkit that we install ourselves is the CUDA runtime version.
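The CUDA runtime version that PyTorch itself was built with can be checked from Python:
print(torch.version.cuda)
# e.g. 10.2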