一、A simplified feed-forward network: LeNet
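A minimal sketch of such a simplified LeNet, assuming the classic LeNet-5 layout for single-channel 32×32 input (the exact layer sizes are assumptions):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LeNet(nn.Module):
    def __init__(self, num_classes=10):
        super(LeNet, self).__init__()
        # feature extractor: two conv + max-pool stages
        self.conv1 = nn.Conv2d(1, 6, 5)
        self.conv2 = nn.Conv2d(6, 16, 5)
        # classifier: three fully connected layers
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, num_classes)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)   # 32x32 -> 14x14
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)   # 14x14 -> 5x5
        x = x.view(x.size(0), -1)                    # flatten to 16*5*5
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        return self.fc3(x)
```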
二、Basic optimizer usage
- Create an optimizer instance
- Training loop:
  - Zero the gradients
  - Forward pass
  - Compute the loss
  - Backward pass
  - Update the parameters
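The loop above can be sketched as follows; the model, loss, and data are placeholders:

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)                                  # stand-in network
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)  # create optimizer instance

x, y = torch.randn(8, 4), torch.randn(8, 2)              # dummy batch
for _ in range(5):                                       # training loop
    optimizer.zero_grad()                                # zero the gradients
    out = model(x)                                       # forward pass
    loss = criterion(out, y)                             # compute the loss
    loss.backward()                                      # backward pass
    optimizer.step()                                     # update the parameters
```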
三、Setting parameters per network module
Assign different learning rates to different sub-networks. This is common when fine-tuning: give the classifier a higher learning rate so that it (in theory) learns faster than the pretrained layers.
1. Set learning rates for the sub-modules that were defined when building the network.
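A sketch, assuming the network was built with `features` and `classifier` sub-modules (the module names and sizes are assumptions):

```python
import torch
import torch.nn as nn

class Net(nn.Module):
    """Toy network split into the two sub-modules assumed above."""
    def __init__(self):
        super(Net, self).__init__()
        self.features = nn.Sequential(nn.Conv2d(1, 6, 5), nn.ReLU())
        self.classifier = nn.Sequential(nn.Flatten(), nn.Linear(6 * 28 * 28, 10))

net = Net()
# One param group per sub-module, each with its own learning rate:
optimizer = torch.optim.SGD(
    [{'params': net.features.parameters(), 'lr': 1e-3},
     {'params': net.classifier.parameters(), 'lr': 1e-2}],  # 10x for the classifier
    lr=1e-3, momentum=0.9)
```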
2. Group parameters by layer object and assign a learning rate per group.
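A sketch of grouping by layer object: the head layer's parameters are separated out by `id()`, and every remaining parameter falls into a base group with the default learning rate (the layer names here are assumptions):

```python
import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(8, 8)
        self.fc2 = nn.Linear(8, 4)
        self.fc3 = nn.Linear(4, 2)   # classifier head (assumed name)

net = Net()
# Collect the ids of the head's parameters, then put every other
# parameter into a base group that uses the default lr:
head_ids = set(map(id, net.fc3.parameters()))
base_params = [p for p in net.parameters() if id(p) not in head_ids]
optimizer = torch.optim.SGD(
    [{'params': base_params},                        # uses default lr=1e-3
     {'params': net.fc3.parameters(), 'lr': 1e-2}],  # higher lr for the head
    lr=1e-3, momentum=0.9)
```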
四、Dynamically adjusting the learning rate during training
Inspecting optimizer.param_groups shows its structure: [{'params', 'lr', 'momentum', 'dampening', 'weight_decay', 'nesterov'}, {...}] — a list of dicts, one per parameter group, gathering all of the optimizer's settings.
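Because param_groups is just a list of dicts, the learning rate can be rewritten in place each epoch. A sketch with an assumed step-decay schedule (10x decay every 30 epochs):

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

def adjust_learning_rate(optimizer, epoch, base_lr=0.1):
    # Assumed schedule: decay the lr by 10x every 30 epochs.
    lr = base_lr * (0.1 ** (epoch // 30))
    for param_group in optimizer.param_groups:
        param_group['lr'] = lr

adjust_learning_rate(optimizer, epoch=30)
```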
Flexible use of torch.optim
- Rewriting the SGD optimizer (here: splitting weight decay into separate L1 and L2 penalty terms)
import torch
from torch.optim.optimizer import Optimizer, required

class SGD(Optimizer):
    """SGD variant with separate L1 (weight_decay1) and L2 (weight_decay2) penalties."""

    def __init__(self, params, lr=required, momentum=0, dampening=0,
                 weight_decay1=0, weight_decay2=0, nesterov=False):
        defaults = dict(lr=lr, momentum=momentum, dampening=dampening,
                        weight_decay1=weight_decay1, weight_decay2=weight_decay2,
                        nesterov=nesterov)
        if nesterov and (momentum <= 0 or dampening != 0):
            raise ValueError("Nesterov momentum requires a momentum and zero dampening")
        super(SGD, self).__init__(params, defaults)

    def __setstate__(self, state):
        super(SGD, self).__setstate__(state)
        for group in self.param_groups:
            group.setdefault('nesterov', False)

    def step(self, closure=None):
        """Performs a single optimization step.

        Arguments:
            closure (callable, optional): A closure that reevaluates the model
                and returns the loss.
        """
        loss = None
        if closure is not None:
            loss = closure()
        for group in self.param_groups:
            weight_decay1 = group['weight_decay1']
            weight_decay2 = group['weight_decay2']
            momentum = group['momentum']
            dampening = group['dampening']
            nesterov = group['nesterov']
            for p in group['params']:
                if p.grad is None:
                    continue
                d_p = p.grad.data
                if weight_decay1 != 0:
                    # L1 penalty: the gradient of |w| is sign(w)
                    d_p.add_(torch.sign(p.data), alpha=weight_decay1)
                if weight_decay2 != 0:
                    # L2 penalty: the gradient of 0.5 * w^2 is w
                    d_p.add_(p.data, alpha=weight_decay2)
                if momentum != 0:
                    param_state = self.state[p]
                    if 'momentum_buffer' not in param_state:
                        buf = param_state['momentum_buffer'] = torch.zeros_like(p.data)
                        buf.mul_(momentum).add_(d_p)
                    else:
                        buf = param_state['momentum_buffer']
                        buf.mul_(momentum).add_(d_p, alpha=1 - dampening)
                    if nesterov:
                        d_p = d_p.add(buf, alpha=momentum)
                    else:
                        d_p = buf
                p.data.add_(d_p, alpha=-group['lr'])
        return loss