
Loading PyTorch 0.4 Models in PyTorch 0.3, and the Changes Between Versions

1. In 0.4, devices are handled with .to(device).
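
A minimal sketch of the 0.4 device idiom; the Linear model and the input tensor here are just illustrative:

import torch
import torch.nn as nn

# Pick the GPU if one is available, otherwise fall back to the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(10, 2).to(device)  # move the model's parameters to the device
x = torch.randn(4, 10).to(device)    # move the input to the same device
y = model(x)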

2. In 0.4, Variable was removed (merged into Tensor); plain tensors can be used directly.
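
A small before/after sketch of the merge (the 0.3 lines are shown as comments):

import torch

# PyTorch 0.3 style: the Variable wrapper was required for autograd
# from torch.autograd import Variable
# x = Variable(torch.ones(2, 2), requires_grad=True)

# PyTorch 0.4 style: Variable and Tensor are merged, so a plain
# tensor carries requires_grad directly
x = torch.ones(2, 2, requires_grad=True)
y = (x * x).sum()
y.backward()
print(x.grad)  # gradients accumulate on the tensor itself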

3. with torch.no_grad(): replaces volatile. volatile is deprecated; when gradients are not needed during testing, wrap the code in with torch.no_grad():.
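
A sketch of the replacement, with an illustrative model:

import torch
import torch.nn as nn

model = nn.Linear(10, 2)
x = torch.randn(4, 10)

# 0.3 style: x = Variable(torch.randn(4, 10), volatile=True)
# 0.4 style: wrap inference in no_grad so no autograd graph is built
with torch.no_grad():
    out = model(x)
print(out.requires_grad)  # False: no gradient history was recorded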

4. .data should be replaced with .detach(). x.detach() returns a Tensor with requires_grad=False that shares storage with x; if x is needed in the backward pass, an in-place change to the Tensor returned by x.detach() will be caught by autograd (backward raises an error). In contrast, changes to the Tensor returned by x.data are not tracked by autograd, so if the backward pass needs x, the gradients will silently be wrong.
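
A sketch contrasting the two; sigmoid is used because its backward pass reads the forward output:

import torch

# .detach(): shares storage with the original; in-place edits are
# caught by autograd's version counter
x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
y = x.sigmoid()
d = y.detach()
d.zero_()              # zeroes y's data in place
# y.sum().backward()   # would raise RuntimeError: a needed value was modified

# .data: also shares storage, but edits go unnoticed
x2 = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
y2 = x2.sigmoid()
y2.data.zero_()        # silently corrupts the value backward will use
y2.sum().backward()    # runs without error, but x2.grad is wrong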

5. torchvision

  • PyTorch | An incomplete 0.3-to-0.4 migration handbook
  • A summary comparison of PyTorch 0.3 and 0.4

- Some interfaces changed in PyTorch 0.4, and saved models are backward compatible but not forward compatible: a newer version can load models saved by an older version, but an older version cannot load models saved by a newer one.

  • In PyTorch 0.4, is it recommended to use `reshape` than `view` when it is possible?
  • Question about 'rebuild_tensor_v2'?

When loading a model saved by PyTorch 0.4 with PyTorch 0.3:

# Monkey-patch because I trained with a newer version.
# This can be removed once PyTorch 0.4.x is out.
# See https://discuss.pytorch.org/t/question-about-rebuild-tensor-v2/14560
import torch._utils
try:
    torch._utils._rebuild_tensor_v2
except AttributeError:
    def _rebuild_tensor_v2(storage, storage_offset, size, stride, requires_grad, backward_hooks):
        tensor = torch._utils._rebuild_tensor(storage, storage_offset, size, stride)
        tensor.requires_grad = requires_grad
        tensor._backward_hooks = backward_hooks
        return tensor
    torch._utils._rebuild_tensor_v2 = _rebuild_tensor_v2      
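
With the patch applied, loading proceeds as usual; the file name and model variable below are placeholders:

state_dict = torch.load('model_0.4.pth')  # checkpoint saved by PyTorch 0.4
model.load_state_dict(state_dict)
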
  • A method for copying some weights into a new model: it seems you cannot directly extract a single layer's output from a Sequential unless you rebuild the model and return that layer's result in forward, or use a hook;
  • Outputting a layer inside a Sequential when fine-tuning in PyTorch, with VGG as the example; see the hook sketch after this list.
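
A minimal sketch of the hook approach, assuming torchvision's vgg16; the layer index 28 and the name 'conv5_3' are illustrative choices:

import torch
import torchvision.models as models

vgg = models.vgg16(pretrained=True)
features = {}

def save_output(name):
    def hook(module, input, output):
        features[name] = output.detach()
    return hook

# Register a forward hook on one layer inside the Sequential block;
# index 28 is the last conv layer of vgg16.
vgg.features[28].register_forward_hook(save_output('conv5_3'))

x = torch.randn(1, 3, 224, 224)
_ = vgg(x)
print(features['conv5_3'].shape)  # torch.Size([1, 512, 14, 14])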
