
PyTorch: converting between CPU and GPU at load time with torch.load(map_location=)

When switching a model from GPU to CPU, I ran into this error:

RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location='cpu' to map your storages to the CPU.

The fix is to change the load call to:

torch.load("0.9472_0048.weights",map_location='cpu')      

and the error goes away.
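For completeness, a minimal sketch of the fix in context (assuming the weights file holds a state_dict and Net is a model class matching the saved architecture; both are illustrative names, not guaranteed by the original post):

import torch

model = Net()  # assumed: a model class matching the saved architecture
# map_location='cpu' remaps every CUDA storage in the checkpoint onto the CPU
state_dict = torch.load("0.9472_0048.weights", map_location='cpu')
model.load_state_dict(state_dict)
model.eval()  # switch to inference mode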

For easy reference, here is a summary of the common cases:

Suppose we saved only the model's parameters (model.state_dict()) to a file named modelparameters.pth, and the model is constructed as model = Net().

1. cpu -> cpu, or gpu -> gpu:

checkpoint = torch.load('modelparameters.pth')

model.load_state_dict(checkpoint)      

2. cpu -> gpu 1

torch.load('modelparameters.pth', map_location=lambda storage, loc: storage.cuda(1))      
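As a sketch of how case 2 fits into a full loading flow: the lambda is called with each storage and its original location string, and should return the storage on the target device (this assumes a machine with at least two GPUs and the same Net class as above):

import torch

model = Net().cuda(1)  # assumed model class; the model itself must also live on GPU 1
# every storage in the file is copied to cuda:1, no matter where it was saved
checkpoint = torch.load('modelparameters.pth',
                        map_location=lambda storage, loc: storage.cuda(1))
model.load_state_dict(checkpoint)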

3. gpu 1 -> gpu 0

torch.load('modelparameters.pth', map_location={'cuda:1':'cuda:0'})      

4. gpu -> cpu

torch.load('modelparameters.pth', map_location=lambda storage, loc: storage)      
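Beyond these fixed mappings, a device-agnostic pattern (a sketch, not part of the original post) covers all four cases with one code path, since map_location also accepts a torch.device:

import torch

device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
model = Net().to(device)  # assumed model class
# the checkpoint is remapped to whatever device is actually available
checkpoint = torch.load('modelparameters.pth', map_location=device)
model.load_state_dict(checkpoint)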

Original article: https://blog.csdn.net/bc521bc/article/details/85623515
