While writing PyTorch code, I ran into this error:
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [1, 64, 92, 122]], which is output 0 of LeakyReluBackward0, is at version 1; expected version 0 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
In a network's `forward` function, avoid in-place operations such as `x += 1`; write `x = x + 1` instead.
For example, my code was originally written like this:
error
![](https://img.laitimes.com/img/9ZDMuAjOiMmIsIjOiQnIsIyZuBnLwYDO5UTMzAjM5EjMxAjMwIzLc52YucWbp5GZzNmLn9Gbi1yZtl2Lc9CX6MHc0RHaiojIsJye.png)
After changing it this way, the code ran correctly.
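The original code is only shown as an image above, so here is a minimal, self-contained sketch of the same failure mode (the network, shapes, and helper name `run` are illustrative, not from the original post). Autograd saves some tensors during the forward pass for use in `backward()`; mutating such a tensor with `+=` bumps its version counter, and `backward()` then raises exactly the "modified by an inplace operation ... LeakyReluBackward0" error quoted above. Rebinding with `x = x + 1` creates a new tensor and leaves the saved one intact:

```python
import torch
import torch.nn.functional as F

def run(in_place: bool) -> bool:
    """Return True if backward() succeeds, False if autograd raises."""
    x = torch.randn(1, 4, requires_grad=True)
    y = F.leaky_relu(x)
    z = y * y          # autograd saves y here: d(y*y)/dy needs y in backward
    if in_place:
        y += 1         # in-place: bumps y's version counter after it was saved
    else:
        y = y + 1      # out-of-place: new tensor, the saved y is untouched
    try:
        z.sum().backward()
        return True
    except RuntimeError:
        # "one of the variables needed for gradient computation has been
        # modified by an inplace operation ... output 0 of LeakyReluBackward0"
        return False

print(run(in_place=True))    # False: the in-place edit broke backward
print(run(in_place=False))   # True: the out-of-place version trains fine
```

Note that the error names the tensor by the op that produced it (`output 0 of LeakyReluBackward0`), which is why the traceback points at the activation even though the offending line is the later `+=`.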
- https://blog.csdn.net/DuinoDu/article/details/80435127