
The role of softmax in torch.nn.functional and an explanation of its parameters

Reference: https://pytorch-cn.readthedocs.io/zh/latest/package_references/functional/#_1

class torch.nn.Softmax(dim=None)      # the module is constructed with dim and then called on the input

or:

torch.nn.functional.softmax(input, dim)      

Applies the Softmax function to an n-dimensional input tensor, rescaling the elements so that they lie in the range (0, 1) and sum to 1. The Softmax function is defined as follows:

Softmax(x_i) = exp(x_i) / Σ_j exp(x_j)
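
As a quick sanity check of this definition, here is a minimal sketch that applies the formula by hand to a small vector and compares it with F.softmax:

import torch
import torch.nn.functional as F

x = torch.tensor([1.0, 2.0, 3.0, 4.0])

# Apply the definition directly: exp(x_i) / sum_j exp(x_j)
manual = torch.exp(x) / torch.exp(x).sum()
print(manual)               # tensor([0.0321, 0.0871, 0.2369, 0.6439])
print(manual.sum())         # tensor(1.) -- the rescaled values sum to 1

# The built-in function gives the same result
print(F.softmax(x, dim=0))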

Parameters:

  dim: the dimension along which Softmax is computed. For a 2-D input, dim=0 computes along the columns and dim=1 along the rows. Relying on the implicit default dim is deprecated; it is best to state dim explicitly, otherwise you get the warning:

UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument.      
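
A minimal sketch of the recommended call, with dim given explicitly so that the warning is not raised (assuming a 2-D input where each row should sum to 1):

import torch
from torch import nn

m = nn.Softmax(dim=1)            # dim stated explicitly, so no UserWarning
print(m(torch.randn(2, 3)))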

Shape:

  • Input: (N, L)
  • Output: (N, L)

The result is a tensor with the same shape as the input, with every element in the range (0, 1).
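
These properties can be checked directly; the following is only an illustrative sketch:

import torch
import torch.nn.functional as F

inp = torch.randn(4, 5)                     # shape (N, L) = (4, 5)
out = F.softmax(inp, dim=1)

print(out.shape)                            # torch.Size([4, 5]) -- same shape as the input
print((out > 0).all(), (out < 1).all())     # tensor(True) tensor(True)
print(out.sum(dim=1))                       # each row sums to 1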

Example:

import torch
from torch import nn
from torch import autograd

m = nn.Softmax()                              # no dim given, so the deprecation warning is triggered
input = autograd.Variable(torch.randn(2, 3))  # Variable is legacy; a plain tensor works the same way
print(input)
print(m(input))

Output:

(deeplearning) userdeMBP:pytorch user$ python test.py 
tensor([[ 0.2854,  0.1708,  0.4308],
        [-0.1983,  2.0705,  0.1549]])
test.py:9: UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument.
  print(m(input))
tensor([[0.3281, 0.2926, 0.3794],
        [0.0827, 0.7996, 0.1177]])      

As you can see, by default the computation is done along the rows, i.e. dim=1.
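
That claim can be confirmed by comparing the implicit call with an explicit dim=1 (an illustrative check, not from the original post):

import torch
from torch import nn

inp = torch.randn(2, 3)
implicit = nn.Softmax()(inp)                # emits the deprecation warning
explicit = nn.Softmax(dim=1)(inp)           # same result, no warning
print(torch.allclose(implicit, explicit))   # True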

A clearer example:

import torch
import torch.nn.functional as F

x = torch.Tensor([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]])

y1 = F.softmax(x, dim=0)  # softmax over each column
print(y1)

y2 = F.softmax(x, dim=1)  # softmax over each row
print(y2)

x1 = torch.Tensor([1, 2, 3, 4])
print(x1)

y3 = F.softmax(x1, dim=0)  # for a 1-D tensor use dim=0; dim=1 raises an error
print(y3)

Output:

(deeplearning) userdeMBP:pytorch user$ python test.py 
tensor([[0.3333, 0.3333, 0.3333, 0.3333],
        [0.3333, 0.3333, 0.3333, 0.3333],
        [0.3333, 0.3333, 0.3333, 0.3333]])
tensor([[0.0321, 0.0871, 0.2369, 0.6439],
        [0.0321, 0.0871, 0.2369, 0.6439],
        [0.0321, 0.0871, 0.2369, 0.6439]])
tensor([1., 2., 3., 4.])
tensor([0.0321, 0.0871, 0.2369, 0.6439])      

Since the values within each column are identical, computing along the columns gives every element a weight of 0.3333; every row is [1, 2, 3, 4], so computing along the rows gives [0.0321, 0.0871, 0.2369, 0.6439] for each row.
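
The same point can be verified numerically: every column of y1 and every row of y2 sums to 1 (a small illustrative check):

import torch
import torch.nn.functional as F

x = torch.Tensor([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]])

y1 = F.softmax(x, dim=0)
y2 = F.softmax(x, dim=1)

print(y1.sum(dim=0))   # tensor([1., 1., 1., 1.]) -- each column sums to 1
print(y2.sum(dim=1))   # tensor([1., 1., 1.])     -- each row sums to 1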

Using dim=1 on a 1-D tensor raises:

RuntimeError: Dimension out of range (expected to be in range of [-1, 0], but got 1)      
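
A 1-D tensor only has dimension 0 (equivalently -1), which is why the valid range is [-1, 0]. One way to write code that works regardless of the number of dimensions is to use dim=-1, i.e. the last dimension; a minimal sketch:

import torch
import torch.nn.functional as F

x1 = torch.Tensor([1, 2, 3, 4])
x2 = torch.Tensor([[1, 2, 3, 4], [1, 2, 3, 4]])

# dim=-1 always refers to the last dimension
print(F.softmax(x1, dim=-1))   # same as dim=0 for a 1-D tensor
print(F.softmax(x2, dim=-1))   # same as dim=1 for a 2-D tensor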

Reposted from: https://www.cnblogs.com/wanghui-garcia/p/10675588.html