
PyTorch Study Notes (17): Loss Functions (II)

Loss Functions

nn.L1Loss

Function: computes the absolute difference between inputs and target

l_{n}=\left|x_{n}-y_{n}\right|

nn.MSELoss

Function: computes the squared difference between inputs and target

l_{n}=\left(x_{n}-y_{n}\right)^{2}

Main parameters

reduction: computation mode, one of none/sum/mean (see the sketch after this list)

none - compute element-wise

sum - sum over all elements, returns a scalar

mean - average over all elements, returns a scalar
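A minimal sketch (tensors chosen here purely for illustration) showing how the three reduction modes differ on the same input:

import torch
import torch.nn as nn

inputs = torch.ones((2, 2))
target = torch.ones((2, 2)) * 3

for reduction in ['none', 'sum', 'mean']:
    loss = nn.MSELoss(reduction=reduction)(inputs, target)
    print(reduction, loss)
# none -> tensor([[4., 4.], [4., 4.]])   element-wise
# sum  -> tensor(16.)                    scalar
# mean -> tensor(4.)                     scalar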

nn.SmoothL1Loss

Function: a smoothed version of L1Loss

Main parameters

reduction: computation mode, one of none/sum/mean

none - compute element-wise

sum - sum over all elements, returns a scalar

mean - average over all elements, returns a scalar

\operatorname{loss}(x, y)=\frac{1}{n} \sum_{i} z_{i}

z_{i}=\begin{cases} 0.5\left(x_{i}-y_{i}\right)^{2}, & \text{if } \left|x_{i}-y_{i}\right|<1 \\ \left|x_{i}-y_{i}\right|-0.5, & \text{otherwise} \end{cases}
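A quick hand check of the piecewise formula, with one point inside the quadratic zone and one outside (values chosen for illustration):

import torch
import torch.nn as nn

x = torch.tensor([0.5, 2.0])
y = torch.zeros(2)

print(nn.SmoothL1Loss(reduction='none')(x, y))
# |x - y| = 0.5 < 1  -> 0.5 * 0.5^2 = 0.125
# |x - y| = 2.0 >= 1 -> 2.0 - 0.5  = 1.5
# output: tensor([0.1250, 1.5000])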

nn.PoissonNLLLoss

Function: negative log-likelihood loss for a Poisson distribution

Main parameters

log_input: whether the input is given in log form; determines which formula is used

full: whether to compute the complete loss, i.e. including the Stirling approximation term; default False

eps: correction term that prevents log(input) from being nan

log_input = True:

loss(input, target) = exp(input) - target * input

log_input = False:

loss(input, target) = input - target * log(input + eps)
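The demo code later in this note only exercises log_input=True; here is a minimal sketch of the log_input=False branch (the rate values below are chosen for illustration and must be positive):

import torch
import torch.nn as nn

rates = torch.tensor([[2.0, 0.5]])      # inputs interpreted as Poisson rates
target = torch.tensor([[1.0, 3.0]])

loss_f = nn.PoissonNLLLoss(log_input=False, full=False, reduction='none')
print(loss_f(rates, target))
# element-wise: input - target * log(input + eps)
# 2.0 - 1 * log(2.0) ≈ 1.3069 ;  0.5 - 3 * log(0.5) ≈ 2.5794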

nn.KLDivLoss

Function: computes the KL divergence (relative entropy)

Note: the input must be converted to log-probabilities beforehand, e.g. via nn.LogSoftmax()

Main parameters

reduction: none/sum/mean/batchmean

batchmean - average over the batch dimension (sum divided by batch size)

none - compute element-wise

sum - sum over all elements, returns a scalar

mean - average over all elements, returns a scalar

D_{KL}(P \| Q)=E_{x \sim P}\left[\log \frac{P(x)}{Q(x)}\right]=E_{x \sim P}[\log P(x)-\log Q(x)]=\sum_{i=1}^{N} P\left(x_{i}\right)\left(\log P\left(x_{i}\right)-\log Q\left(x_{i}\right)\right)

What PyTorch computes per element (note that x_n is already a log-probability, so no log is applied to it):

l_{n}=y_{n} \cdot\left(\log y_{n}-x_{n}\right)
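A minimal sketch (random tensors for illustration) verifying the batchmean relation: it divides the summed loss by the batch size rather than by the element count, matching the mathematical definition of KL divergence:

import torch
import torch.nn as nn
import torch.nn.functional as F

p_log = F.log_softmax(torch.randn(2, 3), dim=1)   # input: log-probabilities
q = F.softmax(torch.randn(2, 3), dim=1)           # target: probabilities

loss_sum = nn.KLDivLoss(reduction='sum')(p_log, q)
loss_bm = nn.KLDivLoss(reduction='batchmean')(p_log, q)
print(torch.allclose(loss_bm, loss_sum / 2))      # True: sum divided by batch size 2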

nn.MarginRankingLoss

Function: computes the similarity between two vectors; used for ranking tasks

Special note: this method computes the difference between two groups of data and returns an n*n loss matrix

Main parameters

margin: boundary value, the required gap between x1 and x2

reduction: computation mode, one of none/sum/mean

\operatorname{loss}(x, y)=\max(0, -y \cdot (x_{1}-x_{2})+\text{margin})

When y = 1, we want x1 to rank above x2; no loss is produced when x1 > x2

When y = -1, we want x2 to rank above x1; no loss is produced when x2 > x1

nn.MultiLabelMarginLoss

Function: multi-label margin loss

Main parameters

reduction: computation mode, one of none/sum/mean

\operatorname{loss}(x, y)=\sum_{ij} \frac{\max(0, 1-(x[y[j]]-x[i]))}{x.\operatorname{size}(0)}

nn.SoftMarginLoss

Function: computes the two-class logistic loss

Main parameters

reduction: computation mode, one of none/sum/mean

\operatorname{loss}(x, y)=\sum_{i} \frac{\log(1+\exp(-y[i] \cdot x[i]))}{x.\operatorname{nelement}()}

nn.MultiLabelSoftMarginLoss

Function: the multi-label, one-versus-all version of SoftMarginLoss

Main parameters

reduction: computation mode, one of none/sum/mean

\operatorname{loss}(x, y)=-\frac{1}{C} \sum_{i} y[i] \cdot \log\left((1+\exp(-x[i]))^{-1}\right)+(1-y[i]) \cdot \log\left(\frac{\exp(-x[i])}{1+\exp(-x[i])}\right)

nn.MultiMarginLoss

Function: computes the multi-class hinge loss

Main parameters

p: can be 1 or 2

weight: per-class weights for the loss

margin: boundary value

reduction: computation mode, one of none/sum/mean

\operatorname{loss}(x, y)=\frac{\sum_{i} \max(0, \operatorname{margin}-x[y]+x[i])^{p}}{x.\operatorname{size}(0)}

nn.TripletMarginLoss

Function: computes the triplet loss, commonly used in face verification

Main parameters

p: degree of the norm, default 2

margin: boundary value

reduction: computation mode, one of none/sum/mean

L(a, p, n)=\max\left\{d\left(a_{i}, p_{i}\right)-d\left(a_{i}, n_{i}\right)+\text{margin}, 0\right\}

d\left(x_{i}, y_{i}\right)=\left\|\mathbf{x}_{i}-\mathbf{y}_{i}\right\|_{p}

nn.HingeEmbeddingLoss

Function: computes the similarity between two inputs; commonly used for non-linear embeddings and semi-supervised learning

Special note: the input x should be the absolute value of the difference between the two inputs (see the sketch after the formula)

Main parameters

margin: boundary value

reduction: computation mode, one of none/sum/mean

l_{n}=\begin{cases} x_{n}, & \text{if } y_{n}=1 \\ \max\{0, \Delta-x_{n}\}, & \text{if } y_{n}=-1 \end{cases}
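Per the note above, x is itself a distance. A minimal sketch (tensors chosen for illustration) that first forms x as the element-wise absolute difference of two inputs:

import torch
import torch.nn as nn

x1 = torch.tensor([1.0, 0.8, 0.5])
x2 = torch.zeros(3)
y = torch.tensor([1, 1, -1])

dist = torch.abs(x1 - x2)   # x_n = |x1 - x2|
loss = nn.HingeEmbeddingLoss(margin=1.0, reduction='none')(dist, y)
print(loss)
# y = 1  -> x_n itself: 1.0 and 0.8
# y = -1 -> max(0, margin - x_n) = max(0, 1 - 0.5) = 0.5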

nn.CosineEmbeddingLoss

Function: uses cosine similarity to measure the similarity between two inputs

Main parameters

margin: can take values in [-1, 1]; [0, 0.5] is recommended

reduction: computation mode, one of none/sum/mean

\operatorname{loss}(x, y)=\begin{cases} 1-\cos\left(x_{1}, x_{2}\right), & \text{if } y=1 \\ \max\left(0, \cos\left(x_{1}, x_{2}\right)-\operatorname{margin}\right), & \text{if } y=-1 \end{cases}

\cos(\theta)=\frac{A \cdot B}{\|A\|\|B\|}=\frac{\sum_{i=1}^{n} A_{i} B_{i}}{\sqrt{\sum_{i=1}^{n} A_{i}^{2}} \sqrt{\sum_{i=1}^{n} B_{i}^{2}}}

nn.CTCLoss

Function: computes the CTC (Connectionist Temporal Classification) loss, used for classifying sequential data

Main parameters

blank: index of the blank label

zero_infinity: whether to zero out infinite losses and the associated gradients

reduction: computation mode, one of none/sum/mean

# -*- coding: utf-8 -*-
"""
nn.L1Loss
nn.MSELoss
nn.SmoothL1Loss
nn.PoissonNLLLoss
nn.KLDivLoss
nn.MarginRankingLoss
nn.MultiLabelMarginLoss
nn.SoftMarginLoss
nn.MultiLabelSoftMarginLoss
nn.MultiMarginLoss
nn.TripletMarginLoss
nn.HingeEmbeddingLoss
nn.CosineEmbeddingLoss
nn.CTCLoss
"""

import torch
import torch.nn as nn
import torch.nn.functional as F
import matplotlib.pyplot as plt
import numpy as np
# the course repo provides tools.common_tools.set_seed; seed directly here so the script runs standalone
torch.manual_seed(1)  # set random seed
np.random.seed(1)

# ------------------------------------------------- 5 L1 loss ----------------------------------------------
flag = 0
# flag = 1
if flag:

    inputs = torch.ones((2, 2))
    target = torch.ones((2, 2)) * 3

    loss_f = nn.L1Loss(reduction='none')
    loss = loss_f(inputs, target)

    print("input:{}\ntarget:{}\nL1 loss:{}".format(inputs, target, loss))

# ------------------------------------------------- 6 MSE loss ----------------------------------------------

    loss_f_mse = nn.MSELoss(reduction='none')
    loss_mse = loss_f_mse(inputs, target)

    print("MSE loss:{}".format(loss_mse))

# ------------------------------------------------- 7 Smooth L1 loss ----------------------------------------------
flag = 0
# flag = 1
if flag:
    inputs = torch.linspace(-3, 3, steps=500)
    target = torch.zeros_like(inputs)

    loss_f = nn.SmoothL1Loss(reduction='none')

    loss_smooth = loss_f(inputs, target)

    loss_l1 = np.abs(inputs.numpy())

    plt.plot(inputs.numpy(), loss_smooth.numpy(), label='Smooth L1 Loss')
    plt.plot(inputs.numpy(), loss_l1, label='L1 loss')
    plt.xlabel('x_i - y_i')
    plt.ylabel('loss value')
    plt.legend()
    plt.grid()
    plt.show()


# ------------------------------------------------- 8 Poisson NLL Loss ----------------------------------------------
flag = 0
# flag = 1
if flag:

    inputs = torch.randn((2, 2))
    target = torch.randn((2, 2))

    loss_f = nn.PoissonNLLLoss(log_input=True, full=False, reduction='none')
    loss = loss_f(inputs, target)
    print("input:{}\ntarget:{}\nPoisson NLL loss:{}".format(inputs, target, loss))

# --------------------------------- compute by hand
flag = 0
# flag = 1
if flag:

    idx = 0

    loss_1 = torch.exp(inputs[idx, idx]) - target[idx, idx]*inputs[idx, idx]

    print("第一個元素loss:", loss_1)


# ------------------------------------------------- 9 KL Divergence Loss ----------------------------------------------
flag = 0
# flag = 1
if flag:

    inputs = torch.tensor([[0.5, 0.3, 0.2], [0.2, 0.3, 0.5]])
    inputs_log = torch.log(inputs)
    target = torch.tensor([[0.9, 0.05, 0.05], [0.1, 0.7, 0.2]], dtype=torch.float)

    loss_f_none = nn.KLDivLoss(reduction='none')
    loss_f_mean = nn.KLDivLoss(reduction='mean')
    loss_f_bs_mean = nn.KLDivLoss(reduction='batchmean')

    # KLDivLoss expects log-probabilities as input (see the note above), so pass inputs_log
    loss_none = loss_f_none(inputs_log, target)
    loss_mean = loss_f_mean(inputs_log, target)
    loss_bs_mean = loss_f_bs_mean(inputs_log, target)

    print("loss_none:\n{}\nloss_mean:\n{}\nloss_bs_mean:\n{}".format(loss_none, loss_mean, loss_bs_mean))

# --------------------------------- compute by hand
flag = 0
# flag = 1
if flag:

    idx = 0

    # l_n = y_n * (log(y_n) - x_n), with x_n the log-probability input
    loss_1 = target[idx, idx] * (torch.log(target[idx, idx]) - inputs_log[idx, idx])

    print("first element loss:", loss_1)


# ---------------------------------------------- 10 Margin Ranking Loss --------------------------------------------
flag = 0
# flag = 1
if flag:

    x1 = torch.tensor([[1], [2], [3]], dtype=torch.float)
    x2 = torch.tensor([[2], [2], [2]], dtype=torch.float)

    target = torch.tensor([1, 1, -1], dtype=torch.float)

    loss_f_none = nn.MarginRankingLoss(margin=0, reduction='none')

    loss = loss_f_none(x1, x2, target)

    print(loss)

# ---------------------------------------------- 11 Multi Label Margin Loss -----------------------------------------
flag = 0
# flag = 1
if flag:

    x = torch.tensor([[0.1, 0.2, 0.4, 0.8]])
    y = torch.tensor([[0, 3, -1, -1]], dtype=torch.long)

    loss_f = nn.MultiLabelMarginLoss(reduction='none')

    loss = loss_f(x, y)

    print(loss)

# --------------------------------- compute by hand
flag = 0
# flag = 1
if flag:

    x = x[0]
    item_1 = (1 - (x[0] - x[1])) + (1 - (x[0] - x[2]))    # true label index 0 vs non-label indices 1, 2
    item_2 = (1 - (x[3] - x[1])) + (1 - (x[3] - x[2]))    # true label index 3 vs non-label indices 1, 2

    loss_h = (item_1 + item_2) / x.shape[0]

    print(loss_h)

# ---------------------------------------------- 12 SoftMargin Loss -----------------------------------------
flag = 0
# flag = 1
if flag:

    inputs = torch.tensor([[0.3, 0.7], [0.5, 0.5]])
    target = torch.tensor([[-1, 1], [1, -1]], dtype=torch.float)

    loss_f = nn.SoftMarginLoss(reduction='none')

    loss = loss_f(inputs, target)

    print("SoftMargin: ", loss)

# --------------------------------- compute by hand
flag = 0
# flag = 1
if flag:

    idx = 0

    inputs_i = inputs[idx, idx]
    target_i = target[idx, idx]

    loss_h = np.log(1 + np.exp(-target_i * inputs_i))

    print(loss_h)


# ---------------------------------------------- 13 MultiLabel SoftMargin Loss -----------------------------------------
flag = 0
# flag = 1
if flag:

    inputs = torch.tensor([[0.3, 0.7, 0.8]])
    target = torch.tensor([[0, 1, 1]], dtype=torch.float)

    loss_f = nn.MultiLabelSoftMarginLoss(reduction='none')

    loss = loss_f(inputs, target)

    print("MultiLabel SoftMargin: ", loss)

# --------------------------------- compute by hand
flag = 0
# flag = 1
if flag:

    i_0 = torch.log(torch.exp(-inputs[0, 0]) / (1 + torch.exp(-inputs[0, 0])))    # y = 0 term

    i_1 = torch.log(1 / (1 + torch.exp(-inputs[0, 1])))                           # y = 1 term
    i_2 = torch.log(1 / (1 + torch.exp(-inputs[0, 2])))                           # y = 1 term

    loss_h = (i_0 + i_1 + i_2) / -3

    print(loss_h)

# ---------------------------------------------- 14 Multi Margin Loss -----------------------------------------
flag = 0
# flag = 1
if flag:

    x = torch.tensor([[0.1, 0.2, 0.7], [0.2, 0.5, 0.3]])
    y = torch.tensor([1, 2], dtype=torch.long)

    loss_f = nn.MultiMarginLoss(reduction='none')

    loss = loss_f(x, y)

    print("Multi Margin Loss: ", loss)

# --------------------------------- compute by hand
flag = 0
# flag = 1
if flag:

    x = x[0]
    margin = 1

    i_0 = margin - (x[1] - x[0])
    # i_1 = margin - (x[1] - x[1])
    i_2 = margin - (x[1] - x[2])

    loss_h = (i_0 + i_2) / x.shape[0]

    print(loss_h)

# ---------------------------------------------- 15 Triplet Margin Loss -----------------------------------------
flag = 0
# flag = 1
if flag:

    anchor = torch.tensor([[1.]])
    pos = torch.tensor([[2.]])
    neg = torch.tensor([[0.5]])

    loss_f = nn.TripletMarginLoss(margin=1.0, p=1)

    loss = loss_f(anchor, pos, neg)

    print("Triplet Margin Loss", loss)

# --------------------------------- compute by hand
flag = 0
# flag = 1
if flag:

    margin = 1
    a, p, n = anchor[0], pos[0], neg[0]

    d_ap = torch.abs(a-p)
    d_an = torch.abs(a-n)

    loss = torch.clamp(d_ap - d_an + margin, min=0.)   # max{d_ap - d_an + margin, 0}

    print(loss)

# ---------------------------------------------- 16 Hinge Embedding Loss -----------------------------------------
flag = 0
# flag = 1
if flag:

    inputs = torch.tensor([[1., 0.8, 0.5]])
    target = torch.tensor([[1, 1, -1]])

    loss_f = nn.HingeEmbeddingLoss(margin=1, reduction='none')

    loss = loss_f(inputs, target)

    print("Hinge Embedding Loss", loss)

# --------------------------------- compute by hand
flag = 0
# flag = 1
if flag:
    margin = 1.
    loss = max(0, margin - inputs.numpy()[0, 2])   # the y = -1 element

    print(loss)


# ---------------------------------------------- 17 Cosine Embedding Loss -----------------------------------------
flag = 0
# flag = 1
if flag:

    x1 = torch.tensor([[0.3, 0.5, 0.7], [0.3, 0.5, 0.7]])
    x2 = torch.tensor([[0.1, 0.3, 0.5], [0.1, 0.3, 0.5]])

    target = torch.tensor([1, -1], dtype=torch.float)   # one label per sample pair; shape (N,)

    loss_f = nn.CosineEmbeddingLoss(margin=0., reduction='none')

    loss = loss_f(x1, x2, target)

    print("Cosine Embedding Loss", loss)

# --------------------------------- compute by hand
flag = 0
# flag = 1
if flag:
    margin = 0.

    def cosine(a, b):
        numerator = torch.dot(a, b)
        denominator = torch.norm(a, 2) * torch.norm(b, 2)
        return float(numerator/denominator)

    l_1 = 1 - cosine(x1[0], x2[0])                  # y = 1 case
    l_2 = max(0, cosine(x1[1], x2[1]) - margin)     # y = -1 case

    print(l_1, l_2)


# ---------------------------------------------- 18 CTC Loss -----------------------------------------
# flag = 0
flag = 1
if flag:
    T = 50      # Input sequence length
    C = 20      # Number of classes (including blank)
    N = 16      # Batch size
    S = 30      # Target sequence length of longest target in batch
    S_min = 10  # Minimum target length, for demonstration purposes

    # Initialize random batch of input vectors, for *size = (T,N,C)
    inputs = torch.randn(T, N, C).log_softmax(2).detach().requires_grad_()

    # Initialize random batch of targets (0 = blank, 1:C = classes)
    target = torch.randint(low=1, high=C, size=(N, S), dtype=torch.long)

    input_lengths = torch.full(size=(N,), fill_value=T, dtype=torch.long)
    target_lengths = torch.randint(low=S_min, high=S, size=(N,), dtype=torch.long)

    ctc_loss = nn.CTCLoss()
    loss = ctc_loss(inputs, target, input_lengths, target_lengths)

    print("CTC loss: ", loss)

           
