Deep Learning Paper: Refining activation downsampling with SoftPool and its PyTorch Implementation


Refining activation downsampling with SoftPool

PDF: https://arxiv.org/pdf/2101.00440v3.pdf
PyTorch code: https://github.com/shanglianlm0525/CvPytorch
PyTorch code: https://github.com/shanglianlm0525/PyTorch-Networks

1 Overview

SoftPool is a kernel-based pooling method: within each pooling region it computes a softmax-weighted sum of the activations, so stronger activations contribute more to the downsampled output.
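
For intuition, here is a minimal sketch of the softmax-weighted sum on one flattened 2x2 region (values chosen arbitrarily):

import torch

x = torch.tensor([1.0, 2.0, 3.0, 4.0])   # one pooling region, flattened
w = torch.softmax(x, dim=0)              # exponential weights, normalized
print((w * x).sum())                     # ~3.49: between the mean (2.5)
                                         # and the max (4.0), biased
                                         # toward larger activations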

Key properties:

  • The SoftPool operation is differentiable, so every activation in the pooling region receives a gradient during backpropagation (see the sketch after this list).
  • SoftPool preserves more of the descriptive activation features while keeping computation and memory overhead low.
  • It is friendly to small objects, but performance can degrade on large objects.
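
A minimal sketch of the differentiability point, using nothing beyond plain PyTorch autograd: the softmax-weighted sum passes a gradient to every input, whereas max pooling routes the gradient only to the maximum element.

import torch
import torch.nn.functional as F

# SoftPool core: every element of the region receives a gradient.
x = torch.tensor([1.0, 2.0, 3.0, 4.0], requires_grad=True)
(torch.softmax(x, dim=0) * x).sum().backward()
print(x.grad)      # four nonzero entries

# Max pooling: only the maximum receives a gradient.
y = torch.tensor([[1.0, 2.0], [3.0, 4.0]], requires_grad=True)
F.max_pool2d(y[None, None], kernel_size=2).sum().backward()
print(y.grad)      # nonzero only at the 4.0 position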

2 Variants of the pooling operation

(Figure: overview of the pooling variants compared in the paper.)

3 SoftPool

The SoftPool implementation is straightforward: weight the activations exponentially, apply average pooling, and then normalize. The ratio avg_pool(exp(x) · x) / avg_pool(exp(x)) is exactly the softmax-weighted sum, because the kernel-size factor of average pooling cancels out.

The proposed SoftPool computation is as follows:

For a pooling region R, each activation a_i gets the weight w_i = e^{a_i} / Σ_{j∈R} e^{a_j}, and the output is ã = Σ_{i∈R} w_i · a_i.
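
A quick sanity check of the equivalence between this formula and the ratio-of-average-pools trick used by the modules below (a sketch, using F.avg_pool2d and unfold):

import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 4, 4)

# Ratio-of-average-pools form (what the modules below compute).
out = F.avg_pool2d(x.exp() * x, 2) / F.avg_pool2d(x.exp(), 2)

# Explicit softmax-weighted sum over each 2x2 region via unfold.
patches = x.unfold(2, 2, 2).unfold(3, 2, 2)              # (1, 3, 2, 2, 2, 2)
w = patches.exp() / patches.exp().sum(dim=(-2, -1), keepdim=True)
ref = (w * patches).sum(dim=(-2, -1))

print(torch.allclose(out, ref, atol=1e-6))               # True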
import torch
import torch.nn as nn


class SoftPool1D(nn.Module):
    def __init__(self, kernel_size=2, stride=2):
        super(SoftPool1D, self).__init__()
        self.avgpool = nn.AvgPool1d(kernel_size, stride)

    def forward(self, x):
        # avg_pool(exp(x) * x) / avg_pool(exp(x)) is the softmax-weighted
        # sum over each pooling region: the 1/k factor of average pooling
        # cancels in the ratio.
        x_exp = torch.exp(x)
        x_exp_pool = self.avgpool(x_exp)
        x = self.avgpool(x_exp * x)
        return x / x_exp_pool


class SoftPool2D(nn.Module):
    def __init__(self, kernel_size=2, stride=2):
        super(SoftPool2D, self).__init__()
        self.avgpool = nn.AvgPool2d(kernel_size, stride)

    def forward(self, x):
        x_exp = torch.exp(x)
        x_exp_pool = self.avgpool(x_exp)
        x = self.avgpool(x_exp * x)
        return x / x_exp_pool


class SoftPool3D(nn.Module):
    def __init__(self, kernel_size=2, stride=2):  # default added for consistency
        super(SoftPool3D, self).__init__()
        self.avgpool = nn.AvgPool3d(kernel_size, stride)

    def forward(self, x):
        x_exp = torch.exp(x)
        x_exp_pool = self.avgpool(x_exp)
        x = self.avgpool(x_exp * x)
        return x / x_exp_pool


if __name__ == '__main__':
    model = SoftPool2D()
    print(model)

    x = torch.randn(1, 64, 56, 56)
    out = model(x)
    print(out.shape)  # torch.Size([1, 64, 28, 28])
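
One caveat worth noting: torch.exp overflows for large activations (near x ≈ 89 in float32, much earlier in float16). Since subtracting a constant inside the exponent leaves the output unchanged (the e^{-c} factor cancels between numerator and denominator), a stabilized variant can shift by the detached global maximum. A minimal sketch of that idea; StableSoftPool2D is a hypothetical name, not the official implementation from the paper's repo:

import torch
import torch.nn as nn


class StableSoftPool2D(nn.Module):
    """SoftPool2D with a constant shift inside exp() for numerical stability.

    Subtracting any constant c leaves the result unchanged, because
    e^{-c} cancels in the ratio of the two average pools.
    """
    def __init__(self, kernel_size=2, stride=2):
        super().__init__()
        self.avgpool = nn.AvgPool2d(kernel_size, stride)

    def forward(self, x):
        c = x.detach().amax()        # global max; detach keeps it out of autograd
        x_exp = torch.exp(x - c)     # now bounded in (0, 1]
        return self.avgpool(x_exp * x) / self.avgpool(x_exp)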
