
About the "Unsupported slice step !" error when converting the Focus module with ncnn

Preface

This article explains why the Focus module triggers errors during ncnn conversion, and how to fix them.

Converting and implementing the Focus module
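The yolov5s-sim.onnx below is assumed to be an ONNX export of yolov5s that has already been passed through onnx-simplifier (hence the -sim suffix). A minimal sketch of that step, assuming yolov5s.onnx has already been exported from the yolov5 repo:

$ pip install onnx-simplifier
$ python -m onnxsim yolov5s.onnx yolov5s-sim.onnx

With the simplified model in hand, convert it with onnx2ncnn: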

$ onnx2ncnn yolov5s-sim.onnx yolov5s.param yolov5s.bin

Converting to an ncnn model prints a lot of "Unsupported slice step" messages; this is the error from converting the Focus module:

Unsupported slice step !
Unsupported slice step !
Unsupported slice step !
Unsupported slice step !
Unsupported slice step !
Unsupported slice step !
Unsupported slice step !
Unsupported slice step !

Many people run into this and are at a loss. These warnings simply mean the Focus part has to be patched by hand.

Open yolov5/models/common.py to see what Focus is actually doing:

class Focus(nn.Module):
    # Focus wh information into c-space
    def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True):  # ch_in, ch_out, kernel, stride, padding, groups
        super(Focus, self).__init__()
        self.conv = Conv(c1 * 4, c2, k, s, p, g, act)

    def forward(self, x):  # x(b,c,w,h) -> y(b,4c,w/2,h/2)
        return self.conv(torch.cat([x[..., ::2, ::2], x[..., 1::2, ::2], x[..., ::2, 1::2], x[..., 1::2, 1::2]], 1))

This is really a col-major space2depth operation. PyTorch does not seem to offer a high-level API for it (the reverse, depth2space, is available as nn.PixelShuffle), so yolov5 implements it with strided slices followed by a concat, a hack born of necessity.
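To make the mapping concrete, here is a minimal sketch of the same col-major space2depth on a plain CHW float buffer (the function name focus_space2depth is just for illustration); it is exactly the rearrangement that the slices above, and the ncnn custom layer further below, perform:

void focus_space2depth(const float* in, float* out, int channels, int h, int w)
{
    const int outw = w / 2;
    const int outh = h / 2;

    for (int c = 0; c < channels; c++)
    {
        for (int y = 0; y < h; y++)
        {
            for (int x = 0; x < w; x++)
            {
                // (y & 1, x & 1) picks one of the four 2x2 sub-grids; the sub-grids are stacked
                // along the channel axis in the order (0,0), (1,0), (0,1), (1,1), matching the
                // concat order of the four slices in the Focus module above
                int oc = c + channels * ((y & 1) + 2 * (x & 1));
                out[(oc * outh + y / 2) * outw + x / 2] = in[(c * h + y) * w + x];
            }
        }
    }
}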

Open the param file with the netron tool and locate the part corresponding to Focus.


Replace this whole pile of slice trickery with a single custom op, YoloV5Focus, by editing the param file; a sketch of the resulting line follows the list below.

  • Identify the input and output blob names and connect them with a single custom layer, YoloV5Focus
  • In the second line at the top of the param file, layer_count must be updated to match, while blob_count only needs to be greater than or equal to the actual number of blobs
  • After editing, run the ncnnoptimize tool, which automatically corrects blob_count to the actual value
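For reference, the edited param ends up with a single line of roughly this form in place of the slice-related layers you removed (the blob names images and 207 come from one particular export and will differ in yours; the layer name focus is arbitrary):

YoloV5Focus              focus                    1 1 images 207

The 1 1 means one input blob and one output blob: the input is the blob that originally fed the first slice, and the output is the blob the original Concat produced, so everything downstream stays wired up.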

After the replacement, run the model through ncnnoptimize; this also converts the weights to fp16 storage, shrinking the model:

$ ncnnoptimize yolov5s.param yolov5s.bin yolov5s-opt.param yolov5s-opt.bin 65536

Next we need to implement the custom op YoloV5Focus. The step-by-step guide in the wiki is fairly involved:

https://github.com/Tencent/ncnn/wiki/how-to-implement-custom-layer-step-by-step

For an op like Focus, which has no weights and nothing to load from the model files, it is enough to inherit from ncnn::Layer and implement forward. Note the DEFINE_LAYER_CREATOR macro, which defines YoloV5Focus_layer_creator:

#include "layer.h"

class YoloV5Focus : public ncnn::Layer
{
public:
    YoloV5Focus()
    {
        one_blob_only = true; // one input blob in, one output blob out
    }

    virtual int forward(const ncnn::Mat& bottom_blob, ncnn::Mat& top_blob, const ncnn::Option& opt) const
    {
        int w = bottom_blob.w;
        int h = bottom_blob.h;
        int channels = bottom_blob.c;

        int outw = w / 2;
        int outh = h / 2;
        int outc = channels * 4;

        top_blob.create(outw, outh, outc, 4u, 1, opt.blob_allocator);
        if (top_blob.empty())
            return -100;

        #pragma omp parallel for num_threads(opt.num_threads)
        for (int p = 0; p < outc; p++)
        {
            // output channel p reads from input channel (p % channels), starting at one of the four
            // 2x2 sub-grid offsets: row offset (p / channels) % 2, column offset (p / channels) / 2
            const float* ptr = bottom_blob.channel(p % channels).row((p / channels) % 2) + ((p / channels) / 2);
            float* outptr = top_blob.channel(p);

            for (int i = 0; i < outh; i++)
            {
                for (int j = 0; j < outw; j++)
                {
                    *outptr = *ptr;

                    outptr += 1;
                    ptr += 2;
                }

                ptr += w; // the inner loop advanced ptr by w; skip one more input row to move down by 2
            }
        }

        return 0;
    }
};

DEFINE_LAYER_CREATOR(YoloV5Focus)

Register YoloV5Focus before loading the model, otherwise loading fails with an error that YoloV5Focus cannot be found:

ncnn::Net yolov5;

yolov5.opt.use_vulkan_compute = true;
// yolov5.opt.use_bf16_storage = true;

yolov5.register_custom_layer("YoloV5Focus", YoloV5Focus_layer_creator);

yolov5.load_param("yolov5s-opt.param");
yolov5.load_model("yolov5s-opt.bin");
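
After loading, inference follows the usual ncnn pattern. A minimal sketch, assuming bgr is a cv::Mat holding the input image and that the input and output blobs are named images and output (check your own param file for the real names); a full pipeline would also letterbox-pad the input to a multiple of 32 and decode the raw output into boxes:

ncnn::Mat in = ncnn::Mat::from_pixels_resize(bgr.data, ncnn::Mat::PIXEL_BGR2RGB,
                                             bgr.cols, bgr.rows, 640, 640);

// yolov5 expects RGB input scaled to 0~1
const float norm_vals[3] = {1 / 255.f, 1 / 255.f, 1 / 255.f};
in.substract_mean_normalize(0, norm_vals);

ncnn::Extractor ex = yolov5.create_extractor();
ex.input("images", in);

ncnn::Mat out;
ex.extract("output", out);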