
Error: customWinogradConvActLayer.cpp

customWinogradConvActLayer.cpp:159: std::unique_ptr<dit::Convolution> nvinfer1::cudnn::WinogradConvActLayer::createConvolution(const nvinfer1::cudnn::CommonContext&, bool, const int8_t*) const: Assertion `configIsValid(context)' failed.      

It turns out that the first computer had an NVIDIA 1080 Ti GPU, and the engine had been created for it. The second computer had an NVIDIA K80 GPU. Though the TensorRT documentation is vague about this, it seems that an engine created on a specific GPU model can only be used for inference on that same GPU model!

When I created a plan file on the K80 computer, inference worked fine.
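Since a plan file is tied to the GPU it was built on, a simple defensive measure is to record the GPU model alongside the serialized engine and verify it before deserializing, so the mismatch surfaces as a clear error instead of an assertion deep inside TensorRT. The sketch below is a hypothetical helper (the function names and header format are my own, not part of the TensorRT API); the current GPU name is assumed to come from elsewhere, e.g. a CUDA device query or `nvidia-smi`:

```python
import json


def save_engine(path, engine_bytes, gpu_name):
    # Hypothetical helper: prepend a one-line JSON header recording the
    # GPU model the engine was built on, then the raw serialized engine.
    with open(path, "wb") as f:
        f.write(json.dumps({"gpu": gpu_name}).encode() + b"\n")
        f.write(engine_bytes)


def load_engine(path, current_gpu_name):
    # Refuse to load a plan file that was built for a different GPU model;
    # the caller should rebuild the engine on the current machine instead.
    with open(path, "rb") as f:
        header = json.loads(f.readline().decode())
        if header["gpu"] != current_gpu_name:
            raise RuntimeError(
                f"Engine was built for {header['gpu']}, but the current GPU "
                f"is {current_gpu_name}; rebuild the plan file on this GPU."
            )
        return f.read()
```

With this guard, loading a plan built on the 1080 Ti machine onto the K80 machine fails immediately with an explanatory message rather than the `configIsValid(context)` assertion.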

Tried with: TensorRT 2.1, cuDNN 6.0 and CUDA 8.0

Cause:

1. After the driver was reinstalled, CUDA/cuDNN/TensorRT were not reinstalled.

2. Each GPU series requires a matching toolkit version; for example, an RTX 2080 requires CUDA 10.
