I am using the TensorRT Python API. I was able to import a Caffe model of an SSD network using the built-in TensorRT plugin layers, following the user instructions. Unfortunately, at inference time the outputs are incorrect compared to the original Caffe implementation run with the OpenCV DNN module: the PriorBox layers seem to produce different outputs for the same inputs in the two implementations.
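To make the comparison concrete, here is a simplified NumPy sketch of what a Caffe-style PriorBox layer computes (single min_size, no variance channel, and the box ordering may not match Caffe's exact output order). The function name and signature are my own; it is only meant as a rough reference for sanity-checking the prior boxes produced by the two implementations, not as the actual layer code:

```python
import numpy as np

def prior_boxes(layer_h, layer_w, img_h, img_w, min_size,
                max_size=None, aspect_ratios=(2.0,), offset=0.5, clip=False):
    """Simplified PriorBox reference: returns normalized
    [xmin, ymin, xmax, ymax] boxes for each feature-map cell."""
    step_h = img_h / layer_h
    step_w = img_w / layer_w
    boxes = []
    for i in range(layer_h):
        for j in range(layer_w):
            # box center for this feature-map cell, in image pixels
            cx = (j + offset) * step_w
            cy = (i + offset) * step_h
            # square box for min_size
            sizes = [(min_size, min_size)]
            if max_size is not None:
                # square box with size sqrt(min_size * max_size)
                s = (min_size * max_size) ** 0.5
                sizes.append((s, s))
            for ar in aspect_ratios:
                # aspect-ratio box and its flipped counterpart
                sizes.append((min_size * ar ** 0.5, min_size / ar ** 0.5))
                sizes.append((min_size / ar ** 0.5, min_size * ar ** 0.5))
            for (w, h) in sizes:
                boxes.append([(cx - w / 2) / img_w, (cy - h / 2) / img_h,
                              (cx + w / 2) / img_w, (cy + h / 2) / img_h])
    boxes = np.asarray(boxes, dtype=np.float32)
    if clip:
        boxes = np.clip(boxes, 0.0, 1.0)
    return boxes
```

Dumping the PriorBox output tensors from both pipelines and comparing them against a reference like this (e.g. with `np.allclose`) makes it easy to see which implementation diverges and on which boxes.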
I decided to reproduce the problem with the well-known TensorRT use case "sampleSSD", located in "/usr/src/tensorrt/samples/sampleSSD". This sample uses the TensorRT C++ API and produces results comparable to the original Caffe implementation. But after translating the sample to the TensorRT Python API, the network outputs are again incorrect.
Here is the Python implementation of "sampleSSD" that I used: sampleSSD_python.tar.gz (685.0 KB)
I am stuck and would like to know whether this is a bug in the TensorRT Python API's handling of PriorBox layers, or whether I have misunderstood something important.
Thanks for reading and helping me to solve this issue.
Hardware: Jetson Nano
JetPack: 4.5
OS: Ubuntu 18.04
CUDA: 10.2
cuDNN: 8.0