Linux distro and version: Ubuntu 18.04
GPU type: Titan X
nvidia driver version: 410.79
CUDA version: V10.0.130
CUDNN version: 7.4.2.24
Python version: 3.6.7
Caffe version: Anaconda python 3.6 installation with "conda install -c anaconda caffe-gpu"
Tensorflow version: -
TensorRT version: GA_5.0.2.6
If Jetson, OS, hw versions: -
I have been running inference with a custom-trained skin detector in Caffe for a while. I attempted to port the implementation to TensorRT 5 to get a speed increase at inference time. However, my predictions are now considerably worse (not so far off that it doesn't work at all, just more error prone). I'm having trouble understanding what is causing this problem, and I'm wondering whether unsupported parameters or layer fusions might be the issue. I have tried inference at different image sizes to no avail. Any help would be appreciated.
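For reference, the TensorRT side builds the engine through the standard Caffe-parser path from the TensorRT 5 Python samples; a minimal sketch is below (the file paths, output blob name, and workspace size are placeholders here, the exact script is in the attached zip).

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(deploy_file, model_file, output_name):
    # Parse the Caffe deploy/model pair and build an FP32 engine.
    with trt.Builder(TRT_LOGGER) as builder, \
         builder.create_network() as network, \
         trt.CaffeParser() as parser:
        builder.max_workspace_size = 1 << 30
        model_tensors = parser.parse(deploy=deploy_file, model=model_file,
                                     network=network, dtype=trt.float32)
        network.mark_output(model_tensors.find(output_name))
        return builder.build_cuda_engine(network)
```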
In addition, one major difference between the two inference methods is that, while the Caffe one handles inputs of any size perfectly well, in the TensorRT one the mask appears shifted a few pixels from its original location unless the image dimensions are of the form 10 + (16 * x).
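To make that size constraint concrete, the helper below shows what I mean by dimensions of the form 10 + (16 * x); it is only an illustration (the function name and the edge-replication padding are choices made for this sketch, not code from the attachment).

```python
import math
import numpy as np

def pad_to_valid_size(img, base=10, step=16):
    """Pad an HxWxC image so both spatial dimensions become base + step * k,
    which is the only case where the TensorRT mask is not shifted for me."""
    h, w = img.shape[:2]
    new_h = base + step * math.ceil(max(h - base, 0) / step)
    new_w = base + step * math.ceil(max(w - base, 0) / step)
    # Replicate the edge pixels to fill the padded region (arbitrary choice for this sketch).
    return np.pad(img, ((0, new_h - h), (0, new_w - w), (0, 0)), mode="edge")
```

For example, a 500x375 input would be padded to 506x378 before being fed to the engine.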
I have tried both the C++ and the Python interfaces. Since the Python one is briefer, I'm attaching the TensorRT and Caffe scripts I run to obtain predictions from both.
Thanks,
Alp
Files.zip (280 KB)