How to deploy caffe parser

I followed the instructions on GitHub to compile and install 5.1.5.0, and the googlenet sample passed. However, it doesn’t seem to use the compiled code; I added a print to the console to test this.

The binary lib has:
libnvcaffe_parser.so*

The compiled lib has:
libnvcaffeparser.so*

One has the “_” and the other doesn’t.

Please shed some light.

Hi,

This seems to be an environment variable issue.
Could you please check whether LD_LIBRARY_PATH and the other environment variables have been updated to point to the new release/code?

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$TRT_RELEASE/lib

Also, check the installed TensorRT packages using the command below:

dpkg -l | grep TensorRT

Thanks

Hi,

The LD_LIBRARY_PATH is set correctly.
I followed the instructions here: https://github.com/NVIDIA/TensorRT/tree/release/5.1, which use the tar package instead of the deb package.
My question was: why is the compiled lib name different from the binary version’s? Is the additional “_” intentional, or is that actually the issue I’m having?

Your installation guide says:

The libnvcaffe_parser.so library functionality from previous versions is included in libnvparsers.so since TensorRT 5.0. The installed symbolic link for libnvcaffe_parser.so is updated to point to the new libnvparsers.so library. The static library libnvcaffe_parser.a is also symbolically linked to libnvparsers_static.a.
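As I understand it, that linking scheme amounts to the following (sketched here in a scratch directory with empty stand-in files, just to show the link layout; on a real install these links live in the TensorRT lib directory):

```shell
# Scratch-dir sketch of the symlink scheme the install guide describes.
mkdir -p /tmp/trt_links && cd /tmp/trt_links
touch libnvparsers.so libnvparsers_static.a        # empty stand-ins for the real libraries
ln -sf libnvparsers.so       libnvcaffe_parser.so  # underscored name -> combined parser lib
ln -sf libnvparsers_static.a libnvcaffe_parser.a
readlink libnvcaffe_parser.so                      # prints libnvparsers.so
```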

However, when I built 5.1.5.0 from source, the /build/out directory only contains these caffe parser files:
libnvcaffeparser.so
libnvcaffeparser.so.5.1.5
libnvcaffeparser.so.5.1.5.0
libnvcaffeparser_static.a

I can’t figure out how to overwrite the caffe parser in the installed lib directory.
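The closest I’ve come is manually copying the built library over and re-pointing the symlink chain myself. This is sketched below against a scratch directory so it is safe to run; on my machine BUILD_OUT would be TensorRT/build/out and LIB would be $TRT_RELEASE/lib:

```shell
# Scratch stand-ins; substitute the real build output and install lib dirs.
BUILD_OUT=/tmp/trt_demo/out
LIB=/tmp/trt_demo/lib
mkdir -p "$BUILD_OUT" "$LIB"
echo rebuilt > "$BUILD_OUT/libnvcaffeparser.so.5.1.5.0"   # stand-in for the freshly built lib

# Copy the versioned library in and rebuild the symlink chain.
cp "$BUILD_OUT/libnvcaffeparser.so.5.1.5.0" "$LIB/"
ln -sf libnvcaffeparser.so.5.1.5.0 "$LIB/libnvcaffeparser.so.5.1.5"
ln -sf libnvcaffeparser.so.5.1.5   "$LIB/libnvcaffeparser.so"
# Keep the underscored alias from the binary release pointing somewhere valid:
ln -sf libnvcaffeparser.so "$LIB/libnvcaffe_parser.so"
```

But I don’t know whether that is the intended workflow.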

Hi,

Can you provide the following details on the platform you are using, so we can better help?
o Linux distro and version
o GPU type
o Nvidia driver version
o CUDA version
o CUDNN version
o Python version [if using python]
o Tensorflow and PyTorch version
o TensorRT version

Thanks

Linux distro and version: Ubuntu 18.04.3 LTS

GPU type: GeForce RTX 2080 Ti

Nvidia driver version: 418.87.00

CUDA version: release 10.1, V10.1.243

CUDNN version: 7.5.0

Python version: 3.6.9

Tensorflow version: 1.15.0

TensorRT version: 5.1.5.0

So I’m trying to apply the patch to CaffeParser:
https://devtalk.nvidia.com/default/topic/1068181/deepstream-sdk/error-trying-to-create-engine-from-digit-trained-model-files/2

I use the TensorRT Python API to build the CUDA engine, but my changes don’t take effect.
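To check which copy of the parser library the Python process actually loads, I’ve been reading /proc/self/maps (Linux only; the helper below is just my own diagnostic, not part of the TensorRT API):

```python
def loaded_libs(pattern):
    """Paths of shared objects mapped into this process whose name contains
    `pattern`. Reads /proc/self/maps, so this is Linux-only."""
    with open("/proc/self/maps") as f:
        # Only mapping lines that carry a file path contain a "/".
        paths = {line.split()[-1] for line in f if "/" in line}
    return sorted(p for p in paths if pattern in p)

# After `import tensorrt`, loaded_libs("parser") shows whether the process
# mapped the stock libnvparsers or my rebuilt libnvcaffeparser.
```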

Hi,

Can you retry a fresh installation using the latest TRT version?

Thanks

I tried the latest version, 7.0.0.11, and the changes still don’t take effect when I use the Python API. The C++ API seems to work. Is this expected? Do you have any workaround?

Hi,

Could you please try your application directly on TRT 7 without applying the patch?

If the issue persists, could you please share the script and model file so we can reproduce the issue and help you better?

Thanks