import tensorrt as trt fails

Hi,

My current versions:

CUDA 9.0
tensorflow-gpu==1.5
TensorRT 4.0.0.2
cuDNN 7

using python 3.5.2

I am having issues with running the following code:
import uff
import tensorflow as tf
import tensorrt as trt

As presented in your guide:
https://docs.nvidia.com/deeplearning/sdk/tensorrt-api/topics/topics/workflows/tf_to_tensorrt.html

The problem occurs on the third line (import tensorrt as trt), and the traceback reads:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/isl/.local/lib/python3.5/site-packages/tensorrt/__init__.py", line 77, in <module>
    from tensorrt import infer, parsers, utils, lite, plugins
  File "/home/isl/.local/lib/python3.5/site-packages/tensorrt/parsers/__init__.py", line 54, in <module>
    from . import caffeparser
  File "/home/isl/.local/lib/python3.5/site-packages/tensorrt/parsers/caffeparser/__init__.py", line 51, in <module>
    from ._nv_caffe_parser_bindings import *
ImportError: libnvparsers.so.4.0.2: cannot open shared object file: No such file or directory

We have tried installing several versions of TensorRT, but the problem persists. Could you please help us figure out how to overcome this issue?

Thanks!

I was having a similar issue, except the import failed on _nv_infer_bindings instead of _nv_caffe_parser_bindings.

It was solved by setting LD_LIBRARY_PATH to include TensorRT-x.x.x.x/lib, something like this:

export LD_LIBRARY_PATH="/usr/local/cuda-9.0/lib64:/opt/TensorRT-4.0.0.3/lib"

Hope it helps.

Hey eknic, sorry for the late reply, and thank you so much! We managed to work it out thanks to your help.
We have made great progress and are now missing only a single step in getting our model running on the Drive PX2. Maybe you have an idea for that as well?

https://devtalk.nvidia.com/default/topic/1036424/undefined-reference-to-symbol-createinferbuilder_internal-/?offset=1#5265030

Thanks!