tensorflow.python.framework.errors_impl.NotFoundError: libnvinfer.so.5: cannot open shared object file: No such file or directory

Hi. I am on a Jetson Nano. I installed TensorFlow 1.13 in a Python 3 virtual environment (along with other dependencies) using pip install --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v42 tensorflow-gpu==1.13.1+nv19.3. Now, when I try to run inference on the Nano, I get the following:

**** Failed to initialize TensorRT. This is either because the TensorRT installation path is not in LD_LIBRARY_PATH, or because you do not have it installed. If not installed, please go to https://developer.nvidia.com/tensorrt to download and install TensorRT ****
Traceback (most recent call last):
  File "untitled.py", line 7, in <module>
    import tensorflow.contrib.tensorrt as trt
  File "/home/pyimagesearch/.virtualenvs/py3cv4/lib/python3.6/site-packages/tensorflow/contrib/tensorrt/__init__.py", line 34, in <module>
    raise e
  File "/home/pyimagesearch/.virtualenvs/py3cv4/lib/python3.6/site-packages/tensorflow/contrib/tensorrt/__init__.py", line 25, in <module>
    from tensorflow.contrib.tensorrt.python import *
  File "/home/pyimagesearch/.virtualenvs/py3cv4/lib/python3.6/site-packages/tensorflow/contrib/tensorrt/python/__init__.py", line 22, in <module>
    from tensorflow.contrib.tensorrt.python.ops import trt_engine_op
  File "/home/pyimagesearch/.virtualenvs/py3cv4/lib/python3.6/site-packages/tensorflow/contrib/tensorrt/python/ops/trt_engine_op.py", line 32, in <module>
    resource_loader.get_path_to_datafile("_trt_engine_op.so"))
  File "/home/pyimagesearch/.virtualenvs/py3cv4/lib/python3.6/site-packages/tensorflow/contrib/util/loader.py", line 56, in load_op_library
    ret = load_library.load_op_library(path)
  File "/home/pyimagesearch/.virtualenvs/py3cv4/lib/python3.6/site-packages/tensorflow/python/framework/load_library.py", line 61, in load_op_library
    lib_handle = py_tf.TF_LoadLibrary(library_filename)
tensorflow.python.framework.errors_impl.NotFoundError: libnvinfer.so.5: cannot open shared object file: No such file or directory

I understand this has something to do with the TensorRT installation, but I thought the TensorFlow wheel came with TensorRT support included. Kindly assist if I am missing something here.

When I run locate tensorrt I get:

/usr/lib/python2.7/dist-packages/tensorrt
/usr/lib/python2.7/dist-packages/tensorrt-6.0.1.10.dist-info
/usr/lib/python2.7/dist-packages/tensorrt-6.0.1.10.dist-info/METADATA
/usr/lib/python2.7/dist-packages/tensorrt-6.0.1.10.dist-info/RECORD
/usr/lib/python2.7/dist-packages/tensorrt-6.0.1.10.dist-info/WHEEL
/usr/lib/python2.7/dist-packages/tensorrt-6.0.1.10.dist-info/top_level.txt
/usr/lib/python2.7/dist-packages/tensorrt-6.0.1.10.dist-info/zip-safe
/usr/lib/python2.7/dist-packages/tensorrt/__init__.py
/usr/lib/python2.7/dist-packages/tensorrt/_deprecated.py
/usr/lib/python2.7/dist-packages/tensorrt/legacy
/usr/lib/python2.7/dist-packages/tensorrt/legacy/__init__.py
/usr/lib/python2.7/dist-packages/tensorrt/legacy/_deprecated_helpers.py
/usr/lib/python2.7/dist-packages/tensorrt/legacy/infer
/usr/lib/python2.7/dist-packages/tensorrt/legacy/infer/__init__.py
/usr/lib/python2.7/dist-packages/tensorrt/legacy/lite
/usr/lib/python2.7/dist-packages/tensorrt/legacy/lite/__init__.py
/usr/lib/python2.7/dist-packages/tensorrt/legacy/parsers
/usr/lib/python2.7/dist-packages/tensorrt/legacy/parsers/__init__.py
/usr/lib/python2.7/dist-packages/tensorrt/legacy/utils
/usr/lib/python2.7/dist-packages/tensorrt/legacy/utils/__init__.py
/usr/lib/python2.7/dist-packages/tensorrt/tensorrt.so
/usr/lib/python3.6/dist-packages/tensorrt

Upon running dpkg -l | grep TensorRT I get:

ii  graphsurgeon-tf                            6.0.1-1+cuda10.0                                arm64        GraphSurgeon for TensorRT package
ii  libnvinfer-bin                             6.0.1-1+cuda10.0                                arm64        TensorRT binaries
ii  libnvinfer-dev                             6.0.1-1+cuda10.0                                arm64        TensorRT development libraries and headers
ii  libnvinfer-doc                             6.0.1-1+cuda10.0                                all          TensorRT documentation
ii  libnvinfer-plugin-dev                      6.0.1-1+cuda10.0                                arm64        TensorRT plugin libraries
ii  libnvinfer-plugin6                         6.0.1-1+cuda10.0                                arm64        TensorRT plugin libraries
ii  libnvinfer-samples                         6.0.1-1+cuda10.0                                all          TensorRT samples
ii  libnvinfer6                                6.0.1-1+cuda10.0                                arm64        TensorRT runtime libraries
ii  libnvonnxparsers-dev                       6.0.1-1+cuda10.0                                arm64        TensorRT ONNX libraries
ii  libnvonnxparsers6                          6.0.1-1+cuda10.0                                arm64        TensorRT ONNX libraries
ii  libnvparsers-dev                           6.0.1-1+cuda10.0                                arm64        TensorRT parsers libraries
ii  libnvparsers6                              6.0.1-1+cuda10.0                                arm64        TensorRT parsers libraries
ii  nvidia-container-csv-tensorrt              6.0.1.10-1+cuda10.0                             arm64        Jetpack TensorRT CSV file
ii  python-libnvinfer                          6.0.1-1+cuda10.0                                arm64        Python bindings for TensorRT
ii  python-libnvinfer-dev                      6.0.1-1+cuda10.0                                arm64        Python development package for TensorRT
ii  python3-libnvinfer                         6.0.1-1+cuda10.0                                arm64        Python 3 bindings for TensorRT
ii  python3-libnvinfer-dev                     6.0.1-1+cuda10.0                                arm64        Python 3 development package for TensorRT
ii  tensorrt                                   6.0.1.10-1+cuda10.0                             arm64        Meta package of TensorRT
ii  uff-converter-tf                           6.0.1-1+cuda10.0                                arm64        UFF converter for TensorRT package

Output of $ echo $LD_LIBRARY_PATH

/usr/local/cuda-10.0/targets/aarch64-linux/lib:/usr/local/cuda-10.0/lib64:/usr/lib/python3.6/dist-packages/tensorrt
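The loader cache can be queried the same way (a quick check; the grep pattern is just the soname from the error message):

```shell
# List every libnvinfer the dynamic loader has registered. On this Nano it
# should show libnvinfer.so.6 but no libnvinfer.so.5, which matches the error.
ldconfig -p | grep libnvinfer || echo "no libnvinfer registered with ldconfig"
```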

Hi,

The error indicates that TensorFlow is trying to link against the TensorRT 5.0 library, but your device has already been upgraded to TensorRT 6.0.
Assuming you are using JetPack 4.3, you need to switch the TensorFlow package index to v43.
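You can double-check which major version the dynamic loader actually resolves with a small ctypes probe (a sketch; the soname candidates are taken from the error message and the dpkg listing above):

```python
import ctypes

def loadable_nvinfer_major(majors=(5, 6)):
    """Return the first libnvinfer major version that dlopen can load, or None."""
    for major in majors:
        try:
            ctypes.CDLL("libnvinfer.so.%d" % major)
            return major
        except OSError:
            # This soname is not on the loader search path; try the next one.
            continue
    return None

major = loadable_nvinfer_major()
print("no libnvinfer found" if major is None
      else "libnvinfer.so.%d is loadable" % major)
```

On a JetPack 4.3 system this should report version 6, which is exactly why a wheel built against libnvinfer.so.5 fails to load.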

We now provide two TensorFlow versions for JetPack 4.3 users.

TensorFlow 2.0

$ sudo pip3 install --pre --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v43 tensorflow-gpu

TensorFlow 1.15

$ sudo pip3 install --pre --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v43 'tensorflow-gpu<2'

If you want to use TF-TRT or other TensorRT-related features, it's recommended to choose v1.15, since TensorRT 6.0 doesn't support v2.0 yet.
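After switching wheels, a guarded import is a quick way to verify that TF-TRT loads (a sketch for TF 1.x; the broad except clause is deliberate, since a TensorRT version mismatch surfaces as TensorFlow's own NotFoundError rather than a plain ImportError):

```python
def tf_trt_available():
    """Return True if the TF 1.x TF-TRT contrib module imports cleanly."""
    try:
        import tensorflow.contrib.tensorrt  # noqa: F401  (TF 1.x only)
        return True
    except Exception as exc:  # NotFoundError, ImportError, etc.
        print("TF-TRT unavailable:", exc)
        return False

print("TF-TRT available:", tf_trt_available())
```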

Thanks.

Yes, I downloaded the latest Nano image, so I am assuming that is JetPack 4.3. Would you suggest that I uninstall TensorFlow 1.13.1 first and then install TensorFlow 1.15?

I also tried installing TensorRT 5 and setting LD_LIBRARY_PATH accordingly, but that did not work. Could you advise whether I can use TensorRT 5 here, since TensorFlow 1.15 might be problematic for our use case?

Also, when I tried to run the following:

(py3cv4) $ pip install --pre --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v43 ‘tensorflow-gpu<2’

I got the following error (the curly quotes substituted by the forum are not valid shell quoting, so bash interpreted < as a redirection from a file named 2’):

-bash: 2’: No such file or directory

Instead, I ran the following:

pip install --pre --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v43 tensorflow-gpu==1.15.0+nv19.12

Installing TensorFlow 1.15 actually solved the problem.
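To confirm which wheel ended up active inside the virtualenv, you can ask pip itself (a sketch; the package name matches the wheel installed above):

```python
import subprocess
import sys

def installed_version(package):
    """Query `pip show` for the installed version of a package; None if absent."""
    out = subprocess.run(
        [sys.executable, "-m", "pip", "show", package],
        stdout=subprocess.PIPE, stderr=subprocess.DEVNULL,
    ).stdout.decode()
    for line in out.splitlines():
        if line.startswith("Version:"):
            return line.split(":", 1)[1].strip()
    return None

print("tensorflow-gpu:", installed_version("tensorflow-gpu"))
```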

Good to know.

The v1.13 wheels are built against JetPack 4.2.x, which is incompatible with JetPack 4.3.
Thanks.

Is it possible to get the older versions of NVIDIA Jetson Nano image?

Hi,

We have lots of versions built against JetPack 4.2.x:
https://developer.download.nvidia.com/compute/redist/jp/v42/tensorflow-gpu/

Thanks.

I was not asking about TensorFlow in my last comment. I was asking whether older versions of JetPack are available for the Jetson Nano, so that the older versions of TensorFlow can be installed.

If older versions of JetPack are available, could you please explain how to install them on the Jetson Nano?

Thanks!

Yes.

You can select a different JetPack version directly from the sdkmanager:

STEP 01 -> TARGET OPERATING SYSTEM -> show all versions.

Thanks.

This is getting a bit confusing now, I'm afraid. I was asking about a Jetson Nano Developer Kit SD Card Image that supports earlier versions of JetPack.

Hi,

You don’t need to update the SD Card Image when re-flashing the system with JetPack.
Thanks.

Okay. So what is the process for downgrading to an older JetPack version?

Hi,

You can use the same sdkmanager and switch to the previous JetPack version you want.
Please check comment #11 for the steps.

Thanks.

Hi,

Thanks for all your help so far. I want to clear one doubt and if you could help here that would be great!

After I download the SDK Manager on my laptop (that is running on Ubuntu 18.04) I configure what to install and select my target device. During this process am I also supposed to connect the Nano device? The steps mentioned here (https://docs.nvidia.com/sdk-manager/install-with-sdkm-jetson/index.html) are a bit confusing.

Hi,

For the flashing step, you will need to connect the device with the USB Type-C connector.
For the SDK installation step, either USB Type-C or Ethernet will work.

Thanks.

The SDK installation needs to be done on the host computer, right? The host is my laptop in this case.

Yes.

Run the sdkmanager on the host desktop and connect the device over USB Type-C for the installation.
Thanks.