Jetson Nano: Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory

Description

I am trying to convert a SavedModel to a TensorRT model for inference.

Environment

TensorRT Version: 8.0.1.6
GPU Type: 128 core Maxwell
Nvidia Driver Version:
CUDA Version: 10.2.300
CUDNN Version: 8.2.1.32
Operating System + Version: Jetpack 4.6
Python Version (if applicable): 3.6.9
TensorFlow Version (if applicable): 2.4.0
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files

I am trying to convert a model by running this code:

import tensorflow as tf
gpu_devices = tf.config.experimental.list_physical_devices('GPU')
tf.config.experimental.set_memory_growth(gpu_devices[0], True)
from tensorflow.python.compiler.tensorrt import trt_convert as trt
import numpy as np
conversion_params = trt.DEFAULT_TRT_CONVERSION_PARAMS
conversion_params = conversion_params._replace(max_workspace_size_bytes=(300000000))
conversion_params = conversion_params._replace(precision_mode="FP16")
conversion_params = conversion_params._replace(maximum_cached_engines=100)
encoder_model = trt.TrtGraphConverterV2(
    input_saved_model_dir='/home/rohan/Desktop/original_models/encoder',
    conversion_params=conversion_params)
encoder_model.convert()
encoder_model.save(output_saved_model_dir='/home/rohan/Desktop/converted_models/encoder')

Error:
2021-12-15 14:40:35.928076: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.10.2
2021-12-15 14:40:47.158202: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-12-15 14:40:47.196227: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcuda.so.1
2021-12-15 14:40:47.236695: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] ARM64 does not support NUMA - returning NUMA node zero
2021-12-15 14:40:47.236944: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1747] Found device 0 with properties:
pciBusID: 0000:00:00.0 name: NVIDIA Tegra X1 computeCapability: 5.3
coreClock: 0.9216GHz coreCount: 1 deviceMemorySize: 3.86GiB deviceMemoryBandwidth: 194.55MiB/s
2021-12-15 14:40:47.237036: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.10.2
2021-12-15 14:40:47.361727: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.10
2021-12-15 14:40:47.361972: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.10
2021-12-15 14:40:47.416525: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
2021-12-15 14:40:47.475564: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
2021-12-15 14:40:47.529972: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.10
2021-12-15 14:40:47.553797: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.10
2021-12-15 14:40:47.556260: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
2021-12-15 14:40:47.556585: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] ARM64 does not support NUMA - returning NUMA node zero
2021-12-15 14:40:47.556900: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] ARM64 does not support NUMA - returning NUMA node zero
2021-12-15 14:40:47.556992: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1889] Adding visible gpu devices: 0
2021-12-15 14:40:47.584272: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory
2021-12-15 14:40:47.584392: F tensorflow/compiler/tf2tensorrt/stub/nvinfer_stub.cc:49] getInferLibVersion symbol not found.
Aborted (core dumped)

Steps To Reproduce

Hi,
This looks like a Jetson issue. Please refer to the samples below in case they are useful.

For any further assistance, we recommend raising it on the respective platform forum via the link below.

Thanks!

Hi,

I tested with JetPack 4.5.1 and it worked, but with 4.6 I had the same problem. Has anyone found a solution?

I have the same issue. I have Jetpack 4.6-b199.

I made a workaround by creating a symbolic link to libnvinfer.so.8, since libnvinfer.so.7 really is not present in /usr/lib/aarch64-linux-gnu:

sudo ln -s /usr/lib/aarch64-linux-gnu/libnvinfer.so.8 /usr/lib/aarch64-linux-gnu/libnvinfer.so.7
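To confirm which libnvinfer versions the dynamic loader can actually resolve before and after creating the link, a small probe with Python's ctypes can help; this is a diagnostic sketch, and the library names are simply the ones from this thread:

```python
import ctypes

def probe_libs(names):
    """Attempt to dlopen each library name; report which ones resolve."""
    found = {}
    for name in names:
        try:
            ctypes.CDLL(name)
            found[name] = True
        except OSError:
            found[name] = False
    return found

# On a stock JetPack 4.6 install, .8 should be present and .7 missing
# until the symlink is created.
print(probe_libs(["libnvinfer.so.7", "libnvinfer.so.8"]))
```

Note that dlopen succeeding only proves the file is findable, not that the version is compatible, as the next error shows.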

However, this revealed the next cause of the problem:

 python3.6 TF_RT_converter.py 
2022-02-26 13:25:58.904187: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.10.2
2022-02-26 13:26:05.312823: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libnvinfer.so.7
ERROR:tensorflow:Loaded TensorRT 8.0.1 but linked TensorFlow against TensorRT 7.1.3. It is required to use the same major version of TensorRT during compilation and runtime.
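The error message states the actual constraint: the TensorRT version loaded at runtime must share the same major version as the one TensorFlow was compiled against, so the symlink only papers over the filename, not the ABI. A minimal sketch of that check, using the version strings from the log above (the function name here is illustrative, not TensorFlow's own):

```python
def same_major(loaded_version, linked_version):
    """TF-TRT requires the runtime-loaded and build-time-linked
    TensorRT versions to share the same major version."""
    return loaded_version.split(".")[0] == linked_version.split(".")[0]

# JetPack 4.6 ships TensorRT 8.0.1, but this TensorFlow wheel was
# built against TensorRT 7.1.3, so the check fails.
print(same_major("8.0.1", "7.1.3"))  # -> False, hence the error
print(same_major("8.0.1", "8.2.0"))  # -> True would pass the check
```

In practice this means the fix is not the symlink but installing a TensorFlow wheel built against TensorRT 8.x, i.e. one that NVIDIA publishes specifically for JetPack 4.6.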