Could not load dynamic library ''; dlerror: while converting TensorFlow model to TRT


I am trying to convert a TensorFlow model to a TRT FP16 graph, but I am getting the error: Could not load dynamic library ''; dlerror:


TensorRT Version: 8.0.1-1
GPU Type: dGPU
Nvidia Driver Version: 510.47.03
CUDA Version: 11.3
Operating System + Version: Ubuntu-18.04
Python Version (if applicable): 3.6.9
TensorFlow Version (if applicable): 2.6.2
Baremetal or Container (if container which image + tag):


from tensorflow.python.compiler.tensorrt import trt_convert as trt
import tensorflow as tf
input_saved_model_dir = './inference_graph/saved_model/'
output_saved_model_dir = './inference_graph/trt_6_0/'

# converter = trt.TrtGraphConverterV2(input_saved_model_dir=input_saved_model_dir)
# converter.convert()

# params = tf.experimental.tensorrt.ConversionParams(
#     precision_mode='FP16')
# converter = tf.experimental.tensorrt.Converter(
#     input_saved_model_dir=input_saved_model_dir, conversion_params=params)
# converter.convert()

# converter = tf.experimental.tensorrt.Converter(input_saved_model_dir=input_saved_model_dir)
# converter.convert()

# import tensorflow as tf
# from tensorflow.python.compiler.tensorrt import trt_convert as trt

# with open('./inference_graph/frozen_inference_graph.pb', 'rb') as f:
#     graph_def = tf.compat.v1.GraphDef()
#     graph_def.ParseFromString(f.read())
# converter = trt.TrtGraphConverter(input_graph_def=graph_def)
# frozen_graph = converter.convert()

conversion_params = trt.DEFAULT_TRT_CONVERSION_PARAMS
conversion_params = conversion_params._replace(precision_mode="FP16")
converter = trt.TrtGraphConverterV2(
    input_saved_model_dir=input_saved_model_dir,
    conversion_params=conversion_params)
converter.convert()
converter.save(output_saved_model_dir)


2022-08-16 11:43:41.691740: W tensorflow/stream_executor/platform/default/] Could not load dynamic library ''; dlerror: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda-11.3/lib64:/usr/local/cuda/lib64:/usr/local/cuda-11.4/lib64:/usr/local/cuda/lib64:/usr/local/cuda-11.4/lib64:/usr/local/cuda/lib64:/usr/local/cuda-11.1/lib64:/usr/lib/x86_64-linux-gnu:/usr/lib/i386-linux-gnu:/usr/local/nvidia/lib:/usr/local/nvidia/lib64:/usr/local/nvidia/lib:/usr/local/nvidia/lib64
2022-08-16 11:43:41.691772: F tensorflow/compiler/tf2tensorrt/stub/] getInferLibVersion symbol not found.
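The empty library name in the dlerror typically means TF-TRT could not resolve the TensorRT runtime shared library at load time. As a quick diagnostic (a sketch, not part of the original post; the library name `nvinfer` is an assumption based on the TF-TRT error), one can ask the dynamic loader whether it can locate a library from Python:

```python
import ctypes.util

def lib_visible(name):
    """Return True if the dynamic loader can locate lib<name>.so on this system."""
    return ctypes.util.find_library(name) is not None

# On the machine from the post, one would check e.g.:
#   lib_visible("nvinfer")   # the TensorRT runtime library
# If this returns False, the TensorRT libraries are missing from the
# loader's search path (check LD_LIBRARY_PATH / ldconfig).
```

If the library is not visible, installing the matching TensorRT package or adding its directory to `LD_LIBRARY_PATH` should resolve the load failure.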



This looks like a DeepStream-related issue. We will move this post to the DeepStream forum.


From the description, it seems you used TensorRT version 8.0.1-1. If you want to deploy the engine in DeepStream, make sure the TRT version used to build the engine matches the TRT version used at deployment.
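The version-match requirement can be sketched as a simple check (a hypothetical helper assuming a major.minor compatibility policy; the exact rules are defined by TensorRT itself):

```python
def trt_versions_compatible(build_version, deploy_version):
    """Compare the major.minor components of two TensorRT version strings."""
    major_minor = lambda v: tuple(int(x) for x in v.split(".")[:2])
    return major_minor(build_version) == major_minor(deploy_version)

# The thread mixes a TRT 8.0.x build with a 7.2.x runtime, which this
# check would flag as incompatible:
#   trt_versions_compatible("8.0.1", "7.2.2")  -> False
```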

Thanks @Amycao for the reply.

Installing TensorRT from nv-tensorrt-repo-ubuntu1804-cuda11.1-trt7.2.2.3-ga-20201211_1-1_amd64 resolved the issue.


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.