Description
I am experiencing failures loading dynamic libraries when trying to carry out TF-TRT conversion:
2021-10-21 16:13:57.478648: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory
2021-10-21 16:13:57.478672: F tensorflow/compiler/tf2tensorrt/stub/nvinfer_stub.cc:49] getInferLibVersion symbol not found.
Aborted (core dumped)
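As a quick diagnostic (a sketch I added for illustration, not part of the original workflow), ctypes can ask the same loader search path that dlopen uses whether a library is visible at all; the name "nvinfer" below is taken from the error message:

```python
import ctypes.util

def check_lib(name):
    """Return the resolved shared-library name (e.g. 'libnvinfer.so.8'),
    or None if the dynamic linker cannot find it."""
    return ctypes.util.find_library(name)

# None here means the loader cannot see TensorRT's libnvinfer at all,
# which matches the dlerror above; a versioned name like
# 'libnvinfer.so.8' (rather than .so.7) would instead point to a
# TF/TensorRT version mismatch.
print(check_lib("nvinfer"))
```

If this prints None, checking that the TensorRT lib directory is in `LD_LIBRARY_PATH` (or registered via ldconfig) is a reasonable first step.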
I have been having significant trouble in general trying to update a workflow that generates models in TensorFlow and converts them to TensorRT. It previously converted frozen graphs to the UFF format. I am trying to update to TensorFlow 2.x, where UFF conversion is no longer supported and TF-TRT is the recommended workflow. Any further advice is appreciated. (I have managed to convert the model to ONNX.)
Environment
TensorRT Version: 8.2.0-1 (also reproduced on a separate environment with 7.2.2.3)
GPU Type: 2080 ti
Nvidia Driver Version: 470.57.02
CUDA Version: 11.4
CUDNN Version: 8.2.0.51 (I think; difficult to check)
Operating System + Version: Ubuntu 20.04
Python Version (if applicable): 3.8
TensorFlow Version (if applicable): 2.6.0
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):
Relevant Files
The issue has been reproduced with this test-case model: https://www.tensorflow.org/tutorials/quickstart/beginner
Steps To Reproduce
import tensorflow as tf
from tensorflow.python.compiler.tensorrt import trt_convert as trt

input_saved_model_dir = "/opt/transfer/models/noopttest/"

conversion_params = trt.DEFAULT_TRT_CONVERSION_PARAMS
conversion_params = conversion_params._replace(
    max_workspace_size_bytes=(1 << 32))
conversion_params = conversion_params._replace(precision_mode="FP16")
conversion_params = conversion_params._replace(maximum_cached_engines=100)

converter = trt.TrtGraphConverterV2(
    input_saved_model_dir=input_saved_model_dir,
    conversion_params=conversion_params)

# The crash occurs once TF-TRT first tries to load libnvinfer:
converter.convert()