TF-TRT Errors fetching dynamic library

Description

I am experiencing failures loading dynamic libraries when trying to carry out TF-TRT conversion:

2021-10-21 16:13:57.478648: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory

2021-10-21 16:13:57.478672: F tensorflow/compiler/tf2tensorrt/stub/nvinfer_stub.cc:49] getInferLibVersion symbol not found.

Aborted (core dumped)

I have been having significant trouble in general trying to update a workflow that involves generating models in TensorFlow and converting them to TensorRT. The workflow previously converted frozen graphs to the UFF format. I am trying to update to TensorFlow 2, where this is no longer supported and TF-TRT has been the recommended path. Any further advice is appreciated. (I have achieved conversion of the model to ONNX; a rough sketch of that export is included below.)
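
For reference, the ONNX export can be done along these lines (a minimal sketch using the tf2onnx Python API; the paths and opset are placeholders rather than my exact values):

import tensorflow as tf
import tf2onnx

# Load the SavedModel produced by training (placeholder path).
model = tf.keras.models.load_model("/opt/transfer/models/noopttest/")

# from_keras returns (onnx_model_proto, external_tensor_storage) and can
# write the ONNX file directly via output_path.
onnx_model, _ = tf2onnx.convert.from_keras(
    model,
    opset=13,
    output_path="/opt/transfer/models/noopttest.onnx")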

Environment

TensorRT Version: 8.2.0-1 (also reproduced in a separate environment with 7.2.2.3)
GPU Type: RTX 2080 Ti
Nvidia Driver Version: 470.57.02
CUDA Version: 11.4
CUDNN Version: 8.2.0.51 (I think; difficult to check, see the note after this list)
Operating System + Version: Ubuntu 20.04
Python Version (if applicable): 3.8
TensorFlow Version (if applicable): 2.6.0
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):
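
Note on the cuDNN version: since it is hard to confirm, one quick check is to ask the loaded library directly (hedged sketch; assumes libcudnn.so.8 is on the default loader path):

import ctypes

# cudnnGetVersion() returns an encoded version, e.g. 8201 for cuDNN 8.2.1.
libcudnn = ctypes.CDLL("libcudnn.so.8")
libcudnn.cudnnGetVersion.restype = ctypes.c_size_t
print(libcudnn.cudnnGetVersion())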

Relevant Files

The issue has been reproduced with this test case model: TensorFlow 2 quickstart for beginners | TensorFlow Core

Steps To Reproduce

import tensorflow as tf
from tensorflow.python.compiler.tensorrt import trt_convert as trt

input_saved_model_dir = "/opt/transfer/models/noopttest/"
conversion_params = trt.DEFAULT_TRT_CONVERSION_PARAMS
conversion_params = conversion_params._replace(
    max_workspace_size_bytes=(1 << 32))
conversion_params = conversion_params._replace(precision_mode="FP16")
conversion_params = conversion_params._replace(
    maximum_cached_engines=100)

converter = trt.TrtGraphConverterV2(
    input_saved_model_dir=input_saved_model_dir,
    conversion_params=conversion_params)
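
The snippet stops right after constructing the converter; to actually trigger the failure, the conversion is run, roughly as follows (output directory is a placeholder):

# Run the actual TF-TRT conversion; this requires libnvinfer to be loadable.
converter.convert()

# Save the converted SavedModel (placeholder output directory).
converter.save(output_saved_model_dir="/opt/transfer/models/noopttest_trt/")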

Hi,
Please check the links below, as they might answer your concerns.
https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#dla_topic
https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#dla_layers
https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/#restrictions-with-dla
Thanks!

Thank you for these links; the trtexec command-line tool looks like a useful way to test out my ONNX models.

However, I am not sure this addresses the TF-TRT conversion error at all. Is it recommended that I pursue only the ONNX route, building and running the model with TensorRT in that format?
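
If the ONNX route is indeed the recommended one, I assume the build step would look roughly like this with the TensorRT Python API (untested sketch based on the docs; file paths are placeholders, and TensorRT 7.x uses build_engine instead of build_serialized_network):

import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

# Parse the ONNX file exported from the TF model (placeholder path).
with open("/opt/transfer/models/noopttest.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("ONNX parse failed")

config = builder.create_builder_config()
config.max_workspace_size = 1 << 32  # deprecated in newer releases, still accepted on 8.2
config.set_flag(trt.BuilderFlag.FP16)

# TensorRT 8.x: build and serialize the engine in one call.
engine_bytes = builder.build_serialized_network(network, config)
assert engine_bytes is not None, "engine build failed"
with open("/opt/transfer/models/noopttest.engine", "wb") as f:
    f.write(engine_bytes)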

It is also unclear whether a model generated with a particular TF and ONNX version should be able to be built and run for inference on other versions.

Hi,

We recommend checking the sample links below in case of TF-TRT integration issues.
https://docs.nvidia.com/deeplearning/frameworks/tf-trt-user-guide/index.html#samples
https://docs.nvidia.com/deeplearning/tensorrt/quick-start-guide/index.html#framework-integration
https://docs.nvidia.com/deeplearning/frameworks/tf-trt-user-guide/index.html#integrate-ovr
https://docs.nvidia.com/deeplearning/frameworks/tf-trt-user-guide/index.html#usingtftrt

If the issue persists, we recommend reaching out to the TensorFlow forum.

Thanks!

Those links were not relevant to the issue I was having. I followed up with a request on the TensorFlow forum here:
TF-TRT: No Support for TensorRT v8? - General Discussion - TensorFlow Forum

A potential solution is contained in that thread; it is based on TensorRT v8 breaking the API that TF-TRT expects. There is an in-progress draft pull request for a fix on 8.2 linked in that thread, for anyone experiencing similar issues.
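
For anyone hitting the same thing, a quick way to confirm the mismatch on your machine is to check which libnvinfer major versions the dynamic loader can resolve (small sketch; assumes the libraries, if installed, are on the default search path):

import ctypes

# The TF 2.6 stub tries to load libnvinfer.so.7, while TensorRT 8.x ships libnvinfer.so.8.
for soname in ("libnvinfer.so.7", "libnvinfer.so.8"):
    try:
        ctypes.CDLL(soname)
        print(soname, "found")
    except OSError as err:
        print(soname, "not found:", err)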

Hi,

Sorry about that. Could you please share the complete error logs with us?

Thank you.