Description
When I tried to convert a saved model (ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8) in TensorFlow using TensorRT, libnvinfer.so.7 failed to load. After working around this by creating a symbolic link from libnvinfer.so.7 to libnvinfer.so.8 (which is already installed in JetPack 4.6), I found the root cause of the problem: a version conflict between TensorFlow and the installed TensorRT. Version 8 is installed, but version 7 is expected. How can I downgrade TensorRT to version 7?
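For reference, the symbolic link I created looked roughly like this (the library paths are the usual JetPack install locations on my board; adjust them if yours differ):

```shell
# JetPack 4.6 ships TensorRT 8, so only libnvinfer.so.8 exists on disk,
# while this TensorFlow build tries to dlopen libnvinfer.so.7.
# Pointing the .so.7 name at the .so.8 library lets it load, but TensorFlow
# then rejects it because the major versions differ (8.0.1 vs. 7.1.3).
sudo ln -s /usr/lib/aarch64-linux-gnu/libnvinfer.so.8 \
           /usr/lib/aarch64-linux-gnu/libnvinfer.so.7
sudo ln -s /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.8 \
           /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7
# Refresh the dynamic linker cache so the new names are found.
sudo ldconfig
```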
python3.6 TF_RT_converter.py
2022-02-26 13:25:58.904187: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.10.2
2022-02-26 13:26:05.312823: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libnvinfer.so.7
ERROR:tensorflow:Loaded TensorRT 8.0.1 but linked TensorFlow against TensorRT 7.1.3. It is required to use the same major version of TensorRT during compilation and runtime.
Environment
Package: nvidia-jetpack
Version: 4.6-b197
Architecture: arm64
Maintainer: NVIDIA Corporation
Installed-Size: 194
Depends: nvidia-cuda (= 4.6-b197), nvidia-opencv (= 4.6-b197), nvidia-cudnn8 (= 4.6-b197), nvidia-tensorrt (= 4.6-b197), nvidia-visionworks (= 4.6-b197), nvidia-container (= 4.6-b197), nvidia-vpi (= 4.6-b197), nvidia-l4t-jetson-multimedia-api (>> 32.6-0), nvidia-l4t-jetson-multimedia-api (<< 32.7-0)
Steps To Reproduce
- Download the saved model (ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8).
- Run the TensorRT converter:
from tensorflow.python.compiler.tensorrt import trt_convert as trt

# The version-mismatch error is reported as soon as TensorRT is loaded,
# before the conversion itself runs.
converter = trt.TrtGraphConverterV2(
    input_saved_model_dir='ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8/saved_model')
converter.convert()
converter.save('ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8/saved_model_o')