When I tried to convert a saved model (ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8) in TensorFlow using TensorRT, libnvinfer.so.7 failed to load. After working around it by creating a symbolic link from libnvinfer.so.7 to libnvinfer.so.8 (which is already installed in JetPack 4.6), I found the root cause of the problem: a version conflict between TensorFlow and the installed TensorRT. Version 8 is installed, but version 7 is expected. How can I downgrade TensorRT to version 7? My TensorFlow version is TF2.5, installed using the build provided here.
… tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.10.2
2022-02-26 13:26:05.312823: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libnvinfer.so.7
ERROR:tensorflow:Loaded TensorRT 8.0.1 but linked TensorFlow against TensorRT 7.1.3. It is required to use the same major version of TensorRT during compilation and runtime.
Maintainer: NVIDIA Corporation
Depends: nvidia-cuda (= 4.6-b197), nvidia-opencv (= 4.6-b197), nvidia-cudnn8 (= 4.6-b197), nvidia-tensorrt (= 4.6-b197), nvidia-visionworks (= 4.6-b197), nvidia-container (= 4.6-b197), nvidia-vpi (= 4.6-b197), nvidia-l4t-jetson-multimedia-api (>> 32.6-0), nvidia-l4t-jetson-multimedia-api (<< 32.7-0)
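Since the symlink workaround makes libnvinfer.so.7 resolve to the version-8 library, a quick probe can show which TensorRT runtime majors are actually loadable on the system. This is an illustrative sketch (not from the original post); on a stock JetPack 4.6 install, only libnvinfer.so.8 should load:

```python
import ctypes

def probe_nvinfer(majors=(7, 8)):
    """Return {soname: True/False} for each TensorRT runtime major version."""
    found = {}
    for major in majors:
        name = "libnvinfer.so.%d" % major
        try:
            ctypes.CDLL(name)   # asks the dynamic loader to resolve the soname
            found[name] = True
        except OSError:
            found[name] = False
    return found

for name, ok in probe_nvinfer().items():
    print("%s: %s" % (name, "found" if ok else "not found"))
```

If libnvinfer.so.7 reports "found" but `dpkg -l | grep nvinfer` only lists version 8, the loader is resolving it through a symlink, which reproduces the mismatch error above.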
Steps To Reproduce
- Download the saved model (ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8).
- Run the TensorRT converter:
from tensorflow.python.compiler.tensorrt import trt_convert as trt
converter = trt.TrtGraphConverterV2(input_saved_model_dir='ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8/saved_model')
Would you mind double-checking which package you installed?
The link you shared is built for JetPack 4.6.
So it should link against libnvinfer.so.8 rather than libnvinfer.so.7.
Yes, it is 2.5.0. Here is a screenshot showing it.
Here are the steps I followed to install tensorflow:
sudo apt-get install python3.6 python3.6-dev python3.6-distutils python3.6-venv
sudo apt-get install libhdf5-serial-dev hdf5-tools libhdf5-dev zlib1g-dev zip libjpeg8-dev liblapack-dev libblas-dev gfortran
python3.6 -m pip install -U pip testresources setuptools==49.6.0
python3.6 -m pip install -U --no-deps numpy==1.19.4 future==0.18.2 mock==3.0.5 keras_preprocessing==1.1.2 keras_applications==1.0.8 gast==0.4.0 protobuf pybind11 cython pkgconfig
sudo env H5PY_SETUP_REQUIRES=0 pip3 install -U h5py==2.9.0
python3.6 -m pip install --pre --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v46 tensorflow
@AastaLLL Should I provide additional logs or information?
In case there is no clear reason for this version conflict: is there a way to convert the TensorFlow model (.pb or .tflite) to a TensorRT model using my ordinary laptop (no GPU)?
Please try the v2.6.2+nv21.12 package to see if the same issue occurs.
I failed to install v2.6.2 or v2.6.0, as both depend on h5py 3.1.0, which I was not able to install on Python 3.6 either with pip or by building from source. I opened a ticket in the h5py repo, but I noticed other Jetson users reporting the same issue across the web. Any known solution?
UPDATE-1: I was finally able to install h5py 3.1.0! I posted how in the same linked issue ticket.
UPDATE-2: After successfully installing TF2.6, I hit an Illegal instruction (core dumped) error as soon as I imported tensorflow. It was solved using this (adapted from here).
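The workaround commonly cited on Jetson forums for this Illegal instruction crash is to pin the OpenBLAS core type before Python starts; the original link is not included above, so the exact fix shown here is an assumption based on that common advice:

```shell
# Common workaround (assumption: matches the fix referenced above) for
# "Illegal instruction (core dumped)" when importing TensorFlow/NumPy on
# Jetson, usually attributed to OpenBLAS CPU-core detection in numpy 1.19.5.
export OPENBLAS_CORETYPE=ARMV8
```

Adding the export line to ~/.bashrc makes it persist across sessions.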
UPDATE-3: I confirm that the issue is solved after upgrading to TF2.6.