Description
On the nvcr.io/nvidia/l4t-jetpack Docker image, the Python API call import tensorrt
fails with an undefined-symbol error. This is part of a larger problem that occurs when importing TensorFlow after a successful installation, according to here: the import fails with TF-TRT Warning: Could not find TensorRT
. This is neither a path nor a versioning problem. A quick strace -e open,openat python3 -c "import tensorflow as tf" 2>&1 | grep "libnvinfer\|TF-TRT"
shows that one of the scanned paths (/lib/aarch64-linux-gnu/libnvinfer.so.8) does point to the location of the installed library, /usr/lib/aarch64-linux-gnu/libnvinfer.so.8.5.2
. The versions also match when probing python3 -c "import tensorrt as trt; print(trt.__version__)"
, which is the call that then raises the error in the title.
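To capture the full undefined-symbol message (rather than just the TF-TRT warning), it can help to import the module programmatically and print the complete traceback. A minimal sketch; on the Jetson image you would pass "tensorrt" (used here only in a comment, since the package exists only on that image):

```python
import importlib
import traceback

def try_import(name: str) -> str:
    """Attempt to import a module; return 'OK' or the full error traceback."""
    try:
        importlib.import_module(name)
        return f"{name}: OK"
    except Exception:
        # format_exc() includes the loader's "undefined symbol: ..." detail
        return traceback.format_exc()

# On the l4t-jetpack image: print(try_import("tensorrt"))
# Portable demonstration with a module that exists everywhere:
print(try_import("json"))
```

The traceback string contains the exact symbol name the dynamic loader could not resolve, which is the key detail for matching library versions.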
Environment
TensorRT Version: 8.5.2
GPU Type: see docker image
Nvidia Driver Version: see docker image
CUDA Version: 11.4
CUDNN Version: see docker image
Operating System + Version: aarch64
Python Version (if applicable): 3.8
TensorFlow Version (if applicable): 2.12
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag): nvcr.io/nvidia/l4t-base:35.3.1
Relevant Files
Steps To Reproduce
- Pull mentioned image
- Run mentioned image
- Install TensorFlow following the official guide
- Run python3 -c "import tensorflow; import tensorrt"
Interestingly, the Python package of TensorRT is already present as soon as pip is installed. How is that possible? Importing it there also causes this error. Also, what is going on with the naming conventions? It is very hard to find the right versions when L4T uses a separate versioning scheme from JetPack and the symlinks for the internal libraries don’t describe clearly what they are linked to.
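For anyone else untangling those symlinks: the chain can be walked step by step instead of trusting ls -l one hop at a time. A small sketch (the Jetson paths in the comment are the ones from this report; the helper name is my own):

```python
import os

def resolve_chain(path: str) -> list:
    """Follow a symlink chain hop by hop, returning every path visited."""
    chain = [path]
    while os.path.islink(path):
        target = os.readlink(path)
        # readlink may return a relative target; anchor it next to the link
        path = os.path.normpath(os.path.join(os.path.dirname(path), target))
        chain.append(path)
    return chain

# On the l4t-jetpack image this should end at the real library, e.g.:
#   resolve_chain("/lib/aarch64-linux-gnu/libnvinfer.so.8")
#   -> [..., "/usr/lib/aarch64-linux-gnu/libnvinfer.so.8.5.2"]
```

Printing the whole chain makes it obvious which soname actually backs each symlink, regardless of how the L4T packages name them.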