Hi everyone,
For a bigger project with ROS, I developed inference code that needs to run under Python 3 using the TensorRT Python API (important: it HAS to be TRT 7.1.3 or lower). No problems whatsoever on the Jetson.
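For context, this is roughly what works out of the box under the system Python 3.6 on the Jetson (a minimal sketch, the engine itself is not shown):

```python
# Runs fine under the stock Python 3.6 that ships with JetPack 4.4
import tensorrt as trt

print(trt.__version__)  # 7.1.3 on this setup

# Creating a logger and builder confirms the bindings load correctly
logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
```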
Now another code module forces me to use Python 3.7.
I created a Python 3.7 virtualenv, but how exactly can I get it to use the TensorRT Python API? Out of the box, importing TensorRT fails because the module can't be found. If I add the Python 3.6 dist-packages to the 3.7 path, I get the error that the python3-libnvinfer package was built with Python 3.6, so it can't be used with Python 3.7 (see the sketch below).
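Concretely, this is what I tried inside the 3.7 virtualenv (the dist-packages path is the JetPack default; shown here only to illustrate the failure):

```python
import sys

# Point the 3.7 interpreter at the system packages that were built for 3.6
sys.path.append("/usr/lib/python3.6/dist-packages")

# Fails: the compiled bindings from python3-libnvinfer target the
# CPython 3.6 ABI, so they cannot be imported under Python 3.7
import tensorrt as trt
```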
When I try to install via pip (through nvidia-pyindex), I can't get the correct TRT 7.1.3 version, only 7.2, which fits neither my system nor my code.
On my x86 machine/container, I can simply take the .whl file provided in the TAR archive (following the installation instructions: Installation Guide :: NVIDIA Deep Learning TensorRT Documentation) and install TRT with pip in my virtualenv.
But when it comes to ARM / Jetson / JetPack, the only way to install TensorRT is to install the whole SDK. There is no TAR installation method for ARM-based systems from which I could copy the correct .whl file (the .whl files from the x86 TAR archive are obviously for x86 systems only …)
Can I get the .whl file for TensorRT 7.1.3 on ARM/Jetson somewhere, or does anyone have another idea how to get TensorRT up and running in a Python 3.7 virtualenv?
Environment
TensorRT Version: 7.1.3
GPU Type: Jetson AGX Xavier
Nvidia Driver Version:
CUDA Version: 10.2
CUDNN Version: 7.6.5
Operating System + Version: JetPack 4.4
Python Version (if applicable): 3.7 (and 3.6)
TensorFlow Version (if applicable): -
PyTorch Version (if applicable): 1.6.0
Baremetal or Container (if container which image + tag): Container: l4t-base:r32.4.3