Please provide the following info (tick the boxes after creating this topic):
Software Version
[1] DRIVE OS 6.0.5
DRIVE OS 6.0.4 (rev. 1)
DRIVE OS 6.0.4 SDK
other
Target Operating System
[1] Linux
QNX
other
Hardware Platform
DRIVE AGX Orin Developer Kit (940-63710-0010-D00)
DRIVE AGX Orin Developer Kit (940-63710-0010-C00)
[1] DRIVE AGX Orin Developer Kit (not sure of its part number)
other
SDK Manager Version
[1] 1.9.10816
other
Host Machine Version
[1] native Ubuntu Linux 20.04 Host installed with SDK Manager
native Ubuntu Linux 20.04 Host installed with DRIVE OS Docker Containers
native Ubuntu Linux 18.04 Host installed with DRIVE OS Docker Containers
other
Can I install pre-built TensorRT Python package via pip?
Dear @WanchaoYao,
Do you want to install the TensorRT Python libs via pip on the target, or on the host, for the DRIVE OS 6.0.5 release?
Dear @WanchaoYao,
There is no prebuilt Python package of TensorRT. May I know if it is critical for your development? Can you use the TRT C++ APIs for your development on DRIVE AGX Orin?
@SivaRamaKrishnaNV Yes, because our model deployment code is written with the TRT Python API.
Fortunately, I have already built and installed the TRT OSS Python bindings after a lot of effort.
Dear @WanchaoYao,
Glad to hear you could compile and install the TRT OSS Python bindings. Please share the steps and any issues you faced, to help others in the community.
@SivaRamaKrishnaNV The key steps are:
1. Build TRT OSS:

```shell
cmake -DTRT_OUT_DIR=`pwd`/out \
      -DTRT_PLATFORM_ID=aarch64 \
      -DCMAKE_CUDA_COMPILER=/usr/local/cuda/bin/nvcc \
      -DCUDA_VERSION=11.4 \
      -DTRT_LIB_DIR=/lib/aarch64-linux-gnu/ \
      -DCUDNN_LIB=/lib/aarch64-linux-gnu/libcudnn.so.8 \
      -DTENSORRT_LIBRARY_INFER=/lib/aarch64-linux-gnu/libnvinfer.so.8 \
      -Dnvinfer_LIB_PATH=/lib/aarch64-linux-gnu/libnvinfer.so.8 \
      -Dnvparsers_LIB_PATH=/lib/aarch64-linux-gnu/libnvparsers.so.8 \
      -DTENSORRT_LIBRARY_INFER_PLUGIN=/lib/aarch64-linux-gnu/libnvinfer_plugin.so.8 \
      -DBUILD_ONNX_PYTHON=ON ..
```
2. Build and install the Python bindings, following TensorRT/python at main · NVIDIA/TensorRT · GitHub:

```shell
export EXT_PATH=~/external
mkdir -p $EXT_PATH && cd $EXT_PATH
git clone https://github.com/pybind/pybind11.git

# Add main headers
cd ~/external
wget https://www.python.org/ftp/python/3.8.10/Python-3.8.10.tar.xz
tar -xf Python-3.8.10.tar.xz
mkdir -p python3.8
cp -r Python-3.8.10/Include/ python3.8/include

# Add pyconfig.h
sudo mkdir -p /usr/include/aarch64-linux-gnu/python3.8m
sudo apt install libpython3.8-dev
sudo cp /usr/include/aarch64-linux-gnu/python3.8/pyconfig.h /usr/include/aarch64-linux-gnu/python3.8m

# Create unversioned symlinks so the linker can find the TRT libraries
sudo ln -s /usr/lib/aarch64-linux-gnu/libnvinfer.so.8 /usr/lib/aarch64-linux-gnu/libnvinfer.so
sudo ln -s /usr/lib/aarch64-linux-gnu/libnvonnxparser.so.8 /usr/lib/aarch64-linux-gnu/libnvonnxparser.so
sudo ln -s /usr/lib/aarch64-linux-gnu/libnvparsers.so.8 /usr/lib/aarch64-linux-gnu/libnvparsers.so
sudo ln -s /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.8 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so

# Build and install the wheel
cd $TRT_OSSPATH/python
export PYTHON=3.8
EXT_PATH=/home/nvidia/external PYTHON_MAJOR_VERSION=3 PYTHON_MINOR_VERSION=8 TARGET_ARCHITECTURE=aarch64 bash ./build.sh
pip install build/dist/tensorrt-*.whl
```
The built wheel is attached below. I hope it helps somebody.
tensorrt-8.2.0.6-cp38-none-linux_aarch64.whl (776.0 KB)
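For anyone picking up the wheel: a typical deployment path with the TRT Python API is to parse an ONNX model and build a serialized engine. Below is a minimal sketch, not the poster's actual deployment code, assuming the TensorRT 8.2-era Python API from the wheel above; the ONNX path and the `GiB` helper are illustrative.

```python
# Sketch: build a serialized TensorRT engine from an ONNX model
# (TensorRT 8.x Python API; assumes the wheel above is installed).

def GiB(val):
    """Convert a GiB count to bytes for the builder workspace limit."""
    return val * (1 << 30)

def build_serialized_engine(onnx_path, workspace_gib=2):
    import tensorrt as trt  # deferred import: requires the installed wheel

    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    # The ONNX parser requires an explicit-batch network.
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, logger)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError(parser.get_error(0))
    config = builder.create_builder_config()
    config.max_workspace_size = GiB(workspace_gib)  # TRT 8.2-era setting
    # Returns a serialized plan that can be written to disk and later
    # deserialized with trt.Runtime for inference.
    return builder.build_serialized_network(network, config)
```

Usage would be along the lines of `plan = build_serialized_engine("model.onnx")`, then writing `plan` to a file and loading it on the target with `trt.Runtime`.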
Dear @WanchaoYao,
Thank you for sharing the information.