TensorRT Python API Installation

Description

Where are the Python APIs for TensorRT?
How do I install the Python APIs for TensorRT?

Environment

L4T 28.1.0
Board: t210ref
Ubuntu 16.04 LTS
Kernel Version: 4.4.38-jetsonbot-doc-v0.3
TensorRT Version: 2.1
GPU Type: ?
Nvidia Driver Version: L4T Jetson TX1 Driver P28.1.
CUDA Version: 8.0
CUDNN Version: 6.0.21
Operating System + Version: Ubuntu 16
Python Version (if applicable): 3.5
TensorFlow Version (if applicable): NA
PyTorch Version (if applicable): NA
Baremetal or Container (if container which image + tag): Baremetal
Hardware (Board): Nvidia TX1

Relevant Files

nvidia@tegra-ubuntu:~/jetsonUtilities$ dpkg -l | grep TensorRT
libnvinfer-dev 3.0.2-1+cuda8.0 arm64 TensorRT development libraries and headers
libnvinfer3 3.0.2-1+cuda8.0 arm64 TensorRT runtime libraries
tensorrt-2.1.2 3.0.2-1+cuda8.0 arm64 Meta package of TensorRT

Steps To Reproduce

nvidia@tegra-ubuntu:~/jetsonUtilities$ sudo python3
Python 3.5.2 (default, Apr 16 2020, 17:47:17)
[GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.

>>> import tensorrt
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: No module named 'tensorrt'
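The ImportError above just means no `tensorrt` module is visible to this interpreter. A minimal diagnostic sketch (plain Python, no TensorRT assumed; `find_module` is a hypothetical helper, not part of any NVIDIA package) for checking whether a module is importable and where Python is searching:

```python
import importlib.util
import sys

def find_module(name):
    """Return the location of an importable module, or None if absent."""
    spec = importlib.util.find_spec(name)
    return spec.origin if spec else None

# Check the module this thread is missing, and show the search path,
# since sudo/python3 can pick up a different path than your user shell.
print("tensorrt:", find_module("tensorrt"))
print("search path:", sys.path)
```

Running this under the same `sudo python3` as above makes it easy to spot a site-packages directory that the TensorRT bindings were (or were not) installed into.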

Hi @kaisar.khatak,
You can download TensorRT from the TensorRT page on developer.nvidia.com.

Also, please refer to the link below for setting up TensorRT on your system:
https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html#overview

For the Python API documentation and samples, please check the links below:
https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#python_topics
https://docs.nvidia.com/deeplearning/tensorrt/sample-support-guide/index.html
Thanks!
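As a quick post-install sanity check, one option (a sketch, not something from the install guide) is to attempt the import and report the version instead of crashing:

```python
def tensorrt_version():
    """Return the installed TensorRT version string, or None if the
    Python bindings are not importable from this interpreter."""
    try:
        import tensorrt
    except ImportError:
        return None
    # __version__ is the conventional attribute; hedge in case a very
    # old build does not expose it.
    return getattr(tensorrt, "__version__", "unknown")

print(tensorrt_version())
```

If this prints None under the exact interpreter you plan to use (including under sudo), the bindings are not installed for that Python, regardless of what dpkg lists.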

Thanks for the quick reply. I noticed that the tensorrt-2.1.2 package makes no mention of Python API support. Do I need to upgrade to version 3? If so, can I upgrade the TX1 without flashing via JetPack? I also noticed the TensorRT GitHub repository doesn't contain older versions. Is there a way to publish the older versions for those who want to build from source?

I have CUDA 8 and CUDNN 6 loaded on the TX1 (Linux AArch64). Which version of TensorRT is compatible?

FYI:
I installed TensorRT Version 3:
https://developer.nvidia.com/nvidia-tensorrt3rc-download

sudo dpkg -i nv-tensorrt-repo-ubuntu1604-rc-cuda8.0-trt3.0-20170922_3.0.0-1_arm64.deb

nvidia@tegra-ubuntu:~/Downloads$ sudo dpkg -l | grep TensorRT
libnvinfer-dev 4.0.0-1+cuda8.0 arm64 TensorRT development libraries and headers
libnvinfer-samples 4.0.0-1+cuda8.0 arm64 TensorRT samples and documentation
libnvinfer3 3.0.2-1+cuda8.0 arm64 TensorRT runtime libraries
libnvinfer4 4.0.0-1+cuda8.0 arm64 TensorRT runtime libraries
tensorrt 3.0.0-1+cuda8.0 arm64 Meta package of TensorRT
tensorrt-2.1.2 3.0.2-1+cuda8.0 arm64 Meta package of TensorRT

I followed the directions on the TensorRT install page, but ran into an error. It doesn't look like the Python libraries can be installed from the downloaded Debian package.

nvidia@tegra-ubuntu:~/TensorRT-3.0.0$ sudo apt-get install python3-libnvinfer-dev
[sudo] password for nvidia:
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package python3-libnvinfer-dev
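A common cause of apt's "Unable to locate package" right after adding a local repo is a stale package index. A hedged troubleshooting sketch (the package names searched for are guesses; what is actually available depends on the repo contents, and the Python bindings may simply not be shipped in this 3.0 RC aarch64 repo):

```shell
# Refresh the index so apt sees packages from the newly added local repo
sudo apt-get update

# List what the TensorRT repo actually provides before requesting a
# specific package name like python3-libnvinfer-dev
apt-cache search libnvinfer
apt-cache search tensorrt
```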

Hi @kaisar.khatak,
Is there any specific reason you are using an old version of TRT?
We recommend using the latest TRT version (7.1), which gives better performance.
In case you face an issue during installation or need more details, the Jetson forum would be able to help you better.
This link might be helpful in the meantime:
https://docs.nvidia.com/jetson/jetpack/install-jetpack/index.html
Thanks!