Am I correct in saying there is no Python API in the current release of TensorRT for the Drive PX2? If so, how do I go about running inference on a pretrained Tensorflow model in TensorRT on the PX2? I have code to do so using the Python API on a desktop PC, are there similar functions which can be used in another interface e.g. C++?
AFAIK, the latest SDK includes TensorRT 3.0 RC, which supports the Python API: https://developer.nvidia.com/tensorrt
DriveInstall Linux release notes:
•Added support for CUDA 9.0 Toolkit for Host (Ubuntu Linux x64 with Parker cross-development support)
•Added support for CUDA 9.0 Toolkit for aarch64 Linux
•TensorRT 3.0 RC and cuDNN 7 GA for x86 host and aarch64 target
•DriveWorks 0.6 for host and target
•NVIDIA System Profiler 3.9
◦Support for hypervisor event trace.
◦Various bug fixes and performance enhancements.
Thanks for the reply. However, the TensorRT Release Notes state that the Python API is supported only on x86 systems, and the part you have highlighted does not mention the aarch64 target having support for the Python API.
Sorry for the confusion.
You are right: the TensorRT Python API is only supported on x86-based systems. Thanks.
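For the PX2 itself, inference goes through the TensorRT C++ API instead: build and serialize the engine elsewhere, then deserialize and execute it on the target. A minimal sketch of that target-side step, assuming a prebuilt engine file (`model.engine`, buffer sizes, and binding count are placeholders for your own model, not anything from this thread):

```cpp
// Hedged sketch: deserialize a prebuilt TensorRT engine and run inference
// with the C++ API (the only TensorRT API available on the aarch64 target).
#include <NvInfer.h>
#include <cuda_runtime_api.h>
#include <fstream>
#include <iostream>
#include <vector>

using namespace nvinfer1;

// TensorRT requires the caller to supply a logger implementation.
class Logger : public ILogger {
    void log(Severity severity, const char* msg) override {
        if (severity <= Severity::kWARNING) std::cout << msg << std::endl;
    }
} gLogger;

int main() {
    // Placeholder sizes -- these are model-specific, illustrative only.
    const size_t inputSize  = 3 * 224 * 224;
    const size_t outputSize = 1000;

    // Read the serialized engine produced on the x86 host.
    std::ifstream file("model.engine", std::ios::binary);
    std::vector<char> blob((std::istreambuf_iterator<char>(file)),
                           std::istreambuf_iterator<char>());

    IRuntime* runtime = createInferRuntime(gLogger);
    ICudaEngine* engine =
        runtime->deserializeCudaEngine(blob.data(), blob.size(), nullptr);
    IExecutionContext* context = engine->createExecutionContext();

    // Allocate device buffers, one per engine binding (input and output).
    void* buffers[2];
    cudaMalloc(&buffers[0], inputSize * sizeof(float));
    cudaMalloc(&buffers[1], outputSize * sizeof(float));
    // ... cudaMemcpy your input data into buffers[0] here ...

    context->execute(/*batchSize=*/1, buffers);

    // ... cudaMemcpy the results back from buffers[1], then clean up ...
    cudaFree(buffers[0]);
    cudaFree(buffers[1]);
    context->destroy();
    engine->destroy();
    runtime->destroy();
    return 0;
}
```

This has to be compiled and run on the PX2 itself (linking against `nvinfer` and the CUDA runtime), so it is a sketch rather than something verified here.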
We are trying to run TensorRT on our Drive PX2 with Python 3.5.2, but when we try to import PyCUDA and TensorRT we get the error: “ImportError: No module named…”
According to the output of the command below, the system has TensorRT ‘3.0.0-1+cuda9.0’:
dpkg -l | grep TensorRT
On the other hand, I know that PyCUDA is not installed, and I cannot find instructions for installing it on the Drive PX2. I looked at the instructions at https://wiki.tiker.net/PyCuda/Installation/Linux, but in my opinion they are out of date.
Can anyone help me with this?
Thanks and regards
lgriera - see the reply above: it has been confirmed that the TensorRT Python API is only supported on x86-based systems, so it is not available on the Drive PX2. This is why you cannot import the TensorRT module from Python as you are trying to do.
So I think the workflow is: install TensorRT 3.0 on the host, use the Python API there to convert the TF model into a serialized TensorRT engine, and then copy that engine to the target for inference. Correct me if I’m wrong.
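That matches my understanding of the TensorRT 3 workflow: on the x86 host, freeze the TensorFlow graph, convert it to UFF, build an engine, and serialize it to a file. A hedged sketch, assuming the legacy TensorRT 3 Python API (`model.pb`, the node names, and the input shape are placeholders for your own network):

```python
# Hedged sketch of host-side (x86-only) engine building with the
# TensorRT 3 Python API. File names, node names, and shapes are
# placeholders, not values from this thread.
import uff
import tensorrt as trt
from tensorrt.parsers import uffparser

G_LOGGER = trt.infer.ConsoleLogger(trt.infer.LogSeverity.INFO)

# Convert the frozen TensorFlow graph to UFF.
uff_model = uff.from_tensorflow_frozen_model("model.pb", ["output_node"])

# Parse the UFF model and declare its I/O bindings (CHW input shape).
parser = uffparser.create_uff_parser()
parser.register_input("input_node", (3, 224, 224), 0)
parser.register_output("output_node")

# Build the engine: max batch size 1, 1 MiB workspace (tune for your model).
engine = trt.utils.uff_to_trt_engine(G_LOGGER, uff_model, parser, 1, 1 << 20)

# Serialize to disk; copy this file to the PX2 and load it via the C++ API.
trt.utils.write_engine_to_file("model.engine", engine.serialize())
```

The serialized engine is specific to the GPU it was built on, so in practice the build step may need to run on the target's architecture (e.g. via the C++ builder on the PX2) rather than on the host; check the TensorRT documentation for your release.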