TensorRT Python support on DRIVE AGX

Hello,

What workflow should one follow to optimize their Python code using TensorRT on the DRIVE AGX platform?

From what I have seen on numerous posts, there are no Python bindings/support for TensorRT on DRIVE AGX. How should we optimize the code then?

Hi raul_91,

Yes, the TensorRT Python API is officially not supported on DRIVE platforms, as described in the documentation.
Could you convert your model to UFF/ONNX offline, and then import and run it with the C++ API on your DRIVE system?
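As an illustration of that second step, here is a minimal sketch of importing an ONNX model with the TensorRT C++ API and serializing an engine. The file names (`model.onnx`, `model.engine`) and the workspace size are placeholders, and the exact builder calls vary somewhat between TensorRT versions; this follows the TensorRT 6/7-era API that shipped with DRIVE releases.

```cpp
#include "NvInfer.h"
#include "NvOnnxParser.h"
#include <cstdint>
#include <fstream>
#include <iostream>

// Minimal logger implementation required by the TensorRT API.
class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) override {
        if (severity <= Severity::kWARNING) std::cout << msg << std::endl;
    }
};

int main() {
    Logger logger;

    // Create the builder and an explicit-batch network
    // (explicit batch is required by the ONNX parser).
    auto builder = nvinfer1::createInferBuilder(logger);
    const auto flags = 1U << static_cast<uint32_t>(
        nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
    auto network = builder->createNetworkV2(flags);

    // Parse the ONNX model that was exported offline (path is a placeholder).
    auto parser = nvonnxparser::createParser(*network, logger);
    if (!parser->parseFromFile("model.onnx",
            static_cast<int>(nvinfer1::ILogger::Severity::kWARNING))) {
        std::cerr << "Failed to parse ONNX model" << std::endl;
        return 1;
    }

    // Build an optimized engine and serialize it for deployment on the target.
    auto config = builder->createBuilderConfig();
    config->setMaxWorkspaceSize(1 << 28);  // 256 MiB of scratch space
    auto engine = builder->buildEngineWithConfig(*network, *config);

    nvinfer1::IHostMemory* serialized = engine->serialize();
    std::ofstream out("model.engine", std::ios::binary);
    out.write(static_cast<const char*>(serialized->data()),
              serialized->size());
    return 0;
}
```

The serialized `model.engine` can then be deserialized at runtime with `nvinfer1::createInferRuntime` and executed via an `IExecutionContext`, so the expensive optimization step only happens once.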

By the way, please see whether the “Optimizing DNN Inference Using CUDA and TensorRT on NVIDIA DRIVE AGX” webinar, linked below, is helpful.
https://news.developer.nvidia.com/how-drive-agx-cuda-and-tensorrt-achieve-fast-accurate-autonomous-vehicle-perception/
https://devtalk.nvidia.com/default/topic/1064456/general/nvidia-webinars-mdash-optimizing-dnn-inference-using-cuda-and-tensorrt-on-nvidia-drive-agx/

Thanks VickNV! I will look into converting my model into UFF/ONNX, and see if that works.

Is there an example of using a custom UFF/ONNX file with the C++ API on DRIVE AGX?

What did you mean by “custom”? Could you check the “TensorRT Samples Support Guide”?
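For a quick first check of a custom model without writing any C++ code, the `trtexec` tool shipped with TensorRT can build and time an engine directly from an ONNX file (the file names below are placeholders):

```shell
# Build a serialized TensorRT engine from an ONNX model and
# report inference timings on the target.
trtexec --onnx=model.onnx --saveEngine=model.engine
```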

@VickNV

I checked the document, and TensorRT Python supports Linux AArch64 (per Table 1).

Can you please explain what can go wrong if the TensorRT Python API is executed on the DRIVE platform?

Please refer to TensorRT python on target machine PDK 5.2.0 + DriveWorks 3.5 - #6 by VickNV.