raul_91
November 14, 2019, 3:27pm
1
Hello,
What workflow should one follow to optimize their Python code using TensorRT on the DRIVE AGX platform?
From what I have seen on numerous posts, there are no Python bindings/support for TensorRT on DRIVE AGX. How should we optimize the code then?
VickNV
November 14, 2019, 7:21pm
2
Hi raul_91,
Yes, officially the TensorRT Python API isn't supported on DRIVE platforms, as described in the documentation.
Could you convert your model to UFF/ONNX format offline, and then import and run it with the C++ API on your DRIVE system?
By the way, please see whether the “Optimizing DNN Inference Using CUDA and TensorRT on NVIDIA DRIVE AGX” webinar, mentioned in the links below, is helpful.
https://news.developer.nvidia.com/how-drive-agx-cuda-and-tensorrt-achieve-fast-accurate-autonomous-vehicle-perception/
https://devtalk.nvidia.com/default/topic/1064456/general/nvidia-webinars-mdash-optimizing-dnn-inference-using-cuda-and-tensorrt-on-nvidia-drive-agx/
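To make the suggested workflow concrete, here is a minimal sketch of the import step: parsing an ONNX file and building an engine with the TensorRT C++ API. This assumes a TensorRT 6.x-era API (as shipped with DRIVE OS around this time) and uses `model.onnx` as a placeholder path; it is an illustration of the approach, not an official sample.

```cpp
#include <cstdint>
#include <iostream>

#include "NvInfer.h"
#include "NvOnnxParser.h"

// The TensorRT C++ API requires a user-supplied logger.
class Logger : public nvinfer1::ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity <= Severity::kWARNING)
            std::cerr << msg << std::endl;
    }
};

int main()
{
    Logger logger;

    // Build phase: parse the ONNX file and build an engine
    // optimized for the target GPU.
    auto* builder = nvinfer1::createInferBuilder(logger);
    const uint32_t flags = 1U << static_cast<uint32_t>(
        nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
    auto* network = builder->createNetworkV2(flags);
    auto* parser = nvonnxparser::createParser(*network, logger);

    // "model.onnx" is a placeholder for your exported model file.
    if (!parser->parseFromFile("model.onnx",
            static_cast<int>(nvinfer1::ILogger::Severity::kWARNING)))
    {
        std::cerr << "Failed to parse the ONNX model" << std::endl;
        return 1;
    }

    auto* config = builder->createBuilderConfig();
    config->setMaxWorkspaceSize(1U << 28); // 256 MiB of scratch memory
    auto* engine = builder->buildEngineWithConfig(*network, *config);
    if (!engine)
    {
        std::cerr << "Engine build failed" << std::endl;
        return 1;
    }

    // Inference would then use engine->createExecutionContext() and
    // IExecutionContext::executeV2() with device buffers; the engine
    // can also be serialized once and deserialized on the DRIVE target.
    engine->destroy();
    config->destroy();
    parser->destroy();
    network->destroy();
    builder->destroy();
    return 0;
}
```

In practice you would build and serialize the engine once on the target (engines are not portable across GPUs), then deserialize it at startup for inference.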
raul_91
November 14, 2019, 7:24pm
3
Thanks VickNV! I will look into converting my model to UFF/ONNX and see if that works.
raul_91
November 14, 2019, 8:15pm
4
Is there an example of using a custom UFF/ONNX file with the C++ API on DRIVE AGX?
VickNV
November 14, 2019, 8:50pm
5
What do you mean by “custom”? Could you check the “TensorRT Samples Support Guide”?
@VickNV
I checked the document, and TensorRT Python supports Linux AArch64 (per Table 1).
Can you please explain what can go wrong if the TensorRT Python API is executed on a DRIVE platform?
VickNV
July 22, 2021, 4:04pm
7