General information on TensorRT, DLA, PVA, ISP, docker, OpenCV with python environment

Hardware Platform: DRIVE AGX Xavier™ Developer Kit
Software Version: DRIVE Software 10
Host Machine Version: Ubuntu 18.04.5 LTS
SDK Manager Version: 1.2.0.6733

Our goal is to run an app that performs object detection and segmentation simultaneously at inference time, similar to the DL4AGX pipeline, but for a different use case.
(Github: https://github.com/NVIDIA/DL4AGX/tree/master/MultiDeviceInferencePipeline)

We would love to run a Python program on the AGX Xavier using TensorFlow and OpenCV, accelerated through TensorRT.

How would one go about installing a Python environment that is capable of using TensorFlow, OpenCV, CUDA, DLA, PVA, ISP, TensorRT, camera feeds, radar data, etc.?

So far we have been unable to install/compile OpenCV and TensorFlow on the DRIVE AGX (directories are missing, or we cannot access the libraries after installation).

Could you give a high-level overview of how to achieve this?

Cheers,
Niels

Dear @niels_otiv,
Note that there are no Python TensorRT bindings for DRIVE AGX, so you need to use the TensorRT C++ APIs to optimize your DNNs on the AGX with TensorRT. However, you can prepare your model using TF on the host and export an ONNX file. This ONNX file can then be used to build a TensorRT model, and inference can be performed on the DRIVE AGX.
DLA cannot be programmed directly like the GPU; it is accessible only via TensorRT.
Our DriveWorks SDK has tools to generate TensorRT models from ONNX/Caffe files and has APIs to load any DNN and perform inference. Please check the DNN module in the DW documentation. The PVA is also accessible via DW APIs. DW also has APIs to read sensor data and feed it into DNNs.

Hi @SivaRamaKrishnaNV,

We already have TF models ready to convert to TensorRT models on the DRIVE AGX. Wouldn’t it be possible to convert from TF to TensorRT?

On the target device:
Even installing TensorFlow, CUDA, cuDNN, TensorRT, and OpenCV as stated in the GitHub repo (https://github.com/NVIDIA/DL4AGX/tree/master/MultiDeviceInferencePipeline) is not possible. It can’t find certain directories, or it is unable to compile…

How can we get all these dependencies on the DRIVE AGX and then call them from a Python file? We know they are already in the DW software; is it possible to access them through that software using Python?

Thanks in advance,

Dear @niels_otiv,

Wouldn’t it be possible to convert from TF to TensorRT?

Yes. Please check https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#build_model for more information.

We know that they are already in the DW software, is it possible to access through the software using python?

DW is based on C++ and currently has no Python API support.

Dear @SivaRamaKrishnaNV,

Thanks for getting back to me so quickly.

Since there is no Python API, we would love to set up a Python environment and install everything from source.

What is the best way to do this? Through Docker?

Thanks in advance,