TAO-Deploy 4 on Jetson

Please provide the following information when requesting support.

• Hardware (T4/V100/Xavier/Nano/etc): Orin
• Network Type (Detectnet_v2/Faster_rcnn/Yolo_v4/LPRnet/Mask_rcnn/Classification/etc): All
• TLT Version (please run "tlt info --verbose" and share "docker_tag" here): 4.0
• Training spec file (if available, please share it here)
• How to reproduce the issue? (This is for errors. Please share the command line and the detailed log here.)

This is a question. Reviewing the new approach in TAO 4.0, there is a revised method for creating the engine for DeepStream. The overview states the following (emphasis mine):

When the tao-deploy command is invoked through the TAO launcher, the tao-deploy container is pulled from NGC and instantiated. The TAO Deploy container contains only a few lightweight Python packages, such as OpenCV, NumPy, Pillow, and ONNX, and is based on the NGC TensorRT container. Along with the NGC container, tao-deploy is also released as a public wheel on PyPI. The TensorRT engines generated by tao-deploy are specific to the GPU on which they are generated. So, based on the platform that the model is being deployed to, you will need to download the specific version of the tao-deploy wheel and generate the engine there, after installing the corresponding TensorRT version for your platform.

This suggests that for aarch64 the wheel needs to be used on the target device. However, reading further down yields the following (again, emphasis mine):

Installing TAO Deploy on a Jetson Platform

You can download the nvidia-tao-deploy wheel to a Jetson platform using the same commands as the x86 platform installation. We recommend using the NVIDIA TensorRT Docker container, which already includes the TensorRT installation. Due to memory issues, you should first run the gen_trt_engine subtask on the x86 platform to generate the engine; you can then use the generated engine to run inference or evaluation on the Jetson platform with the target dataset.
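As a sketch, the wheel-based flow described in the quoted documentation might look like the following. The model name, encryption key, spec file, and output path are all placeholders, and the exact subtask flags should be checked against the nvidia-tao-deploy documentation for your network type:

```shell
# Install the TAO Deploy wheel (ideally inside the NGC TensorRT container,
# which already provides a matching TensorRT installation):
pip install nvidia-tao-deploy

# Generate a TensorRT engine from the encrypted .etlt model.
# Model file, key, and spec file below are hypothetical placeholders.
yolo_v4 gen_trt_engine \
  -m model.etlt \
  -k nvidia_tlt \
  -e spec.txt \
  --engine_file model.engine
```

Note that because engines are specific to the GPU they are built on, an engine generated this way on x86 would not normally be portable to a Jetson, which is exactly the tension between the two quoted passages.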

This appears to be at odds with the first paragraph. My experience with TAO Converter was that you needed the correct aarch64 binary for that tool and had to run the conversion on the target device.

In other words, can you create a DeepStream engine file for the Jetson on an x86 platform, or do you need to install the wheel on the Jetson and create the engine file from the .etlt on the Jetson?

Thank you

You can install the wheel on the Jetson and create the engine file from the .etlt on the Jetson.
Alternatively, configure the .etlt model in the DeepStream config file and let DeepStream generate the engine on first run.
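For the second option, a minimal sketch of the relevant Gst-nvinfer config entries is shown below. The file names and model key are placeholders; the property names come from the DeepStream nvinfer configuration reference:

```
[property]
gpu-id=0
# Encrypted TAO model and its key (placeholder values)
tlt-encoded-model=model.etlt
tlt-model-key=nvidia_tlt
# Engine path DeepStream writes on first run and reuses afterwards
model-engine-file=model.etlt_b1_gpu0_fp16.engine
# 0=FP32, 1=INT8, 2=FP16
network-mode=2
```

On the first pipeline start, if the engine file does not exist, DeepStream builds it from the .etlt on the local GPU, so the engine is generated on the Jetson itself.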

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.