Running GestureNet model on Holoscan

Hi,

I have been trying to run the GestureNet model from NGC on an AGX Orin. I downloaded it from the following link

It contains a file model.etlt. I'm unable to convert it to ONNX or a TensorRT engine.

Could anyone help with the steps to convert the .etlt model to a TensorRT engine on AGX Orin and run it using Holoscan? My AGX Orin is running JetPack 6.2.1.
Also, does the model work with Holoscan?

@nvipwan1 Moving this topic from GPU-Accelerated Libraries forum to TAO forum since this .etlt model is released by TAO.

Please refer to tao_toolkit_recipes/tao_forum_faq/FAQ.md at main · NVIDIA-AI-IOT/tao_toolkit_recipes · GitHub to convert the .etlt model to a .onnx file.
Then use trtexec to convert the .onnx file to a TensorRT engine. Refer to TRTEXEC with Classification TF1/TF2/PyT — TAO Toolkit.
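Once you have the .onnx file, the trtexec step could look like the sketch below. The file names and the optional --fp16 flag are placeholders/assumptions, not the exact command from the TAO docs; check your model's input bindings and the linked trtexec page for the authoritative invocation.

```shell
# Hypothetical sketch: build a TensorRT engine from the decrypted ONNX model.
# gesturenet.onnx / gesturenet.engine are placeholder file names.
trtexec --onnx=gesturenet.onnx \
        --saveEngine=gesturenet.engine \
        --fp16   # optional: build a reduced-precision engine if accuracy allows
```

Note that the engine must be built on the same device (here, the AGX Orin) that will run it, since TensorRT engines are not portable across GPU architectures.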

Hi,

When I run the following command on the AGX Orin, from the link you provided:
$ docker run --runtime=nvidia -it --rm -v /home/morganh:/home/morganh nvcr.io/nvidia/tao/tao-toolkit:5.0.0-tf1.15.5 /bin/bash

I'm getting the following error:

Status: Downloaded newer image for nvcr.io/nvidia/tao/tao-toolkit:5.0.0-tf1.15.5
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
exec /opt/nvidia/nvidia_entrypoint.sh: exec format error

Is there any way I can convert the .etlt model on the AGX Orin?

The command $ docker run --runtime=nvidia -it --rm -v /home/morganh:/home/morganh nvcr.io/nvidia/tao/tao-toolkit:5.0.0-tf1.15.5 /bin/bash needs to run on an x86-based machine instead of Jetson devices: the tao-toolkit image is built for linux/amd64, which is why the ARM-based Orin reports the "exec format error".

For Jetson devices, you can run trtexec inside an L4T docker container to generate the engine. For example, JetPack 6.0 + nvcr.io/nvidia/l4t-tensorrt:r8.5.2.2-devel.
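As a sketch of that workflow (the host path, container mount, and ONNX/engine file names are placeholders; match the image tag to your JetPack release):

```shell
# Hypothetical sketch: run trtexec inside an L4T TensorRT container on the
# Orin itself. /path/to/models and the file names are placeholders.
docker run --runtime=nvidia -it --rm \
    -v /path/to/models:/models \
    nvcr.io/nvidia/l4t-tensorrt:r8.5.2.2-devel \
    trtexec --onnx=/models/gesturenet.onnx \
            --saveEngine=/models/gesturenet.engine
```

This avoids the platform mismatch, because the l4t-tensorrt image is built for linux/arm64 and runs natively on Jetson.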

BTW, for how to use tao-deploy on Jetson devices, please take a look at Cannot use TAO Deploy in Jetson AGX Orin - #5 by Morganh.

Also, note that the .etlt-to-.onnx conversion itself must be run on a dGPU (x86) machine, not on Jetson devices.