Running GestureNet model on Holoscan

Hi,

I have been trying to run the GestureNet model from NGC on an AGX Orin. I downloaded it from the following link

It contains a file model.etlt. I’m unable to convert it to ONNX or a TensorRT engine.

Could anyone help with the steps to convert the .etlt model to a TensorRT engine on the AGX Orin and run it using Holoscan? My AGX Orin is running JetPack 6.2.1.
Also, does the model work with Holoscan?

@nvipwan1 Moving this topic from the GPU-Accelerated Libraries forum to the TAO forum since this .etlt model is released by TAO.

Please refer to tao_toolkit_recipes/tao_forum_faq/FAQ.md at main · NVIDIA-AI-IOT/tao_toolkit_recipes · GitHub to convert the .etlt model to a .onnx file.
Then use trtexec to convert the .onnx file to a TensorRT engine. Refer to TRTEXEC with Classification TF1/TF2/PyT — Tao Toolkit.
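As a minimal sketch of that second step, the trtexec invocation could look like the following. The file names gesturenet.onnx and gesturenet.engine are placeholders for your actual paths; check the linked TAO trtexec page for any model-specific shape or precision flags.

```shell
# Build a TensorRT engine from the decoded ONNX file.
# File names are assumptions -- substitute your own paths.
trtexec --onnx=gesturenet.onnx \
        --saveEngine=gesturenet.engine \
        --fp16
```

Running this on the target device (the AGX Orin) is important, since TensorRT engines are not portable across GPU architectures.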

Hi,

When I run the following command on the AGX Orin from the link you provided:
$ docker run --runtime=nvidia -it --rm -v /home/morganh:/home/morganh nvcr.io/nvidia/tao/tao-toolkit:5.0.0-tf1.15.5 /bin/bash

I’m getting the following error:

Status: Downloaded newer image for nvcr.io/nvidia/tao/tao-toolkit:5.0.0-tf1.15.5
WARNING: The requested image’s platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
exec /opt/nvidia/nvidia_entrypoint.sh: exec format error

Is there any way I can convert the .etlt file on the AGX Orin?

The command $ docker run --runtime=nvidia -it --rm -v /home/morganh:/home/morganh nvcr.io/nvidia/tao/tao-toolkit:5.0.0-tf1.15.5 /bin/bash needs to be run on an x86-based machine, not on Jetson devices.

For Jetson devices, you can run trtexec inside an L4T docker container to generate the engine. For example, JetPack 6.0 + nvcr.io/nvidia/l4t-tensorrt:r8.5.2.2-devel.
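A sketch of what that could look like on the Jetson, assuming the .onnx file sits in the current directory and using the image tag suggested above for JetPack 6.0 (pick the tag matching your JetPack release; file names are placeholders):

```shell
# Mount the current directory and run trtexec from inside the L4T
# TensorRT container to build the engine on-device.
docker run --runtime=nvidia -it --rm \
    -v "$(pwd)":/workspace \
    nvcr.io/nvidia/l4t-tensorrt:r8.5.2.2-devel \
    /usr/src/tensorrt/bin/trtexec \
        --onnx=/workspace/gesturenet.onnx \
        --saveEngine=/workspace/gesturenet.engine \
        --fp16
```

The engine file lands in the mounted directory on the host, ready to be referenced from a Holoscan inference operator.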

BTW, for how to use tao-deploy on Jetson devices, please take a look at Cannot use TAO Deploy in Jetson AGX Orin - #5 by Morganh.

Also, to convert the .etlt file to a .onnx file, please use a dGPU machine instead of a Jetson device.

Hi,

I followed the steps outlined on GitHub for x86 with a dGPU and was able to access the Docker command line successfully. I also downloaded the .etlt file and created the Python script as described in the GitHub documentation. However, after running the script, the .onnx file is not being generated, and no errors are reported during execution.

Could you please help me understand what might be going wrong?

Thanks.

Is there write access in your path? Also, is the key correct (nvidia_tlt, according to https://catalog.ngc.nvidia.com/orgs/nvidia/teams/tao/models/gesturenet?version=deployable_v2.0.2)?

You can also add debug output to the code to check further.
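One way to add that debug output, sketched below with standard-library checks only: print diagnostics before and after the conversion step so a silent failure (missing input, unwritable directory, empty output) becomes visible. The function names and paths are hypothetical; keep your existing decode code where the comment indicates.

```python
import os
import sys

def check_before(etlt_path, onnx_path):
    """Print diagnostics before running the conversion."""
    print(f"input exists: {os.path.isfile(etlt_path)}")
    if os.path.isfile(etlt_path):
        print(f"input size: {os.path.getsize(etlt_path)} bytes")
    out_dir = os.path.dirname(os.path.abspath(onnx_path))
    print(f"output dir writable: {os.access(out_dir, os.W_OK)}")

def check_after(onnx_path):
    """Print diagnostics after the conversion."""
    if os.path.isfile(onnx_path) and os.path.getsize(onnx_path) > 0:
        print(f"OK: wrote {os.path.getsize(onnx_path)} bytes to {onnx_path}")
    else:
        print(f"FAILED: {onnx_path} missing or empty", file=sys.stderr)

# check_before("model.etlt", "model.onnx")
# ... your existing .etlt -> .onnx decode code from the FAQ goes here ...
# check_after("model.onnx")
```

If "output dir writable" prints False, or "FAILED" appears, that narrows the problem down before digging into the decode step itself.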

Hi,

Yes, there is write access in the path; I’m able to create files with the touch command and write data to them. And yes, I’m using the right key. How can I add debug info to the script? Should I add prints, or is there some other way to debug?

Yes, you can add some prints.