I have been trying to run the GestureNet model from NGC on an AGX Orin. I downloaded it from the following link.
It contains a file model.etlt. I'm unable to convert it to an ONNX model or a TensorRT engine.
Could anyone help with the steps to convert the .etlt model to a TensorRT engine on the AGX Orin and run it using Holoscan? My AGX Orin is running JetPack 6.2.1.
Also, does the model work with Holoscan?
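For reference, the classic TAO deployment flow converts a deployable .etlt model to an engine directly on the target device with the tao-converter binary (an aarch64 build runs on Jetson). This is only a sketch: the key nvidia_tlt comes from the NGC model page, but the input dimensions and output node name below are placeholders that must be taken from the actual GestureNet model card.

```shell
# Hypothetical sketch of a tao-converter invocation for GestureNet.
# KEY is from the NGC model page; DIMS and OUTPUT are placeholders
# that must be replaced with values from the model card.
KEY="nvidia_tlt"
MODEL="model.etlt"
ENGINE="gesturenet_fp16.engine"
DIMS="3,160,160"        # placeholder C,H,W - check the model card
OUTPUT="activation_18"  # placeholder output node name - check the model card

CMD="tao-converter -k ${KEY} -d ${DIMS} -o ${OUTPUT} -t fp16 -e ${ENGINE} ${MODEL}"
# Print the command instead of running it, since tao-converter must be
# downloaded separately for the Jetson's JetPack/TensorRT version.
echo "${CMD}"
```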
When I run the command from the link you provided on the AGX Orin, $ docker run --runtime=nvidia -it --rm -v /home/morganh:/home/morganh nvcr.io/nvidia/tao/tao-toolkit:5.0.0-tf1.15.5 /bin/bash, I get the following error:

Status: Downloaded newer image for nvcr.io/nvidia/tao/tao-toolkit:5.0.0-tf1.15.5
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
exec /opt/nvidia/nvidia_entrypoint.sh: exec format error
The command $ docker run --runtime=nvidia -it --rm -v /home/morganh:/home/morganh nvcr.io/nvidia/tao/tao-toolkit:5.0.0-tf1.15.5 /bin/bash needs to be run on an x86-based machine rather than on a Jetson device: the tao-toolkit image is built for linux/amd64, so it cannot execute on the Orin's arm64 host.
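As a quick sanity check before pulling the image, you can look at the host architecture, which is what the "exec format error" above is really reporting. A small sketch:

```shell
# The tao-toolkit:5.0.0-tf1.15.5 image is published for linux/amd64 only,
# so it cannot run on an arm64 (Jetson) host.
ARCH=$(uname -m)
if [ "${ARCH}" = "x86_64" ]; then
    echo "x86_64 host: the amd64 tao-toolkit image can run here"
else
    echo "${ARCH} host: run the TAO container on an x86_64 machine instead"
fi
```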
I followed the steps outlined on GitHub for x86 with a dGPU and was able to access the Docker command line successfully. I also downloaded the .etlt file and created the Python script as described in the GitHub documentation. However, after running the script, the .onnx file is not generated, and no errors are reported during execution.
Could you please help me understand what might be going wrong?
Do you have write access in your output path? Also, is the key correct? It should be nvidia_tlt according to https://catalog.ngc.nvidia.com/orgs/nvidia/teams/tao/models/gesturenet?version=deployable_v2.0.2.
You can also add debug statements to the code to check further.
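For example, wrapping the export step with logging and explicit file checks can show where the script silently stops. This is only a sketch: export_with_checks and the commented-out decrypt_and_export call are stand-ins for whatever the GitHub script actually does, not its real API.

```python
import logging
import os

logging.basicConfig(level=logging.DEBUG,
                    format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("etlt_export")

def export_with_checks(etlt_path, onnx_path, key):
    """Wrap the export call with checks that surface silent failures."""
    log.debug("input exists: %s (%s)", os.path.exists(etlt_path), etlt_path)
    log.debug("output dir writable: %s",
              os.access(os.path.dirname(onnx_path) or ".", os.W_OK))
    try:
        # Placeholder: call the actual export function from the GitHub
        # script here, e.g. decrypt_and_export(etlt_path, onnx_path, key)
        pass
    except Exception:
        log.exception("export raised")
        raise
    log.debug("output exists after export: %s", os.path.exists(onnx_path))

export_with_checks("model.etlt", "model.onnx", "nvidia_tlt")
```

If the "output exists after export" line logs False with no exception, the export function is returning without writing anything, which narrows the problem to its internals (wrong key, unsupported model version, etc.).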
Yes, there is write access in the path; I'm able to create files with the touch command and write data to them. And yes, I'm using the correct key. How can I add debug info to the script? Should I add prints, or is there a better way to debug?