ONNX Output in TAO 5.0 - how to get an .etlt model in TAO 5.0.0

Please provide the following information when requesting support.

• Hardware: A100
• Deployment: Jetson Xavier NX, DeepStream 6.0.1
• Network Type: Detectnet_v2
• TAO Version: 5.0.0

I recently saw that the TrafficCamNet model does not fit my use case, so I decided to retrain it with the TAO Toolkit.

Then I saw that the training process does not give me an .etlt file as an output, just .hdf5 files, and when I try to export the model as an .etlt, I get no option for it / an error message.
I'm using DeepStream 6.0.1 on a Xavier NX / Xavier AGX (3 devices in total).

So I have two questions:

How can I change the quickstart notebook in TAO 5.0.0 so that I get an .etlt-formatted model as an output?

And if this is not possible:

How can I deploy the ONNX model in my DeepStream pipeline?

Has anyone else had this problem? I'm relatively new, so did I do something wrong?

The .etlt file is actually an encrypted ONNX file.
Before TAO 5.0, an .etlt file was generated during export.
Since TAO 5.0, an .onnx file is generated during export.
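For reference, an export run in TAO 5.0 looks roughly like the sketch below. All paths, the spec file, and the key are placeholders for your own setup, and depending on your TAO 5.0.x version the `-k` key argument may no longer be required for ONNX export:

```shell
# TAO 5.0 export sketch (placeholder paths/key) - produces an .onnx file, not .etlt
tao model detectnet_v2 export \
    -m /workspace/results/weights/model.hdf5 \
    -e /workspace/specs/detectnet_v2_train.txt \
    -o /workspace/export/model.onnx \
    -k <your_key>
```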

To deploy the .onnx file on Jetson devices, you can use trtexec on the Jetson to generate a TensorRT engine, then configure it via model-engine-file. Comment out tlt-encoded-model and tlt-model-key.
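A minimal sketch of those two steps (file names are placeholders; drop `--fp16` if you want FP32 precision):

```shell
# On the Jetson: build a TensorRT engine from the exported ONNX model
/usr/src/tensorrt/bin/trtexec --onnx=model.onnx --saveEngine=model.engine --fp16
```

Then in the nvinfer config file of your DeepStream pipeline, point at the pre-built engine and comment out the encrypted-model entries:

```
model-engine-file=model.engine
# tlt-encoded-model=model.etlt
# tlt-model-key=<key>
```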

Another trick is to encrypt the .onnx file into an .etlt file. Refer to
https://github.com/NVIDIA/tao_tensorflow2_backend/blob/main/internal/encode.eff.py.

