How to Convert ETLT Model to ONNX Using TAO Toolkit

Hello,

I have an ETLT model and I want to convert it to ONNX. Can someone explain in simple steps how to do this?

Please try to follow tao_toolkit_recipes/tao_forum_faq/FAQ.md at main · NVIDIA-AI-IOT/tao_toolkit_recipes · GitHub. Thanks.

I successfully converted my .etlt model to .onnx using a custom decoder script, but when I use the ONNX model in my DeepStream config, the pipeline crashes during engine creation.

Even if I comment out the engine file line in the config (so DeepStream tries to build it automatically), it still fails — the .engine file is never generated.

DeepStream Log

gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
[NvTrackerParams::getConfigRoot()] !!![WARNING] File doesn't exist. Will go ahead with default values
[NvMultiObjectTracker] Initialized
0:00:00.417009807 17728 0x73d902db5820 INFO nvinfer gstnvinfer.cpp:685:gst_nvinfer_logger:<primary-inference-5> NvDsInferContext[UID 20]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2123> [UID = 20]: Trying to create engine from model files
Segmentation fault (core dumped)
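
(Aside: one way to check whether the crash happens inside the TensorRT engine build itself is to attempt the build outside DeepStream with trtexec. A minimal sketch, assuming the standard TensorRT install path and the model filename from the config below:

$ /usr/src/tensorrt/bin/trtexec --onnx=resnet18_detector_pruned_qat_Rank1.onnx --saveEngine=test.engine

If trtexec also fails to parse the file, the problem is the model file itself rather than the DeepStream configuration.)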

ETLT → ONNX Conversion Script

I used this Python script to decode my .etlt file before conversion:

import argparse
import struct
import os
from nvidia_tao_tf1.encoding import encoding

def parse_command_line(args=None):
    parser = argparse.ArgumentParser(description='ETLT Decode Tool')
    parser.add_argument('-m', '--model', type=str, required=True, help='Path to the .etlt file.')
    parser.add_argument('-o', '--uff', required=True, type=str, help='Output path for the .uff file.')
    parser.add_argument('-k', '--key', required=True, type=str, help='Encryption key for the ETLT model.')
    return parser.parse_args(args)

def decode(etlt_path, uff_path, key):
    if not os.path.isfile(etlt_path):
        raise FileNotFoundError(f"ETLT model not found: {etlt_path}")
    print(f"Decoding ETLT model: {etlt_path} -> {uff_path}")
    try:
        with open(etlt_path, 'rb') as encoded_file, open(uff_path, 'wb') as out_file:
            # The .etlt container starts with a 4-byte little-endian length,
            # followed by the input node name, then the encrypted model payload.
            size_bytes = encoded_file.read(4)
            size = struct.unpack("<i", size_bytes)[0]
            input_node_name = encoded_file.read(size)
            print(f"Input node name: {input_node_name.decode('utf-8')}")
            # Decrypt the remainder of the file. For TF1 networks such as
            # detectnet_v2, the payload is a UFF graph, not an ONNX model.
            encoding.decode(encoded_file, out_file, key.encode())
        print(f"Successfully decoded to {uff_path}")
    except Exception as e:
        print(f"Failed to decode ETLT: {e}")

def main(args=None):
    args = parse_command_line(args)
    decode(args.model, args.uff, args.key)

if __name__ == "__main__":
    main()
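
For reference, an invocation might look like this (the key tlt_encode is an assumption taken from the commented-out tlt-model-key line in the config below; your model may use a different key):

$ python decode_etlt.py -m resnet18_detector_pruned_qat_Rank1.etlt -o resnet18_detector_pruned_qat_Rank1.uff -k tlt_encode

Note that the output is written as a .uff file; renaming it to .onnx does not change the format of the payload.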

Vehicle Detection Config File

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
infer-dims=3;544;960
#uff-input-blob-name=input_1
#tlt-model-key=tlt_encode

#tlt-encoded-model=../../../surveillance_ai_model/x86_64/Vehicle_Detection/resnet18_detector_pruned_qat_Rank1.onnx
onnx-file=../../../surveillance_ai_model/x86_64/Vehicle_Detection/resnet18_detector_pruned_qat_Rank1.onnx
#int8-calib-file=../../../surveillance_ai_model/x86_64/Vehicle_Detection/vehicle_detection_label.txt
#model-engine-file=../../../surveillance_ai_model/x86_64/Vehicle_Detection/VehicleDetection_Res18_ReTrained_2n4_V1.0_int8.etlt_b2_gpu0_fp16.engine
labelfile-path=../../../surveillance_ai_model/x86_64/Vehicle_Detection/vehicle_detection_label.txt

batch-size=1
process-mode=1
model-color-format=0
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=1
num-detected-classes=4
#interval=25
gie-unique-id=20
cluster-mode=1
output-blob-names=output_bbox/BiasAdd;output_cov/Sigmoid

#[class-attrs-all]
#pre-cluster-threshold=0.2
#group-threshold=1
## Set eps=0.7 and minBoxes for cluster-mode=1(DBSCAN)
#eps=0.2
#minBoxes=3

[class-attrs-0]
pre-cluster-threshold=0.35
#pre-cluster-threshold=1.0
detected-min-h=150
detected-min-w=150
detected-max-w=1050
detected-max-h=950
group-threshold=1
eps=0.7
minBoxes=1

[class-attrs-1]
#detected-max-w=500
#detected-max-h=450
pre-cluster-threshold=0.25
eps=0.7
group-threshold=1
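
(Aside: if the encrypted .etlt file were used directly, the usual nvinfer properties would be the commented-out pair above instead of onnx-file. A sketch, assuming the key is tlt_encode and the .etlt path from the directory listing below:

tlt-encoded-model=../../../surveillance_ai_model/x86_64/Vehicle_Detection/resnet18_detector_pruned_qat_Rank1.etlt
tlt-model-key=tlt_encode)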

My Model Directory

root@smarg:~/data/Ajeet_Gateway_Working/smarg/smarg_surveillance_prod/surveillance_ai_model/x86_64/Vehicle_Detection# ls
decode_etlt.py                          resnet18_detector_pruned_qat_Rank1.etlt  vehicle_detection_label.txt
resnet18_detector_pruned_qat_Rank1.bin  resnet18_detector_pruned_qat_Rank1.onnx

Can you open the onnx file successfully with Netron?
Please make sure this file is not corrupt.

I am not able to open the ONNX model in Netron. I suspect the ONNX file was not generated properly, because when I try to load and check it with Python, I get the following error:

import onnx
onnx_model_path = "example.onnx"
onnx_model = onnx.load(onnx_model_path)
onnx.checker.check_model(onnx_model)
print("ONNX model loaded successfully!")

Error:

Traceback (most recent call last):
  File "/root/data/convert_etlt_to_onnx/yolo.py", line 4, in <module>
    onnx_model = onnx.load(onnx_model_path)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/onnx/__init__.py", line 229, in load_model
    model = _get_serializer(format, f).deserialize_proto(_load_bytes(f), ModelProto())
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/onnx/serialization.py", line 121, in deserialize_proto
    decoded = typing.cast("Optional[int]", proto.ParseFromString(serialized))
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
google.protobuf.message.DecodeError: Error parsing message with type 'onnx.ModelProto'
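
(Aside: a quick way to confirm that the file is not an ONNX protobuf at all is to look at its first bytes; a diagnostic sketch, assuming xxd is installed and using the filename from the config above:

$ xxd resnet18_detector_pruned_qat_Rank1.onnx | head -n 2

A well-formed ONNX file is a serialized onnx.ModelProto. As explained in the reply below, the decoded payload of this .etlt is a UFF graph, so renaming it to .onnx cannot make it parse.)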

So, please give me step-by-step suggestions on how I can convert an .etlt model to ONNX properly.

Could you please double check? You can find examples from other users who converted .etlt to .onnx. For example: TAO .etlt to TensorRT Engine Conversion on Jetson Orin / WSL2 / Docker Failed.

You may also browse the search results at Search results for 'recipe #intelligent-video-analytics:tao-toolkit order:latest' - NVIDIA Developer Forums.

Thanks.

To be clear: your .etlt model is actually a detectnet_v2 network, so after decoding, the payload is a .uff file, not an .onnx file. For the detectnet_v2 network, you cannot convert NGC's .etlt file to an ONNX file.
Instead, download the .tlt model (note: not the .etlt model), then use the TAO 5.0 docker to export the .tlt file to an .onnx file.

Please use the TAO 5.0 TF1 docker: nvcr.io/nvidia/tao/tao-toolkit:5.0.0-tf1.15.5
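
A typical way to start that container might be (the volume mount is only an example; adjust it to wherever the .tlt model lives):

$ docker run --runtime=nvidia -it --rm -v /local/models:/workspace/models nvcr.io/nvidia/tao/tao-toolkit:5.0.0-tf1.15.5 /bin/bash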

$ detectnet_v2 export --model /path/to/model.tlt --key nvidia_tlt --output /path/to/model.onnx
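
After the export completes, the resulting file should pass the ONNX check that failed earlier (the path is a placeholder):

$ python -c "import onnx; onnx.checker.check_model(onnx.load('/path/to/model.onnx'))"

If this prints nothing, the model is well-formed and should also open in Netron.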

Similar topics:
Fail to load onnx model after conversion from .etlt - #5 by Morganh
Issue Converting ResNet18 ETLT to ONNX - Verification Fails with UTF-8 Error
