Failed to load onnx model after conversion from .etlt

Please provide the following information when requesting support.

• Hardware - GeForce RTX 3050
• Network Type - FaceDetect
• TLT Version - nvcr.io/nvidia/tao/tao-toolkit:5.0.0-tf1.15.5
• Training spec file (If you have one, please share it here)
• How to reproduce the issue? (This is for errors. Please share the command line and the detailed log here.)

Steps to reproduce

  1. Download FaceDetect pruned_v2.0 from the FaceDetect page on NVIDIA NGC
  2. Create decode_etlt.py for the .etlt-to-.onnx conversion:
import argparse
import struct
from nvidia_tao_tf1.encoding import encoding

parser = argparse.ArgumentParser(description='ETLT Decode Tool')
parser.add_argument('-m',
                    '--model',
                    type=str,
                    required=True,
                    help='Path to the etlt file.')
parser.add_argument('-o',
                    '--uff',
                    required=True,
                    type=str,
                    help='The path to the uff file.')
parser.add_argument('-k',
                    '--key',
                    required=True,
                    type=str,
                    help='encryption key.')
args = parser.parse_args()
print(args)

with open(args.uff, 'wb') as temp_file, open(args.model, 'rb') as encoded_file:
    # The .etlt header stores the byte length of the input node name,
    # followed by the name itself; read past both before decoding.
    size = encoded_file.read(4)
    size = struct.unpack("<i", size)[0]
    input_node_name = encoded_file.read(size)
    # Decrypt the remaining payload with the provided key.
    encoding.decode(encoded_file, temp_file, args.key.encode())
print("Decoded successfully.")
  3. docker run --runtime=nvidia -it --rm -v <local_dir>:<mapped_dir> nvcr.io/nvidia/tao/tao-toolkit:5.0.0-tf1.15.5 /bin/bash
  4. python decode_etlt.py -m model.etlt -o model.onnx -k nvidia_tlt
  5. Exit the docker container
  6. Load the onnx model
import onnx

onnx_model = onnx.load("/home/tao_tutorials/model.onnx")

Error: onnx.load fails; the decoded file cannot be parsed as an ONNX model.
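For completeness, wrapping the load in onnx's checker makes the failure explicit; a minimal sketch using the same path as above:

import onnx

try:
    onnx_model = onnx.load("/home/tao_tutorials/model.onnx")
    onnx.checker.check_model(onnx_model)
    print("Valid ONNX model.")
except Exception as exc:
    # onnx.load raises a protobuf DecodeError when the file is not ONNX
    print(f"Not a loadable ONNX model: {exc}")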

On NGC, the FaceNet .etlt model is trained with the DetectNet_v2 network. For DetectNet_v2 .etlt models, the decoded file is a .uff model, not an .onnx model.
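If you want to consume the decoded .uff directly, it can be parsed with TensorRT's UFF parser (deprecated, and removed in recent TensorRT releases). A minimal sketch; the node names and input shape are assumptions based on typical DetectNet_v2/FaceNet exports, so verify them against your model:

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(TRT_LOGGER)
network = builder.create_network()  # UFF parsing requires an implicit-batch network
parser = trt.UffParser()

# Assumed node names/shape -- check your model before relying on these.
parser.register_input("input_1", (3, 416, 736))
parser.register_output("output_bbox/BiasAdd")
parser.register_output("output_cov/Sigmoid")

if parser.parse("model.uff", network, trt.DataType.FLOAT):
    print("Parsed UFF model with", network.num_layers, "layers")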

Thank you for your prompt reply. I will then convert the .etlt to .uff using tao_deploy/internal/decrypt_onnx.py from the NVIDIA/tao_deploy repository on GitHub. Could you please share how to convert the .uff to .onnx?

Thanks!

You can download the trainable FaceNet .tlt model and use the TAO 5.0 TF1 docker to export it to ONNX. For example:
$ detectnet_v2 export --model /path/to/model.tlt --key nvidia_tlt --output /path/to/model.onnx
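After the export, the file should load cleanly as ONNX; for example (path is a placeholder):

import onnx

model = onnx.load("/path/to/model.onnx")
onnx.checker.check_model(model)
for inp in model.graph.input:
    print(inp.name)  # inspect input names/shapes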