Tao-converter [ERROR] Failed to parse the model, please check the encoding key to make sure its correct

Please provide the following information when requesting support.

• Hardware (Jetson Nano 4 GB, RTX 3080)
• Network Type (DetectNet_v2)
• DeepStream Version (5.1)
• TLT Version (docker nvidia/tao/tao-toolkit-tf:v3.22.05-tf1.15.5-py3, docker_registry: nvcr.io, format_version: 2.0, toolkit_version: 3.22.05, published_date: 05/25/2022)

Hi there.
Further to an earlier issue with the tao-converter command, where I needed to “chmod +x” it, I get this set of errors when I attempt to use it to generate the engine:

[Screenshot of the tao-converter [ERROR] output]
I have exported the NGC API key and checked that it is in the nano with “echo $KEY”. It is the same key that I used to make the detectnet_v2 .etlt that is referred to in the bash command.
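
For reference, the failing command has roughly this shape (a sketch: the output nodes are the usual DetectNet_v2 pair, the engine filename is arbitrary, and the dims are placeholders):

# dims are C,H,W for the network input
./tao-converter -k $KEY \
  -d 3,<height>,<width> \
  -o output_cov/Sigmoid,output_bbox/BiasAdd \
  -e resnet50_detector.engine \
  resnet50_detector.etlt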

Is the issue with the key, or is it actually something else?
Please advise

For this kind of error, please make sure:

  • The key is correct. You can also replace $KEY with the actual key, typed explicitly (see the sketch after this list).
  • The .etlt model was trained with this key.
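
For example (a sketch; the key value here is invented):

# instead of passing a shell variable:
./tao-converter -k $KEY -d <C,H,W> -o <output_nodes> model.etlt
# paste the key value itself:
./tao-converter -k aB3dEfGh1JkLmN0p -d <C,H,W> -o <output_nodes> model.etlt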

Hi Morganh.

  • The key is correct: it is 84 characters long and comprises upper- and lower-case letters as well as digits, so I always copy and paste it from a safe document to eliminate the possibility of human error when transcribing.
    (I have tried putting the full key into the command line, in place of “$KEY”, and I get the same set of [ERROR]s.)

  • It is the only key I possess, and it has been used throughout this project (and earlier ones) to train and test various models, including the current .etlt model.

(BTW, I noticed an error in the -d dims and have corrected them to “3,704,1824”, with no change to the result)

Please advise

To narrow this down, I suggest you download the PeopleNet model:
wget 'https://api.ngc.nvidia.com/v2/models/nvidia/tao/peoplenet/versions/trainable_v2.6/files/resnet34_peoplenet.tlt'

and run a similar command to check whether it works.
Please note that its input is 960x544 (width x height), i.e. -d 3,544,960 in C,H,W order.

Here is the result:

[Screenshot of the resulting output]
The PeopleNet model is a .tlt, so it is not encrypted.
I used -k with the NGC API key, since it is a required argument.

Please advise

Please use this one: PeopleNet | NVIDIA NGC
wget 'https://api.ngc.nvidia.com/v2/models/nvidia/tao/peoplenet/versions/pruned_v2.3/files/resnet34_peoplenet_pruned.etlt'

key: tlt_encode
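
Putting those together, a conversion command along these lines should work (a sketch; the engine filename is arbitrary, and output_cov/Sigmoid,output_bbox/BiasAdd are the standard output nodes for the DetectNet_v2 family, of which PeopleNet is a member):

# 960x544 (width x height) is 3,544,960 in C,H,W order
./tao-converter -k tlt_encode \
  -d 3,544,960 \
  -o output_cov/Sigmoid,output_bbox/BiasAdd \
  -e resnet34_peoplenet_pruned.engine \
  resnet34_peoplenet_pruned.etlt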

Hi Morganh,
The tao-converter took about 10 minutes. Here is the bash output:

[Screenshot of the bash output]
Please confirm:

  1. Was the TensorRT engine conversion successful?
  2. The documentation states that the default value for -w is 1073741824 (1<<30). What would you suggest I change this to in order to appropriately increase the workspace size (on a 4 GB Nano) for my experiment?
  3. Does this mean that I need to re-export my model with my key and then repeat the tao-converter process?

Please advise

Yes. You can find the generated engine file.
If you want to increase -w, you can change it. By default, it is 1 GB (1073741824); see the sketch below.
Since you can generate the PeopleNet engine successfully, the command and tao-converter have no issue. For your own model, please re-export it with your key and rerun tao-converter.
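
For example (a sketch; whether a larger workspace actually helps depends on the network, and on a 4 GB Nano the GPU shares that memory with the system, so headroom is limited):

# default workspace is 1073741824 bytes (1<<30, i.e. 1 GB); this requests 2 GB (1<<31)
./tao-converter -k $KEY \
  -d 3,704,1824 \
  -o output_cov/Sigmoid,output_bbox/BiasAdd \
  -w 2147483648 \
  -e resnet50_detector.engine \
  resnet50_detector.etlt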

Thank you Morganh.

Please may I ask a further (and related) question.
So far I have always used my NGC API key wherever “-k” is a required argument.
Does this also apply to the export task?
Or am I defining the encryption key for the .etlt?
In that case, can I choose the key myself?
This is not clear in the documentation.
Thank you

It is fine to set the key to any value, whether it is your NGC key or something else you choose.
But make sure that if you train a model using one key (for example, 123), you perform the other actions (export, retrain, etc.) with that same key (123), as in the sketch below.
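
As a sketch of that consistency (the paths here are hypothetical; note that for detectnet_v2 export the -e argument is the experiment spec, unlike tao-converter, where -e is the engine path):

# export with the same key used for training...
tao detectnet_v2 export \
  -m /workspace/results/weights/resnet50_detector.tlt \
  -o /workspace/export/resnet50_detector.etlt \
  -k 123 \
  -e /workspace/specs/detectnet_v2_train.txt

# ...then convert on the Jetson with that same key
./tao-converter -k 123 \
  -d 3,704,1824 \
  -o output_cov/Sigmoid,output_bbox/BiasAdd \
  -e resnet50_detector.engine \
  resnet50_detector.etlt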

Hi again.

Rather than change the key and rerun the entire project, I have kept using my NGC API key. I exported the model successfully as a .etlt file: this means the key is correct at the export stage (doesn’t it?).
However, on the Nano, tao-converter gives the same [ERROR]s as at the start of this post.
I cannot see how the key can be correct for the export and then, minutes later, incorrect for tao-converter. I have tried entering it in full, and also exporting it as KEY (checked with “echo $KEY”) and then using “$KEY” after “-k”.
Please advise

You need to use .etlt instead of .tlt.
tao-converter generates the TensorRT engine from the .etlt model.

Let me rephrase my last reply.

Here is the screenshot:

[Screenshot of the tao-converter command and output]
I used “.etlt”
Please advise

The above is the result of a successful run with the PeopleNet .etlt.
Can you leverage that and try again with your own .etlt?

I am struggling with the logic here.
resnet34_peoplenet_pruned.etlt was encrypted using the key you supplied and was converted to a .engine when the same key was used after “-k”: -k tlt_encode.
resnet50_detector.etlt was encrypted using the key that had been used throughout training and evaluation, and that successfully exported a .etlt file; however, that same key does not allow tao-converter to parse the model.
I do not understand what part of the peoplenet result I can leverage. I have triple-checked that $KEY on the nano matches $KEY on the machine used for training. There are no issues with autocorrect functions like autocapitalisation.
I cannot simply keep running the same command with the same key, whether copy/pasted or $KEYed, and expect to get a different result. Is it possible that the issue might lie elsewhere?

[Quoted screenshot of the command line]
Why is there input_file here? The command line should not contain it.

I removed “input_file”, but I get the same [ERROR]s.

" Why there is input_file here? The command line should not contain it."
In “Using the tao-converter”, under “Required Arguments”, the syntax treats “input_file” in the same way as “-k”, “-d” and “-o”. I must admit that the sample log doesn’t use it, so apologies for that.
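
For what it’s worth, I now take input_file to be a placeholder for the model path itself, passed as a trailing positional argument rather than typed literally, i.e. something like:

./tao-converter -k $KEY -d 3,704,1824 -o output_cov/Sigmoid,output_bbox/BiasAdd -e resnet50_detector.engine resnet50_detector.etlt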

I suggest you run “export” again. To avoid unknown issues, please export with the key given explicitly. For example, if your key is 123, then use 123 instead of $KEY.

After exporting, use the same explicit key in the tao-converter command line as well.

It says the key is invalid.
Can I salvage anything from this project?
Please advise

Please use the same key that you used when you ran training.