tlt-converter run on Jetson Nano with TensorRT 7.1 produces errors

I downloaded the PeopleNet model from NVIDIA NGC, then ran tlt-converter to generate the engine.
export API_KEY= 'tlt_encode'
export OUTPUT_NODES=output_bbox/BiasAdd,output_cov/Sigmoid
export INPUT_DIMS=3,960,544
export D_TYPE=fp32
export MODEL_PATH=resnet18_peoplenet.tlt

The error:
xxx:~/PeopleNet$ ./tlt-converter -k $API_KEY -o $OUTPUT_NODES -d $INPUT_DIMS $MODEL_PATH
[ERROR] UffParser: Unsupported number of graph 0
[ERROR] Failed to parse the model, please check the encoding key to make sure it’s correct
[ERROR] Network must have at least one output
[ERROR] Network validation failed.
[ERROR] Unable to create engine
Segmentation fault (core dumped)

Please modify the above to:

export API_KEY=tlt_encode
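
To confirm the variable holds exactly the key, with no stray quotes or spaces, a quick check:

echo "[$API_KEY]"   # should print [tlt_encode]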

I have tried it, but I get the same error.

Refer to How to run tlt-converter

$ ./tlt-converter resnet18_peoplenet_pruned.etlt -k tlt_encode -c resnet18_peoplenet_int8.txt -o output_cov/Sigmoid,output_bbox/BiasAdd -d 3,544,960 -i nchw -e peoplenet_int8.engine -m 64 -t int8 -b 64
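
Note that this command builds an INT8 engine (-t int8, with a calibration cache via -c). The Jetson Nano GPU (compute capability 5.3) does not support INT8, so on the Nano build an fp16 engine instead. A sketch, with the engine file name and batch sizes as assumptions:

./tlt-converter resnet18_peoplenet_pruned.etlt -k tlt_encode -o output_cov/Sigmoid,output_bbox/BiasAdd -d 3,544,960 -i nchw -e peoplenet_fp16.engine -m 16 -t fp16 -b 8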

I have converted PeopleNet correctly. Now I want to convert my own model (based on DetectNet_v2).
The new error:
./tlt-converter resnet18_detector.etlt -k aDl*************Y0 -o output_cov/Sigmoid,output_bbox/BiasAdd -d 3,720,1280 -i nchw -e peoplenet_fp16.engine -m 4 -t fp16 -b 2

[ERROR] UffParser: Could not parse MetaGraph from /tmp/filewQOHoP
[ERROR] Failed to parse the model, please check the encoding key to make sure it’s correct
[ERROR] Network must have at least one output
[ERROR] Network validation failed.
[ERROR] Unable to create engine
Segmentation fault (core dumped)

The model was trained in TLT on a 2080 Ti GPU with TensorRT 7.0, and I can run tlt-converter on that PC correctly, but when I try to convert this model on the Jetson Nano, the error above occurs.
Please help!

Did you use your own key during training?
Normally, this error results from the NGC key.
See more in TLT Converter UffParser: Unsupported number of graph 0 - #4 by Morganh

The error: UffParser: Could not parse MetaGraph from /tmp/filewQOHoP

Please focus on "please check the encoding key to make sure it's correct".
The error should be related to the NGC key.

For the error, please check:

  1. The $KEY was actually set when you trained the .etlt model, and that it is correct.
  2. The key used when running tlt-converter is correct. It must be exactly the same as the one used in the TLT training phase.
  3. The .etlt model file is available (a verification sketch follows this list).
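
A quick way to verify all three points on the training machine (a sketch; file names are assumptions, and the tlt-export flags follow the TLT 2.0 DetectNet_v2 workflow):

echo "[$KEY]"                  # print the exact key; watch for stray quotes or spaces
ls -l resnet18_detector.etlt   # confirm the .etlt file exists and is non-empty
# Re-export the .tlt checkpoint with the same key to rule out a key mismatch:
tlt-export detectnet_v2 -m resnet18_detector.tlt -k $KEY -o resnet18_detector.etlt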

Reference topic:
https://devtalk.nvidia.com/default/topic/1067539/transfer-learning-toolkit/tlt-converter-on-jetson-nano-error-/
https://devtalk.nvidia.com/default/topic/1065680/transfer-learning-toolkit/tlt-converter-uff-parser-error/?offset=11#5397152

The model is from tlt-train with TensorRT 7.0, but we used tlt-converter 7.1. Does this have any influence?

No, it does not.

To narrow this down, you can train a new model for only 1 epoch with your own key, then run tlt-converter again.
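
A sketch of that narrowed-down workflow (spec and file names are assumptions; in DetectNet_v2 the epoch count is set in the training spec, not on the command line):

# In the training spec: training_config { num_epochs: 1 }
tlt-train detectnet_v2 -e detectnet_v2_spec.txt -r ./results -k $KEY -n resnet18_detector
tlt-export detectnet_v2 -m ./results/weights/resnet18_detector.tlt -k $KEY -o resnet18_detector.etlt
# Copy the .etlt to the Nano, then:
./tlt-converter resnet18_detector.etlt -k $KEY -o output_cov/Sigmoid,output_bbox/BiasAdd -d 3,720,1280 -i nchw -e test_fp16.engine -m 4 -t fp16 -b 2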