tlt-converter ERROR: UffParser: Unsupported number of graph 0

I was trying to convert my .etlt model into a TensorRT engine with the TLT launcher from classification.ipynb, but I encountered these errors:


!tlt tlt-converter $USER_EXPERIMENT_DIR/export/final_squeezenet_model.etlt \
                   -k $KEY \
                   -o predictions/Softmax \
                   -d 3,224,224 \
                   -i nchw \
                   -m 256 \
                   -t fp32 \
                   -e $USER_EXPERIMENT_DIR/export/squeezenet_fp32.engineer \
                   -b 64

2021-08-08 20:52:04,233 [INFO] root: Registry: ['nvcr.io']
[ERROR] UffParser: Unsupported number of graph 0
[ERROR] Failed to parse the model, please check the encoding key to make sure it's correct
[ERROR] Network must have at least one output
[ERROR] Network validation failed.
[ERROR] Unable to create engine
2021-08-08 20:52:19,019 [INFO] tlt.components.docker_handler.docker_handler: Stopping container.


What is wrong, and how do I deal with it? Thanks a lot.

Make sure

  • The key is correct.
  • The path is the path inside docker, and the .etlt file exists.
    You can run the command below to check (see also the sketch after this list):
    $ tlt classification run ls $USER_EXPERIMENT_DIR/export/final_squeezenet_model.etlt
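For a broader check, something like this should also work (illustrative; it assumes $USER_EXPERIMENT_DIR is set the same way as in the notebook) to list everything the container sees in the export directory:

$ tlt classification run ls -l $USER_EXPERIMENT_DIR/export/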

Thanks for the quick reply.

  1. The key is correct. I have only one key, and it is correct. Errors such as “Network must have at least one output” suggest that the .etlt file exists, yet “Network validation failed”. What is network validation?
  2. I do not know whether the path is inside docker. The tlt commands such as “tlt classification” run a container as a black box. Many errors can occur inside this black box, such as missing files. Can we attach to the container to locate the missing files? Just guessing is not enough to find the errors.
    How should I deal with this? Thanks.

All the paths are defined in your ~/.tlt_mounts.json.
If you want to debug, you can log in to the docker container directly:
$ tlt classification run /bin/bash
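For reference, a minimal ~/.tlt_mounts.json looks roughly like this (the host-side source path is only an example; use your own):

{
    "Mounts": [
        {
            "source": "/home/user/tlt-experiments",
            "destination": "/workspace/tlt-experiments"
        }
    ]
}

Each source directory on the host is mapped to the destination path inside the container, so any path you pass to a tlt command must be the in-container destination path.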

Thanks for the help. I entered the container and checked the file; it is all there. I tested the tlt-converter command inside the container as follows:


tlt-converter /workspace/tlt-experiments/classification/export/final_cspdarknet_model.etlt \
              -k 11122323242343453545453423…w342343454 \
              -o predictions/Softmax \
              -d 3,224,224 \
              -i nchw \
              -m 256 \
              -t fp32 \
              -e /workspace/tlt-experiments/classification/export/cspdarknet.engineer \
              -b 64


and the error seems slightly different:


[ERROR] UffParser: Could not parse MetaGraph from /tmp/fileSgs6qm
[ERROR] Failed to parse the model, please check the encoding key to make sure it's correct
[ERROR] Network must have at least one output
[ERROR] Network validation failed.
[ERROR] Unable to create engine
Segmentation fault (core dumped)


What is wrong?

Is the key correct? Is it the same key that was used to train and export the .tlt and .etlt files?

For more info, see the NVIDIA TAO Documentation.

Yes, it is on the same computer with the same key.

There is another error on another computer with the same KEY, run in a Jupyter notebook.


[ERROR] UffParser: Unsupported number of graph 0
[ERROR] Failed to parse the model, please check the encoding key to make sure it's correct
[ERROR] Network must have at least one output
[ERROR] Network validation failed.
[ERROR] Unable to create engine
Segmentation fault (core dumped)


Usually for this kind of error, please check:

  1. That $KEY was really set when you trained and exported the .etlt model, and that it is correct.
  2. That the key you pass to tlt-converter is correct. It must be exactly the same key used in the TLT training phase (see the sketch after this list).
  3. That the .etlt model file is available.
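As a sketch of what that means in practice, assuming the standard classification workflow from the notebook (arguments abbreviated; spec file and paths are illustrative), the same $KEY has to flow through all three stages:

# train: the .tlt model is encrypted with $KEY
tlt classification train -e $SPECS_DIR/classification_spec.cfg \
                         -r $USER_EXPERIMENT_DIR/output -k $KEY

# export: the same $KEY produces the .etlt file
tlt classification export -m $USER_EXPERIMENT_DIR/output/weights/final_model.tlt \
                          -o $USER_EXPERIMENT_DIR/export/final_model.etlt -k $KEY

# convert: the same $KEY again decodes the .etlt file
tlt tlt-converter $USER_EXPERIMENT_DIR/export/final_model.etlt -k $KEY \
                  -o predictions/Softmax -d 3,224,224 \
                  -e $USER_EXPERIMENT_DIR/export/final_model.engine

If any stage uses a different key, the converter cannot decode the file, and you get exactly the UffParser / “Network must have at least one output” errors shown above.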

Reference topics:
https://devtalk.nvidia.com/default/topic/1067539/transfer-learning-toolkit/tlt-converter-on-jetson-nano-error-/
https://devtalk.nvidia.com/default/topic/1065680/transfer-learning-toolkit/tlt-converter-uff-parser-error/?offset=11#5397152
