[TRT]: UffParser: Unsupported number of graph 0 parseModel: Failed to parse UFF model

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) RTX 2070
• DeepStream Version 5.0
• JetPack Version (valid for Jetson only)
• TensorRT Version 7.0
• NVIDIA GPU Driver Version (valid for GPU only) 440.82

I have trained a classifier using the latest TLT, following the example step by step with the preset classification_spec.cfg.

After training and pruning, I have the .etlt file to be deployed with deepstream-app as a secondary classifier (sgie-0):
    [property]
    gpu-id=0
    net-scale-factor=1
    offsets=124;117;104
    tlt-model-key=tlt_encode
    tlt-encoded-model=../samples/models/mask/final_model_mask.etlt
    labelfile-path=../samples/models/mask/labels.txt
    input-dims=3;224;224;0
    uff-input-blob-name=input_1
    batch-size=1
    process-mode=2
    model-color-format=0
    ## 0=FP32, 1=INT8, 2=FP16 mode
    network-mode=2
    network-type=1
    num-detected-classes=2
    interval=0
    gie-unique-id=6
    output-blob-names=predictions/Softmax
    classifier-threshold=0.2
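One thing worth checking with a config like this: nvinfer resolves relative paths such as `tlt-encoded-model` against the directory containing the config file, not the current working directory, and a missing or unreadable .etlt file can surface as a UFF parsing error. A minimal sketch to sanity-check the resolved path (the `/tmp/ds_demo` location is hypothetical; substitute the directory that actually holds your config.txt):

```shell
# Hypothetical directory holding config.txt; replace with your real one.
CONFIG_DIR="/tmp/ds_demo"
# Relative model path exactly as written in the [property] section above.
MODEL_REL="../samples/models/mask/final_model_mask.etlt"

mkdir -p "$CONFIG_DIR"
# Resolve the model path the same way nvinfer does: against CONFIG_DIR.
MODEL_ABS="$CONFIG_DIR/$MODEL_REL"
if [ -f "$MODEL_ABS" ]; then
    echo "model found: $MODEL_ABS"
else
    echo "model NOT found: $MODEL_ABS"
fi
```

If this prints "model NOT found", fix the path (or make it absolute) before digging into the parser error itself.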

But when I run ./deepstream-app -c config.txt, I get the following error:

libEGL warning: DRI2: failed to authenticate
0:00:01.694985809  9923 0x562b126d14c0 INFO                 nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 6]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1591> [UID = 6]: Trying to create engine from model files
ERROR: ../nvdsinfer/nvdsinfer_func_utils.cpp:31 [TRT]: UffParser: Unsupported number of graph 0
parseModel: Failed to parse UFF model
ERROR: tlt/tlt_decode.cpp:274 failed to build network since parsing model errors.
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:797 Failed to create network using custom network creation function
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:862 Failed to get cuda engine from custom library API
0:00:01.881709525  9923 0x562b126d14c0 ERROR                nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 6]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1611> [UID = 6]: build engine file failed
Segmentation fault (core dumped)

I need help with this deployment issue.

I also see this message during the INT8 optimization step:

NOTE: UFF has been tested with TensorFlow 1.14.0.
WARNING: The version of TensorFlow installed on this system is not guaranteed to work with UFF.

I wonder whether the TensorFlow version in my TLT2 docker is slightly outdated, as I see it is 1.13.1.

Please set tlt-model-key correctly with your own API key.

Please, I have the same problem. How can I get the tlt-model-key? The documentation at “https://ngc.nvidia.com/catalog/models/nvidia:tlt_peoplenet” says tlt-model-key = tlt_encode?
Thanks

Hi sylia,

Please open a new topic for your issue. Thanks

The problem was solved; it was a file access path issue. Thank you.
