UffParser: Unsupported number of graph 0

0:00:10.152480524 5386 0x248db80 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1716> [UID = 1]: Trying to create engine from model files
ERROR: …/nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: UffParser: Unsupported number of graph 0
parseModel: Failed to parse UFF model
ERROR: tlt/tlt_decode.cpp:274 failed to build network since parsing model errors.
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:797 Failed to create network using custom network creation function
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:862 Failed to get cuda engine from custom library API
0:00:10.386728833 5386 0x248db80 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1736> [UID = 1]: build engine file failed
Segmentation fault (core dumped)

The above error occurs even after supplying the correct encoding KEY.

Can you share the full command and config file?

CONFIG FILE…
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
#0=RGB, 1=BGR
model-color-format=0

tlt-encoded-model=models/yolov4_cspdarknet_tiny_epoch_210_af.etlt
tlt-model-key=nvidia_tlt

int8-calib-file=models/

#model-engine-file=models/
labelfile-path=models/labels.txt

infer-dims=3;544;960
#uff-input-blob-name=Input
#output-blob-names=output_cov/Sigmoid;output_bbox/BiasAdd

maintain-aspect-ratio=1
uff-input-order=0
uff-input-blob-name=Input

# 0=FP32, 1=INT8, 2=FP16 mode

network-mode=0
num-detected-classes=7
gie-unique-id=1

# 0=Group Rectangles, 1=DBSCAN, 2=NMS, 3=DBSCAN+NMS Hybrid, 4=None (no clustering)

cluster-mode=2

output-blob-names=BatchedNMS
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
custom-lib-path=models/libnvds_infercustomparser.so
#maintain-aspect-ratio=0

[class-attrs-all]
nms-iou-threshold=0.5
pre-cluster-threshold=0.2
post-cluster-threshold=0.1
#group-threshold=1
eps=0.3
#detected-max-w=500

###########

python3 deepstream_xavier_test_stable.py file:///opt/nvidia/deepstream/deepstream-5.1/samples/deepstream_python_apps/deepstream-nvdsanalytics/sample_videos/test.mp4
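
For reference, the script attaches this config to the nvinfer element via its config-file-path property, along these lines (the element name and config filename here are placeholders, following the deepstream_python_apps samples):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
# primary inference element; it reads the [property] group from the config above
pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
pgie.set_property("config-file-path", "config_infer_primary_yolov4.txt")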

Please check

  • if the key is correct
  • if models/yolov4_cspdarknet_tiny_epoch_210_af.etlt is available
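
For example, a quick sanity check from the app's working directory (the config filename is a placeholder for whatever your nvinfer config is called):

ls -lh models/yolov4_cspdarknet_tiny_epoch_210_af.etlt
grep -n 'tlt-model-key' config_infer_primary_yolov4.txt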

Please follow GitHub - NVIDIA-AI-IOT/deepstream_tao_apps (sample apps to demonstrate how to deploy models trained with TAO on DeepStream).

I have followed it, and the model is also present

I am using DeepStream-5.1; I have used 5.1 for TAO models before.
It is only the YOLO models that are causing trouble.

Please use the official YOLO models and retry. Download them via the guide on GitHub.

But why the official models? If only those work, there is no point in training custom models.

This is to narrow down the issue.
If the official model + key (nvidia_tlt) works,
but your model + key (nvidia_tlt) does not, there is likely something mismatched in your model.
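
Concretely, only the model line in the [property] group needs to change for this test; the filename below is a placeholder for wherever you save the downloaded official model:

# swap in the official model for the A/B test, keep everything else identical
tlt-encoded-model=models/yolov4_official.etlt
tlt-model-key=nvidia_tlt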

Same Issue…

0:00:10.183842114 1194 0x3539180 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1716> [UID = 1]: Trying to create engine from model files
ERROR: …/nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: UffParser: Unsupported number of graph 0
parseModel: Failed to parse UFF model
ERROR: tlt/tlt_decode.cpp:274 failed to build network since parsing model errors.
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:797 Failed to create network using custom network creation function
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:862 Failed to get cuda engine from custom library API
0:00:10.450626540 1194 0x3539180 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1736> [UID = 1]: build engine file failed
Segmentation fault (core dumped)

ResNet models are not causing any problem.
It is only the YOLO models…

Do you mean the issue still happens when you run the official YOLO models?

YES, the issue persists…

That is not expected. Other users can run the official models without any problem.

One more thing to check: please verify that there are no additional characters after “nvidia_tlt”. Space characters are not expected.
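
One way to make stray characters visible is to print the key line with non-printing characters shown; cat -A marks each line end with $ (the config filename is a placeholder):

grep 'tlt-model-key' config_infer_primary_yolov4.txt | cat -A

The output should end directly with nvidia_tlt$, with nothing between the key and the $.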

You mean other users can run it with DeepStream-5.1?
And there is no stray space; I have checked that too.

Also, are there any changes to be made in the CONFIG FILE besides the KEY?

You can run with an older branch: GitHub - NVIDIA-AI-IOT/deepstream_tao_apps at release/tao3.0.
Download the official models and retry.
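
For example (the model download step depends on that branch's README):

git clone -b release/tao3.0 https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps.git
cd deepstream_tao_apps
# then download the official models as described in this branch's README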

Did that too… I am still facing the same error. How do I solve it?

I am sharing forum search results for this error:
https://forums.developer.nvidia.com/search?expanded=true&q=parseModel:%20Failed%20to%20parse%20UFF%20model%20%20%23intelligent-video-analytics:tao-toolkit%20

Could you have a look and check whether any of them help?