[TRT]: UffParser: Unsupported number of graph 0 parseModel: Failed to parse UFF model

Hello,

Hello, I am working on people detection and counting with the DeepStream 5.0 SDK and TLT. I followed this tutorial, “https://ngc.nvidia.com/catalog/models/nvidia:tlt_peoplenet”, but I encountered the errors below when I run
"deepstream-app -c /opt/nvidia/deepstream/deepstream-5.0/samples/configs/tlt_pretrained_models/deepstream_app_source1_peoplenet.txt "

ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:1523 Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-5.0/samples/configs/tlt_pretrained_models/…/…/models/tlt_pretrained_models/peoplenet/resnet34_peoplenet_pruned.etlt_b1_gpu0_fp16.engine open error
0:00:00.408771938 22470 0x562e97fda0a0 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1690> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-5.0/samples/configs/tlt_pretrained_models/…/…/models/tlt_pretrained_models/peoplenet/resnet34_peoplenet_pruned.etlt_b1_gpu0_fp16.engine failed
0:00:00.408812849 22470 0x562e97fda0a0 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1797> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-5.0/samples/configs/tlt_pretrained_models/…/…/models/tlt_pretrained_models/peoplenet/resnet34_peoplenet_pruned.etlt_b1_gpu0_fp16.engine failed, try rebuild
0:00:00.408822007 22470 0x562e97fda0a0 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 1]: Trying to create engine from model files
ERROR: …/nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: UffParser: Unsupported number of graph 0
parseModel: Failed to parse UFF model
ERROR: tlt/tlt_decode.cpp:274 failed to build network since parsing model errors.
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:797 Failed to create network using custom network creation function
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:862 Failed to get cuda engine from custom library API
0:00:00.657500869 22470 0x562e97fda0a0 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1735> [UID = 1]: build engine file failed
corrupted size vs. prev_size
Aborted (core dumped)

• Hardware Platform: PC/GPU
• DeepStream Version: 5.0
• TensorRT Version: 7.0

Hi,

ERROR: … resnet34_peoplenet_pruned.etlt_b1_gpu0_fp16.engine open error

It looks like the engine file is missing, which leads to this error.
Did you update the model information in config_infer_primary_peoplenet.txt based on your environment?

tlt-model-key=
tlt-encoded-model=
labelfile-path=
int8-calib-file=
input-dims=
num-detected-classes=
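
As a reference, a filled-in [property] section might look like the sketch below. The paths and values are examples only and must be adapted to where you downloaded the PeopleNet model in your environment; the key tlt_encode and the 3;544;960 input shape follow the PeopleNet model card on NGC, but please verify them against the version you are using.

```ini
# Hypothetical example values - adjust paths to your own setup
tlt-model-key=tlt_encode
tlt-encoded-model=../../models/tlt_pretrained_models/peoplenet/resnet34_peoplenet_pruned.etlt
labelfile-path=labels_peoplenet.txt
int8-calib-file=../../models/tlt_pretrained_models/peoplenet/resnet34_peoplenet_int8.txt
input-dims=3;544;960;0
num-detected-classes=3
```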

Thanks.

Thanks for the response. Is the correct value “tlt-model-key=tlt_encode”, or is it another key? And where can I find “resnet34_peoplenet_pruned.etlt_b1_gpu0_fp16.engine”?

Hi,

resnet34_peoplenet_pruned.etlt_b1_gpu0_fp16.engine is the serialized TensorRT engine file.
You can create it with tlt-converter. Please check the following document for detailed instructions:
https://docs.nvidia.com/metropolis/TLT/tlt-getting-started-guide/index.html#gen_eng_tlt_converter
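
For illustration, a tlt-converter invocation for this model could look like the sketch below. This is an assumption based on the TLT documentation for DetectNet_v2-based models, not a verified command for your setup; the key, input dimensions, and output node names (output_cov/Sigmoid, output_bbox/BiasAdd) should be checked against the PeopleNet model card before use.

```shell
# Hypothetical sketch - verify the key, dims, and output nodes for your model version
./tlt-converter resnet34_peoplenet_pruned.etlt \
    -k tlt_encode \
    -d 3,544,960 \
    -o output_cov/Sigmoid,output_bbox/BiasAdd \
    -t fp16 \
    -m 1 \
    -e resnet34_peoplenet_pruned.etlt_b1_gpu0_fp16.engine
```

Alternatively, DeepStream can build the engine itself on first run if the .etlt model, key, and input dimensions in the config file are correct, as the log above shows it attempting.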

Thanks.

ERROR: …/nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: UffParser: Unsupported number of graph 0
parseModel: Failed to parse UFF model

Hi harsh.vijay,
Not sure what the issue is; please open a new topic.