Deepstream PGIE config error, NvDsInfer Error: NVDSINFER_CONFIG_FAILED

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) Jetson AGX Orin
• DeepStream Version 6.2
• JetPack Version (valid for Jetson only) 5.1
• TensorRT Version 8.5.2.2
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs; include which sample app is used, the configuration file content, the command line used, and other details for reproducing.)
• Requirement details (This is for a new requirement; include the module name, i.e. which plugin or which sample application, and the function description.)

Hi, I got an error while converting an ONNX model into a TensorRT engine file.

The ONNX file is a segmentation model with three outputs; one of them is the segmentation mask, and the other two can be ignored.

The PGIE config file I use is the following.

[property]
gpu-id=0
net-scale-factor=1.0
model-color-format=0

model-engine-file=../model/segmentation_model.engine
onnx-file=../model/segmentation_model.onnx
infer-dims=3;1024;1024

uff-input-order=0
uff-input-blob-name=input
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
num-detected-classes=30
interval=0
gie-unique-id=2

network-type=2
output-blob-names=output
segmentation-threshold=0.0
batch-size=1

[class-attrs-all]
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0

The error I got is

Config file path: config/pgie_config_1.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED

The infer-dims and uff-input-blob-name values are correct, but output-blob-names seems to be the problem.

I set it to the name of the segmentation-mask output (one of the three outputs), but the engine build still fails.

Is it not possible to use a multi-output segmentation model in a DeepStream pipeline?

Yes, it’s possible. Could you attach more logs?

Thanks for the reply yuweiw!
Here are more logs.

 <nvdsinfer_context_impl.cpp:1923> [UID = 2]: Trying to create engine from model files
WARNING: [TRT]: onnx2trt_utils.cpp:375: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
ERROR: [TRT]: 4: [graphShapeAnalyzer.cpp::analyzeShapes::1872] Error Code 4: Miscellaneous (IShuffleLayer Unsqueeze_69: reshape changes volume. Reshaping [1,128,128] to [8,1,128,128].)
ERROR: Build engine failed from config file
ERROR: failed to build trt engine.
0:00:07.804872823  2596      0x8748900 ERROR                nvinfer gstnvinfer.cpp:674:gst_nvinfer_logger:<nvinfer_1> NvDsInferContext[UID 2]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1943> [UID = 2]: build engine file failed
0:00:07.978689195  2596      0x8748900 ERROR                nvinfer gstnvinfer.cpp:674:gst_nvinfer_logger:<nvinfer_1> NvDsInferContext[UID 2]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2029> [UID = 2]: build backend context failed
0:00:07.978740107  2596      0x8748900 ERROR                nvinfer gstnvinfer.cpp:674:gst_nvinfer_logger:<nvinfer_1> NvDsInferContext[UID 2]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1266> [UID = 2]: generate backend failed, check config file settings
0:00:07.978861483  2596      0x8748900 WARN                 nvinfer gstnvinfer.cpp:888:gst_nvinfer_start:<nvinfer_1> error: Failed to create NvDsInferContext instance
0:00:07.978890155  2596      0x8748900 WARN                 nvinfer gstnvinfer.cpp:888:gst_nvinfer_start:<nvinfer_1> error: Config file path: config/pgie_config_1.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED

The path is correct, since the same config file works fine with a UNet model.

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks

There are some issues with your model's reshape operations. Could you describe how your model was generated? If convenient, you could also attach your ONNX model.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.