NvDsInfer Error: NVDSINFER_CONFIG_FAILED for FasterRCNN

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) AGX Xavier
• DeepStream Version 6.3
• JetPack Version (valid for Jetson only) 5.1.2
• TensorRT Version 8.5.2

I have all the libraries installed as described in this discussion.

My primary config file is:

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
offsets=103.939;116.779;123.68
onnx-file=../../models/Rectitude/frcnn_resnet50.onnx
model-engine-file=../../models/Rectitude/frcnn_resnet50.onnx_b4_gpu0_fp16.engine
labelfile-path=../../models/Rectitude/labels.txt
batch-size=4
infer-dims=3;544;960 
uff-input-blob-name=input_image
uff-input-order=0
process-mode=1
model-color-format=1
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
num-detected-classes=2
interval=0
gie-unique-id=1
#output-blob-names=conv2d_bbox;conv2d_cov/Sigmoid
force-implicit-batch-dim=1
is-classifier=0
output-blob-names=NMS
parse-bbox-func-name=NvDsInferParseCustomNMSTLT
custom-lib-path=/opt/nvidia/deepstream/deepstream-6.3/sources/deepstream_tlt_apps/post_processor/libnvds_infercustomparser_tlt.so



[class-attrs-all]
pre-cluster-threshold=0.6
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0

Then I get the error NvDsInfer Error: NVDSINFER_CONFIG_FAILED.
The complete messages are as follows:

atic@ubuntu:/opt/nvidia/deepstream/deepstream-6.3/sources/apps/sample_apps/Rectitude$ ./deepstream-app -c ../../../../samples/configs/deepstream-app/rectitude_config_main.txt
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
[NvMultiObjectTracker] Initialized
0:00:00.527922085 36095 0xaaaacf157610 WARN                 nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1174> [UID = 1]: Warning, OpenCV has been deprecated. Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5
WARNING: Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app/../../models/Rectitude/frcnn_resnet50.onnx_b4_gpu0_fp16.engine open error
0:00:02.912284133 36095 0xaaaacf157610 WARN                 nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1976> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app/../../models/Rectitude/frcnn_resnet50.onnx_b4_gpu0_fp16.engine failed
0:00:02.992248664 36095 0xaaaacf157610 WARN                 nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2081> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app/../../models/Rectitude/frcnn_resnet50.onnx_b4_gpu0_fp16.engine failed, try rebuild
0:00:02.992404800 36095 0xaaaacf157610 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2002> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: The implicit batch dimension mode has been deprecated. Please create the network with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag whenever possible.
ERROR: [TRT]: ModelImporter.cpp:731: ERROR: ModelImporter.cpp:519 In function importModel:
[4] Assertion failed: !_importer_ctx.network()->hasImplicitBatchDimension() && "This version of the ONNX parser only supports TensorRT INetworkDefinitions with an explicit batch dimension. Please ensure the network was created using the EXPLICIT_BATCH NetworkDefinitionCreationFlag."
ERROR: Failed to parse onnx file
ERROR: failed to build network since parsing model errors.
ERROR: failed to build network.
0:00:04.216002367 36095 0xaaaacf157610 ERROR                nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2022> [UID = 1]: build engine file failed
0:00:04.287399494 36095 0xaaaacf157610 ERROR                nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2108> [UID = 1]: build backend context failed
0:00:04.287502923 36095 0xaaaacf157610 ERROR                nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1282> [UID = 1]: generate backend failed, check config file settings
0:00:04.288022339 36095 0xaaaacf157610 WARN                 nvinfer gstnvinfer.cpp:898:gst_nvinfer_start:<primary_gie> error: Failed to create NvDsInferContext instance
0:00:04.288352531 36095 0xaaaacf157610 WARN                 nvinfer gstnvinfer.cpp:898:gst_nvinfer_start:<primary_gie> error: Config file path: /opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app/restitude_config_primary.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
[NvMultiObjectTracker] De-initialized
** ERROR: <main:716>: Failed to set pipeline to PAUSED
Quitting
nvstreammux: Successfully handled EOS for source_id=0
nvstreammux: Successfully handled EOS for source_id=1
nvstreammux: Successfully handled EOS for source_id=2
nvstreammux: Successfully handled EOS for source_id=3
ERROR from primary_gie: Failed to create NvDsInferContext instance
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(898): gst_nvinfer_start (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie:
Config file path: /opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app/restitude_config_primary.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
App run failed

Please comment out force-implicit-batch-dim=1 in the configuration file. The TensorRT ONNX parser only supports networks with an explicit batch dimension (as the assertion in your log says), so this key must not be set for ONNX models.
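The relevant lines of the [property] section would then read (a sketch; only the force-implicit-batch-dim line changes):

```
onnx-file=../../models/Rectitude/frcnn_resnet50.onnx
batch-size=4
network-mode=2
## ONNX models require an explicit-batch network, so leave this commented out:
#force-implicit-batch-dim=1
```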

Thanks, I’ll try tomorrow.
Do I need to set output-blob-names?

Yes, please set output-blob-names.

May I know what the output-blob-names are for FasterRCNN?

You can use Netron to get the ONNX model’s output layer names.
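As an alternative to Netron, the output names can be read programmatically with the onnx Python package. A minimal sketch; the onnx.load call is commented out and stand-in objects are used so the snippet is self-contained:

```python
from collections import namedtuple

# With the real model you would use the onnx package instead:
#   import onnx
#   outputs = onnx.load("frcnn_resnet50.onnx").graph.output
# FakeOutput is a stand-in so this sketch runs without onnx installed.
FakeOutput = namedtuple("FakeOutput", "name")

def output_blob_names(outputs):
    """Join the graph's output tensor names into the
    semicolon-separated form nvinfer expects for output-blob-names."""
    return ";".join(o.name for o in outputs)

outputs = [FakeOutput("nms_out"), FakeOutput("nms_out_1")]
print("output-blob-names=" + output_blob_names(outputs))
```

The printed line can be pasted directly into the nvinfer config file.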

I see, thanks.

I still have errors.
The config files are
rectitude_config_main.txt (5.8 KB)
restitude_config_primary.txt (4.1 KB)

output-blob-names is set as follows, as seen in the Netron output:

output-blob-names=nms_out_1;nms_out

What could still be wrong? The complete error messages are:

atic@ubuntu:/opt/nvidia/deepstream/deepstream-6.3/sources/apps/sample_apps/Rectitude$ ./deepstream-app -c ../../../../samples/configs/deepstream-app/rectitude_config_main.txt
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
[NvMultiObjectTracker] Initialized
0:00:00.482160099  5891 0xaaaad831fe10 WARN                 nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1174> [UID = 1]: Warning, OpenCV has been deprecated. Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5
WARNING: Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app/../../models/Rectitude/frcnn_resnet50_ver1.onnx_b4_gpu0_fp16.engine open error
0:00:02.671591647  5891 0xaaaad831fe10 WARN                 nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1976> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app/../../models/Rectitude/frcnn_resnet50_ver1.onnx_b4_gpu0_fp16.engine failed
0:00:02.733998285  5891 0xaaaad831fe10 WARN                 nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2081> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app/../../models/Rectitude/frcnn_resnet50_ver1.onnx_b4_gpu0_fp16.engine failed, try rebuild
0:00:02.734101773  5891 0xaaaad831fe10 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2002> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: onnx2trt_utils.cpp:375: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
WARNING: [TRT]: onnx2trt_utils.cpp:403: One or more weights outside the range of INT32 was clamped
WARNING: [TRT]: onnx2trt_utils.cpp:403: One or more weights outside the range of INT32 was clamped
WARNING: [TRT]: onnx2trt_utils.cpp:403: One or more weights outside the range of INT32 was clamped
WARNING: [TRT]: onnx2trt_utils.cpp:403: One or more weights outside the range of INT32 was clamped
WARNING: [TRT]: onnx2trt_utils.cpp:403: One or more weights outside the range of INT32 was clamped
WARNING: [TRT]: onnx2trt_utils.cpp:403: One or more weights outside the range of INT32 was clamped
WARNING: [TRT]: onnx2trt_utils.cpp:403: One or more weights outside the range of INT32 was clamped
WARNING: [TRT]: onnx2trt_utils.cpp:403: One or more weights outside the range of INT32 was clamped
WARNING: [TRT]: onnx2trt_utils.cpp:403: One or more weights outside the range of INT32 was clamped
ERROR: [TRT]: Validation failed: libNamespace == nullptr
/opt/nvidia/deepstream/deepstream-6.3/sources/TensorRT/plugin/proposalPlugin/proposalPlugin.cpp:528

ERROR: [TRT]: std::exception
ERROR: [TRT]: Validation failed: libNamespace == nullptr
/opt/nvidia/deepstream/deepstream-6.3/sources/TensorRT/plugin/proposalPlugin/proposalPlugin.cpp:528

ERROR: [TRT]: std::exception
WARNING: [TRT]: builtin_op_importers.cpp:5243: Attribute isBatchAgnostic not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
WARNING: [TRT]: DLA requests all profiles have same min, max, and opt value. All dla layers are falling back to GPU
ERROR: [TRT]: 4: [network.cpp::validate::3096] Error Code 4: Internal Error (input_image: for dimension number 2 in profile 0 does not match network definition (got min=544, opt=544, max=544), expected min=opt=max=720).)
ERROR: Build engine failed from config file
ERROR: failed to build trt engine.
0:00:04.569594349  5891 0xaaaad831fe10 ERROR                nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2022> [UID = 1]: build engine file failed
0:00:04.636431551  5891 0xaaaad831fe10 ERROR                nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2108> [UID = 1]: build backend context failed
0:00:04.636543967  5891 0xaaaad831fe10 ERROR                nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1282> [UID = 1]: generate backend failed, check config file settings
0:00:04.637036732  5891 0xaaaad831fe10 WARN                 nvinfer gstnvinfer.cpp:898:gst_nvinfer_start:<primary_gie> error: Failed to create NvDsInferContext instance
0:00:04.637084155  5891 0xaaaad831fe10 WARN                 nvinfer gstnvinfer.cpp:898:gst_nvinfer_start:<primary_gie> error: Config file path: /opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app/restitude_config_primary.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
[NvMultiObjectTracker] De-initialized
** ERROR: <main:716>: Failed to set pipeline to PAUSED
Quitting
nvstreammux: Successfully handled EOS for source_id=0
nvstreammux: Successfully handled EOS for source_id=1
nvstreammux: Successfully handled EOS for source_id=2
nvstreammux: Successfully handled EOS for source_id=3
ERROR from primary_gie: Failed to create NvDsInferContext instance
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(898): gst_nvinfer_start (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie:
Config file path: /opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app/restitude_config_primary.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
App run failed
  1. Please update infer-dims=3;544;960 to infer-dims=3;720;1280. From the screenshot, the input is 3x720x1280.
  2. About “Validation failed: libNamespace == nullptr”: I can’t reproduce this issue on the DS 6.3 Docker image. Could you share the frcnn model by forum private email? Thanks! Please click forum avatar -> email.
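For point 1, the dimension mismatch in the log (expected min=opt=max=720) is fixed by matching infer-dims to the model’s actual input shape; note that infer-dims uses semicolons between all three values:

```
## CHW dims of the model's input_image tensor (3x720x1280 per Netron)
infer-dims=3;720;1280
```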

Model links have been sent in a private message.

log-1121.txt (5.9 KB)
pgie_frcnn_tao_config.yml (2.1 KB)
Using the model frcnn_resnet50_ver2.onnx you shared, I can’t reproduce that “Validation failed: libNamespace == nullptr” issue on the Docker image nvcr.io/nvidia/deepstream:6.3-triton-multiarch.

I will test on Jetson with DS 6.3.

Wondering if it is a TensorRT issue, could you use the following command line to test the model?

/usr/src/tensorrt/bin/trtexec --onnx=frcnn_resnet50_ver1.onnx --fp16 \
	--saveEngine=frcnn_resnet50_ver1.onnx_b4_gpu0_int8.engine --minShapes=input_image:1x3x720x1280 \
	--optShapes=input_image:4x3x720x1280 --maxShapes=input_image:4x3x720x1280
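If trtexec builds the engine successfully, nvinfer can deserialize it directly instead of rebuilding it at startup. A sketch of the config change, assuming the saved engine file is moved into the models directory (the engine filename here just follows the --saveEngine name above):

```
model-engine-file=../../models/Rectitude/frcnn_resnet50_ver1.onnx_b4_gpu0_int8.engine
```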

Yes, thanks, it worked, but I still get the “Validation failed: libNamespace == nullptr” error.
Converting directly in deepstream-app also worked, even though it showed the “Validation failed: libNamespace == nullptr” error.
Is this error important?
