Error in NvDsInferContextImpl::parseBoundingBox()

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) → dGPU (AWS T4)
• DeepStream Version → 5.0
• TensorRT Version → 7
• NVIDIA GPU Driver Version (valid for GPU only) → 440.82

I am trying to run inference with RetinaFace, but it fails with the error below. I have a custom layer, included in the model engine file, that decodes the RetinaFace output. I can successfully deserialize the engine file and run on an H.264 stream, but I cannot get any output:

Now playing: ../../../samples/streams/sample_720p.h264
WARNING: ../nvdsinfer/nvdsinfer_func_utils.cpp:34 [TRT]: Current optimization profile is: 0. Please ensure there are no enqueued operations pending in this context prior to switching profiles
0:00:02.700166702   120 0x55f409fa04d0 INFO                 nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1577> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-5.0/sources/apps/deepstream-retinaface-multistream/tensorrt_engines_awsT4/retina_r50.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:685 [Implicit Engine Info]: layers num: 2
0   INPUT  kFLOAT data            3x640x1088      
1   OUTPUT kFLOAT prob            428401x1x1      

0:00:02.700248097   120 0x55f409fa04d0 INFO                 nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1681> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-5.0/sources/apps/deepstream-retinaface-multistream/tensorrt_engines_awsT4/retina_r50.engine
0:00:02.702319985   120 0x55f409fa04d0 INFO                 nvinfer gstnvinfer_impl.cpp:311:notifyLoadModelStatus:<primary-nvinference-engine> [UID 1]: Load new model:retinaface_pgie_config.txt sucessfully
Running...
0:00:02.851189263   120 0x55f409f9ac50 ERROR                nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::parseBoundingBox() <nvdsinfer_context_impl_output_parsing.cpp:59> [UID = 1]: Could not find output coverage layer for parsing objects
0:00:02.851215394   120 0x55f409f9ac50 ERROR                nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::fillDetectionOutput() <nvdsinfer_context_impl_output_parsing.cpp:573> [UID = 1]: Failed to parse bboxes

I have uploaded the app (with a README) to Drive in case you want to try to reproduce the error.

You need to write your own post-process parser: the default parseBoundingBox() expects DetectNet-style "coverage" and "bbox" output layers, which your model does not produce, hence the "Could not find output coverage layer" error. There are many samples that demonstrate how to write such a parser. Please refer to the samples under /opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_SSD/ or to GitHub - NVIDIA-AI-IOT/deepstream_tao_apps: Sample apps to demonstrate how to deploy models trained with TAO on DeepStream
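For illustration, a minimal sketch of such a parser for this model. It assumes a tensorrtx-style decode output layout (buffer[0] holds the detection count, followed by 15-float records: x1, y1, x2, y2, confidence, 10 landmark values; note 1 + 28560 × 15 = 428401 matches the "prob" dimension above). The function name and the exact record layout are assumptions; verify them against your own decode plugin:

#include <algorithm>
#include <iostream>
#include <vector>
#include "nvdsinfer_custom_impl.h"

extern "C" bool NvDsInferParseCustomRetinaFace(
    std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
    NvDsInferNetworkInfo const &networkInfo,
    NvDsInferParseDetectionParams const &detectionParams,
    std::vector<NvDsInferObjectDetectionInfo> &objectList)
{
    if (outputLayersInfo.empty()) {
        std::cerr << "ERROR: no output layer found" << std::endl;
        return false;
    }

    /* Assumed layout: buffer[0] = detection count, then 15 floats per
     * detection: x1, y1, x2, y2, confidence, 10 landmark values. */
    const int kRecordLen = 15;
    const float *output = (const float *) outputLayersInfo[0].buffer;
    const int numDets = (int) output[0];
    const float threshold = detectionParams.perClassPreclusterThreshold[0];

    for (int i = 0; i < numDets; ++i) {
        const float *det = output + 1 + i * kRecordLen;
        if (det[4] < threshold)
            continue;

        NvDsInferObjectDetectionInfo obj;
        obj.classId = 0;                  /* single class: face */
        obj.detectionConfidence = det[4];
        /* Clamp the box to the network input frame. */
        obj.left   = std::max(0.0f, std::min(det[0], (float) networkInfo.width - 1));
        obj.top    = std::max(0.0f, std::min(det[1], (float) networkInfo.height - 1));
        obj.width  = std::min(det[2], (float) networkInfo.width - 1) - obj.left;
        obj.height = std::min(det[3], (float) networkInfo.height - 1) - obj.top;
        if (obj.width <= 0 || obj.height <= 0)
            continue;
        objectList.push_back(obj);
    }
    return true;
}

/* Verify the prototype matches what nvinfer expects. */
CHECK_CUSTOM_PARSE_FUNC_PROTOTYPE(NvDsInferParseCustomRetinaFace);

nvinfer resolves this function by name at runtime, so it must keep C linkage (extern "C").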

@bcao Okay, but then I'd have to implement two custom libs: one for the decode plugin and the other for parsing bounding boxes. How do I specify two different libs in custom-lib-path?

No, you only need to compile one lib that includes both the plugin and the post-process parser; you can check the sample.
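For reference, the relevant keys in retinaface_pgie_config.txt would then point at the single combined library (the library and function names here are the hypothetical ones from the sketch above):

[property]
# existing model/engine settings unchanged
parse-bbox-func-name=NvDsInferParseCustomRetinaFace
custom-lib-path=./libnvds_retinaface.so

gst-nvinfer dlopen()s custom-lib-path, so a decode plugin registered in the same .so via REGISTER_TENSORRT_PLUGIN is found by TensorRT during engine deserialization, and parse-bbox-func-name is resolved from that same library.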

I added the RetinaFace post-processing and plugin written by the tensorrtx author, but the output bounding boxes are incorrect. Have you solved this problem?

@383109759 NvDsInferLayerInfo not giving expected no. of outputs

Is there any way to create a "post-process parser" in Python rather than a C++ lib?

Hi lycaass,

Please open a new topic for your issue.

Thanks