Failed to parse bboxes using custom parse function

this is my setup information:

• Hardware Platform: Orin AGX
• DeepStream Version: 6.3
• JetPack Version: 5.1.2
• TensorRT Version: 8.5.2
• Issue Type: question
• How to reproduce the issue?

We trained a YOLOv4 model in the NVIDIA TAO container v3.21, which produced the “.etlt” and “.bin” files, and then created the “.engine” file with tao-converter v4.0.0_trt8.5.2.2. We then wanted to test it on DeepStream 6.3, so we pointed the DeepStream sample config files at these .etlt, .bin, and .engine files, but we receive this error:
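For reference, this is roughly how we invoked tao-converter. Note that the encryption key, input binding name (`Input`), and shapes below are assumptions for illustration; they must match what you used when exporting the .etlt from TAO:

```
tao-converter -k $TAO_KEY \
    -p Input,1x3x384x1248,8x3x384x1248,16x3x384x1248 \
    -t fp16 \
    -e yolov4_resnet18.engine \
    yolov4_resnet18.etlt
```

The `-p` flag supplies min/opt/max shapes for a dynamic-batch export; a static export would use `-d` with a single CHW shape instead.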

```
Mismatch in the number of output buffers.Expected 2 output buffers, detected in the network :4
0:00:05.527984971 11877 0xaaaabb927060 ERROR                nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::fillDetectionOutput() <nvdsinfer_context_impl_output_parsing.cpp:726> [UID = 1]: Failed to parse bboxes using custom parse function
Segmentation fault
```

From the logs, there was an error in the bbox parsing function. What are the model’s output layers?
Which sample are you testing? Could you share nvinfer’s configuration?

We changed the config files and now it runs, but it doesn’t show any bounding boxes.
These are the modified config files:
deepstream_app_source1_detection_models.txt (5.1 KB)
config_infer_primary_yolov4.txt (2.9 KB)
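For context, the output-buffer mismatch above (2 expected vs. 4 detected) usually means the bbox parser does not match the model: a TAO YOLOv4 exported with BatchedNMS produces four output tensors, so nvinfer must be pointed at the TAO custom parser. A minimal sketch of the relevant `[property]` keys is below; the file paths, key, and class count are placeholders, not values from this thread:

```
[property]
tlt-model-key=<your-tao-key>
tlt-encoded-model=yolov4_resnet18.etlt
model-engine-file=yolov4_resnet18.engine
int8-calib-file=cal.bin
labelfile-path=labels_yolov4.txt
network-type=0
num-detected-classes=<n>
# TAO YOLOv4 post-processing: BatchedNMS emits 4 buffers
# (num_detections, nmsed_boxes, nmsed_scores, nmsed_classes)
output-blob-names=BatchedNMS
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
custom-lib-path=<path-to>/libnvds_infercustomparser_tao.so
```

With the default parser (which expects 2 buffers, e.g. coverage/bbox from a DetectNet-style model), nvinfer fails exactly as shown in the log.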

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

  1. If the model is custom, please make sure the model gives the right bboxes with a third-party tool.
  2. If testing with DeepStream, please make sure nvinfer’s parameters are right; please refer to the doc for the explanation.
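To sanity-check point 1 outside DeepStream, you can decode the four BatchedNMS-style output buffers yourself after running the engine with any third-party runtime. The sketch below assumes the usual BatchedNMS layout (`num_detections`, `nmsed_boxes`, `nmsed_scores`, `nmsed_classes`) and uses fabricated arrays in place of real inference outputs:

```python
import numpy as np

def decode_batched_nms(num_dets, boxes, scores, classes, conf_thresh=0.3):
    """Decode one image's BatchedNMS-style outputs.

    num_dets : number of valid detections reported by the NMS plugin
    boxes    : (max_dets, 4) array of [x1, y1, x2, y2] coordinates
    scores   : (max_dets,) confidence scores
    classes  : (max_dets,) class indices

    Returns a list of (box, score, class_id) tuples above conf_thresh.
    """
    dets = []
    for i in range(int(num_dets)):
        if scores[i] < conf_thresh:
            continue
        dets.append((boxes[i].tolist(), float(scores[i]), int(classes[i])))
    return dets

# Fabricated tensors shaped like the 4 output buffers (not real model output):
num_dets = np.array([2])
boxes = np.array([[0.1, 0.1, 0.4, 0.5],
                  [0.5, 0.2, 0.9, 0.8],
                  [0.0, 0.0, 0.0, 0.0]])
scores = np.array([0.9, 0.2, 0.0])
classes = np.array([0, 1, 0])
print(decode_batched_nms(num_dets[0], boxes, scores, classes))
```

If this kind of decode gives sensible boxes but DeepStream draws nothing, the problem is in the nvinfer config (parser function, blob names, thresholds) rather than the model itself.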

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.