Failed to parse bboxes

• Jetson nano
• DeepStream Version 6.0
• JetPack Version 4.6.2

I have trained YOLOv3 using the TAO Toolkit and I am having issues when trying to run it in the DeepStream app:

sudo deepstream-app -c config.txt
Unknown or legacy key specified ‘is-classifier’ for group [property]
Warn: ‘threshold’ parameter has been deprecated. Use ‘pre-cluster-threshold’ instead.

Using winsys: x11
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_nvmultiobjecttracker.so
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is ON
[NvMultiObjectTracker] Initialized
0:00:05.701223554 21770 0x1efaf460 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/home/omniflow/yolov3/yolov3.onnx_b1_gpu0_fp16.engine
INFO: [Implicit Engine Info]: layers num: 5
0 INPUT kFLOAT Input 3x384x1248
1 OUTPUT kINT32 BatchedNMS 1
2 OUTPUT kFLOAT BatchedNMS_1 200x4
3 OUTPUT kFLOAT BatchedNMS_2 200
4 OUTPUT kFLOAT BatchedNMS_3 200

0:00:05.702534612 21770 0x1efaf460 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 1]: Use deserialized engine model: /home/omniflow/yolov3/yolov3.onnx_b1_gpu0_fp16.engine
0:00:05.720007235 21770 0x1efaf460 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-6.0/sources/objectDetector_Yolo/config_infer_primary_yoloV3.txt sucessfully

Runtime commands:
h: Print this help
q: Quit

p: Pause
r: Resume

NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
To go back to the tiled display, right-click anywhere on the window.

**PERF: FPS 0 (Avg)
**PERF: 0.00 (0.00)
** INFO: <bus_callback:194>: Pipeline ready

Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
** INFO: <bus_callback:180>: Pipeline running

ERROR: yoloV3 output layer.size: 4 does not match mask.size: 3
0:00:06.281389001 21770 0x1e8e6540 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::fillDetectionOutput() <nvdsinfer_context_impl_output_parsing.cpp:726> [UID = 1]: Failed to parse bboxes using custom parse function
Segmentation fault

This is the config file that I am using:
config.txt (793 Bytes)

You need to modify the postprocess function to adapt it to your own model: the TAO export ends in a BatchedNMS head with four output layers, but the default YOLOv3 parser expects three.

/opt/nvidia/deepstream/deepstream/sources/gst-plugins/gst-nvdspostprocess/postprocesslib_impl/post_processor_custom_impl.cpp
static bool NvDsPostProcessParseYoloV3(
    std::vector<NvDsInferLayerInfo> const& outputLayersInfo,
    NvDsInferNetworkInfo const& networkInfo,
    NvDsPostProcessParseDetectionParams const& detectionParams,
    std::vector<NvDsPostProcessParseObjectInfo>& objectList,
    const std::vector<float> &anchors,
    const std::vector<std::vector<int>> &masks)
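
The parser walks one output feature map per anchor mask (three masks for YOLOv3), so the four BatchedNMS outputs can never match. The guard that produces the error above looks roughly like this (a sketch, not the verbatim source):

// Sketch of the guard inside the YOLOv3 parser body: one output layer
// is expected per anchor mask, so 4 BatchedNMS outputs vs. 3 masks fails.
if (outputLayersInfo.size() != masks.size()) {
    std::cerr << "ERROR: yoloV3 output layer.size: " << outputLayersInfo.size()
              << " does not match mask.size: " << masks.size() << std::endl;
    return false;
}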

Or you can just use the YOLOv3 model we used in the deepstream_tao_apps repo.

DeepStream 6.0 does not include the gst-nvdspostprocess plugin.

OK. You can write the postprocess function yourself and set the related parameters in the config file, as in pgie_yolov3_tao_config.txt. Refer to the post_processor library in deepstream_tao_apps.
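
For reference, here is a minimal sketch of such a parser for the four BatchedNMS outputs listed in the engine info above, modeled on NvDsInferParseCustomBatchedNMSTLT from the deepstream_tao_apps post_processor library. The layer ordering and the normalized [x1, y1, x2, y2] box layout are assumptions read off the engine log; treat the post_processor source as the reference:

#include <iostream>
#include <vector>
#include "nvdsinfer_custom_impl.h"

extern "C" bool NvDsInferParseCustomBatchedNMSTLT(
    std::vector<NvDsInferLayerInfo> const& outputLayersInfo,
    NvDsInferNetworkInfo const& networkInfo,
    NvDsInferParseDetectionParams const& detectionParams,
    std::vector<NvDsInferObjectDetectionInfo>& objectList)
{
    (void) detectionParams; // per-class thresholding omitted in this sketch

    // The engine above reports four output layers:
    //   BatchedNMS    (kINT32, 1)       -> number of kept detections
    //   BatchedNMS_1  (kFLOAT, 200x4)   -> boxes
    //   BatchedNMS_2  (kFLOAT, 200)     -> scores
    //   BatchedNMS_3  (kFLOAT, 200)     -> class indices
    if (outputLayersInfo.size() != 4) {
        std::cerr << "Expected 4 output layers, got "
                  << outputLayersInfo.size() << std::endl;
        return false;
    }

    const int keepCount = *static_cast<int*>(outputLayersInfo[0].buffer);
    const float* boxes  = static_cast<float*>(outputLayersInfo[1].buffer);
    const float* scores = static_cast<float*>(outputLayersInfo[2].buffer);
    const float* cls    = static_cast<float*>(outputLayersInfo[3].buffer);

    for (int i = 0; i < keepCount; ++i) {
        NvDsInferObjectDetectionInfo obj{};
        obj.classId = static_cast<unsigned int>(cls[i]);
        obj.detectionConfidence = scores[i];

        // Assumption: boxes are [x1, y1, x2, y2] normalized to [0, 1];
        // rescale to the network input resolution (3x384x1248 here).
        obj.left   = boxes[i * 4 + 0] * networkInfo.width;
        obj.top    = boxes[i * 4 + 1] * networkInfo.height;
        obj.width  = (boxes[i * 4 + 2] - boxes[i * 4 + 0]) * networkInfo.width;
        obj.height = (boxes[i * 4 + 3] - boxes[i * 4 + 1]) * networkInfo.height;

        objectList.push_back(obj);
    }
    return true;
}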

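Then wire it up in the [property] group of the nvinfer config; the library path below is a placeholder for wherever you build the parser, and the rest of your existing config stays the same:

[property]
# ... existing keys unchanged ...
# Use the custom parser instead of the built-in YOLOv3 bbox parsing
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
custom-lib-path=/path/to/libnvds_infercustomparser_tao.so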