Custom YOLOv4 model bbox integration error

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version 6.4
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs) Question
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

I have a YOLOv4 model trained to classify vehicles into different classes, e.g. car and truck.
I created the ONNX file, and when I inspected it:

Model Inputs:
Input name: input
Shape: [16, 3, 416, 416]

Model Outputs:
Output name: boxes
Shape: [16, 10647, 1, 4]
Output name: confs
Shape: [16, 10647, 6]

Now, when I integrate it in DeepStream, the engine file is built, but there is an error:

DsInferContext[UID 4]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2138> [UID = 4]: serialize cuda engine to file: /home/a2i/Downloads/deep_app_stable/deepstream_lpr_app-master/deepstream-lpr-app/secondary3yolov4/yolov4_16_3_416_416_static.onnx_b16_gpu0_fp16.engine successfully
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: …/nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input 3x416x416
1 OUTPUT kFLOAT boxes 10647x1x4
2 OUTPUT kFLOAT confs 10647x6

ERROR: [TRT]: 3: Cannot find binding of given name: confs

0:09:42.824025094 9242 0x562bdc7c3cc0 ERROR nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger: NvDsInferContext[UID 4]: Error in NvDsInferContextImpl::parseBoundingBox() <nvdsinfer_context_impl_output_parsing.cpp:59> [UID = 4]: Could not find output coverage layer for parsing objects
0:09:42.824099654 9242 0x562bdc7c3cc0 ERROR nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger: NvDsInferContext[UID 4]: Error in NvDsInferContextImpl::fillDetectionOutput() <nvdsinfer_context_impl_output_parsing.cpp:735> [UID = 4]: Failed to parse bboxes
0:09:42.824122895 9242 0x562bdc7c3cc0 ERROR nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger: NvDsInferContext[UID 4]: Error in NvDsInferContextImpl::parseBoundingBox() <nvdsinfer_context_impl_output_parsing.cpp:59> [UID = 4]: Could not find output coverage layer for parsing objects

Honestly, I am mostly interested in the confidence values; I can do without the bboxes if that is what is causing trouble.

This is my config:

[property]

# GPU setup
gpu-id=0
model-color-format=0

# Paths to model files
onnx-file=/home/a2i/Downloads/deep_app_stable/deepstream_lpr_app-master/deepstream-lpr-app/secondary3yolov4/yolov4_16_3_416_416_static.onnx
#model-engine-file=/home/a2i/Downloads/deep_app_stable/deepstream_lpr_app-master/deepstream-lpr-app/secondary3yolov4/yolov4_1_3_416_416_static.onnx_b1_gpu0_fp16.engine
labelfile-path=/home/a2i/Downloads/deep_app_stable/deepstream_lpr_app-master/deepstream-lpr-app/secondary3yolov4/labels.txt

# Model configuration
network-mode=2 # Set to FP16 for optimal speed and precision (0=FP32, 1=INT8, 2=FP16)
num-detected-classes=6

# Inference configuration
gie-unique-id=4
maintain-aspect-ratio=1
output-blob-names=confs
output-tensor-meta=1

# Custom parsing and libraries
parse-bbox-func-name=NvDsInferParseCustomYoloV4
custom-lib-path=/opt/nvidia/deepstream/deepstream-6.4/lib/libnvdsinfer_custom_impl_Yolo.so

# Batch and optimization settings
batch-size=16
workspace-size=8192

# Thresholds
confidence-threshold=0.000001
infer-dims=3;416;416

Any help would be appreciated.

You need to modify the open-source code below in sources/libs/nvdsinfer/nvdsinfer_context_impl_output_parsing.cpp according to your own model.

bool
DetectPostprocessor::parseBoundingBox(vector<NvDsInferLayerInfo> const& outputLayersInfo,
    NvDsInferNetworkInfo const& networkInfo,
    NvDsInferParseDetectionParams const& detectionParams,
    vector<NvDsInferObjectDetectionInfo>& objectList)
{
...
}
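For your output layout, the parser ultimately has to walk the two buffers per frame: boxes is [10647, 1, 4] (one box per prediction) and confs is [10647, 6] (per-class scores), so the usual approach is an argmax over the six classes plus a threshold. Below is a minimal, self-contained sketch of that decode logic using plain structs; the buffer layout and the Detection struct are assumptions for illustration, not taken from your setup:

```cpp
#include <cstddef>
#include <vector>

struct Detection {
    float x1, y1, x2, y2;  // box coordinates as produced by the ONNX export
    float confidence;
    int classId;
};

// Decode one frame's worth of YOLOv4 output: boxes[n][1][4] and
// confs[n][numClasses], keeping the best class per prediction if it
// clears the confidence threshold.
std::vector<Detection> decodeYoloV4(const float* boxes, const float* confs,
                                    size_t numPreds, size_t numClasses,
                                    float threshold) {
    std::vector<Detection> out;
    for (size_t i = 0; i < numPreds; ++i) {
        // Argmax over the per-class confidences for this prediction.
        const float* c = confs + i * numClasses;
        size_t best = 0;
        for (size_t k = 1; k < numClasses; ++k)
            if (c[k] > c[best]) best = k;
        if (c[best] < threshold) continue;

        // The middle "1" dim is contiguous, so boxes[i][0][k] == b[k].
        const float* b = boxes + i * 4;
        out.push_back({b[0], b[1], b[2], b[3], c[best],
                       static_cast<int>(best)});
    }
    return out;
}
```

In a real NvDsInferParseCustomYoloV4 you would instead locate the two layers by name in outputLayersInfo, cast their buffer pointers, and fill NvDsInferObjectDetectionInfo entries into objectList.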

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.