DeepStream parseBoundingBox(): Could not find output coverage layer error with YOLOv8 custom parser

❓ Issue Summary

I’m integrating a YOLOv8 face detection model into DeepStream 7.0 using a custom NvDsInferParseCustomYoloV8 parser.

Despite confirming that my model has a single output named output0 with shape [1, 5, 8400], I consistently receive this error:

ERROR gstnvinfer.cpp:676:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::parseBoundingBox(): Could not find output coverage layer for parsing objects
ERROR gstnvinfer.cpp:676:gst_nvinfer_logger: NvDsInferContext[UID 1]: Failed to parse bboxes
Segmentation fault (core dumped)


⚙️ Setup

DeepStream Version: 7.0

TensorRT Version: 8.5.x

Platform: x86_64 Ubuntu

Model: YOLOv8n Face ONNX exported using export.py from Ultralytics

Converted TensorRT Engine:

Successfully built using:

trtexec --onnx=models/yolov8n-face.onnx --explicitBatch --fp16 --saveEngine=models/yolov8n-face.engine

Output confirmed with:

Binding 0 (images): Input 1x3x640x640
Binding 1 (output0): Output 1x5x8400
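
For reference, the I/O names can also be verified directly against the engine with the TensorRT API; a minimal sketch (assuming TensorRT 8.5 and the engine path above):

#include <NvInfer.h>
#include <fstream>
#include <iostream>
#include <iterator>
#include <vector>

// Minimal logger required by createInferRuntime().
class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) noexcept override {
        if (severity <= Severity::kWARNING) std::cerr << msg << std::endl;
    }
} gLogger;

int main() {
    std::ifstream f("models/yolov8n-face.engine", std::ios::binary);
    std::vector<char> blob((std::istreambuf_iterator<char>(f)),
                           std::istreambuf_iterator<char>());
    auto* runtime = nvinfer1::createInferRuntime(gLogger);
    auto* engine = runtime->deserializeCudaEngine(blob.data(), blob.size());
    // Print each I/O tensor name and shape, e.g. "output0: 1x5x8400".
    for (int i = 0; i < engine->getNbIOTensors(); ++i) {
        const char* name = engine->getIOTensorName(i);
        nvinfer1::Dims dims = engine->getTensorShape(name);
        std::cout << name << ":";
        for (int d = 0; d < dims.nbDims; ++d)
            std::cout << (d ? "x" : " ") << dims.d[d];
        std::cout << std::endl;
    }
    return 0;
}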


🧪 Custom Parser Code

extern "C"
bool NvDsInferParseCustomYoloV8(std::vector<NvDsInferObjectDetectionInfo>& objectList,
NvDsInferLayerInfo* layersInfo,
uint32_t numLayers,
NvDsInferNetworkInfo* networkInfo,
NvDsInferParseDetectionParams* detectionParams)
{
const char* kOutputBlobName = "output0";
float* output = nullptr;
int numBoxes = 8400;

for (uint32_t i = 0; i < numLayers; ++i) {
    if (!strcmp(layersInfo[i].layerName, kOutputBlobName)) {
        output = (float*) layersInfo[i].buffer;
        break;
    }
}

if (!output) {
    std::cerr << "ERROR: Could not find output layer " << kOutputBlobName << std::endl;
    return false;
}

for (int i = 0; i < numBoxes; ++i) {
    float x = output[i];
    float y = output[numBoxes + i];
    float w = output[2 * numBoxes + i];
    float h = output[3 * numBoxes + i];
    float conf = output[4 * numBoxes + i];

    if (conf < detectionParams->perClassThreshold[0]) continue;

    NvDsInferObjectDetectionInfo obj;
    obj.classId = 0;
    obj.detectionConfidence = conf;
    obj.left = x - w / 2;
    obj.top = y - h / 2;
    obj.width = w;
    obj.height = h;
    objectList.push_back(obj);
}

return true;

}

extern "C" bool NvDsInferParseCustomYoloV8_Release() {
return true;
}
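
The parser is compiled into the shared library roughly as follows (a sketch; the source file name and include paths are assumptions that may differ per install):

g++ -shared -fPIC -o libnvdsinfer_custom_impl_yolo.so nvdsinfer_custom_yolo.cpp \
    -I/opt/nvidia/deepstream/deepstream/sources/includes \
    -I/usr/local/cuda/include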


🧾 PGIE Config

[class-attrs-all]
threshold=0.5
pre-cluster-threshold=0.5

[primary-gie]
enable=1
model-engine-file=models/yolov8n-face.engine
network-type=0
gie-unique-id=1
output-blob-names=output0
parse-bbox-func-name=NvDsInferParseCustomYoloV8
custom-lib-path=/path/to/libnvdsinfer_custom_impl_yolo.so
batch-size=1


✅ Confirmed

Engine file builds and loads.

output0 is the correct output layer (confirmed via trtexec --dumpOutputBindings).

Shape of output0 is [1x5x8400].


❌ Still getting

Could not find output coverage layer for parsing objects

Followed by Failed to parse bboxes and then a segmentation fault.


🙏 Help Needed

Why is DeepStream not recognizing output0 even though the parser and config match?

Do I need to register the output layer in a different way?

Is anything wrong with the way I’m reading the tensor?

How did you generate the engine file?

I tried it using trtexec --explicitBatch --onnx=models/yolov8n-face-lindevs.onnx --saveEngine=yolov8n-face-lindevs.engine --fp16

I have also tried trtexec --onnx=models/yolov8n-face-lindevs.onnx --saveEngine=yolov8n-face-lindevs.engine

I have also tried deleting the engine file and letting DeepStream convert the ONNX at runtime, but the issue persists.

This error is printed by your own code.

Please print layersInfo[i].layerName to check the actual layer names.

Hi,

Thanks for the suggestion earlier.

I added a loop to print out the layer names from layersInfo[i].layerName in my custom parser like this:

for (uint32_t i = 0; i < numLayers; ++i) {
    std::cout << "Layer " << i << ": " << layersInfo[i].layerName << std::endl;
}

However, this code never executes. It seems the parser is not even being called. The pipeline fails earlier during parseBoundingBox() and fillDetectionOutput() with the error:

Could not find output coverage layer for parsing objects
Failed to parse bboxes

Yet DeepStream logs clearly show:

OUTPUT kFLOAT output0 5x8400

I’ve double-checked that output-blob-names=output0 is correctly set in my config, and that NvDsInferParseCustomYoloV8 is defined and returns true. I also added the release function.

It seems DeepStream is not invoking the custom parser at all, or is unable to associate it properly. Is there something else I might be missing that would prevent the custom parser from being triggered?

Thanks.

The nvinfer bbox parsing post-processing callback function is defined as:

typedef bool (* NvDsInferParseCustomFunc) (
        std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
        NvDsInferNetworkInfo  const &networkInfo,
        NvDsInferParseDetectionParams const &detectionParams,
        std::vector<NvDsInferObjectDetectionInfo> &objectList);

It seems you are using the wrong interface. Please refer to /opt/nvidia/deepstream/deepstream/sources/includes/nvdsinfer_custom_impl.h.
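
The same header also provides a compile-time guard: adding the line below after including nvdsinfer_custom_impl.h makes the build fail if the parser signature drifts from NvDsInferParseCustomFunc.

#include "nvdsinfer_custom_impl.h"

/* Declares the extern "C" prototype and breaks compilation if
 * NvDsInferParseCustomYoloV8 does not match NvDsInferParseCustomFunc. */
CHECK_CUSTOM_PARSE_FUNC_PROTOTYPE(NvDsInferParseCustomYoloV8);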

What is the NvDsInferParseCustomYoloV8_Release() for?

Hi,

Thanks for your response. As per your suggestion, I’ve updated the parser function to match the correct signature defined in nvdsinfer_custom_impl.h. Below are both my parser .cpp file and the DeepStream .txt config file for nvinfer.


📄 nvdsinfer_custom_yolo.cpp

#include <cstring>
#include <vector>
#include <cmath>
#include <iostream>
#include "nvdsinfer_custom_impl.h"

extern "C"
bool NvDsInferParseCustomYoloV8(
    std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
    NvDsInferNetworkInfo const &networkInfo,
    NvDsInferParseDetectionParams const &detectionParams,
    std::vector<NvDsInferObjectDetectionInfo> &objectList)
{
    const char* kOutputBlobName = "output0"; // should match ONNX output name
    const int numBoxes = 8400;

    float* output = nullptr;

    for (const auto& layer : outputLayersInfo) {
        if (!strcmp(layer.layerName, kOutputBlobName)) {
            output = (float*)layer.buffer;
            break;
        }
    }

    if (!output) {
        std::cerr << "ERROR: Could not find output layer: " << kOutputBlobName << std::endl;
        return false;
    }

    // Expected output shape is [1 x 5 x 8400]
    for (int i = 0; i < numBoxes; ++i) {
        float x = output[i];
        float y = output[numBoxes + i];
        float w = output[2 * numBoxes + i];
        float h = output[3 * numBoxes + i];
        float conf = output[4 * numBoxes + i];

        if (conf < detectionParams.perClassThreshold[0]) continue;

        NvDsInferObjectDetectionInfo obj;
        obj.classId = 0;
        obj.detectionConfidence = conf;
        obj.left = x - w / 2;
        obj.top = y - h / 2;
        obj.width = w;
        obj.height = h;

        objectList.push_back(obj);
    }

    return true;
}

🧾 config_infer_primary.txt

[primary-gie]
enable=1
gpu-id=0
batch-size=1
interval=0
gie-unique-id=1
nvbuf-memory-type=0
net-scale-factor=1.0
network-type=0
num-detected-classes=1
network-input-shape=3;640;640
maintain-aspect-ratio=1
parse-bbox-func-name=NvDsInferParseCustomYoloV8
custom-lib-path=./libnvdsinfer_custom_yolov2.so
output-blob-names=output0

[property]
onnx-file=models/yolov8n-face-lindevs.onnx
model-engine-file=models/yolov8n-face-lindevs3.engine
labelfile-path=labels.txt

[class-attrs-all]
threshold=0.25

❗ Issue

Despite this setup, I still receive the following errors:

NvDsInferContextImpl::parseBoundingBox() Could not find output coverage layer for parsing objects
NvDsInferContextImpl::fillDetectionOutput() Failed to parse bboxes

I’ve confirmed the engine has only one output named output0, and its shape is [1, 5, 8400], which matches the parsing logic.

Could you please help identify what’s causing DeepStream to fail parsing this output?

Thanks again!

If this configuration is correct, parseBoundingBox() will not be invoked. Please check whether the name and the path are correct.
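
As a quick check, you can confirm the parse function is actually exported from the library given in custom-lib-path (using the .so name from your config):

nm -D ./libnvdsinfer_custom_yolov2.so | grep NvDsInferParseCustomYoloV8

If nothing is printed, nvinfer cannot resolve the function at runtime.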

gst-nvinfer is open source, so you can also debug with the source code to find out why the custom function is not triggered.
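
Also note that gst-nvinfer only reads the [property] and [class-attrs-...] groups from the file passed to it; [primary-gie] is a deepstream-app group, so keys such as parse-bbox-func-name, custom-lib-path, and output-blob-names placed there are not applied. In that case nvinfer falls back to its built-in resnet-style parseBoundingBox(), which looks for a "coverage" output layer and produces exactly the error reported above. A sketch of the nvinfer config with those keys consolidated under [property] (paths taken from your post; the net-scale-factor of 1/255 assumes the usual YOLOv8 export that expects input in [0,1]):

[property]
gpu-id=0
# YOLOv8 exports typically expect input normalized to [0,1]
net-scale-factor=0.0039215686274
onnx-file=models/yolov8n-face-lindevs.onnx
model-engine-file=models/yolov8n-face-lindevs3.engine
labelfile-path=labels.txt
batch-size=1
network-type=0
num-detected-classes=1
gie-unique-id=1
maintain-aspect-ratio=1
output-blob-names=output0
parse-bbox-func-name=NvDsInferParseCustomYoloV8
custom-lib-path=./libnvdsinfer_custom_yolov2.so
# cluster-mode=2 selects NMS clustering for the decoded boxes
cluster-mode=2

[class-attrs-all]
pre-cluster-threshold=0.25
nms-iou-threshold=0.45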

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.