DeepStream parseBoundingBox(): Could not find output coverage layer error with YOLOv8 custom parser

❓ Issue Summary

I’m integrating a YOLOv8 face detection model into DeepStream 7.0 using a custom NvDsInferParseCustomYoloV8 parser.

The model has a single output named output0 with shape [1, 5, 8400], yet I consistently get this error:

ERROR gstnvinfer.cpp:676:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::parseBoundingBox(): Could not find output coverage layer for parsing objects
ERROR gstnvinfer.cpp:676:gst_nvinfer_logger: NvDsInferContext[UID 1]: Failed to parse bboxes
Segmentation fault (core dumped)


⚙️ Setup

DeepStream Version: 7.0

TensorRT Version: 8.5.x

Platform: x86_64 Ubuntu

Model: YOLOv8n Face ONNX exported using export.py from Ultralytics

Converted TensorRT Engine:

Successfully built using:

trtexec --onnx=models/yolov8n-face.onnx --explicitBatch --fp16 --saveEngine=models/yolov8n-face.engine

Output confirmed with:

Binding 0 (images): Input 1x3x640x640
Binding 1 (output0): Output 1x5x8400


🧪 Custom Parser Code

#include <cstring>
#include <iostream>

#include "nvdsinfer_custom_impl.h"

extern "C"
bool NvDsInferParseCustomYoloV8(std::vector<NvDsInferLayerInfo> const& outputLayersInfo,
                                NvDsInferNetworkInfo const& networkInfo,
                                NvDsInferParseDetectionParams const& detectionParams,
                                std::vector<NvDsInferObjectDetectionInfo>& objectList)
{
    const char* kOutputBlobName = "output0";
    const float* output = nullptr;
    const int numBoxes = 8400;

    for (const auto& layer : outputLayersInfo) {
        if (!strcmp(layer.layerName, kOutputBlobName)) {
            output = (const float*) layer.buffer;
            break;
        }
    }

    if (!output) {
        std::cerr << "ERROR: Could not find output layer " << kOutputBlobName << std::endl;
        return false;
    }

    // output0 is [1, 5, 8400]: five contiguous planes of 8400 values each
    // (center x, center y, width, height, confidence).
    for (int i = 0; i < numBoxes; ++i) {
        float x = output[i];
        float y = output[numBoxes + i];
        float w = output[2 * numBoxes + i];
        float h = output[3 * numBoxes + i];
        float conf = output[4 * numBoxes + i];

        if (conf < detectionParams.perClassPreclusterThreshold[0]) continue;

        NvDsInferObjectDetectionInfo obj;
        obj.classId = 0;
        obj.detectionConfidence = conf;
        obj.left = x - w / 2;
        obj.top = y - h / 2;
        obj.width = w;
        obj.height = h;
        objectList.push_back(obj);
    }

    return true;
}

// Compile-time check that the signature matches what gst-nvinfer expects.
CHECK_CUSTOM_PARSE_FUNC_PROTOTYPE(NvDsInferParseCustomYoloV8);

extern "C" bool NvDsInferParseCustomYoloV8_Release() {
    return true;
}


🧾 PGIE Config

[class-attrs-all]
threshold=0.5
pre-cluster-threshold=0.5

[primary-gie]
enable=1
model-engine-file=models/yolov8n-face.engine
network-type=0
gie-unique-id=1
output-blob-names=output0
parse-bbox-func-name=NvDsInferParseCustomYoloV8
custom-lib-path=/path/to/libnvdsinfer_custom_impl_yolo.so
batch-size=1
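In case the key placement matters: my understanding (an assumption on my part) is that in deepstream-app the [primary-gie] group normally just points at a separate nvinfer config file via config-file-path, and parser keys like parse-bbox-func-name take effect in that file's [property] group. A sketch of what that standalone file would contain (filename assumed):

```
# config_infer_primary_yoloface.txt (name assumed),
# referenced from [primary-gie] via config-file-path
[property]
gpu-id=0
model-engine-file=models/yolov8n-face.engine
network-type=0
num-detected-classes=1
gie-unique-id=1
output-blob-names=output0
parse-bbox-func-name=NvDsInferParseCustomYoloV8
custom-lib-path=/path/to/libnvdsinfer_custom_impl_yolo.so
batch-size=1

[class-attrs-all]
pre-cluster-threshold=0.5
```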


✅ Confirmed

Engine file builds and loads.

output0 is the correct output layer (confirmed from the binding dump in the trtexec log).

Shape of output0 is [1x5x8400].


❌ Still getting

Could not find output coverage layer for parsing objects

Followed by Failed to parse bboxes and then a segmentation fault.


🙏 Help Needed

Why is DeepStream not recognizing output0 even though the parser and config match?

Do I need to register the output layer in a different way?

Is anything wrong with the way I’m reading the tensor?