DeepStream - Video decoding error and bbox parsing failure with YOLOv8 engine

πŸ“„ Description:

I’m integrating a face recognition pipeline (detection, cropping, embedding, matching) into DeepStream, based on deepstream-test3.py. While running the integrated pipeline, I encounter the following errors:


❗ Observed Errors:

ERROR nvinfer gstnvinfer.cpp:676: NvDsInferContext[UID 1]: Could not find output coverage layer for parsing objects  
ERROR nvinfer gstnvinfer.cpp:676: Failed to parse bboxes  
Segmentation fault (core dumped)

Additionally, GStreamer throws:

gst_h264_parse_handle_frame (): Broken bit stream

πŸ” Steps Taken So Far:

  • Re-encoded input videos with ffmpeg using H.264, YUV420p, AAC audio.

  • Verified TensorRT engine is correctly deserialized.

  • YOLOv8 engine loads fine (yolov8n-face-lindevs1.engine).

  • Modified DeepStream config:

    • cluster-mode=2
    • num-detected-classes=4
  • Set up inference and pipeline based on official DeepStream Python apps (deepstream-test3).
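For reference, the re-encode step above can be done with a command along these lines. The exact profile, level, and container flags are an assumption on my part (not from the original post), chosen to stay within what nvv4l2decoder typically handles:

```shell
# Re-encode to H.264 (yuv420p) + AAC; conservative settings for nvv4l2decoder
# (profile/level values are an assumption; adjust for your source material)
ffmpeg -i input.mp4 \
  -c:v libx264 -profile:v high -level 4.1 -pix_fmt yuv420p \
  -c:a aac \
  -movflags +faststart \
  output.mp4
```

If the "Broken bit stream" error persists even after re-encoding, it is worth testing the file with a plain decode pipeline (e.g. filesrc → h264parse → nvv4l2decoder → fakesink) to separate decoding problems from inference problems.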


πŸ”§ Environment:

  • DeepStream version: 7.0
  • TensorRT: 8.6.0
  • Platform: x86_64
  • Video: H.264

πŸ“Œ Question:

  1. How can I resolve the Could not find output coverage layer error?
  2. What GStreamer-safe video encoding parameters ensure decoding success with nvv4l2decoder?

  1. Please set custom-lib-path and parse-bbox-func-name for the YOLOv8 nvinfer element. You can refer to this configuration.
  2. Could you share a complete log? Does the source video always fail to decode? If so, could you share a test video? Thanks!
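On the first point: the "Could not find output coverage layer" error typically means nvinfer fell back to its default DetectNet-style parser, which looks for coverage/bbox output layers that a YOLOv8 engine does not produce. A minimal sketch of the relevant [property] entries, assuming a DeepStream-Yolo-style custom parser library (the library path and function name here are illustrative; use the ones from your own parser build):

```ini
[property]
# Point nvinfer at a custom bbox parser built for YOLO output layers
# (path and symbol name are assumptions; substitute your parser build)
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=/opt/nvidia/deepstream/deepstream/lib/libnvdsinfer_custom_impl_Yolo.so
cluster-mode=2
num-detected-classes=4
```

Without a custom parser entry, nvinfer has no way to interpret the YOLOv8 output tensors, which is consistent with the "Failed to parse bboxes" error and the subsequent crash.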

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks
