**Description:**

I'm integrating a face recognition pipeline (detection, cropping, embedding, matching) with DeepStream, using `deepstream-test3.py` as a base. While running the integrated pipeline, I encounter the following errors:
**Observed Errors:**

```
ERROR nvinfer gstnvinfer.cpp:676: NvDsInferContext[UID 1]: Could not find output coverage layer for parsing objects
ERROR nvinfer gstnvinfer.cpp:676: Failed to parse bboxes
Segmentation fault (core dumped)
```
Additionally, GStreamer reports:

```
gst_h264_parse_handle_frame (): Broken bit stream
```
**Steps Taken So Far:**

- Re-encoded input videos with `ffmpeg` using H.264 video, YUV420p pixel format, and AAC audio.
- Verified that the TensorRT engine deserializes correctly.
- The YOLOv8 engine loads fine (`yolov8n-face-lindevs1.engine`).
- Modified the DeepStream config: `cluster-mode=2`, `num-detected-classes=4`.
- Set up inference and the pipeline based on the official DeepStream Python apps (`deepstream-test3`).
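The re-encode step above was along these lines (a sketch from memory; file names and audio bitrate are placeholders, and the script skips itself if `ffmpeg` or the input file is missing):

```shell
# Re-encode to H.264 in yuv420p with AAC audio -- a combination that
# nvv4l2decoder and h264parse handle reliably.
# input.mp4 / output.mp4 are placeholder names.
if command -v ffmpeg >/dev/null && [ -f input.mp4 ]; then
    ffmpeg -y -i input.mp4 \
        -c:v libx264 -pix_fmt yuv420p -profile:v high \
        -c:a aac -b:a 128k \
        output.mp4
fi
```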
**Environment:**
- DeepStream version: 7.0
- TensorRT: 8.6.0
- Platform: x86_64
- Video: H.264
**Questions:**

- How can I resolve the `Could not find output coverage layer` error?
- What GStreamer-safe video encoding parameters ensure successful decoding with `nvv4l2decoder`?
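For reference, a sketch of the relevant `[property]` section of my nvinfer config as it currently stands (engine path as above; other keys omitted). I have not set a custom bbox parser, which I suspect is why nvinfer falls back to looking for DetectNet-style coverage/bbox output layers:

```
[property]
model-engine-file=yolov8n-face-lindevs1.engine
num-detected-classes=4
cluster-mode=2
# No parse-bbox-func-name / custom-lib-path set yet -- without a custom
# parser, nvinfer's default parser expects DetectNet-style "coverage"
# output layers, which a YOLOv8 engine does not produce.
```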