Segmentation Fault in DeepStream Pipeline with YOLOv8n-Face Integration

Hi,

I’m working on a DeepStream pipeline for person detection, face detection, and classification. I recently switched the face detection model to YOLOv8n-face, which adds landmark detection for face alignment. However, I’m now encountering a segmentation fault that crashes the pipeline.

Has anyone experienced a similar issue or can offer guidance on how to resolve this?

Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (For bugs: include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (For new requirements: include the module name, i.e. which plugin or which sample application, and the function description.)
• The pipeline being used

Are you trying this sample?

Thanks for your response! I’d like to know how the NvDsFacialLandmarks plugin can help me check face alignment, or what DeepStream offers for this purpose.

This is the Graph Composer extension for Facial Landmarks Estimation | NVIDIA NGC. It can’t be used with the YOLOv8n-face model you mentioned.

What DeepStream provides for the YOLOv8n-face model is the basic inferencing pipeline and the model integration interfaces.

Thanks for the details. Where exactly are the landmarks stored in the metadata (obj_meta), and how can I extract them?


  • DeepStream SDK Version: 6.3.0
  • CUDA Driver Version: 12.2
  • TensorRT Version: 8.6
  • cuDNN Version: 8.9
  • Hardware: NVIDIA T4 GPU

I want to extract the landmarks from the YOLO model. I need to know exactly where they are stored so I can use them for face alignment.

How to extract landmarks depends on your model. If you are using DeepStream-Yolo-Face, the output layer already includes landmarks. You only need to parse this tensor.
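As an illustrative sketch of what parsing that tensor can look like, assuming a per-detection layout of [cx, cy, w, h, conf] followed by five landmarks as (x, y, visibility) triplets (the actual output layer name and layout depend on how your model was exported, so verify against your engine):

```python
import numpy as np

# Assumed layout per detection row: [cx, cy, w, h, conf,
# then 5 landmarks as (x, y, visibility)] = 20 values.
# Verify this against your exported YOLOv8n-face model.
NUM_LANDMARKS = 5

def parse_detection(row, conf_threshold=0.25):
    """Parse one detection row into a bbox and a landmark list."""
    cx, cy, w, h, conf = row[:5]
    if conf < conf_threshold:
        return None
    # Convert center-format box to top-left format.
    bbox = (float(cx - w / 2), float(cy - h / 2), float(w), float(h))
    landmarks = []
    for i in range(NUM_LANDMARKS):
        lx, ly, lconf = row[5 + i * 3: 8 + i * 3]
        landmarks.append((float(lx), float(ly), float(lconf)))
    return bbox, landmarks

# Synthetic example row: a face centered at (100, 100) in a 40x60 box,
# with five landmarks inside the box.
row = np.array([100, 100, 40, 60, 0.9,
                90, 85, 1.0, 110, 85, 1.0, 100, 100, 1.0,
                92, 115, 1.0, 108, 115, 1.0], dtype=np.float32)
bbox, lmks = parse_detection(row)
```

In a real pipeline this parsing would run inside your custom bounding-box parser (or a pad probe reading the raw tensor meta), and the parsed landmarks would then be attached to the object metadata for downstream use.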

Hello,

I’ve successfully extracted the landmarks using the DeepStream-Yolo-Face model, but I’m encountering an issue when I draw them on the frame. The landmarks appear misaligned and don’t fit within the bounding box of the face.

I followed this example (GitHub - marcoslucianops/DeepStream-Yolo-Face: NVIDIA DeepStream SDK 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 application for YOLO-Face models) for normalizing the landmarks according to the frame dimensions, but the landmarks seem to shift positions instead of aligning correctly within the face bounding box.

My original frame dimensions are 1280x720, and the streammux dimensions are 960x540. Could anyone provide guidance on how to normalize the landmarks correctly? Alternatively, is it possible to modify the tensor to match my original frame dimensions?

Thanks in advance for your help!

This sample already shows how to calculate the normalized coordinates.

There are several ways to do this:

  1. Set the width and height of nvstreammux to be the same as the original frame dimensions.

  2. Add nvvideoconvert after nvinfer to scale the frames to the desired resolution.
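If you instead keep the mismatched resolutions, the coordinate math is a simple per-axis ratio. A minimal sketch, using the dimensions from the question (landmarks produced at the 960x540 streammux resolution, mapped back to the 1280x720 source frame):

```python
def scale_landmarks(landmarks, mux_size, frame_size):
    """Map (x, y) points from streammux resolution back to the
    original frame resolution using a per-axis scale factor."""
    mux_w, mux_h = mux_size
    frame_w, frame_h = frame_size
    sx = frame_w / mux_w   # e.g. 1280 / 960
    sy = frame_h / mux_h   # e.g. 720 / 540
    return [(x * sx, y * sy) for x, y in landmarks]

# A landmark at (480, 270), the center of the 960x540 mux frame,
# maps to the center of the 1280x720 source frame.
scaled = scale_landmarks([(480, 270)], (960, 540), (1280, 720))
```

The same scale factors apply to the bounding-box coordinates, so the landmarks stay consistent with the box they belong to.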

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.