DeepStream Pipeline - YOLO Face Detection Model - YOLOv8n

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) RTX 4060
• DeepStream Version 7.1
• JetPack Version (valid for Jetson only)
• TensorRT Version 10
• NVIDIA GPU Driver Version (valid for GPU only) 12.7
• Issue Type( questions, new requirements, bugs)

I'm trying to integrate my custom YOLO face detection model, but I am not able to get it working.
The engine file is created successfully, but the created engine reports zero layers. When I run inference on the ONNX model directly, it shows results. I also tried manually converting the ONNX to an engine using the Ultralytics YOLO export method; that conversion succeeded, but the resulting engine does not integrate into DeepStream because of a version mismatch. Kindly help me out here.

I'm using this repo here: GitHub - marcoslucianops/DeepStream-Yolo-Face: NVIDIA DeepStream SDK 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 application for YOLO-Face models

error:
root@AAM-LAPTOP-027:/opt/nvidia/deepstream/deepstream-7.1/sources/DeepStream-Yolo-Face# python3 deepstream.py -s file:///opt/nvidia/deepstream/deepstream-7.1/sources/DeepStream-Yolo-Face/new.264 -c config_infer_primary_yoloV8_face.txt
/opt/nvidia/deepstream/deepstream-7.1/sources/DeepStream-Yolo-Face/deepstream.py:2: PyGIWarning: Gst was imported without specifying a version first. Use gi.require_version('Gst', '1.0') before import to ensure that the right version gets loaded.
from gi.repository import Gst, GLib
/opt/nvidia/deepstream/deepstream-7.1/sources/DeepStream-Yolo-Face/deepstream.py:84: DeprecationWarning: Gst.Element.get_request_pad is deprecated
streammux_sink_pad = streammux.get_request_pad(pad_name)
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
[NvMultiObjectTracker] Initialized
0:00:00.338649147 7921 0x55e29cbc05c0 INFO nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2106> [UID = 1]: Trying to create engine from model files
0:01:00.419229654 7921 0x55e29cbc05c0 INFO nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2138> [UID = 1]: serialize cuda engine to file: /opt/nvidia/deepstream/deepstream-7.1/sources/DeepStream-Yolo-Face/yolov8n-face-lindevs.onnx_b1_gpu0_fp32.engine successfully
Implicit layer support has been deprecated
INFO: …/nvdsinfer/nvdsinfer_model_builder.cpp:327 [Implicit Engine Info]: layers num: 0

0:01:00.759007975 7921 0x55e29cbc05c0 INFO nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus: [UID 1]: Load new model:config_infer_primary_yoloV8_face.txt sucessfully
Failed to query video capabilities: Inappropriate ioctl for device
0:01:01.121291024 7921 0x55e330e9ef60 ERROR nvinfer gstnvinfer.cpp:678:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::parseBoundingBox() <nvdsinfer_context_impl_output_parsing.cpp:60> [UID = 1]: Could not find output coverage layer for parsing objects
0:01:01.121321210 7921 0x55e330e9ef60 ERROR nvinfer gstnvinfer.cpp:678:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::fillDetectionOutput() <nvdsinfer_context_impl_output_parsing.cpp:736> [UID = 1]: Failed to parse bboxes
Segmentation fault (core dumped)

This error indicates that the ONNX you exported has errors and cannot be parsed normally. In fact, I tested the repository and it worked fine. You can try the ONNX file I exported:

yolov8n-face.onnx (11.8 MB)

This repository is not officially supported, so it is better to raise an issue on GitHub.


Oh, I can't thank you enough; the issue has finally been resolved. Just one small thing: can you please tell me which method you used to export the .pt file to the ONNX format?

Second, I have another model, an emotion model, that I want to use as the second inference. Please guide me on how to integrate it into DeepStream after the first primary model, which is working fine now.

Note: I have already tested both models separately and they work fine. One model is already set up in DeepStream; now I want to know how I can bring the second model into DeepStream as well.

Thanks in advance.

DS-7.1 depends on CUDA 12.6, so you need to install the latest PyTorch:

pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu126

Then modify export_yoloV8_face.py:

 import torch.nn as nn
 from copy import deepcopy
 from ultralytics import YOLO
-from ultralytics.yolo.utils.torch_utils import select_device
+from ultralytics.utils.torch_utils import select_device
 from ultralytics.nn.modules import C2f, Detect, RTDETRDecoder

Export a dynamic ONNX model:

python3 export_yoloV8_face.py -w ../yolov8n-face.pt --dynamic
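
As an optional sanity check (not part of the repo), you can load the exported ONNX with the onnx Python package before handing it to DeepStream; the file name below is an assumption, use whatever the export script actually writes:

import onnx

# Assumed output name; replace with the file the export script actually produced
model = onnx.load("yolov8n-face.onnx")
# Raises an exception if the graph is malformed
onnx.checker.check_model(model)

# Print the graph inputs/outputs so you can see the layers DeepStream's parser will look for
for inp in model.graph.input:
    print("input :", inp.name)
for out in model.graph.output:
    print("output:", out.name)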

Is this a classification model? deepstream-test2 is a sample of a detector + multiple classifiers:

/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test2/deepstream_test2_app.c
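
In the Python pipeline from the DeepStream-Yolo-Face repo, the idea would be similar: add a second nvinfer element configured as an SGIE and link it after the tracker. A minimal sketch only; variable names such as pipeline, tracker, and next_element are placeholders for whatever your deepstream.py actually uses, and the SGIE config file name is hypothetical:

# Sketch only: to be placed in deepstream.py, where Gst is already imported via
#   gi.require_version('Gst', '1.0'); from gi.repository import Gst
sgie = Gst.ElementFactory.make("nvinfer", "secondary-inference")
sgie.set_property("config-file-path", "config_infer_secondary_emotion.txt")  # hypothetical SGIE config
pipeline.add(sgie)

# Re-link so objects flow pgie -> tracker -> sgie -> (rest of the pipeline)
tracker.link(sgie)
sgie.link(next_element)  # next_element = whatever element the tracker linked to before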


Thank you. I converted the models myself by following your guide and it is working fine.
Now I want to use my second model, which takes its input from the first model (yolov8n-face) and feeds those detections to my emotion model. Here is the architecture of the emotion model's ONNX, as shown by the Netron app:

format: ONNX v7
producer: tf2onnx 1.16.1 e1042c
version: 0
imports: ai.onnx v13, ai.onnx.ml v2
graph: tf2onnx
description: converted from model_1
input: name: input, tensor: float32[unk__191,64,64,1]
output: name: predictions, tensor: float32[unk__192,7]

Now, can I integrate both of these models into deepstream-test2, or do I have to make some custom changes as well?

This requires some simple modifications, treating the classification model as an SGIE.
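
For what it's worth, a secondary-classifier config along these lines could be a starting point. This is only a sketch based on the Netron dump above: the model file and label file names are hypothetical, and since tf2onnx exports are NHWC with a 64x64x1 grayscale input, keys such as network-input-order, model-color-format, and net-scale-factor may need adjusting for your model:

[property]
gpu-id=0
# hypothetical file and label names
onnx-file=emotion.onnx
labelfile-path=labels_emotion.txt
batch-size=16
# 0=FP32
network-mode=0
# 1=classifier
network-type=1
# 2=secondary: operate on objects produced by the PGIE
process-mode=2
gie-unique-id=2
# run on the faces detected by the YOLOv8-face PGIE (gie-unique-id=1)
operate-on-gie-id=1
# 2=GRAY, since the input is float32[N,64,64,1]
model-color-format=2
# 1=NHWC, as tf2onnx usually exports
network-input-order=1
# 1/255, if the model expects inputs scaled to [0,1]
net-scale-factor=0.0039215697906911373
output-blob-names=predictions
classifier-threshold=0.5
classifier-async-mode=0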


Can you provide me a basic template based on the repo code here: DeepStream-Yolo-Face/deepstream.py at master · marcoslucianops/DeepStream-Yolo-Face · GitHub

and the config here: DeepStream-Yolo-Face/config_infer_primary_yoloV8_face.txt at master · marcoslucianops/DeepStream-Yolo-Face · GitHub

Where and what changes do I have to make so I can integrate the second model, the emotion detection model, into DeepStream?

These are the inputs and outputs of the emotion model:

input: name: input, tensor: float32[unk__191,64,64,1]
output: name: predictions, tensor: float32[unk__192,7]

I'm new to DeepStream and still learning, and I would really appreciate your help here.
Thank you!