DeepStream pipeline does not work with yolov11-face and yolov11-cls

Please provide complete information as applicable to your setup.

• Hardware Platform: NVIDIA Jetson Orin NX Engineering Reference Developer Kit
• DeepStream Version: deepstream-7.0
• JetPack Version: JetPack 6.0
• TensorRT Version: TensorRT v8602
• NVIDIA GPU Driver Version: 540.3.0 (CUDA 12.2)
• Issue Type: pipeline does not work with yolo models

Hello!
I have a problem with my DeepStream pipeline, in which I detect people, then their faces, and then classify whether each face is occluded or not.

To be clear, I have two problems: one related to the face detection model and the other to the classification model.

The people detection model is the PGIE, the face detection model is an SGIE, and the classification model is a second SGIE that runs on the face detections.
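For reference, the cascade could be launched roughly like this (the file names, resolution, and sink element are placeholders I chose for illustration, not my exact launch line):

```shell
# Rough sketch of the three-stage cascade (paths and sink are placeholders):
#   PGIE  = person detector      (gie-unique-id=1)
#   SGIE1 = face detector        (gie-unique-id=2, operates on person objects)
#   SGIE2 = occlusion classifier (gie-unique-id=3, operates on face objects)
gst-launch-1.0 uridecodebin uri=file:///path/to/input.mp4 ! m.sink_0 \
  nvstreammux name=m batch-size=1 width=1920 height=1080 ! \
  nvinfer config-file-path=pgie_person.txt ! \
  nvinfer config-file-path=sgie_face.txt ! \
  nvinfer config-file-path=sgie_occlusion.txt ! \
  nvdsosd ! nv3dsink
```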

People detection model configuration:

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
onnx-file=../yolo_models/yolov12_person.onnx
model-engine-file=../yolo_engines/yolov12_person.engine
labelfile-path=../yolo_labels/labels_person.txt
network-mode=0
num-detected-classes=1
interval=0
gie-unique-id=1
process-mode=1
network-type=0
cluster-mode=2
maintain-aspect-ratio=1
symmetric-padding=1
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=../yolo_libs/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet
#classifier-async-mode=1

[class-attrs-all]
nms-iou-threshold=0.45
pre-cluster-threshold=0.25
topk=300

Face detection model configuration:

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
onnx-file=../yolo_models/yolov8n-face.onnx
model-engine-file=../yolo_engines/yolov8n-face.engine
labelfile-path=../yolo_labels/labels_face.txt
network-mode=2
num-detected-classes=1
interval=0
gie-unique-id=2
process-mode=2
network-type=3
cluster-mode=4
maintain-aspect-ratio=1
symmetric-padding=1
parse-bbox-instance-mask-func-name=NvDsInferParseYoloFace
custom-lib-path=../yolo_libs/libnvdsinfer_custom_impl_Yolo_face.so
output-instance-mask=1
#scaling-compute-hw=1
operate-on-class-ids=1

[class-attrs-all]
pre-cluster-threshold=0.25
topk=300

Occlusion classification model configuration:

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
onnx-file=../yolo_models/yolov11-cls.onnx
model-engine-file=../yolo_engines/yolov11-cls-trt.engine
labelfile-path=../yolo_labels/labels_cls.txt
num-detected-classes=2
classifier-async-mode=0
classifier-threshold=0.1
process-mode=2
network-mode=2
network-type=1
cluster-mode=4
gie-unique-id=3
operate-on-gie-id=2
operate-on-class-ids=0

[class-attrs-all]
topk=3                     
pre-cluster-threshold=0.2
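Independently of the engine problems, it may be worth sanity-checking how the three configs above are wired together. A stdlib sketch (the config strings below are abridged copies of the posted configs, paths omitted); note that with num-detected-classes=1 the person PGIE can only emit class id 0, so operate-on-class-ids=1 in the face SGIE config may silently match nothing:

```python
import configparser

# Abridged copies of the three posted nvinfer configs (paths omitted).
PGIE = """
[property]
gie-unique-id=1
process-mode=1
network-type=0
num-detected-classes=1
"""

FACE_SGIE = """
[property]
gie-unique-id=2
process-mode=2
network-type=3
operate-on-class-ids=1
"""

CLS_SGIE = """
[property]
gie-unique-id=3
process-mode=2
network-type=1
operate-on-gie-id=2
operate-on-class-ids=0
"""

def props(text):
    cp = configparser.ConfigParser()
    cp.read_string(text)
    return cp["property"]

def check_cascade(pgie, sgie1, sgie2):
    """Return a list of wiring problems between the three stages."""
    issues = []
    p, f, c = props(pgie), props(sgie1), props(sgie2)
    # The classifier must point at the face detector's unique id.
    if c.get("operate-on-gie-id") != f["gie-unique-id"]:
        issues.append("classifier operate-on-gie-id does not match face gie-unique-id")
    # The face SGIE should operate on a class id the PGIE can actually emit.
    # A 1-class detector only produces class id 0.
    if f.get("operate-on-class-ids", "0") != "0":
        issues.append("face SGIE operates on class id %s, but a 1-class PGIE only emits class 0"
                      % f["operate-on-class-ids"])
    return issues

print(check_cascade(PGIE, FACE_SGIE, CLS_SGIE))
```

This does not explain the engine-build failures, but it would explain an SGIE that never fires even once the engine loads.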

1st problem:
I took the standard yolo11l-face model and converted it to ONNX using the export script from the “DeepStream-Yolo-Face” repository. The ONNX file was created successfully. I also took the DeepStream configuration file from the same repository. Unfortunately, the pipeline crashes immediately after the engine file is created, with the following error:

0:16:17.726601997 439784 0xaaab1a971830 ERROR                nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger:<face> NvDsInferContext[UID 2]: Error in NvDsInferContextImpl::resizeOutputBufferpool() <nvdsinfer_context_impl.cpp:1463> [UID = 2]: Failed to allocate cuda output buffer during context initialization
0:16:17.726661455 439784 0xaaab1a971830 ERROR                nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger:<face> NvDsInferContext[UID 2]: Error in NvDsInferContextImpl::allocateBuffers() <nvdsinfer_context_impl.cpp:1595> [UID = 2]: Failed to allocate output bufferpool

0:16:17.726685871 439784 0xaaab1a971830 ERROR                nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger:<face> NvDsInferContext[UID 2]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1375> [UID = 2]: Failed to allocate buffers

After that, I tried creating the engine file with the trtexec command, but unfortunately the result was the same.
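For reference, the trtexec invocation I would expect for a 640×640 model with a dynamic batch dimension looks roughly like this (the input tensor name `images` and the shapes are assumptions based on typical Ultralytics exports, not verified against this model):

```shell
# Sketch of building the engine with trtexec (JetPack 6.0 / TensorRT 8.6).
# "images" is the usual input name of an Ultralytics ONNX export; verify it
# with Netron or polygraphy before relying on it.
/usr/src/tensorrt/bin/trtexec \
  --onnx=yolov11l-face.onnx \
  --saveEngine=yolov11l-face.engine \
  --fp16 \
  --minShapes=images:1x3x640x640 \
  --optShapes=images:1x3x640x640 \
  --maxShapes=images:1x3x640x640
```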

I started googling the problem and found the thread “Deepstream Pipeline - Yolo Face Detection Model - Yolov8n”, where you posted yolov8n-face.onnx; with that model my pipeline ran successfully.

Then I tried several approaches to make sure I had done everything I could:

  1. I converted the yolov11l-face.pt file following the Ultralytics documentation; it didn’t work.
  2. I converted the yolov11l-face.pt file using the export scripts from the DeepStream repositories (Seg, Pose, Face, Detection); it didn’t help.

Moreover, I took yolov8n-face.pt and converted it using the script from the Face repository, and that did not help either.

Unfortunately, I have no idea how to resolve this problem on my own; it is the first time that nothing has helped me.

I would be very grateful for your help, since I don’t want to use the nano model; I want to use yolo11.


To check my classification model, I temporarily used yolov8n-face.onnx, and again I ran into a problem.

2nd problem:
The second problem is that either the pipeline cannot create the engine from the ONNX file (first case), or the classifier does not produce any output (second case).

1st case:
I converted my yolo11-cls.pt file using the export script described in the Ultralytics documentation. During engine creation, the pipeline crashes with the following error:

WARNING: [TRT]: onnx2trt_utils.cpp:372: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
ERROR: [TRT]: 3: [optimizationProfile.cpp::setDimensions::119] Error Code 3: API Usage Error (Parameter check failed at: optimizer/api/optimizationProfile.cpp::setDimensions::119, condition: std::all_of(dims.d, dims.d + dims.nbDims, [](int32_t x) noexcept { return x >= 0; }))
ERROR: [TRT]: 3: [optimizationProfile.cpp::setDimensions::119] Error Code 3: API Usage Error (Parameter check failed at: optimizer/api/optimizationProfile.cpp::setDimensions::119, condition: std::all_of(dims.d, dims.d + dims.nbDims, [](int32_t x) noexcept { return x >= 0; }))
ERROR: [TRT]: 3: [optimizationProfile.cpp::setDimensions::119] Error Code 3: API Usage Error (Parameter check failed at: optimizer/api/optimizationProfile.cpp::setDimensions::119, condition: std::all_of(dims.d, dims.d + dims.nbDims, [](int32_t x) noexcept { return x >= 0; }))

2nd case:
I built the engine for my yolo11-cls model with the trtexec command on my device. The pipeline then ran successfully, but NvDsClassifierMetaList does not contain any data.

Conclusion:
I tried another device, an Orin NX dev kit with deepstream-7.1, and ran into the same problems. I would be very grateful for any help, since I have already tried everything I know…

The input or output layers may have dynamic dimensions. Some extra work may be needed to generate an ONNX model that DeepStream can support. Please refer to deepstream_tools/yolo_deepstream/deepstream_yolo at main · NVIDIA-AI-IOT/deepstream_tools

DeepStream only supports a dynamic size in the first dimension (the batch size).
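That rule is easy to encode as a quick check. Given an input or output shape read from the ONNX graph (e.g. with Netron or polygraphy; the shapes below are illustrative, with -1 standing for a dynamic dimension):

```python
def deepstream_supported_shape(dims):
    """DeepStream accepts a dynamic (-1) size only in dimension 0 (batch);
    every other dimension must be a fixed positive integer."""
    if not dims:
        return False
    return all(d > 0 for d in dims[1:])

# Illustrative NCHW shapes; -1 marks a dynamic dimension.
print(deepstream_supported_shape([-1, 3, 224, 224]))  # dynamic batch only -> True
print(deepstream_supported_shape([1, 3, 224, 224]))   # fully static -> True
print(deepstream_supported_shape([-1, 3, -1, -1]))    # dynamic H/W -> False
```

A shape that fails this check would produce exactly the setDimensions “condition: x >= 0” errors quoted above, because the optimization profile receives a negative size in a non-batch dimension.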

The issue happens in TensorRT. Please ask in the TensorRT forum: Latest Deep Learning (Training & Inference)/TensorRT topics - NVIDIA Developer Forums

That is the DeepStream-Yolo repository, not DeepStream-Yolo-Face; or does it not matter in this case?

The failure is similar. You need to be familiar with the model’s structure; it is just a reference.

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks
