Issue With segmentation mask

Please provide complete information as applicable to your setup.
DeepStream 6.2, dGPU

I'm trying to run the deepstream-segmentation-test sample on a custom model, but I'm getting a black screen.

This is how my config looks:

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
onnx-file=yolov8n-face.onnx
model-engine-file=yolov8n-face.onnx_b1_gpu0_fp32.engine
#int8-calib-file=calib.table
labelfile-path=labels.txt
batch-size=1
network-mode=0
num-detected-classes=1
interval=0
gie-unique-id=1
process-mode=1
network-type=3
cluster-mode=4
maintain-aspect-ratio=1
symmetric-padding=1
#workspace-size=2000
parse-bbox-instance-mask-func-name=NvDsInferParseYoloFace
custom-lib-path=nvdsinfer_custom_impl_Yolo_face/libnvdsinfer_custom_impl_Yolo_face.so
output-instance-mask=1

[class-attrs-all]
pre-cluster-threshold=0.25
topk=300

I have uploaded all the required files, including the custom parser that I obtained from https://github.com/marcoslucianops/DeepStream-Yolo-Face

./deepstream-segmentation-app config_infer_primary_yoloV8_face.txt test_img.jpg 
Now playing: test_img.jpg,
0:00:03.582079357 124445 0x5623826a6d00 INFO                 nvinfer gstnvinfer.cpp:751:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1909> [UID = 1]: deserialized trt engine from :DeepStream-Yolo-Face/yolov8n-face.onnx_b1_gpu0_fp32.engine
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 4
0   INPUT  kFLOAT images          3x640x640       
1   OUTPUT kFLOAT output0         80x80x80        
2   OUTPUT kFLOAT 389             80x40x40        
3   OUTPUT kFLOAT 397             80x20x20        

0:00:03.672320406 124445 0x5623826a6d00 INFO                 nvinfer gstnvinfer.cpp:751:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2012> [UID = 1]: Use deserialized engine model: yolov8n-face.onnx_b1_gpu0_fp32.engine
0:00:03.674044403 124445 0x5623826a6d00 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary-nvinference-engine> [UID 1]: Load new model:DeepStream-Yolo-Face/config_infer_primary_yoloV8_face.txt sucessfully
Running...
in videoconvert caps = video/x-raw(memory:NVMM), format=(string)RGBA, framerate=(fraction)1/1, width=(int)80, height=(int)80
nvstreammux: Successfully handled EOS for source_id=0
Got EOS from stream 0
End of stream
Returned, stopping playback
Deleting pipeline

I did not see anything except a black screen.
I tried changing the width and height of nvsegvisual to 80x80, but still nothing.

Can you try to reproduce this issue or guide me on where I'm going wrong?
yolov8-face.zip (54.2 MB)

Can you try running this with GST_DEBUG for more information? Maybe start with GST_DEBUG=3, and if that is not enough, GST_DEBUG=4.
https://gstreamer.freedesktop.org/documentation/tutorials/basic/debugging-tools.html?gi-language=c

e.g.,
GST_DEBUG=4 ./deepstream-segmentation-app config_infer_primary_yoloV8_face.txt test_img.jpg

What is your goal? YOLO-face is a detection network.

deepstream-segmentation-test can only display masks currently. It is for semantic segmentation models.

Do you want an effect like this?

You can refer to deepstream-segmask in deepstream_python_apps.

No, I don't want an effect like that.
I'm building a face detection-recognition pipeline and trying various models. It works fine with models that give bboxes only, but I want to get the facial landmarks too.

I tried modifying the custom parser for a RetinaFace model and got the face bbox and landmarks too. I could do that by modifying NvDsInferObjectDetectionInfo, but the accuracy was very low, either because of the PyTorch → ONNX → TensorRT conversion or because of the model itself. I'm not sure.

So I'm just trying various models that can give me bboxes and landmarks. I don't wish to display a segmentation mask. What the custom parser in that repo does is use NvDsInferInstanceMaskInfo to populate the landmarks. I just want to access those landmarks (rough sketch of what I mean below).

I tried modifying NvDsInferObjectDetectionInfo by adding a field float* mask, but I couldn't get it to work.
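
For illustration, this is roughly what I mean by accessing those landmarks (a sketch only; it assumes gst-nvinfer copies the parser's NvDsInferInstanceMaskInfo buffer into obj_meta->mask_params when output-instance-mask=1, and that the buffer holds (x, y, confidence) triplets per landmark as in that repo):

#include "gstnvdsmeta.h"

/* Sketch of a pad probe (e.g. on the sink pad of the OSD/visualizer element)
 * that prints the landmark buffer attached to each detected face.
 * Assumes mask_params.size is the buffer size in bytes. */
static GstPadProbeReturn
landmarks_probe (GstPad *pad, GstPadProbeInfo *info, gpointer u_data)
{
  GstBuffer *buf = (GstBuffer *) info->data;
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  if (!batch_meta)
    return GST_PAD_PROBE_OK;

  for (NvDsMetaList *l_frame = batch_meta->frame_meta_list; l_frame; l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;
    for (NvDsMetaList *l_obj = frame_meta->obj_meta_list; l_obj; l_obj = l_obj->next) {
      NvDsObjectMeta *obj_meta = (NvDsObjectMeta *) l_obj->data;
      if (!obj_meta->mask_params.data)
        continue;
      float *lm = obj_meta->mask_params.data;
      guint n = obj_meta->mask_params.size / sizeof (float);
      for (guint i = 0; i + 2 < n; i += 3)
        g_print ("landmark: (%.1f, %.1f) conf=%.2f\n", lm[i], lm[i + 1], lm[i + 2]);
    }
  }
  return GST_PAD_PROBE_OK;
}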

DeepStream-Yolo-Face can mark bboxes and landmarks. Can’t it meet your needs?

Or do you want to use RetinaFace to achieve the same effect?

You can implement the post-processing parser in DeepStream following /opt/nvidia/deepstream/deepstream/sources/libs/nvdsinfer_customparser.
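
For example, a skeleton following the instance-mask parser prototype (a sketch only; NvDsInferParseCustomFaceLandmarks is an illustrative name and the decode step is a placeholder, not an existing sample) could look like:

#include <vector>
#include "nvdsinfer_custom_impl.h"

/* Decode the model's output tensors from outputLayersInfo, then push one
 * NvDsInferInstanceMaskInfo per detection. The "mask" buffer is re-purposed
 * to carry the facial landmarks, as the DeepStream-Yolo-Face parser does. */
extern "C" bool NvDsInferParseCustomFaceLandmarks (
    std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
    NvDsInferNetworkInfo const &networkInfo,
    NvDsInferParseDetectionParams const &detectionParams,
    std::vector<NvDsInferInstanceMaskInfo> &objectList)
{
  /* ... model-specific decoding of outputLayersInfo goes here ... */

  NvDsInferInstanceMaskInfo obj;
  obj.classId = 0;
  obj.detectionConfidence = 0.9f;           /* decoded score */
  obj.left = 100; obj.top = 100;            /* decoded bbox in network coordinates */
  obj.width = 50; obj.height = 60;

  obj.mask_width = 0;                       /* unused when the buffer holds landmarks */
  obj.mask_height = 0;
  obj.mask_size = 5 * 3 * sizeof (float);   /* 5 points * (x, y, conf) */
  obj.mask = new float[5 * 3]();            /* fill with the decoded landmark values */

  objectList.push_back (obj);
  return true;
}

/* Compile-time check that the prototype matches what gst-nvinfer expects. */
CHECK_CUSTOM_INSTANCE_MASK_PARSE_FUNC_PROTOTYPE (NvDsInferParseCustomFaceLandmarks);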

Or do you want user-specific metadata?

https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_metadata.html#user-custom-metadata-addition-inside-nvdsbatchmeta
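
A rough sketch of that approach (the meta string, the helper name, and the 15-float size, i.e. 5 landmarks with x, y, confidence, are illustrative assumptions, not part of any existing sample):

#include <string.h>
#include "nvdsmeta.h"

#define LANDMARK_META_STRING "CUSTOM.FACE.LANDMARKS"
#define LANDMARK_FLOATS (5 * 3)   /* 5 points * (x, y, conf) */

static gpointer landmark_meta_copy (gpointer data, gpointer user_data)
{
  NvDsUserMeta *user_meta = (NvDsUserMeta *) data;
  gpointer dst = g_malloc (LANDMARK_FLOATS * sizeof (float));
  memcpy (dst, user_meta->user_meta_data, LANDMARK_FLOATS * sizeof (float));
  return dst;
}

static void landmark_meta_release (gpointer data, gpointer user_data)
{
  NvDsUserMeta *user_meta = (NvDsUserMeta *) data;
  g_free (user_meta->user_meta_data);
  user_meta->user_meta_data = NULL;
}

/* Attach a copy of the landmark floats to one object, e.g. from a pad probe. */
static void
attach_landmark_user_meta (NvDsBatchMeta *batch_meta, NvDsObjectMeta *obj_meta,
    const float *landmarks)
{
  NvDsUserMeta *user_meta = nvds_acquire_user_meta_from_pool (batch_meta);
  float *copy = (float *) g_malloc (LANDMARK_FLOATS * sizeof (float));
  memcpy (copy, landmarks, LANDMARK_FLOATS * sizeof (float));

  user_meta->user_meta_data = copy;
  user_meta->base_meta.meta_type = nvds_get_user_meta_type (LANDMARK_META_STRING);
  user_meta->base_meta.copy_func = landmark_meta_copy;
  user_meta->base_meta.release_func = landmark_meta_release;
  nvds_add_user_meta_to_obj (obj_meta, user_meta);
}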

That's what I'm saying: when I use the DeepStream-Yolo-Face model, I get nothing but a black screen. I have added my config files and models. I want to understand how to do that and how to use this model. I have tried others, but they don't meet my accuracy needs.

OK, I understand what you mean. I will try to use this project to process pictures.

Since you wish to process images, the tracker is unnecessary.

In addition, because the image source immediately hits EOS, nveglglessink cannot display it normally, so I replaced nveglglessink with filesink.
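
Roughly, the sink-related part of the change amounts to something like this (a sketch only, not the exact contents of out.patch; the variable names refer to elements that would already exist in the app):

/* Write the rendered frame to a JPEG file instead of rendering it on screen:
 * ... ! nvdsosd ! nvvideoconvert ! jpegenc ! filesink location=face.jpg */
GstElement *out_conv = gst_element_factory_make ("nvvideoconvert", "out-conv");
GstElement *encoder  = gst_element_factory_make ("jpegenc", "jpeg-encoder");
GstElement *sink     = gst_element_factory_make ("filesink", "file-sink");
g_object_set (G_OBJECT (sink), "location", "face.jpg", NULL);

gst_bin_add_many (GST_BIN (pipeline), out_conv, encoder, sink, NULL);
/* link these after the OSD element, in place of nveglglessink */
gst_element_link_many (nvosd, out_conv, encoder, sink, NULL);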

You can refer to my patch. I have tested it on DS-6.3. This is my test image.

out.patch (6.2 KB)

git apply out.patch

Then rebuild and run. You will get a JPEG file named face.jpg.

Did you use the model that I gave you and get the landmarks?

I will try with your patch

There is no tracker in my pipeline though. Which sample app did you try it with?

I exported my own yolov8n-face.onnx using the steps provided by the project and tested the code.

https://github.com/marcoslucianops/DeepStream-Yolo-Face

Okay, I will try and update.

I'm closing this topic since there has been no update from you for a while, assuming this issue was resolved.
If you still need support, please open a new topic. Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.