Can deepstream-app visualize segmentation masks?

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) Jetson
• DeepStream Version 5.0.1
• JetPack Version (valid for Jetson only)
• TensorRT Version 7.0
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs)
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

This is my config file (primary-gie: detection, sgie0: classification, sgie1: segmentation):

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl

[tiled-display]
enable=1
rows=1
columns=1
width=1920
height=1080
gpu-id=0
#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory, applicable for Tesla
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory, applicable for Tesla
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory, applicable for Tesla
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
nvbuf-memory-type=0

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=2
#uri=file:/home/nvidia/yolov5-in-deepstream/Deepstream-5.0/test.h264
#uri=rtsp://172.16.104.168/0
uri=file:/home/nvidia/yolov5-in-deepstream/Deepstream-5.0/all.mp4
#uri=file:/home/nvidia/Documents/5-Materials/Videos/0825.avi
num-sources=1
gpu-id=0

#(0): memtype_device   - Memory type Device
#(1): memtype_pinned   - Memory type Host Pinned
#(2): memtype_unified  - Memory type Unified
cudadec-memtype=0
camera-v4l2-dev-node=1
camera-width=1920
camera-height=1080
#drop-frame-interval=1

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=2
sync=0
source-id=0
gpu-id=0
nvbuf-memory-type=0
#1=mp4 2=mkv
#container=1
#1=h264 2=h265
#codec=1
#output-file=yolov5.mp4
#rtsp-port=8553
#iframeinterval=1
#enc-type=0

[osd]
enable=1
gpu-id=0
border-width=2
text-size=12
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=0
batch-size=1
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000

## Set muxer output width and height
width=1280
height=720
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0

# config-file property is mandatory for any gie section.
# Other properties are optional and if set will override the properties set in
# the infer config file.

[primary-gie]
enable=1
gpu-id=0
model-engine-file=yolov5s-2.engine
labelfile-path=labels2.txt
#batch-size=1
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
interval=0
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary_yoloV5.txt

[secondary-gie0]
enable=1
gpu-id=0
gie-unique-id=2
operate-on-gie-id=1
nvbuf-memory-type=0
config-file=config_infer_thy_cls.txt

[secondary-gie1]
enable=1
gpu-id=0
gie-unique-id=3
operate-on-gie-id=1
nvbuf-memory-type=0
config-file=config_infer_thy_seg.txt

[tracker]
enable=1
tracker-width=512
tracker-height=320
ll-lib-file=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_mot_klt.so

[tests]
file-loop=1

Hi,

It seems that you are using a segmentation-type SGIE.
Do you want to apply the segmentation only to the ROI detected by the PGIE?

DeepStream does have a component (nvsegvisual) that can visualize segmentation output.
Please check the documentation and example below:
https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvsegvisual.html
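For reference, a minimal standalone pipeline with nvsegvisual might look like the sketch below (Jetson sink elements; the stream and config paths are assumptions based on the DeepStream 5.x samples layout, so please adjust them to your install):

gst-launch-1.0 filesrc location=/opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.mjpeg ! \
  jpegparse ! nvv4l2decoder ! m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! \
  nvinfer config-file-path=/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-segmentation-test/dstest_segmentation_config_semantic.txt ! \
  nvsegvisual width=1280 height=720 ! nvegltransform ! nveglglessink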

Thanks.

Yes, only classification and detection results can be displayed at present. That is to say, I need to rebuild a pipeline like deepstream-app's and then integrate nvsegvisual?

This is the effect I want.

If the OSD can't display masks, why are there mask-related parameters?
From deepstream_osd_bin.c (lines 99-102):

g_object_set (G_OBJECT (bin->nvosd), "gpu-id", config->gpu_id, NULL);
g_object_set (G_OBJECT (bin->nvosd), "display-text", config->draw_text, NULL);
g_object_set (G_OBJECT (bin->nvosd), "display-bbox", config->draw_bbox, NULL);
g_object_set (G_OBJECT (bin->nvosd), "display-mask", config->draw_mask, NULL);
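For what it's worth, these properties seem to correspond to keys in the [osd] group of the app config file; my guess (untested) is that enabling the mask would look like:

[osd]
enable=1
display-mask=1
display-bbox=1
display-text=1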

Hi,

To use nvsegvisual, you will need to use deepstream-segmentation-analytics rather than deepstream-app.

And as you mentioned, you can also use the OSD for the mask.
The output will look like the image shared in the blog below:
https://developer.nvidia.com/blog/training-instance-segmentation-models-using-maskrcnn-on-the-transfer-learning-toolkit/

The corresponding example configuration files can be found at the following locations:

/opt/nvidia/deepstream/deepstream-5.1/samples/configs/tlt_pretrained_models/deepstream_app_source1_mrcnn.txt
/opt/nvidia/deepstream/deepstream-5.1/samples/configs/tlt_pretrained_models/config_infer_primary_mrcnn.txt
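If it helps, the mask-related settings in those files are roughly the following (quoted from memory, so please verify against your local copies):

# deepstream_app_source1_mrcnn.txt
[osd]
display-mask=1

# config_infer_primary_mrcnn.txt
[property]
## 3 = instance segmentation network
network-type=3
output-instance-mask=1
## plus a custom mask parser, e.g.
#parse-bbox-instance-mask-func-name=NvDsInferParseCustomMrcnnTLT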

Thanks.

Thank you for your reply. Your sample image shows the result of instance segmentation, but my pipeline is detection (PGIE) -> segmentation (SGIE). Can the OSD display the result just like instance segmentation?

Hi,

The example sets network-type to 3 for instance segmentation.
Could you try the same configuration with your SGIE model?
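For your setup, that would mean something like the following in config_infer_thy_seg.txt (a sketch, assuming your SGIE model actually outputs instance masks and you have a matching mask parser; a purely semantic segmentation model will not work this way):

[property]
## run as secondary, on objects detected by the PGIE
process-mode=2
gie-unique-id=3
operate-on-gie-id=1
## 3 = instance segmentation network
network-type=3
output-instance-mask=1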

Thanks.