Segmentation using sample python application

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): T4
• DeepStream Version: 5.0
• TensorRT Version: 7
• CUDA Version: 10.2
• cuDNN Version: 7.6.5
• Ubuntu Version: 18.04
• NVIDIA GPU Driver Version: 440.33.01

Hi, I am currently working on a segmentation problem using DeepStream and want to build the pipeline in Python.

So I modified the deepstream-test1-rtsp-out sample application pipeline, and the current pipeline looks like this:

file-source → h264-parser → nvh264-decoder → nvinfer → nvsegvisual → nvvidconv → nvosd → nvvidconv_postosd → caps → encoder → rtppay → udpsink

Basically, I have added the nvsegvisual plugin to the pipeline, but I do not get a proper output: the output resolution is reduced significantly.
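For concreteness, here is a sketch of that pipeline as a gst-launch-style string built in Python. The element names are the DeepStream plugins listed above, but the file location, config path, and property values are placeholders, not the exact sample code:

```python
# Sketch of the modified pipeline as a gst-launch-style description.
# Paths and property values are placeholders; element names are the
# DeepStream plugins from the pipeline above.
def build_launch(width=1280, height=720,
                 config="dstest_segmentation_config_semantic.txt"):
    return (
        "filesrc location=sample_720p.h264 ! h264parse ! nvv4l2decoder ! "
        f"m.sink_0 nvstreammux name=m batch-size=1 width={width} height={height} ! "
        f"nvinfer config-file-path={config} ! "
        "nvsegvisual ! nvvideoconvert ! nvdsosd ! "
        "nvvideoconvert ! video/x-raw(memory:NVMM),format=I420 ! "
        "nvv4l2h264enc ! rtph264pay ! udpsink host=127.0.0.1 port=5400"
    )

launch = build_launch()
# This string could be handed to Gst.parse_launch() instead of
# creating and linking each element by hand.
```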

I used the Industrial and Semantic models with their configuration files. I have attached a screenshot of the output that I get.

[Screenshot attached: Screenshot from 2021-01-14 19-07-32]

I would like to know what extra changes need to be made. Also, are there any guides for building a segmentation pipeline in Python with DeepStream?

Thank You!

I think the output is correct per your screenshot.

Hi @bcao, thanks for the reply. But the output video size decreases as well. Is it designed to be like that?

Also, even for a single-class model, multiple colour masks can be observed, as seen in the screenshot.

Yeah, for nvsegvisual the output size will be the same as your model's output, and you can configure the width/height by setting the plugin's corresponding properties.
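A small sketch of what that configuration might look like; nvsegvisual does expose `width` and `height` properties, but the values below are just examples for a 720p stream:

```python
# Illustrative helper: nvsegvisual's "width" and "height" properties
# upscale the visualized mask from the model's network output size.
# The 1280x720 default here is an example, not a required value.
def nvsegvisual_props(width=1280, height=720):
    return {"width": width, "height": height}

# In the pipeline code (element creation not shown):
#   for name, value in nvsegvisual_props().items():
#       segvisual.set_property(name, value)
```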

There will be a background color, I think. What do you expect for a single-class segmentation model?
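To illustrate the point with a toy example (pure Python, not the DeepStream API; the class IDs and palette are invented): even a single-class model produces a mask with two IDs, the class and the background, so the visualizer draws at least two colors:

```python
# Toy illustration of per-pixel class-ID colorization, the idea behind
# what nvsegvisual renders. IDs and palette values are made up.
PALETTE = {
    -1: (0, 0, 0),    # background / no class assigned
    0:  (0, 255, 0),  # the single foreground class
}

def colorize(mask):
    """Map a 2D class-ID mask to a 2D grid of RGB tuples."""
    return [[PALETTE[cls] for cls in row] for row in mask]

mask = [
    [-1, -1, 0],
    [-1,  0, 0],
]
colored = colorize(mask)  # two distinct colors appear in the output
```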