No Segmentation Metadata (NvDsInferSegmentationMeta) built with custom segmentation model

Platform: TX2
JetPack: 4.4.1
DeepStream: 5.0
TensorRT: 7.1.3

Hello,

I have an issue using a custom segmentation model in DeepStream.
The UNet sample works just fine: in C++ I receive an NvDsInferSegmentationMeta object with the detection result from nvinfer, and the visual mask looks correct when I output it on a screen.

But when running the same pipeline with my own model it does not work the same way.

Here’s the output when running the pipeline.

gst-launch-1.0 v4l2src device=/dev/video0 ! decodebin ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! m.sink_0 nvstreammux name=m batch-size=1 width=1920 height=1080 live-source=1 ! nvinfer config-file-path=config_infer.txt batch-size=1 unique-id=1 ! nvsegvisual ! nvoverlaysink overlay-w=1920 overlay-h=1080 overlay-x=0 overlay-y=0 overlay=1 sync=false

Setting pipeline to PAUSED ...
0:00:03.080964343  6174   0x557e7b6330 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1701> [UID = 1]: deserialized trt engine from :wire_best.engine
INFO: [Implicit Engine Info]: layers num: 2
0   INPUT  kFLOAT input_image     3x768x1024      
1   OUTPUT kFLOAT preds           1x768x1024      

0:00:03.081109367  6174   0x557e7b6330 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1805> [UID = 1]: Use deserialized engine model: wire_best.engine
0:00:03.086495782  6174   0x557e7b6330 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<nvinfer0> [UID 1]: Load new model:/srv/qualitics/ai/wire/segm/config_infer.txt sucessfully
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
in videoconvert caps = video/x-raw(memory:NVMM), format=(string)RGBA, framerate=(fraction)5/1, width=(int)1280, height=(int)720
Caught SIGSEGV

For this to run I HAVE to specify the width and height on nvsegvisual.
The probable reason is that nvinfer does not build NvDsInferSegmentationMeta at all.
In C++ I checked, and indeed I don't receive anything; the only way for me is to set output-tensor-meta=1 and parse it myself, but I would rather have nvinfer do that (a sketch of that manual parsing follows the config below).
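
In case it helps, this is roughly how I check: a minimal pad-probe sketch, assuming a buffer probe attached to the src pad of nvinfer. The callback name is mine; the struct and constant names are from gstnvdsmeta.h / gstnvdsinfer.h as I understand them in DeepStream 5.0.

#include <gst/gst.h>
#include "gstnvdsmeta.h"
#include "gstnvdsinfer.h"

static GstPadProbeReturn
seg_meta_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  if (!batch_meta)
    return GST_PAD_PROBE_OK;

  /* Walk every frame in the batch and every user meta attached to it */
  for (NvDsMetaList *l_frame = batch_meta->frame_meta_list; l_frame;
      l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;
    for (NvDsMetaList *l_user = frame_meta->frame_user_meta_list; l_user;
        l_user = l_user->next) {
      NvDsUserMeta *user_meta = (NvDsUserMeta *) l_user->data;
      /* With the UNet sample this branch is hit; with my model it never is */
      if (user_meta->base_meta.meta_type == NVDSINFER_SEGMENTATION_META) {
        NvDsInferSegmentationMeta *seg =
            (NvDsInferSegmentationMeta *) user_meta->user_meta_data;
        g_print ("segmentation meta: %ux%u, %u classes\n",
            seg->width, seg->height, seg->classes);
      }
    }
  }
  return GST_PAD_PROBE_OK;
}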

I have no clue why my network output (1x768x1024) with a mask is handled differently from the sample's.
Is there anything I can check or try to determine what the problem with my network is?
The network itself is a custom one that I did not create myself, but I can ask the person responsible for information or changes.
I received an ONNX file from him and converted it with:

/usr/src/tensorrt/bin/trtexec --onnx=wire_best.onnx --saveEngine=wire_best.engine --workspace=256 --buildOnly

Here’s the nvinfer configuration

[property]
gpu-id=0
interval=0
gie-unique-id=1
net-scale-factor=0.003921568627451
batch-size=1
model-engine-file=wire_best.engine
## 0= RGB, 1= BGR
model-color-format=0
## 2 =segmentation
network-type=2
## 0=FP32, 1=INT8, 2=FP16
network-mode=0
num-detected-classes=1
segmentation-threshold=0.5
cluster-mode=4
output-blob-names=preds
output-tensor-meta=1
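
For completeness, with output-tensor-meta=1 I can threshold the raw output myself. A rough sketch of that parsing, assuming a single sigmoid output layer ("preds", 1x768x1024) copied to the host buffers; the function name and the hard-coded sizes are mine:

#include <glib.h>
#include "nvdsmeta.h"
#include "gstnvdsinfer.h"

/* Called for each NvDsUserMeta found in frame_meta->frame_user_meta_list,
 * i.e. the same iteration as the segmentation-meta check. */
static void
parse_raw_mask (NvDsUserMeta *user_meta)
{
  if (user_meta->base_meta.meta_type != NVDSINFER_TENSOR_OUTPUT_META)
    return;

  NvDsInferTensorMeta *tmeta =
      (NvDsInferTensorMeta *) user_meta->user_meta_data;

  /* Layer 0 is "preds" in my engine: one probability per pixel */
  const float *probs = (const float *) tmeta->out_buf_ptrs_host[0];
  const int width = 1024, height = 768;  /* hard-coded for my model */

  int foreground = 0;
  for (int i = 0; i < width * height; i++)
    if (probs[i] > 0.5f)  /* same cut-off as segmentation-threshold=0.5 */
      foreground++;
  g_print ("pixels above threshold: %d\n", foreground);
}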

Thanks

The sample model's output size is 512x512.
Your model's output is 1024 wide and 768 high; you just need to set the “width” and “height” properties of the nvsegvisual plugin to the actual size.

Such as:

gst-launch-1.0 v4l2src device=/dev/video0 ! decodebin ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! m.sink_0 nvstreammux name=m batch-size=1 width=1920 height=1080 live-source=1 ! nvinfer config-file-path=config_infer.txt batch-size=1 unique-id=1 ! nvsegvisual width=1024 height=768 ! nvoverlaysink overlay-w=1024 overlay-h=768 overlay-x=0 overlay-y=0 overlay=1 sync=false
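
If you build the pipeline from application code instead of gst-launch, the equivalent fix would look something like the sketch below (the element handle and names are just placeholders):

/* Same fix as the launch line above, done from C++ */
GstElement *segvisual = gst_element_factory_make ("nvsegvisual", "seg-visual");
g_object_set (G_OBJECT (segvisual),
    "width", 1024,    /* network output width  */
    "height", 768,    /* network output height */
    NULL);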

The nvoverlaysink plugin is deprecated in L4T release 32.1. Please do not use the nvoverlaysink plugin.

DeepStream SDK FAQ - Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums

So you mean that nvsegvisual does not receive the network output size from nvinfer, even though it can be found inside the buffer?
The plugin's default values are 1280x720, so if it works properly with the samples I should not have to specify it either.

Also what is the replacement for nvoverlaysink?

No, it cannot.

Almost all DeepStream samples use “eglglessink”.

Then I am very confused, because with the sample, nvsegvisual DOES get the output resolution directly from nvinfer and I don't have to specify anything.

I saw eglglessink, but I didn't manage to use it without an X server, which is the whole point of my using nvoverlaysink.

There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

What do you mean? Do you mean the sample picture “sample_industrial.jpg”? Or do you mean the deepstream-segmentation-test sample application?