Platform: TX2
JetPack: 4.4.1
DeepStream: 5.0
TensorRT: 7.1.3
Hello,
I have an issue using a custom segmentation model in DeepStream.
The UNet sample works just fine: in C++ I receive an NvDsInferSegmentationMeta object with the segmentation result from nvinfer, and if I display the visual mask on screen it looks correct.
But when I run the same pipeline with my own model, it does not behave the same way.
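For reference, this is roughly how I look for that meta in C++, via a probe on the nvinfer src pad (a simplified sketch; the function name and the print are mine, header locations as in my SDK install):

#include <gst/gst.h>
#include "gstnvdsmeta.h"    /* NvDsBatchMeta, gst_buffer_get_nvds_batch_meta() */
#include "gstnvdsinfer.h"   /* NvDsInferSegmentationMeta, NVDSINFER_SEGMENTATION_META */

static GstPadProbeReturn
seg_probe (GstPad * pad, GstPadProbeInfo * info, gpointer user_data)
{
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  if (!batch_meta)
    return GST_PAD_PROBE_OK;

  for (NvDsMetaList * l_frame = batch_meta->frame_meta_list; l_frame;
      l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;
    for (NvDsMetaList * l_user = frame_meta->frame_user_meta_list; l_user;
        l_user = l_user->next) {
      NvDsUserMeta *user_meta = (NvDsUserMeta *) l_user->data;
      if (user_meta->base_meta.meta_type == NVDSINFER_SEGMENTATION_META) {
        NvDsInferSegmentationMeta *seg =
            (NvDsInferSegmentationMeta *) user_meta->user_meta_data;
        /* Reached with the UNet sample; never reached with my model. */
        g_print ("seg meta: %ux%u, %u classes\n",
            seg->width, seg->height, seg->classes);
      }
    }
  }
  return GST_PAD_PROBE_OK;
}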
Here is the pipeline and the output when running it:
gst-launch-1.0 v4l2src device=/dev/video0 ! decodebin ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! m.sink_0 nvstreammux name=m batch-size=1 width=1920 height=1080 live-source=1 ! nvinfer config-file-path=config_infer.txt batch-size=1 unique-id=1 ! nvsegvisual ! nvoverlaysink overlay-w=1920 overlay-h=1080 overlay-x=0 overlay-y=0 overlay=1 sync=false
Setting pipeline to PAUSED ...
0:00:03.080964343 6174 0x557e7b6330 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1701> [UID = 1]: deserialized trt engine from :wire_best.engine
INFO: [Implicit Engine Info]: layers num: 2
0 INPUT kFLOAT input_image 3x768x1024
1 OUTPUT kFLOAT preds 1x768x1024
0:00:03.081109367 6174 0x557e7b6330 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1805> [UID = 1]: Use deserialized engine model: wire_best.engine
0:00:03.086495782 6174 0x557e7b6330 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<nvinfer0> [UID 1]: Load new model:/srv/qualitics/ai/wire/segm/config_infer.txt sucessfully
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
in videoconvert caps = video/x-raw(memory:NVMM), format=(string)RGBA, framerate=(fraction)5/1, width=(int)1280, height=(int)720
Caught SIGSEGV
To get this to run at all, I HAVE to specify the width and height on nvsegvisual.
The probable reason is that nvinfer does not attach NvDsInferSegmentationMeta at all.
I checked in C++ and indeed I don't receive anything. The only workaround I found is to set output-tensor-meta=1 and parse the output tensor myself, but I would rather have nvinfer do that.
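For completeness, this is roughly what that manual fallback looks like with output-tensor-meta=1 (a simplified sketch inside the same user-meta loop as above; the thresholding loop is my own code, the struct fields come from the SDK headers):

/* Matching the raw tensor meta instead of the segmentation meta. */
if (user_meta->base_meta.meta_type == NVDSINFER_TENSOR_OUTPUT_META) {
  NvDsInferTensorMeta *tmeta =
      (NvDsInferTensorMeta *) user_meta->user_meta_data;
  /* Single output layer "preds": dims 1x768x1024, FP32, host copy. */
  NvDsInferLayerInfo *layer = &tmeta->output_layers_info[0];
  const float *preds = (const float *) tmeta->out_buf_ptrs_host[0];
  guint n = layer->inferDims.numElements;  /* 768 * 1024 */
  /* Apply the 0.5 threshold myself to build a binary mask. */
  for (guint i = 0; i < n; i++) {
    gboolean is_wire = preds[i] > 0.5f;
    /* ... write is_wire into my own mask buffer ... */
    (void) is_wire;
  }
}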
I have no clue why my network's output (1x768x1024) mask is handled differently from the sample's.
Is there anything I can check or try in order to find out what the problem is with my network?
The network itself is a custom one that I did not create myself, but I can ask the person responsible for information or changes.
I received an ONNX file from him and converted it with:
/usr/src/tensorrt/bin/trtexec --onnx=wire_best.onnx --saveEngine=wire_best.engine --workspace=256 --buildOnly
Here is the nvinfer configuration:
[property]
gpu-id=0
interval=0
gie-unique-id=1
net-scale-factor=0.003921568627451
batch-size=1
model-engine-file=wire_best.engine
## 0=RGB, 1=BGR
model-color-format=0
## 2=segmentation
network-type=2
## 0=FP32, 1=INT8, 2=FP16
network-mode=0
num-detected-classes=1
segmentation-threshold=0.5
cluster-mode=4
output-blob-names=preds
output-tensor-meta=1
Thanks