Failed to parse bboxes and instance mask using custom parse function

• Hardware Platform - NVIDIA GeForce RTX 3060 Laptop GPU
• DeepStream Version - 6.1.0
• NVIDIA GPU Driver Version - 510.85.02
• Issue Type - Trying to integrate a segmentation model into deepstream_python_apps; I came across this error -

ERROR: some layers missing or unsupported data types in output tensors
0:00:03.769151566 54327      0x30716a0 ERROR                nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::fillDetectionOutput() <nvdsinfer_context_impl_output_parsing.cpp:683> [UID = 1]: Failed to parse bboxes and instance mask using custom parse function
Segmentation fault (core dumped)

This is what the config file looks like -

[property]
gpu-id=0
net-scale-factor=0.003921568627451
model-color-format=0
onnx-file=last.onnx
model-engine-file=last.onnx_b1_gpu0_fp32.engine
labelfile-path=labels.txt
infer-dims=3;640;640
uff-input-order=0
uff-input-blob-name=images
batch-size=1
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
cluster-mode=4
output-instance-mask=1
num-detected-classes=80
interval=0
gie-unique-id=1
network-type=3
output-blob-names=output
segmentation-threshold=0.2
parse-bbox-instance-mask-func-name=NvDsInferParseCustomMrcnnTLTV2
custom-lib-path=nvdsinfer_customparser/libnvds_infercustomparser.so

Could you attach your last.onnx model? There are some problems with the configuration of your output layer.
Which demo did you run?

Thanks for your reply.
Sure, attaching the ONNX file for your reference - onnx file - Google Drive. I'm running the deepstream-imagedata-multistream sample from deepstream_python_apps.
Thanks

You can use https://netron.app/ to check the layer name of your model. The layer name of your model doesn’t match the parameter that you set in the config file.
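For context on the mismatch: NvDsInferParseCustomMrcnnTLTV2 is the parser for TAO MaskRCNN-style models, and the sample configs in deepstream_tao_apps pair it with output layers along these lines (the names below come from those samples, not from your model - verify your model's actual output names in Netron):

```
# Output layers the MaskRCNN TLT parser expects, per deepstream_tao_apps sample configs
output-blob-names=generate_detections;mask_fcn_logits/BiasAdd
```

A YOLOv7-seg ONNX export typically has differently named output tensors, so the parser cannot find the layers it needs and fails with "Failed to parse bboxes and instance mask".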

Thanks for your reply -
I have made some changes to my config file

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
# 0:RGB 1:BGR 2:GRAY
model-color-format=0
model-engine-file=yolov7-seg.trt
labelfile-path=labels.txt
batch-size=1
# num classes
num-detected-classes=80
interval=0
gie-unique-id=1
# 1:Primary model 2:Secondary model
process-mode=1
network-type=3
# 0:NCHW 1:NHWC
network-input-order=0
# 0:FP32 1:INT8 2:FP16
network-mode=2
# 0:Group Rectangles 1:DBSCAN 2:NMS 3:DBSCAN+NMS 4:None
cluster-mode=4
# Scale and pad the image to maintain aspect ratio
maintain-aspect-ratio=1
symmetric-padding=1
parse-bbox-instance-mask-func-name=NvDsInferParseCustomEfficientNMSTLTMask
custom-lib-path=nvdsinfer_customparser/libnvds_infercustomparser.so
output-instance-mask=1
segmentation-threshold=0.4

[class-attrs-all]
pre-cluster-threshold=0.35

The code runs, but there are no detections/masks shown in the results -
Sharing the output

This is your own model, so you should implement your own postprocess function instead of using NvDsInferParseCustomEfficientNMSTLTMask. You can refer to the source code: https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps/blob/master/post_processor/nvdsinfer_custombboxparser_tao.cpp

Thanks for your reply.
I haven’t created a postprocess function before. Would you mind sharing any documentation on how to do it?
I did look at the attached link but don’t understand what to do next.

You can refer to the link below to learn the basic concepts of postprocess:
https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvdspostprocess.html
Then you can refer to our demo code from the link I attached before.

Sure, thanks

There has been no update from you for a while, so we assume this is no longer an issue. Hence we are closing this topic. If you need further support, please open a new one. Thanks