MaskRCNN error parsing bounding boxes in a DeepStream application

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
Jetson
• DeepStream Version
5.0
• JetPack Version (valid for Jetson only)
4.5
• TensorRT Version
7.1.3
• Issue Type( questions, new requirements, bugs)
questions
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
Train the TAO MaskRCNN instance segmentation model on the COCO dataset
• Requirement details( This is for the new requirement. Including the module name-for which plugin or for which sample application, the function description)

I trained an instance segmentation model with TAO MaskRCNN, using all default parameters and the COCO dataset. I deployed the .etlt model and tried to replace the default engine in the DeepStream folders with it. The original engine file runs the pipeline fine, but the TAO model trained on the COCO dataset produces the following error:

NvMMLiteOpen : Block : BlockType = 4
===== NVMEDIA: NVENC =====
NvMMLiteBlockCreate : Block : BlockType = 4
**PERF:  0.00 (0.00)
ERROR: some layers missing or unsupported data types in output tensors
0:03:24.795478726 12324     0x3fb77720 ERROR                nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::fillDetectionOutput() <nvdsinfer_context_impl_output_parsing.cpp:682> [UID = 1]: Failed to parse bboxes and instance mask using custom parse function
Segmentation fault
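The two error lines above come from the custom parser: `NvDsInferParseCustomMrcnnTLT` looks up its output tensors by name, and when the engine's actual output names (or data types) do not match what the parser expects, it returns false and gst-nvinfer reports "Failed to parse bboxes and instance mask". A minimal sketch of that lookup logic, using stand-in types rather than the real DeepStream structs (a hypothetical simplification; the real definitions live in `nvdsinfer.h`):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Stand-in for NvDsInferLayerInfo (simplified; real struct has more fields).
struct LayerInfo {
    std::string layerName;
};

// Find an output layer by name; returns nullptr if the engine does not
// expose a tensor with that name.
const LayerInfo* findLayer(const std::vector<LayerInfo>& layers,
                           const std::string& name) {
    for (const auto& l : layers)
        if (l.layerName == name) return &l;
    return nullptr;
}

// Sketch of the parser's precondition: both expected output tensors must be
// present, otherwise parsing fails. The names can differ between MaskRCNN
// versions (e.g. "mask_head/mask_fcn_logits/BiasAdd" in older exports).
bool parseMrcnnOutputs(const std::vector<LayerInfo>& outputLayers) {
    const LayerInfo* det  = findLayer(outputLayers, "generate_detections");
    const LayerInfo* mask = findLayer(outputLayers, "mask_fcn_logits/BiasAdd");
    return det != nullptr && mask != nullptr;
}
```

So the first thing to verify is that the tensor names baked into your exported engine match the names the parser (and `output-blob-names`) uses.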

Here is my config file:

[property]
gpu-id=0
net-scale-factor=0.017507
offsets=123.675;116.280;103.53
model-color-format=0
#labelfile-path=mrcnn_labels_original.txt
labelfile-path=mrcnn_labels.txt
#tlt-encoded-model=../../models/tlt_pretrained_models/mrcnn/mask_rcnn_resnet50.etlt
#tlt-encoded-model=/opt/nvidia/deepstream/deepstream-5.0/samples/models/models/mrcnn/models/mrcnn/mask_rcnn_resnet50.etlt
tlt-encoded-model=/opt/nvidia/deepstream/deepstream-5.0/samples/models/models/mrcnn/amVisionModel/model.step-25000.etlt
#tlt-model-key=nvidia_tlt
tlt-model-key = NXVodTI0MXNnZGtzdXBic2o0cTIwbmp0bnA6N2IwZDEyMGYtMGZiOS00MDNlLTllOGMtOGMzOTJiYmRlMzk0
#model-engine-file=../../models/tlt_pretrained_models/mrcnn/mask_rcnn_resnet50.etlt_b1_gpu0_int8.engine
#model-engine-file=/opt/nvidia/deepstream/deepstream-5.0/samples/models/models/mrcnn/model.step-25000.engine
#int8-calib-file=/opt/nvidia/deepstream/deepstream-5.0/samples/models/models/mrcnn/models/mrcnn/cal.bin
#int8-calib-file=../../models/tlt_pretrained_models/mrcnn/cal.bin
infer-dims=3;832;1344
uff-input-blob-name=Input
batch-size=4
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
num-detected-classes=91
interval=0
gie-unique-id=1
network-type=3
#output-blob-names=generate_detections;mask_head/mask_fcn_logits/BiasAdd
output-blob-names=generate_detections;mask_fcn_logits/BiasAdd
parse-bbox-instance-mask-func-name=NvDsInferParseCustomMrcnnTLT
custom-lib-path=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_infercustomparser.so
#no cluster
## 0=Group Rectangles, 1=DBSCAN, 2=NMS, 3= DBSCAN+NMS Hybrid, 4 = None(No clustering)
## MRCNN supports only cluster-mode=4; Clustering is done by the model itself
cluster-mode=4
workspace-size=1000
#segmentation-threshold=0.3
output-instance-mask=1

[class-attrs-all]
pre-cluster-threshold=0.8
#group-threshold=1
#eps=0.2
roi-top-offset=0
[osd]
enable=1
gpu-id=0
border-width=3
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
display-mask=1
display-bbox=0
display-text=0

[primary-gie]
enable=1
gpu-id=0
# Modify as necessary
batch-size=4
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
gie-unique-id=1
# Replace the infer primary config file when you need to
# use other detection models
#model-engine-file=../../models/tlt_pretrained_models/mrcnn/mask_rcnn_resnet50.etlt_b1_gpu0_int8.engine
#model-engine-file=/opt/nvidia/deepstream/deepstream-5.0/samples/models/models/mrcnn/model.step-25000.engine
config-file=config_infer_primary_mrcnn.txt

The libnvds_infercustomparser.so is in this directory: /opt/nvidia/deepstream/deepstream-5.0/lib/

Regards

You can check the code in /opt/nvidia/deepstream/deepstream-5.1/sources/libs/nvdsinfer_customparser/nvdsinfer_custombboxparser.cpp to investigate this failure.
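One quick cross-check against that file: the `output-blob-names` value in the config is a `;`-separated list, and each entry must match an output tensor name the parser looks up. A small stand-alone helper (hypothetical, not part of DeepStream) to split the list for comparison:

```cpp
#include <cassert>
#include <set>
#include <sstream>
#include <string>

// Split a DeepStream ';'-separated property value (e.g. output-blob-names)
// into a set of names, skipping empty entries from trailing separators.
std::set<std::string> splitBlobNames(const std::string& value) {
    std::set<std::string> names;
    std::stringstream ss(value);
    std::string item;
    while (std::getline(ss, item, ';'))
        if (!item.empty()) names.insert(item);
    return names;
}
```

Note that the config above has a commented-out alternative, `mask_head/mask_fcn_logits/BiasAdd`; if the exported model uses that name for the mask output, the active `output-blob-names` line will not match and parsing will fail.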
