Error running TLT-retrained maskrcnn on deepstream

Hi,

I am trying to run my Mask R-CNN model, which I retrained using TLT, on DeepStream.
I already verified that the model works: after converting it into a TensorRT engine, I ran inference on it using TensorRT and Python, and it ran fine.

However, when I tried running it with DeepStream, I hit the following error:

ERROR: some layers missing or unsupported data types in output tensors
0:02:10.489252872 13142 0x1630c590 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::fillDetectionOutput() <nvdsinfer_context_impl_output_parsing.cpp:681> [UID = 1]: Failed to parse bboxes and instance mask using custom parse function
Segmentation fault (core dumped)

I am using the example app provided under samples/configs/tlt_pretrained_models/ in the DeepStream directory.
The inference config file is attached as well.

Any help is appreciated. Thank you!

config_infer_primary_mrcnn.txt (2.3 KB)

• Hardware Platform: Jetson AGX Xavier
• DeepStream Version: Deepstream 5.1
• JetPack Version (valid for Jetson only): Jetpack 4.5.1
• TensorRT Version: TensorRT 7.1.3

Is any customized post-processing parser needed for your model?
Could you check whether the original MRCNN model runs well?

No, no customized post-processing is needed; it’s just the original MRCNN model retrained according to this blog using TLT v3.0, so it’s the same model.

I did try the original MRCNN model and it ran well, which confuses me even more.

I did, however, notice something; I'm not sure if it’s related.
The output-blob-names in the original model are: generate_detections;mask_head/mask_fcn_logits/BiasAdd
But after retraining the model with TLT v3.0, these output-blob-names gave me an error, and it turns out they had to be changed to generate_detections;mask_fcn_logits/BiasAdd.
Not sure if it’s a TLT-related thing, so maybe that’s what’s giving the parsing error?
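For reference, the change in my nvinfer config looks roughly like this (a sketch based on the DS 5.1 sample config; the blob names are the ones my retrained engine actually reports):

```
# nvinfer config fragment (sketch): output tensor names for the
# TLT v3.0 retrained MaskRCNN engine. Note the mask head name no
# longer carries the "mask_head/" prefix.
output-blob-names=generate_detections;mask_fcn_logits/BiasAdd
```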

Yeah, maybe you need to use the new version of the parser; refer to deepstream_tlt_apps/nvdsinfer_custombboxparser_tlt.cpp at master · NVIDIA-AI-IOT/deepstream_tlt_apps · GitHub
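If it helps, switching to that parser would mean config changes along these lines (a sketch; the library path and the parse function name NvDsInferParseCustomMrcnnTLT are assumptions based on what the deepstream_tlt_apps post_processor directory builds and exports, so verify both against the repo version you build):

```
# nvinfer config fragment (sketch): point DeepStream at the TLT
# custom parser library and its MaskRCNN bbox/instance-mask
# parse function, instead of the built-in parser.
parse-bbox-instance-mask-func-name=NvDsInferParseCustomMrcnnTLT
custom-lib-path=/path/to/deepstream_tlt_apps/post_processor/libnvds_infercustomparser_tlt.so
```

The library itself is produced by running make in the repo's post_processor directory against your installed DeepStream SDK.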

I will try it out, thank you.