Parsing Mask R-CNN output with TensorRT in Python

Hi
I am trying to run inference on a Mask R-CNN model I trained with TLT.
I used the pretrained model available here and trained it on the COCO dataset following this blog post.

I then converted the .tlt model to .etlt and built a TensorRT engine file from it using tlt-converter.

I know I can use DeepStream directly, but that's not my goal here; I want to run inference in Python.
I used the attached code (TRT_infer.py) for inference.
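
For reference, this is roughly the kind of TensorRT 7 + PyCUDA inference loop I mean (a minimal sketch, not the attached TRT_infer.py; the engine path, preprocessing, and the assumption that binding 0 is the image input are placeholders):

```python
# Minimal sketch of a TensorRT 7 + PyCUDA inference loop.
import numpy as np
import pycuda.autoinit  # noqa: F401  (creates a CUDA context)
import pycuda.driver as cuda
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def load_engine(engine_path):
    with open(engine_path, "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
        return runtime.deserialize_cuda_engine(f.read())

def infer(engine, image):
    # Allocate page-locked host buffers and device buffers for every binding.
    host_mem, dev_mem, bindings = [], [], []
    for i in range(engine.num_bindings):
        size = trt.volume(engine.get_binding_shape(i)) * engine.max_batch_size
        dtype = trt.nptype(engine.get_binding_dtype(i))
        h = cuda.pagelocked_empty(size, dtype)
        d = cuda.mem_alloc(h.nbytes)
        host_mem.append(h)
        dev_mem.append(d)
        bindings.append(int(d))

    np.copyto(host_mem[0], image.ravel())  # assumes binding 0 is the input
    with engine.create_execution_context() as context:
        cuda.memcpy_htod(dev_mem[0], host_mem[0])
        # Use context.execute(batch_size, bindings) instead if the engine
        # was built with an implicit batch dimension.
        context.execute_v2(bindings)
        for h, d in zip(host_mem[1:], dev_mem[1:]):
            cuda.memcpy_dtoh(h, d)
    return host_mem[1:]  # flat output arrays (detections, masks, ...)
```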

I can parse the bounding boxes just fine; however, I don't understand how to parse the masks.

I know that the mask resolution is 28x28. With PeopleSegNet I was able to parse the masks by reshaping the output to (100, 2, 28, 28), but this time the output is a flat array of 7,134,400 values and I don't know what shape to give it (see the sketch below).
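
Rather than guessing, I tried reading the output layout straight from the engine; here is the sketch I used (the engine filename is a placeholder, the binding names and shapes are whatever tlt-converter produced):

```python
# Sketch: print every binding's name, shape, and dtype so the mask output
# can be reshaped according to what the engine actually reports.
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

with open("model.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

for i in range(engine.num_bindings):
    print(engine.get_binding_name(i),
          engine.get_binding_shape(i),
          engine.get_binding_dtype(i),
          "input" if engine.binding_is_input(i) else "output")

# I notice that 7,134,400 = 100 * 91 * 28 * 28, so maybe the mask output
# reshapes to (100, 91, 28, 28) if the COCO config used 91 classes,
# but I'm not sure; the shapes printed above should be authoritative.
```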

Maybe I am missing something.

Any help would be appreciated.

Environment

• Hardware Platform (Jetson / GPU): Jetson AGX Xavier
• DeepStream Version: DeepStream 5.1
• JetPack Version (valid for Jetson only): JetPack 4.5.1
• TensorRT Version: TensorRT 7.1.3

Relevant Files

TRT_infer.py (3.5 KB)

Hi,
We recommend you raise this query in the TLT forum for better assistance.

Thanks!