Running Mask RCNN Inference

Hello there!

I’ve finished training a Mask RCNN model from TAO Toolkit Getting Started | NVIDIA NGC (I skipped steps 6 and 7, pruning and retraining) using a custom dataset on an AWS EC2 instance, and I’m now trying to run inference on my own Jetson AGX Orin.

I currently have a .tlt model that the !tao maskrcnn export command converts to a .uff file (even though the notebook says it should generate a .onnx file… anyways).

The notebook then generates a TensorRT engine through the !tao deploy mask_rcnn gen_trt_engine command, but I can’t run that engine on the Jetson. I assume it was built for the GPU architecture of the AWS machine, which is not the one on the Orin. So I also tried running !tao deploy mask_rcnn gen_trt_engine directly on the Orin, without any success.

Could you please guide me through these final steps? What should I do, and then how should I implement inference (using DeepStream, maybe)?

Thank you very much for any kind of help,
Best regards


Do you use PeopleSemSegNet?
If yes, please check the sample below:

To create the TensorRT engine, please check the page below for tao-converter:


Hi @AastaLLL,

I’m running the notebook from the link above (version 5.0.0, in notebooks/tao_launcher_starter_kit/mask_rcnn). The model is downloaded with !ngc registry model download-version nvidia/tao/pretrained_instance_segmentation:resnet50 (from what I saw in one of the links, that’s also the architecture used by PeopleSegNet, right?).

I tried using tao-converter, but its documentation says it needs an .etlt file, and as mentioned above, running !tao maskrcnn export gives me only a .uff. Any comments?

Thanks for the help so far!

You can copy the .uff model onto the Orin and then follow TRTEXEC with Mask RCNN - NVIDIA Docs to generate the TensorRT engine.
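For reference, the trtexec invocation from that docs page has roughly this shape. The input/output tensor names and dimensions below are assumptions; they must match your model’s export spec, so take the exact values from the docs page and your training config rather than from this sketch:

```shell
# Sketch of generating a TensorRT engine from the exported .uff on the Orin.
# --uffInput dims (C,H,W) and the --output tensor names are assumptions;
# check them against your Mask RCNN export spec before running.
trtexec --uff=/path/to/model.uff \
        --uffInput=Input,3,1024,1024 \
        --output=generate_detections,mask_fcn_logits/BiasAdd \
        --saveEngine=/path/to/model.engine \
        --fp16
```

Running this directly on the Orin avoids the architecture mismatch from building the engine on the AWS GPU.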


Hi @Morganh, thanks for the reply!

I was able to export the model using:

and tried to run inference, but without success: I get ValueError: cannot reshape array of size 156800 into shape (1024,1024).
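That error just means the element counts don’t match: the engine’s output buffer holds 156800 values, while (1024, 1024) requires 1024 × 1024 = 1048576. The output being reshaped is almost certainly not a full-resolution mask. A minimal sketch of the mismatch, with a guessed factorization (100 detections × 2 classes × 28 × 28 masks is a plausible Mask RCNN mask-head layout, but that is an assumption; check the engine’s actual output bindings):

```python
import numpy as np

# Hypothetical flat output buffer of the size reported in the ValueError.
flat = np.zeros(156800, dtype=np.float32)

# Reshaping to (1024, 1024) fails: 1024 * 1024 = 1048576 != 156800.
try:
    flat.reshape(1024, 1024)
except ValueError as e:
    print(e)

# A reshape only succeeds when the element counts match exactly.
# 156800 == 100 * 2 * 28 * 28, which would fit a mask head with
# 100 detections, 2 classes and 28x28 masks -- an assumption, so
# verify against your engine's binding shapes.
masks = flat.reshape(100, 2, 28, 28)
print(masks.shape)  # (100, 2, 28, 28)
```

So the fix is to query each output binding’s shape from the engine and reshape to that, rather than assuming the mask comes back at input resolution.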

Any further ideas on how to make the engine run using python?

Thanks once again

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks

You can refer to the tao deploy source code for Mask_rcnn.
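The tao deploy source shows, among other things, how the input image is prepared before it reaches the engine. As a rough, self-contained illustration (not the actual tao deploy implementation; the mean/std values, padding behaviour and 1024×1024 input size are assumptions to verify against the source and your training spec), typical Mask RCNN preprocessing looks like:

```python
import numpy as np

def preprocess(image, target_h=1024, target_w=1024):
    """Sketch of Mask RCNN-style preprocessing: normalize, pad to the
    fixed engine input size, and convert HWC -> NCHW.
    The normalization constants and padding are assumptions; check the
    tao deploy mask_rcnn source for the exact values."""
    image = image.astype(np.float32) / 255.0
    # Per-channel normalization (ImageNet-style constants, an assumption).
    mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
    std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
    image = (image - mean) / std
    # Zero-pad bottom/right to the fixed network input resolution.
    h, w = image.shape[:2]
    padded = np.zeros((target_h, target_w, 3), dtype=np.float32)
    padded[:h, :w] = image
    # HWC -> CHW, then add the batch dimension the engine expects.
    return padded.transpose(2, 0, 1)[None]

# Example: a dummy 800x600 RGB frame.
frame = np.random.randint(0, 256, (600, 800, 3), dtype=np.uint8)
batch = preprocess(frame)
print(batch.shape)  # (1, 3, 1024, 1024)
```

Getting this step to match training exactly matters: a mismatch in normalization or padding silently degrades detections even when the engine itself runs fine.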

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.