Re-trained PyTorch Mask R-CNN inference on Jetson Nano

Hi Community,

I have trained a custom PyTorch Mask R-CNN network that takes an image as input and outputs bounding boxes, masks, and class labels. I used the Mask R-CNN model directly from torchvision v0.4.0. The training and data-preprocessing code is similar to https://github.com/pytorch/vision/tree/master/references/segmentation, and to get the Mask R-CNN model I just used `from torchvision.models.detection import MaskRCNN` with no changes, trained for 2 classes.
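For reference, the model setup follows the standard torchvision fine-tuning recipe, roughly like the sketch below (the `build_model` helper name is just for illustration, and I'm assuming the usual predictor-replacement approach from the torchvision detection tutorial):

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

def build_model(num_classes=2):
    # Start from the COCO-pretrained Mask R-CNN in torchvision 0.4.0
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)

    # Swap the box predictor for one with the new number of classes
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

    # Swap the mask predictor as well
    in_channels = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_channels, 256, num_classes)
    return model
```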

I tried to test this trained model on the Jetson Nano without any ONNX/DeepStream/TensorRT conversion, and both the swap memory (4 GB) and the main memory (4 GB) filled up just while loading the model. The model weights .pth file is 241 MB (just for reference).
I installed PyTorch 1.2.0 and torchvision 0.4.0 as per the ref: PyTorch for Jetson - version 1.6.0 now available
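For completeness, loading looks roughly like this (a minimal sketch; `maskrcnn.pth` is a placeholder for the actual checkpoint path, and `build_model` is the helper sketched above):

```python
import torch

# Rebuild the architecture and load the trained weights onto the CPU first
# ("maskrcnn.pth" is a placeholder for the actual checkpoint path)
model = build_model(num_classes=2)
model.load_state_dict(torch.load("maskrcnn.pth", map_location="cpu"))
model.eval()

# Run inference without autograd bookkeeping to keep memory down
with torch.no_grad():
    model.to("cuda")
    image = torch.rand(3, 800, 800, device="cuda")  # dummy test input
    outputs = model([image])  # list of dicts: boxes, labels, scores, masks
```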

Can someone tell me what I should do, or how I should optimize this model to run on the Jetson Nano? Is it possible to run this model with DeepStream or TensorRT, and how can I convert it for such a system?

I’m new to this hardware, so I need some guidance.

Thanks in advance.
Regards,

Hi,

Do you want to re-train the model or run inference with it?

1. For re-training, you can find some information and a sample in this GitHub:

2. For inference, it's recommended to first convert your model to TensorRT to save memory.
Here is a sample for a TensorFlow-based Mask R-CNN model:

Here is another sample for an ONNX-based (PyTorch) model:

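As a rough starting point for the ONNX path: recent torchvision releases can export Mask R-CNN directly, along the lines of the sketch below (file names and the input size are placeholders; note that this export needs opset 11, i.e. a newer PyTorch than the 1.2/0.4 pair mentioned above, and the resulting graph may still need adjustment before TensorRT accepts it):

```python
import torch
import torchvision

# Rebuild the trained architecture (2 classes) and load the weights
# ("maskrcnn.pth" is a placeholder for the actual checkpoint path)
model = torchvision.models.detection.maskrcnn_resnet50_fpn(
    pretrained=False, num_classes=2)
model.load_state_dict(torch.load("maskrcnn.pth", map_location="cpu"))
model.eval()

# torchvision detection models take a list of 3D image tensors
x = [torch.rand(3, 800, 800)]

# Mask R-CNN export needs opset 11 (dynamic shapes, NMS),
# which requires PyTorch >= 1.3
torch.onnx.export(model, x, "mask_rcnn.onnx", opset_version=11)
```

The exported file can then be tested with TensorRT's trtexec tool (e.g. `trtexec --onnx=mask_rcnn.onnx --fp16`) before wiring it into DeepStream.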
Thanks.