Not able to deploy Mask RCNN on Jetson Nano

Hello All,

I have used the GitHub - matterport/Mask_RCNN repo (Mask R-CNN for object detection and instance segmentation on Keras and TensorFlow), in which Mask R-CNN with a ResNet-50 backbone is implemented in Keras. I trained it on custom data and stored the weights in .h5 format.

Later I came to know that to deploy it on the Jetson Nano I need to convert the .h5 file to .pb. I did that and was able to run inference with the converted weights and model (weights size: 130 MB).

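For context, the .h5-to-.pb freezing step can be done roughly as below. This is only a minimal sketch assuming TF 1.x and the matterport mrcnn package; the config values, class count, and file paths are placeholders for my actual setup.

import tensorflow as tf
from tensorflow.python.framework import graph_util
from keras import backend as K
import mrcnn.model as modellib
from mrcnn.config import Config

class InferenceConfig(Config):
    NAME = "custom"          # placeholder, must match the training config
    NUM_CLASSES = 1 + 1      # background + number of custom classes (placeholder)
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1

K.set_learning_phase(0)      # build the graph in inference mode before freezing

model = modellib.MaskRCNN(mode="inference", config=InferenceConfig(), model_dir="./logs")
model.load_weights("mask_rcnn_custom.h5", by_name=True)   # placeholder path

# Freeze variables into constants and write the graph out as a .pb file
sess = K.get_session()
output_names = [out.op.name for out in model.keras_model.outputs]
frozen_graph = graph_util.convert_variables_to_constants(
    sess, sess.graph.as_graph_def(), output_names)
with tf.gfile.GFile("mask_rcnn_frozen.pb", "wb") as f:
    f.write(frozen_graph.SerializeToString())
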
After that, the model needs to be converted with TensorRT, which also produces an optimized .pb file. I converted it and got a .pb file of 120 MB, but was it actually optimized? How can I check whether it is optimized or not?

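One common way to check is to count the TRTEngineOp nodes in the converted graph: TF-TRT replaces the subgraphs it can optimize with TRTEngineOp nodes, so zero of them means nothing was actually optimized. Below is a rough sketch of the conversion I mean, using the TF 1.x TF-TRT API; the file path and output node names are placeholders.

import tensorflow as tf
import tensorflow.contrib.tensorrt as trt

# Load the frozen graph produced from the .h5 model (placeholder path)
frozen_graph = tf.GraphDef()
with tf.gfile.GFile("mask_rcnn_frozen.pb", "rb") as f:
    frozen_graph.ParseFromString(f.read())

# Ask TF-TRT to replace supported subgraphs with TensorRT engines
trt_graph = trt.create_inference_graph(
    input_graph_def=frozen_graph,
    outputs=["mrcnn_detection/Reshape_1", "mrcnn_mask/Reshape_1"],  # placeholder output names
    max_batch_size=1,
    max_workspace_size_bytes=1 << 28,
    precision_mode="FP16")

# Count the fused TensorRT nodes; 0 means the graph was not optimized
trt_nodes = [n for n in trt_graph.node if n.op == "TRTEngineOp"]
print("TRTEngineOp nodes: %d" % len(trt_nodes))
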
Mask R-CNN has many layers on top of ResNet-50; what should be done with those layers?

Still, I tried to deploy those weights on the Jetson Nano, but every time, after loading the weights for inference, it shuts down, and I don't know the reason for that either. I am using a 5 V 4 A power supply.

I used "jtop" to check the memory usage and found that it goes out of memory just from loading the weights. So does the Jetson have the ability to run Mask R-CNN, or do I need to buy other hardware for this?

Also, the docs have no clear guidance on how to deal with Mask R-CNN, so could you please share the proper way to do this?

Thanks and Regards
Swaroop

Hi,

We have a sample for converting a Mask R-CNN model into TensorRT here:
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/sampleUffMaskRCNN

Thanks.

For the out-of-memory issue, you can add the following lines to model.py:
############################################################
# FIX OUT OF MEMORY ISSUE
############################################################
import tensorflow as tf
from keras.backend.tensorflow_backend import set_session

config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.40  # use at most 40% of GPU memory
config.log_device_placement = True  # log device placement (which device each operation ran on)
sess = tf.Session(config=config)
set_session(sess)  # set this TensorFlow session as the default session for Keras
############################################################
The default setting lets the memory used on the GPU grow dynamically.
This works for the DRIVE PX 2; for the Jetson Nano, you should try a value lower than 40%.
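If a fixed fraction still does not fit, another option (just a sketch, same TF 1.x / Keras setup as above) is to enable allow_growth so TensorFlow only allocates GPU memory as it is actually needed:

import tensorflow as tf
from keras.backend.tensorflow_backend import set_session

config = tf.ConfigProto()
config.gpu_options.allow_growth = True   # allocate GPU memory on demand instead of reserving a fixed share
set_session(tf.Session(config=config))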

The link is broken; please update it.

Hi @blauge.reskrooge,

Please try this link.