I used the https://github.com/matterport/Mask_RCNN repo, in which Mask R-CNN (with ResNet-50 as the backbone) is implemented in Keras, trained it on custom data, and stored the weights as a .h5 file.
Later I learned that to deploy it on a Jetson Nano I need to convert the .h5 to a .pb. I did that and was able to run inference with the converted model and weights (weights size: 130 MB).
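For reference, here is a minimal sketch of the freeze step, assuming TF 1.x-style graph mode (which matterport/Mask_RCNN targets). The tiny graph and the node names `input_image` / `mrcnn_detection` below are placeholders standing in for the real Mask R-CNN graph and its actual output node names:

```python
import tensorflow as tf

# matterport/Mask_RCNN is a TF 1.x codebase; the compat.v1 API
# lets this sketch run under TF 2.x as well.
tf1 = tf.compat.v1
tf1.disable_eager_execution()
tf1.reset_default_graph()

# Hypothetical stand-in graph; in practice this is the Mask R-CNN
# inference graph, and the output names come from the real model.
x = tf1.placeholder(tf.float32, [None, 4], name="input_image")
w = tf1.Variable(tf.ones([4, 2]), name="w")
y = tf1.identity(tf1.matmul(x, w), name="mrcnn_detection")

with tf1.Session() as sess:
    sess.run(tf1.global_variables_initializer())
    # Replace every variable with a constant so the graph is
    # self-contained and can be serialized as a single .pb file.
    frozen = tf1.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, ["mrcnn_detection"])

with tf1.gfile.GFile("frozen_mask_rcnn.pb", "wb") as f:
    f.write(frozen.SerializeToString())
```

After freezing, the .pb should contain only `Const` nodes in place of variables, which is what the TensorRT conversion step expects as input.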
After that, the .pb needs to be optimized with TensorRT (TF-TRT), which also outputs a .pb file. I ran the conversion and got a .pb of 120 MB. Was it actually optimized or not? How can I check whether it was optimized?
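One crude, dependency-free way to check is to count occurrences of the op name `TRTEngineOp` in the serialized graph: TF-TRT replaces supported subgraphs with nodes of that op type, and op-type strings are stored verbatim inside a serialized GraphDef. If the count is zero, no subgraph was converted to a TensorRT engine. A sketch (the file name is a placeholder):

```python
def count_trt_engines(pb_path):
    # Heuristic: each TF-TRT-converted subgraph becomes a node whose
    # op type is the literal string "TRTEngineOp" in the GraphDef,
    # so counting that byte string tells us whether (and roughly how
    # many times) the conversion actually kicked in.
    with open(pb_path, "rb") as f:
        return f.read().count(b"TRTEngineOp")
```

For a more precise check you can parse the .pb with `tf.compat.v1.GraphDef` and inspect each node's `op` field, but the byte count above is usually enough to tell "optimized" from "not optimized".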
Mask R-CNN has many layers on top of the ResNet-50 backbone; what happens to those layers during the conversion?
Still, I tried to deploy those weights on the Jetson Nano, but every time, after the weights are loaded for inference, the board shuts down. I don't know the reason for this either. I am using a 5V 4A supply.
I used jtop to check the memory usage and found that it goes out of memory just from loading the weights. Does the Jetson Nano have the capacity to run Mask R-CNN, or do I need to buy other hardware for this?
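Alongside jtop, a quick way to log how much headroom is left at each stage (before loading weights, after loading, before inference) is to read `/proc/meminfo` directly. This works on any Linux system, including the Nano; the helper name here is my own:

```python
def mem_available_mb():
    # Parse /proc/meminfo (Linux-only). MemAvailable is the kernel's
    # estimate of memory usable without swapping, which is the number
    # that matters when a model load is about to trigger the OOM killer.
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemAvailable:"):
                return int(line.split()[1]) // 1024  # kB -> MB
    return None

print("Available memory:", mem_available_mb(), "MB")
```

Calling this immediately before and after `model.load_weights(...)` shows exactly how much the model itself consumes, which helps decide whether adding swap is enough or the model simply doesn't fit in the Nano's 4 GB of shared CPU/GPU memory.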
There is also no clear guidance in the docs on how to deal with Mask R-CNN; could you please share the proper procedure with us?
Thanks and Regards