Steps to reproduce:

1. Take the provided .h5 file.
2. Use the provided config.py and mrcnn_to_trt_single.py to convert the .h5 file to .uff. Note that I modified those files since I used a resnet50 backbone and 256x256 images (see the sketch after this list).
3. Make sure your environment is similar to the one explained in the TensorRT sampleUffMaskRCNN sample (can't provide a link due to my new account).
4. Take the .uff file over there, along with my mrcnn_config.h and sampleUffMaskRCNN.cpp.
5. Make a copy of the Samples folder on the Jetson Xavier NX and put the .h and .cpp files above into it, replacing whatever is already there.
6. Compile all samples and run on the images provided in Google Drive. You will get a
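For reference, the backbone and image-size changes I mean are matterport-style Mask R-CNN Config overrides. A minimal sketch of what they look like, assuming the matterport parameter names (the class name and NUM_CLASSES below are placeholders, not my exact values):

```python
# Sketch of the matterport-style Config overrides assumed for a resnet50
# backbone at 256x256; class name and NUM_CLASSES are placeholders.
from mrcnn.config import Config

class Inference256Config(Config):  # hypothetical name
    NAME = "my_model"              # placeholder
    BACKBONE = "resnet50"          # default is resnet101
    IMAGE_RESIZE_MODE = "square"
    IMAGE_MIN_DIM = 256
    IMAGE_MAX_DIM = 256            # gives a 256x256x3 model input
    NUM_CLASSES = 1 + 1            # background + classes (placeholder)
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1
```

The same 256/resnet50 values presumably need to be mirrored in mrcnn_config.h on the C++ side, which is what I tried to do in the attached file.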
I need help on this ASAP. The 2nd issue contains my MaskRCNN training configuration. All files have been modified to the best of my knowledge. Any help adapting the provided files to my use case would be greatly appreciated.
On taking a further look, it actually seems that since I reduced my images from 1024 to 256, mAnchorsCnt has dropped to 256 (inputDims[0].d[0] should be the image width/height, I assume, and that's 256). mAnchorBoxesHost hasn't scaled accordingly. I'm not sure what I should change to fix it, though; nothing gets set in it.
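For what it's worth, in the matterport scheme the total anchor count tracks the feature-map grid at each pyramid level rather than the raw image width. A minimal sketch of the expected count, assuming the default strides and three anchor ratios (these values are assumptions, not read from the provided files):

```python
# Sketch: expected total anchor count vs. input size, assuming the
# matterport Mask R-CNN defaults (5 pyramid levels, 3 ratios per cell).
import math

def expected_anchor_count(image_dim,
                          backbone_strides=(4, 8, 16, 32, 64),
                          anchors_per_location=3):
    # One anchor set per feature-map cell at every pyramid level.
    return sum(math.ceil(image_dim / s) ** 2 * anchors_per_location
               for s in backbone_strides)

print(expected_anchor_count(1024))  # 261888 anchors for 1024x1024 inputs
print(expected_anchor_count(256))   # 16368 anchors for 256x256 inputs
```

If those defaults apply, a 256x256 input corresponds to 16368 anchors (versus 261888 at 1024x1024), which is what I'd expect mAnchorBoxesHost to hold once it is regenerated for the smaller input.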