I am trying to optimise my custom Mask R-CNN model. I successfully converted the .h5 model to .uff and ran inference with the following stack (version strings as originally pasted):
TensorRT 18.104.22.168, CUDA 11.3, cuDNN 22.214.171.124-1, TensorFlow 1.15.5, ONNX 1.8
For the inference part, I recompile sample_uff_maskRCNN after changing its config file, which means I have to rebuild and rerun the executable ./sample_uff_maskRCNN every time.
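For context, my current rebuild-and-run loop looks roughly like this (a sketch only: the directory layout, header name, and command-line flags below are assumptions based on a typical TensorRT OSS checkout and may differ on your system):

```shell
# Assumed TensorRT OSS sample layout -- adjust paths for your install.
cd TensorRT/samples/opensource/sampleUffMaskRCNN

# 1. Edit the sample's config header (class count, input resolution, etc.),
#    e.g. mrcnn_config.h, then rebuild the sample:
make -j"$(nproc)"

# 2. Rerun the freshly built executable against the converted .uff model:
../../../build/out/sample_uff_maskRCNN --datadir /path/to/mrcnn_data
```

Having to repeat both steps for every config change is what I would like to avoid with a proper deployment workflow.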
What options do I have for deploying this optimised UFF model on my Jetson Xavier system? I am aware of DeepStream and of the '.plan' inference-engine approach based on the ONNX workflow, but the latter's documentation nowhere describes how to use a UFF file. I would be grateful if someone could help me understand all the options I currently have.