I have a model trained with Keras/TensorFlow and saved in several formats (.ckpt, .h5, .pb): SSD MobileNet V2, 1024x1024 input, n_classes = 1.
I also have a Jetson Nano board.
Is there any way to run inference on this hardware? The provided conversion tools (-> .uff) did not work at all.
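In case it helps: the UFF path is deprecated in recent TensorRT releases, so the commonly suggested route is Keras -> ONNX -> TensorRT. A minimal sketch, assuming the model is saved as model.h5 and the tf2onnx package is installed on the host (file names here are illustrative, not from the original post):

```shell
# Convert the Keras model to ONNX on the training machine
# (UFF conversion is deprecated; ONNX is the supported path):
python3 -m tf2onnx.convert --keras model.h5 --output model.onnx --opset 13

# On the Jetson Nano, build a TensorRT engine from the ONNX file with trtexec,
# which ships with JetPack under /usr/src/tensorrt/bin:
/usr/src/tensorrt/bin/trtexec --onnx=model.onnx --saveEngine=model.engine --fp16
```

The resulting model.engine can then be loaded with the TensorRT Python or C++ API for inference on the Nano. Note that engines are built per-device, so the trtexec step has to run on the Nano itself.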
Is this topic a duplicate of Custom trained model on Jetson Nano?
If so, please check the suggestion shared in that topic.