How to run my own mobilenetv2 model on Nano?

It seems like a straightforward enough task, but from searching I was unable to find a comprehensive guide on how to do this.

I’ve got a trained tf2 mobilenetv2 binary image classifier saved in h5 format. I just need it to perform inference @2 fps or so on a saved image.

Are there any step-by-step guides on how to get a tf2 model saved in h5 format running in TRT on the Jetson? If not, can someone please help?

Much appreciated!


We have a couple of examples for MobileNetV2:

1. Inference only


2. Integration with the multimedia interface (DeepStream)


3. GStreamer interface (jetson-inference)
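For option 1, the usual path is to convert the .h5 model to ONNX offline (e.g. with tf2onnx) and build a TensorRT engine from the ONNX file, then wrap the engine in a small Python loop. Below is a minimal sketch of that loop, assuming the standard MobileNetV2 convention of 224x224 inputs scaled to [-1, 1] and a single-logit binary output. `run_engine` is a placeholder for the real TensorRT execution call, not an actual API, and the 2 fps pacing matches the rate mentioned in the question.

```python
import math
import time

INPUT_SIZE = 224   # standard MobileNetV2 input resolution (assumption)
TARGET_FPS = 2.0   # the ~2 fps requirement from the question

def preprocess(pixels):
    """Map 8-bit pixel values in [0, 255] to MobileNetV2's [-1, 1] range."""
    return [p / 127.5 - 1.0 for p in pixels]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def run_engine(inputs):
    # Placeholder: a real implementation would copy `inputs` to the GPU,
    # execute the TensorRT engine, and return the raw output logit.
    return 0.0

def classify(pixels, threshold=0.5):
    """Binary decision from the model's single-logit output."""
    logit = run_engine(preprocess(pixels))
    return sigmoid(logit) >= threshold

def paced_loop(frames):
    """Classify each frame, sleeping as needed to hold TARGET_FPS."""
    period = 1.0 / TARGET_FPS
    results = []
    for frame in frames:
        start = time.monotonic()
        results.append(classify(frame))
        elapsed = time.monotonic() - start
        if elapsed < period:
            time.sleep(period - elapsed)
    return results
```

The preprocessing and threshold logic here are the only parts specific to a binary MobileNetV2 classifier; the engine-building step itself happens once, offline, before this loop runs.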