How to run my own MobileNetV2 model on Nano?

It seems like a straightforward enough task, but from searching I was unable to find a comprehensive guide on how to do this.

I’ve got a trained TF2 MobileNetV2 binary image classifier saved in .h5 format. I just need it to perform inference at about 2 fps on a saved image.

Are there any step-by-step guides on how to get a TF2 model saved in .h5 format running in TensorRT on the Jetson? If not, can someone please help?

Much appreciated!

Hi,

We have a couple of examples for MobileNetV2:

1. Inference only

/usr/src/tensorrt/samples/sampleUffSSD/

2. Integration with the multimedia interface (DeepStream)

/opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_SSD/

3. GStreamer interface (jetson-inference)
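
If you would rather run your own Keras .h5 classifier directly instead of adapting one of the samples above, a common path is to export the model to ONNX and then build a TensorRT engine from the ONNX file. Below is a minimal sketch only, assuming tf2onnx is installed on your training machine; the file names ("mobilenetv2.h5", "mobilenetv2.onnx") and the 224x224 input size are placeholders for your own model.

```python
# Sketch: convert a Keras .h5 model to ONNX with tf2onnx.
import tensorflow as tf
import tf2onnx

model = tf.keras.models.load_model("mobilenetv2.h5")

# Pin the input to a static shape so the exported graph is simple for TensorRT to parse.
spec = (tf.TensorSpec((1, 224, 224, 3), tf.float32, name="input"),)
tf2onnx.convert.from_keras(model, input_signature=spec, opset=11,
                           output_path="mobilenetv2.onnx")
```

On the Jetson you can then build a serialized engine with trtexec, for example `/usr/src/tensorrt/bin/trtexec --onnx=mobilenetv2.onnx --saveEngine=mobilenetv2.engine --fp16`, and load it from Python at run time. A rough sketch of running one preprocessed image through that engine (TensorRT 7.x Python API with pycuda; again, names and preprocessing are placeholders, not a supported sample):

```python
# Sketch: single-image inference with a serialized TensorRT engine.
import numpy as np
import pycuda.autoinit  # creates a CUDA context
import pycuda.driver as cuda
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
with open("mobilenetv2.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# One input binding and one output binding for a binary classifier.
h_input = np.zeros(trt.volume(engine.get_binding_shape(0)), dtype=np.float32)
h_output = np.zeros(trt.volume(engine.get_binding_shape(1)), dtype=np.float32)
d_input = cuda.mem_alloc(h_input.nbytes)
d_output = cuda.mem_alloc(h_output.nbytes)
stream = cuda.Stream()

# h_input[:] = your_preprocessed_image.ravel()  # 1x224x224x3, same preprocessing as training
cuda.memcpy_htod_async(d_input, h_input, stream)
context.execute_async_v2(bindings=[int(d_input), int(d_output)],
                         stream_handle=stream.handle)
cuda.memcpy_dtoh_async(h_output, d_output, stream)
stream.synchronize()
print("score:", h_output)
```

MobileNetV2 inference on Nano is well above your ~2 fps target, so a single saved image per frame should be comfortable even without FP16.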

Thanks.