Jetson Nano Keras Model loading speed up

Hello everyone, I’m using a modified Keras MobileNetV2 model. Loading it from the .h5 file takes more than 30 seconds. Is it possible to speed this up somehow? Thank you for any tips!

Hi,

Have you maximized the device performance first?

sudo nvpmodel -m 0
sudo jetson_clocks

And is there any swap memory used in your environment?
Please note that swap memory uses disk space as memory, so its read/write speed is limited by the disk’s performance.
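
A quick way to double-check from Python is to read /proc/meminfo (the free -h command reports the same numbers):

# Quick check of swap usage on Linux by reading /proc/meminfo.
with open('/proc/meminfo') as f:
    meminfo = dict(line.split(':', 1) for line in f if ':' in line)

print('SwapTotal:', meminfo['SwapTotal'].strip())
print('SwapFree :', meminfo['SwapFree'].strip())

If SwapTotal is larger than SwapFree while the model is loading, some swap is in use.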

Thanks.

Yes, I maximized performance at the beginning. I’m quite sure the project does not use any swap memory. The test projects, such as the “imagenet-console” classifier, run quite fast, so I thought there might be a way to transform the Keras model into another format to speed it up?

Thank you so much for your reply!

Hi,

MobileNetV2 can reach around 64 FPS on the Jetson Nano with TensorRT:
https://developer.nvidia.com/embedded/jetson-nano-dl-inference-benchmarks

Is TensorRT an option for you?
If yes, please follow this sample to convert it into a TensorRT engine:
/usr/src/tensorrt/samples/sampleUffSSD/
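
That sample is C++; the same conversion can also be done with the uff converter and the TensorRT Python API that ship with JetPack. A rough sketch, assuming you have already frozen the model to a .pb as described below; the file names, the input/output node names (input_1, Logits/Softmax) and the input shape are placeholders you need to replace with your model’s actual values:

import uff
import tensorrt as trt

# Convert the frozen TensorFlow graph to UFF (names are placeholders).
uff.from_tensorflow_frozen_model(
    'mobilenetv2_modified.pb',
    output_nodes=['Logits/Softmax'],
    output_filename='mobilenetv2_modified.uff')

# Parse the UFF file and build a serialized TensorRT engine.
TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
with trt.Builder(TRT_LOGGER) as builder, \
        builder.create_network() as network, \
        trt.UffParser() as parser:
    parser.register_input('input_1', (3, 224, 224))  # CHW input shape
    parser.register_output('Logits/Softmax')
    parser.parse('mobilenetv2_modified.uff', network)

    builder.max_workspace_size = 1 << 28  # 256 MB build workspace
    builder.fp16_mode = True              # FP16 gives a good speed-up on Nano
    engine = builder.build_cuda_engine(network)

    with open('mobilenetv2_modified.engine', 'wb') as f:
        f.write(engine.serialize())

The serialized engine can then be deserialized at start-up with trt.Runtime(TRT_LOGGER).deserialize_cuda_engine(...), which should load far faster than rebuilding the Keras model from the .h5 file.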

Please freeze your .h5 model into a TensorFlow .pb first.
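
A minimal sketch of that freeze step, assuming TensorFlow 1.x (the UFF converter works on frozen TF 1.x graphs); the .h5 file name is a placeholder:

import tensorflow as tf
from tensorflow.python.framework import graph_io
from tensorflow.python.framework.graph_util import convert_variables_to_constants

# Load the Keras model in inference mode (file name is a placeholder).
tf.keras.backend.set_learning_phase(0)
model = tf.keras.models.load_model('mobilenetv2_modified.h5')

# Freeze the variables into constants and write a TensorFlow .pb graph.
sess = tf.keras.backend.get_session()
output_names = [out.op.name for out in model.outputs]
frozen_graph = convert_variables_to_constants(
    sess, sess.graph.as_graph_def(), output_names)
graph_io.write_graph(frozen_graph, '.', 'mobilenetv2_modified.pb', as_text=False)

# Print the node names you will need for the UFF parser above.
print('Input :', [inp.op.name for inp in model.inputs])
print('Output:', output_names)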
Thanks.