How to run a TensorFlow 2.0 custom inference program utilizing the GPU

Hi,

I have a trained custom TensorFlow 2.0 AI model that was built in the cloud. I would like to utilize the Jetson Nano's GPU memory while running inference. Can you let me know the exact steps I need to follow to achieve faster inference?

It would also help if someone could share a few examples of image classification with a custom AI model using GPU memory on the Jetson Nano.

Hi,

It's recommended to convert the model into a TensorRT engine and deploy it on the Nano.
Below is an example for your reference:
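As a rough illustration of that conversion path, here is a minimal TF-TRT sketch (not a drop-in solution): it assumes a TensorFlow 2.x SavedModel exported to a hypothetical directory `saved_model_dir` and a 224x224x3 image-classification input, so adjust the paths, precision, and shapes for your own model.

```python
import numpy as np
import tensorflow as tf
from tensorflow.python.compiler.tensorrt import trt_convert as trt

SAVED_MODEL_DIR = "saved_model_dir"   # hypothetical path to your trained SavedModel
TRT_MODEL_DIR = "saved_model_trt"     # where the TF-TRT optimized model will be written

# Convert the SavedModel into a TF-TRT optimized graph.
# FP16 is a reasonable choice on Jetson Nano, which has no INT8 tensor cores.
params = trt.TrtConversionParams(precision_mode=trt.TrtPrecisionMode.FP16)
converter = trt.TrtGraphConverterV2(
    input_saved_model_dir=SAVED_MODEL_DIR,
    conversion_params=params,
)
converter.convert()

# Optionally pre-build the TensorRT engines with a representative input shape,
# so the engine build cost is paid once at conversion time rather than at first inference.
def input_fn():
    yield (np.zeros((1, 224, 224, 3), dtype=np.float32),)

converter.build(input_fn=input_fn)
converter.save(TRT_MODEL_DIR)

# Load the converted model on the Nano and run inference on the GPU.
loaded = tf.saved_model.load(TRT_MODEL_DIR)
infer = loaded.signatures["serving_default"]
image = tf.constant(np.random.rand(1, 224, 224, 3).astype(np.float32))
output = infer(image)
print({name: tensor.shape for name, tensor in output.items()})
```

The conversion step can be run directly on the Nano (or on a machine with the same TensorRT version), and the saved TF-TRT model is then loaded and served like any other SavedModel.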

Thanks.
