Can we put a Python file with a Keras .h5 file anywhere on our Jetson Nano, execute it via the command line, and have the Jetson inference engine process the script with its GPU?
Am I understanding this correctly? Or do I have to place the script in a certain directory, build it a certain way, or convert it to a binary and execute that instead?
I would like some elaboration. Thank you.
TensorRT doesn’t support .pb or .h5 files directly.
You will need to convert the model into a .uff file first.
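A minimal sketch of that conversion, assuming a TF 1.x Keras model saved as `model.h5` and the `uff` package that ships with TensorRT on JetPack (the file names and output node here are placeholders, not from the thread):

```python
# Sketch: Keras .h5 -> frozen TensorFlow graph -> .uff (TF 1.x + uff package).
# Paths and node names below are placeholders for illustration.
import tensorflow as tf
from tensorflow.python.framework import graph_util
import uff

tf.keras.backend.set_learning_phase(0)          # inference mode
model = tf.keras.models.load_model('model.h5')  # your Keras model

sess = tf.keras.backend.get_session()
output_node = model.outputs[0].op.name
frozen = graph_util.convert_variables_to_constants(
    sess, sess.graph_def, [output_node])

# Serialize the frozen graph to UFF for the TensorRT UFF parser.
uff.from_tensorflow(frozen, output_nodes=[output_node],
                    output_filename='model.uff')
```

The resulting `model.uff` can then be parsed by TensorRT's UFF parser to build an engine.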
Here is an example for your reference:
Thanks for the reply. I am understanding better what needs to be done. I found this article.
Is this the correct way forward?
Sorry for the late reply.
The website shared in comment#3 is using TF-TRT.
TF-TRT is a TensorFlow framework with TensorRT backend support. The conversion can be done just by enabling an option.
However, it is not the optimal solution for inference on the Jetson, since the input/output interface still uses TensorFlow.
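For reference, in the TensorFlow 1.x API that "option" amounts to running the frozen graph through the TF-TRT converter; a hedged sketch, where `frozen_graph` and the output node name are placeholders:

```python
# Sketch: optimize a frozen TF graph with TF-TRT (TensorFlow 1.x API).
# 'frozen_graph' (a tf.GraphDef) and node names are placeholders.
from tensorflow.python.compiler.tensorrt import trt_convert as trt

converter = trt.TrtGraphConverter(
    input_graph_def=frozen_graph,
    nodes_blacklist=['predictions/Softmax'],  # output node(s) left in TF
    precision_mode='FP16')                    # half precision suits Jetson
trt_graph = converter.convert()               # returns the optimized GraphDef
# Note: execution still goes through a TensorFlow session afterwards,
# which is why TF-TRT is not the leanest option on the Jetson.
```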
It’s more recommended to convert your model into pure TensorRT.
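A hedged sketch of what "pure TensorRT" looks like with the UFF workflow, assuming a `model.uff` file and hypothetical input/output names and shapes (the linked sample below covers the full pipeline, including runtime inference):

```python
# Sketch: build and serialize a TensorRT engine from a .uff file
# (TensorRT 5.x/6.x Python API). Names and shapes are placeholders.
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
with trt.Builder(TRT_LOGGER) as builder, \
     builder.create_network() as network, \
     trt.UffParser() as parser:
    parser.register_input('input_1', (3, 224, 224))   # CHW input shape
    parser.register_output('predictions/Softmax')
    parser.parse('model.uff', network)
    builder.max_workspace_size = 1 << 28  # 256 MiB scratch space
    builder.fp16_mode = True              # enable FP16 on Jetson
    engine = builder.build_cuda_engine(network)
    with open('model.engine', 'wb') as f:
        f.write(engine.serialize())       # reload this engine at runtime
```

Once serialized, the engine is deserialized at startup and fed with CUDA buffers directly, with no TensorFlow in the loop.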
You can find a sample in this page: https://github.com/NVIDIA-AI-IOT/tf_to_trt_image_classification