[HELP] How to run inference on a frozen model (.pb) with TensorRT on Jetson Nano

Description

Hi, I'm doing facial expression recognition and I want to run inference on my Jetson Nano, so I converted my Keras model (.h5) to a TensorFlow frozen model (.pb) and then optimized it with TF-TRT, which gave me trt_graph.pb.
This is the code for converting .h5 to .pb:

Google Colab
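
For reference, the conversion follows the standard TF 1.15 freeze-then-TF-TRT pattern, roughly like the simplified sketch below (the model path, output node handling and the FP16/dynamic-op settings are placeholders rather than an exact copy of the notebook):

```python
import tensorflow as tf
from tensorflow.python.compiler.tensorrt import trt_convert as trt

tf.keras.backend.set_learning_phase(0)               # inference mode
model = tf.keras.models.load_model('fer_model.h5')   # placeholder path
sess = tf.keras.backend.get_session()
output_names = [out.op.name for out in model.outputs]

# Freeze the variables into constants -> plain frozen graph (.pb)
frozen = tf.graph_util.convert_variables_to_constants(
    sess, sess.graph.as_graph_def(), output_names)
tf.io.write_graph(frozen, '.', 'frozen_model.pb', as_text=False)

# Optimize the frozen graph with TF-TRT
converter = trt.TrtGraphConverter(
    input_graph_def=frozen,
    nodes_blacklist=output_names,   # keep the output nodes in TensorFlow
    precision_mode='FP16',          # the Nano's GPU benefits from FP16
    is_dynamic_op=True,             # build TensorRT engines at runtime on the target GPU
    max_batch_size=1)
trt_graph = converter.convert()
tf.io.write_graph(trt_graph, '.', 'trt_graph.pb', as_text=False)
```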

This is the .pb model I generated using the above script:

trt_graph.pb - Google Drive

I tried to use this block of code, but it is extremely slow: it takes 15-20 s or more just to predict a single image.

Google Colab

My question is: I want to use TensorRT to run inference on the .pb model on my Jetson Nano, but I don't know how to do it. :(
Any guidelines or sample code would be appreciated.
Thanks

Environment

TensorRT Version : 8.0.0.3-1
Operating System + Version : Ubuntu 20.04
Python Version (if applicable) : 3.6
TensorFlow Version (if applicable) : 1.15

Bumping this, any help would be appreciated.

Hi,

The following may help you.
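
For the TF-TRT route, here is a rough sketch (not an official sample) of an inference loop for trt_graph.pb with TensorFlow 1.15 on the Nano; the tensor names and input shape are assumptions, so check them against your graph. The important points are creating the session once and reusing it, enabling allow_growth because the Nano shares its RAM between CPU and GPU, and doing a warm-up run before timing, since the first call triggers the TensorRT engine build and is always slow:

```python
import time
import numpy as np
import tensorflow as tf

# Load the TF-TRT optimized frozen graph
with tf.io.gfile.GFile('trt_graph.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

graph = tf.Graph()
with graph.as_default():
    tf.import_graph_def(graph_def, name='')

# Limit GPU memory growth; the Nano shares memory between CPU and GPU
config = tf.ConfigProto()
config.gpu_options.allow_growth = True

# Create the session ONCE and reuse it for every image
sess = tf.Session(graph=graph, config=config)
input_t = graph.get_tensor_by_name('input_1:0')           # assumed name
output_t = graph.get_tensor_by_name('dense_1/Softmax:0')  # assumed name

image = np.zeros((1, 48, 48, 1), dtype=np.float32)  # assumed input shape

# Warm-up run: builds the TensorRT engines / initializes CUDA, always slow
sess.run(output_t, feed_dict={input_t: image})

# Timed run
start = time.time()
probs = sess.run(output_t, feed_dict={input_t: image})
print('inference time: %.1f ms' % ((time.time() - start) * 1000))
```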

Alternatively, you can convert the .pb model to ONNX and then build a TensorRT engine from it.
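
For the ONNX route, you could first export the frozen graph with tf2onnx and then build the engine either with trtexec (it ships with JetPack under /usr/src/tensorrt/bin) or with the TensorRT Python API. Below is a rough sketch of the Python path for TensorRT 8; the node names, the 1x48x48x1 input shape and the 7 output classes are assumptions, so adjust them to your model:

```python
# Export the frozen graph to ONNX first, e.g.:
#   python -m tf2onnx.convert --graphdef frozen_model.pb \
#       --inputs input_1:0[1,48,48,1] --outputs dense_1/Softmax:0 \
#       --output model.onnx
# (node names/shape are placeholders; pinning the batch dim keeps the engine static)
import numpy as np
import tensorrt as trt
import pycuda.autoinit  # creates a CUDA context on import
import pycuda.driver as cuda

logger = trt.Logger(trt.Logger.WARNING)

# Build a TensorRT engine from the ONNX file
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)
with open('model.onnx', 'rb') as f:
    if not parser.parse(f.read()):
        raise RuntimeError(parser.get_error(0))

config = builder.create_builder_config()
config.max_workspace_size = 1 << 28      # 256 MB, Nano-friendly
config.set_flag(trt.BuilderFlag.FP16)    # the Nano supports FP16
engine = builder.build_engine(network, config)

# Run inference with PyCUDA (binding 0 = input, binding 1 = output here)
context = engine.create_execution_context()
image = np.zeros((1, 48, 48, 1), dtype=np.float32)  # assumed input shape
probs = np.empty((1, 7), dtype=np.float32)          # assumed 7 classes

d_in = cuda.mem_alloc(image.nbytes)
d_out = cuda.mem_alloc(probs.nbytes)
cuda.memcpy_htod(d_in, image)
context.execute_v2([int(d_in), int(d_out)])
cuda.memcpy_dtoh(probs, d_out)
print('class probabilities:', probs)
```

Building the engine takes a few minutes on the Nano, so you may want to serialize it once with engine.serialize() and reload it on later runs instead of rebuilding every time.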

Thank you.