Much lower inference accuracy with TensorRT

Hi all,

I am currently working on a simple 2-class classification model and deploying it on a Jetson Nano with TensorRT optimization. I trained an Inception V3 model using Keras with the TensorFlow backend, and the inference results with the ‘.h5’ file were quite good. I then froze the model, so I now have a ‘.pb’ version of that ‘.h5’ file. At this stage I don’t have inference code written in “raw TensorFlow” (I could use some help here; my rough attempt is sketched below), so I can’t check how the model performs with the ‘.pb’ file. As the last step, I created a TensorRT engine from the ‘.pb’ file, and the inference results I am getting from that engine are not even close to the ‘.h5’ results.
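For reference, this is roughly the raw TensorFlow inference I have in mind for the ‘.pb’ file (TF 1.x-style session API). The file name and the input/output tensor names are guesses on my part and would need to be checked against the actual graph:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.inception_v3 import preprocess_input
from tensorflow.keras.preprocessing import image

# Load the frozen graph ('frozen_model.pb' is a placeholder for my file)
with tf.io.gfile.GFile('frozen_model.pb', 'rb') as f:
    graph_def = tf.compat.v1.GraphDef()
    graph_def.ParseFromString(f.read())

graph = tf.Graph()
with graph.as_default():
    tf.import_graph_def(graph_def, name='')

# Uncomment to list node names and find the real input/output tensors:
# for node in graph_def.node:
#     print(node.name)

input_tensor = graph.get_tensor_by_name('input_1:0')           # assumed input name
output_tensor = graph.get_tensor_by_name('dense_1/Softmax:0')  # assumed output name

# Preprocess the same way as during Keras training
# (InceptionV3's preprocess_input scales pixels to [-1, 1])
img = image.load_img('test.jpg', target_size=(299, 299))
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

with tf.compat.v1.Session(graph=graph) as sess:
    probs = sess.run(output_tensor, feed_dict={input_tensor: x})
print(probs)
```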

What could be the reason, and how can I diagnose this problem? I am definitely open to any suggestions.
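To narrow down where the accuracy drop happens, my plan is to feed the same preprocessed batch to both the ‘.h5’ model and the frozen graph and compare the outputs. A minimal sketch of what I mean (file names and tensor names are placeholders again):

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import load_model
from tensorflow.keras.applications.inception_v3 import preprocess_input

# One random batch fed to both models (InceptionV3 input shape)
x = preprocess_input(np.random.uniform(0, 255, (1, 299, 299, 3)).astype(np.float32))

# Prediction from the original Keras '.h5' model
keras_probs = load_model('model.h5').predict(x)  # 'model.h5' is a placeholder

# Prediction from the frozen '.pb' graph
with tf.io.gfile.GFile('frozen_model.pb', 'rb') as f:
    graph_def = tf.compat.v1.GraphDef()
    graph_def.ParseFromString(f.read())
graph = tf.Graph()
with graph.as_default():
    tf.import_graph_def(graph_def, name='')
with tf.compat.v1.Session(graph=graph) as sess:
    pb_probs = sess.run('dense_1/Softmax:0',          # assumed tensor names
                        feed_dict={'input_1:0': x})

# If this difference is tiny, the freeze was lossless and the drop
# must be happening in the TensorRT conversion or inference step
print('max abs diff:', np.max(np.abs(keras_probs - pb_probs)))
```

Does a comparison like this make sense as a first step?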

Thanks in advance.