TensorRT optimized graph is getting larger in size

Hi.

I am on the Jetson Nano platform, running TensorFlow 1.15.0. I am trying to speed up inference by first converting a MobileNetV2 network into its TensorRT-optimized variant. Here’s the workflow I am following:

  • Download the weights of the pre-trained MobileNetV2 network in Keras
  • Create a frozen graph out of those weights
  • Convert that frozen graph to its TensorRT-optimized variant (a rough sketch of the code is below)
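
For context, the conversion roughly follows the pattern below. This is a minimal sketch using the standard TF-TRT API in TF 1.x; the precision mode, workspace size, and output file name are illustrative rather than my exact settings:

```python
import tensorflow as tf
from tensorflow.keras.applications import MobileNetV2
from tensorflow.python.compiler.tensorrt import trt_convert as trt

# Build the pre-trained Keras model in inference mode.
tf.keras.backend.set_learning_phase(0)
model = MobileNetV2(weights='imagenet')

# Freeze the graph: fold the trained variables into constants.
sess = tf.keras.backend.get_session()
output_names = [out.op.name for out in model.outputs]
frozen_graph = tf.compat.v1.graph_util.convert_variables_to_constants(
    sess, sess.graph.as_graph_def(), output_names)

# TF-TRT conversion (parameter values here are illustrative).
converter = trt.TrtGraphConverter(
    input_graph_def=frozen_graph,
    nodes_blacklist=output_names,
    max_batch_size=1,
    max_workspace_size_bytes=1 << 26,
    precision_mode='FP16',
    is_dynamic_op=True)
trt_graph = converter.convert()

# Serialize the optimized graph to disk.
with open('mobilenetv2_trt.pb', 'wb') as f:
    f.write(trt_graph.SerializeToString())
```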

However, the TensorRT-optimized graph ends up larger on disk than the original MobileNetV2 weights. Is this expected behavior?