The converted model is too large!

Description

I have fine-tuned TensorFlow 2 Object Detection API models on my custom dataset. To be more precise, I am using "ssd_resnet50_v1_fpn_640x640_coco17_tpu-8" and the related code from the tensorflow/models GitHub repository. After training, I used the exporter_main_v2.py module from that repository to convert the output checkpoints to SavedModel format. Finally, I used the tensorflow/tensorrt (TF-TRT) GitHub repository to convert my SavedModel with TensorRT. The model is about 13 MB before conversion and about 402 MB after. Would you please help me figure out what the problem is?
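
For reference, the conversion step in my notebook follows the standard TrtGraphConverterV2 flow from the tensorflow/tensorrt examples; below is a minimal sketch with placeholder directory names (the exact conversion parameters may differ from what the attached notebook uses):

from tensorflow.python.compiler.tensorrt import trt_convert as trt

# SavedModel exported by exporter_main_v2.py (placeholder paths)
INPUT_SAVED_MODEL_DIR = "exported_model/saved_model"
OUTPUT_SAVED_MODEL_DIR = "trt_model/saved_model"

# Start from the default parameters; FP16 is shown only as an example
# (the default precision mode is FP32).
conversion_params = trt.DEFAULT_TRT_CONVERSION_PARAMS._replace(
    precision_mode=trt.TrtPrecisionMode.FP16,
    max_workspace_size_bytes=1 << 30,
)

converter = trt.TrtGraphConverterV2(
    input_saved_model_dir=INPUT_SAVED_MODEL_DIR,
    conversion_params=conversion_params,
)
converter.convert()
converter.save(OUTPUT_SAVED_MODEL_DIR)

The size comparison above is between the SavedModel directory exported by exporter_main_v2.py and the directory written by converter.save().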

Environment

TensorRT Version: 7.2.2
GPU Type: GeForce RTX 3090
Nvidia Driver Version: 455.23.04
CUDA Version: 11.1.0.024
CUDNN Version: 8.1.0
Operating System + Version: Ubuntu 20.04
Python Version (if applicable): Python 3.8
TensorFlow Version (if applicable): 2.4.0
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag): nvcr.io/nvidia/tensorflow:21.03-tf2-py3

Relevant Files

Steps To Reproduce

Use the attached Jupyter notebook step by step.

Hi,
Please check the below link, as it might answer your concern.

Thanks!

I do not use the TensorRT Python API. As I said, I am using the TensorFlow Object Detection API together with TF-TRT.
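
For context, inference with the converted model also stays entirely in TensorFlow; a minimal sketch of how the TF-TRT SavedModel can be loaded and run (the path, the 640x640 dummy input, and the "input_tensor" keyword are placeholders/assumptions based on the Object Detection API export):

import numpy as np
import tensorflow as tf

# Placeholder path to the TF-TRT converted SavedModel
TRT_SAVED_MODEL_DIR = "trt_model/saved_model"

# Everything goes through TensorFlow; the TensorRT engines run inside
# the TRTEngineOp nodes embedded in the converted graph.
model = tf.saved_model.load(TRT_SAVED_MODEL_DIR)
infer = model.signatures["serving_default"]

# Dummy uint8 image batch; "input_tensor" is the input name used by
# exporter_main_v2.py exports (assuming it is preserved by TF-TRT).
image = np.zeros((1, 640, 640, 3), dtype=np.uint8)
outputs = infer(input_tensor=tf.constant(image))
print({name: tensor.shape for name, tensor in outputs.items()})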

Hi @hani.khosravi,

We recommend that you post your concern on the Issues page of the tensorflow/tensorrt GitHub repository to get better help.

Thank you.