TF-TRT Saved Model Optimization Flow Input Signature Defs


I’m using the TF-TRT optimization flow for a saved model, as described here:

My flow is to run convert -> build -> save. In this process, the input signature def of the original saved model gets overwritten to “Placeholder”. When I ran saved model optimization previously (i.e. TF 1.13 / TRT 5.1), the optimization preserved both the input and output signature defs of the saved model. Is there a way in this more recent version (see environment below) to preserve the input signature naming?
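For reference, my flow is essentially the standard TF-TRT 2.x sequence sketched below (a minimal sketch only: the input shape, directory names, and the `optimize` wrapper are placeholders for illustration, not my actual values):

```python
import numpy as np
from tensorflow.python.compiler.tensorrt import trt_convert as trt


def optimize(input_saved_model_dir, output_saved_model_dir):
    # convert: rewrite TensorRT-compatible subgraphs into TRTEngineOp nodes
    converter = trt.TrtGraphConverterV2(
        input_saved_model_dir=input_saved_model_dir)
    converter.convert()

    # build: pre-build TRT engines for a representative input shape
    # (the shape here is a made-up example)
    def input_fn():
        yield (np.zeros((1, 224, 224, 3), dtype=np.float32),)

    converter.build(input_fn=input_fn)

    # save: write the converted saved model; after this step the input
    # signature def key shows up as "Placeholder"
    converter.save(output_saved_model_dir)
```

The signature naming is intact going into `converter.save`, so it appears to be the save step that rewrites it.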


TensorRT Version: 7.2.1
GPU Type: RTX 3070
Nvidia Driver Version: 455.45.01
CUDA Version: 11.1.74
CUDNN Version:
Operating System + Version: Ubuntu 20.04 LTS
Python Version (if applicable): 3.8
TensorFlow Version (if applicable): 2.3.1
PyTorch Version (if applicable): N/A
Baremetal or Container (if container which image + tag): N/A

Hi, could you please share the model and script so that we can try to reproduce the issue on our end?

Also, we recommend checking the sample links below, as they might answer your question:


Thanks for the quick reply! And thanks for the links; yes, as I posted, that is the documentation I’m working from. Unfortunately it’s not possible for me to share the model or script, as they are proprietary.

That said, I don’t think sharing the model or scripts should be necessary. Fundamentally, my question is whether it’s possible in that linked flow (i.e. convert -> build -> save) to prevent TensorRT from overwriting the input signature definition.
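As a possible workaround I’m considering (a sketch of my own, not something from the TF-TRT docs; the `Stub` module, export path, and the "sensor_input" name are all made up for illustration), one could reload the converted model and re-save it behind a `tf.function` whose `TensorSpec` carries the desired input name, since `tf.saved_model.save` derives the SignatureDef input keys from the `TensorSpec` names:

```python
import os
import tempfile

import tensorflow as tf


# Stand-in for the TF-TRT-converted model; in the real flow this would be
# tf.saved_model.load(<converted saved model dir>) instead.
class Stub(tf.Module):
    def __init__(self):
        self.w = tf.Variable(tf.ones([8, 4]))

    def __call__(self, x):
        return tf.matmul(x, self.w)


model = Stub()


# The TensorSpec's `name` becomes the input key in the exported
# SignatureDef, rather than an auto-generated "Placeholder" name.
@tf.function(input_signature=[
    tf.TensorSpec([None, 8], tf.float32, name="sensor_input")])
def serving_fn(sensor_input):
    return {"scores": model(sensor_input)}


export_dir = os.path.join(tempfile.mkdtemp(), "renamed_sig_model")
tf.saved_model.save(model, export_dir,
                    signatures={"serving_default": serving_fn})

# Reload and confirm the input name survived the round trip.
loaded = tf.saved_model.load(export_dir)
sig = loaded.signatures["serving_default"]
print(list(sig.structured_input_signature[1].keys()))
```

I’d still prefer an option on the converter itself, though, rather than a second save pass.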