I’m using the TF-TRT optimization flow for a SavedModel, as described here:
My flow is to run convert → build → save. In this process, the input signature def of the original SavedModel gets overwritten to “Placeholder”. When I ran SavedModel optimization previously (i.e. TF 1.13 / TRT 5.1), the optimization preserved both the input and output signature defs of the SavedModel. Is there a way in this more recent version (see environment below) to preserve the input signature naming?
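Concretely, the flow I’m running matches the documented TF-TRT 2.x API. A minimal sketch of it is below; the paths and the input shape in `input_fn` are placeholders for illustration, not my actual model, which I can’t share:

```python
import numpy as np
from tensorflow.python.compiler.tensorrt import trt_convert as trt

def input_fn():
    # Yields representative batches for build(); the shape here is a
    # stand-in for illustration, not my real model's input.
    yield (np.zeros((1, 224, 224, 3), dtype=np.float32),)

def convert_and_save(input_saved_model_dir, output_saved_model_dir):
    # convert -> build -> save, per the TF-TRT documentation.
    converter = trt.TrtGraphConverterV2(
        input_saved_model_dir=input_saved_model_dir)
    converter.convert()
    converter.build(input_fn=input_fn)
    # After save(), the resulting SavedModel's input key has become
    # "Placeholder" instead of our original signature name.
    converter.save(output_saved_model_dir)
```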
Environment
TensorRT Version: 7.2.1
GPU Type: GTX 3070
Nvidia Driver Version: 455.45.01
CUDA Version: 11.1.74
CUDNN Version: 8.0.4.30
Operating System + Version: Ubuntu 20.04 LTS
Python Version (if applicable): 3.8
TensorFlow Version (if applicable): 2.3.1
PyTorch Version (if applicable): N/A
Baremetal or Container (if container which image + tag): N/A
Thanks for the quick reply and for the links; yes, as I posted, that is the documentation I’m working from. Unfortunately it’s not possible for me to share the model or script, as they are proprietary.
That said, I don’t think I’d need to share the model or scripts. Fundamentally, my question is whether it’s possible, in the linked flow (i.e. convert → build → save), to prevent TensorRT from overwriting the input signature definition.
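To make the question concrete: the only workaround I can think of is re-exporting the converted SavedModel behind a `tf.function` that pins the input name. This is just a sketch (the name `my_input` is hypothetical, and I haven’t verified that re-saving a TF-TRT model this way keeps the embedded TRT engines intact):

```python
import tensorflow as tf

def resave_with_input_name(trt_saved_model_dir, out_dir, input_spec):
    """Re-export a SavedModel so its serving signature uses input_spec's name.

    input_spec is a tf.TensorSpec whose `name` becomes the new signature
    input key, e.g. tf.TensorSpec([None, 4], tf.float32, name="my_input").
    Unverified whether this preserves TF-TRT engines in the graph.
    """
    loaded = tf.saved_model.load(trt_saved_model_dir)
    infer = loaded.signatures["serving_default"]
    # The converted model's (renamed) input key, e.g. "Placeholder".
    old_key = next(iter(infer.structured_input_signature[1]))

    @tf.function(input_signature=[input_spec])
    def serving(x):
        return infer(**{old_key: x})

    tf.saved_model.save(loaded, out_dir,
                        signatures={"serving_default": serving})
```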
Thanks for the reply. I should clarify that I am working with the TensorFlow SavedModel (*.pb) format. If I understand the trtexec documentation correctly, I don’t think it supports that model type? And again, I’m hoping to convert the SavedModel format directly (using the process I linked) and to load the resulting SavedModel without changing my API. Previously (i.e. TensorRT 5.1) the conversion did not modify the input signature of the SavedModel; now, in 7.2.1, it replaces our custom input signature naming with the default “Placeholder”.
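For reference, this is how I’m inspecting the signature after conversion (equivalent to what `saved_model_cli show` reports); on the TF-TRT output, the only input key that comes back is “Placeholder”:

```python
import tensorflow as tf

def serving_input_keys(saved_model_dir):
    """Return the input keys of a SavedModel's serving_default signature."""
    loaded = tf.saved_model.load(saved_model_dir)
    sig = loaded.signatures["serving_default"]
    # structured_input_signature is ((positional_args,), {kwargs});
    # the kwarg keys are the signature's input names.
    return sorted(sig.structured_input_signature[1].keys())
```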
If trtexec can help with that, though, do let me know and I’ll take a look; based on a quick review of the documentation, it doesn’t seem like it could.