Convert a SavedModel (.pb) obtained from the TensorFlow Object Detection API to a TensorRT model and load it into TensorRT Inference Server

Dear all,

I am struggling to convert models that are currently served with TensorFlow Serving into optimized TensorRT models that can be loaded into TensorRT Inference Server. The models either have a fixed input size, like some classification models, or a variable one, like the models exported from the TensorFlow Object Detection API.
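As a minimal sketch of the TF-TRT route I am looking at (the directory paths are placeholders, and this assumes a TensorFlow 2.x SavedModel export with TensorRT available):

```python
# Minimal TF-TRT conversion sketch (TensorFlow 2.x).
# 'saved_model_dir' and 'trt_saved_model_dir' are placeholder paths.
from tensorflow.python.compiler.tensorrt import trt_convert as trt

converter = trt.TrtGraphConverterV2(input_saved_model_dir='saved_model_dir')
converter.convert()                    # replace supported subgraphs with TensorRT ops
converter.save('trt_saved_model_dir')  # output is still a TensorFlow SavedModel
```

Since the TF-TRT output remains a TensorFlow SavedModel, my understanding is that it would be served with the tensorflow_savedmodel platform in the model repository rather than as a standalone TensorRT plan. For the Object Detection API case, a rough config.pbtxt sketch with variable input dimensions might look like the following; the tensor names, data types, and shapes are assumptions and would have to match the actual exported graph:

```
name: "detection_trt"
platform: "tensorflow_savedmodel"
max_batch_size: 1
input [
  {
    name: "image_tensor"    # input name assumed from the Object Detection API export
    data_type: TYPE_UINT8
    dims: [ -1, -1, 3 ]     # variable height and width
  }
]
output [
  {
    name: "detection_boxes"
    data_type: TYPE_FP32
    dims: [ -1, 4 ]
  }
]
```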

Please advise,

Hello, can you describe in detail what errors/difficulties you are having converting the model? The subject string is too long and is getting cut off.

Thanks,
NVIDIA Enterprise Support