Hello,
I am struggling to convert models currently served with TensorFlow Serving into optimized TensorRT models that can be loaded into TensorRT Inference Server. The models I care about either have a fixed input size, like some classification models, or a variable input size, like the ones obtained from the TensorFlow Object Detection API.
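For context, here is roughly what I have been trying with TF-TRT; this is only a minimal sketch assuming the TrtGraphConverter API available in TF 1.14+, and the model paths, batch size, and precision mode below are placeholders, not my actual values:

```python
# Sketch: convert a TF Serving SavedModel to a TF-TRT optimized SavedModel.
# Paths, max_batch_size, and precision_mode are placeholders.
from tensorflow.python.compiler.tensorrt import trt_convert as trt

converter = trt.TrtGraphConverter(
    input_saved_model_dir="/models/my_model/1",  # SavedModel exported for TF Serving
    precision_mode="FP16",                       # or "FP32" / "INT8"
    max_batch_size=8,
    is_dynamic_op=True)                          # build engines at runtime for variable input shapes
converter.convert()
converter.save("/trt_models/my_model/1")         # optimized SavedModel for the server's model repository
```

My understanding is that the resulting SavedModel could then be placed in the inference server's model repository, but I am not sure whether this approach (especially `is_dynamic_op=True`) is the right way to handle the Object Detection API models with variable input sizes.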
Please advise,