From reading many topics and the documentation on how to optimize a TensorFlow model and generate a TensorRT (TRT) engine, I can summarize four approaches:
A- Convert the TensorFlow model to ONNX, then use one of:
1- the trtexec tool to optimize the model and generate a TRT engine
2- the onnx2trt tool
3- the NVIDIA TensorRT Python/C++ API
B- 4- Use the TF-TRT tool to optimize the supported layers with TensorRT
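For reference, approach A-1 above can be sketched as two commands, assuming the model is saved in TensorFlow SavedModel format and that the tf2onnx package and the trtexec binary (shipped with TensorRT) are installed; the paths `./saved_model`, `model.onnx`, and `model.engine` are placeholders:

```shell
# Step 1 (assumption: tf2onnx is installed): export the SavedModel to ONNX
python -m tf2onnx.convert --saved-model ./saved_model --output model.onnx

# Step 2 (assumption: trtexec is on PATH): build a serialized TRT engine
# --fp16 is optional and enables mixed-precision optimization
trtexec --onnx=model.onnx --saveEngine=model.engine --fp16
```

The resulting `model.engine` can then be deserialized and run with the TensorRT runtime API.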
Are there other methods for optimizing a TensorFlow model for inference?
Which method is the easiest to use?
Which method gives the best performance?
In TensorRT 8, NVIDIA added the 'TensorFlow Object Detection API Models in TensorRT' sample (link below). Did they use the TensorRT API to generate the TRT engine, or what did they employ there?
Thanks in advance for your help.