Different ways to convert a TensorFlow model to TensorRT

Hello all,
After reading many topics and the documentation about how to optimize a TensorFlow model and generate a TRT engine, I can summarize the options in four ways:
A- Convert the TensorFlow model to ONNX (see the export sketch after this list), then use:
1- the trtexec tool to optimize the model and generate a TRT engine.
2- the onnx2trt tool
3- the NVIDIA TensorRT Python/C++ API
B- 4- Use the TF-TRT integration to optimize the supported layers with TensorRT.
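For reference, the TensorFlow-to-ONNX export step that all of the A options share can be done with tf2onnx; a minimal sketch, where the Keras model, input shape, opset, and file name are placeholders:

```python
# Minimal sketch of the TF -> ONNX export step shared by the A options.
# The model, input shape, opset, and output file name are placeholders.
import tensorflow as tf
import tf2onnx

model = tf.keras.applications.ResNet50(weights=None)   # any Keras model
spec = (tf.TensorSpec((1, 224, 224, 3), tf.float32, name="input"),)
tf2onnx.convert.from_keras(model, input_signature=spec,
                           opset=13, output_path="model.onnx")
```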
Are there other methods for optimizing a TensorFlow model for inference?
Which method is the easiest?
Which method gives the best performance?
In TensorRT 8, they added the ‘TensorFlow Object Detection API Models in TensorRT’ sample (link below). Did they use the TensorRT API to generate the TRT engine, or what did they employ there?

Thanks in advance for your help

Hi,

There are two different ways to run a TensorFlow model with TensorRT: pure TensorRT or TF-TRT.
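For reference, the TF-TRT route is only a few lines on top of a SavedModel; a minimal sketch, where the directory names and the FP16 precision mode are placeholders:

```python
# Minimal TF-TRT sketch: rewrite the supported ops of a SavedModel into
# TensorRT engines and save the result as a new SavedModel.
from tensorflow.python.compiler.tensorrt import trt_convert as trt

converter = trt.TrtGraphConverterV2(input_saved_model_dir="saved_model",
                                    precision_mode=trt.TrtPrecisionMode.FP16)
converter.convert()
converter.save("saved_model_trt")
```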

For the A options mentioned above, the three different ways all go through the same ONNX parser.
The choice depends on whether you want to serialize the model into an engine ahead of time or convert it at runtime with an API call.
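For the serialize-ahead-of-time variant of A-3, a minimal sketch with the TensorRT 8 Python API is below; the ONNX and engine file names are placeholders, and details such as builder flags vary by TensorRT version:

```python
# Minimal sketch: parse an ONNX file with the TensorRT Python API and
# serialize the resulting engine to disk (TensorRT 8.x style).
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("Failed to parse the ONNX model")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)   # optional, if the GPU supports FP16

serialized_engine = builder.build_serialized_network(network, config)
with open("model.engine", "wb") as f:
    f.write(serialized_engine)
```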

In general, we recommend generating the engine with trtexec first.
The sample you shared also converts the model ahead of time, but it does so in Python.
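For reference, building the engine with trtexec and then loading it from Python would look roughly like this; the file names are placeholders:

```python
# Engine assumed to have been built beforehand with trtexec, for example:
#   trtexec --onnx=model.onnx --saveEngine=model.engine --fp16
# Here we only deserialize it and create an execution context.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
runtime = trt.Runtime(logger)
with open("model.engine", "rb") as f:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()
```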

Thanks.


Hi @AastaLLL
Thank you for this information.
