TF-TRT vs TensorRT

Hi,
I found that we can optimize a TensorFlow model in several ways. Please correct me if I am mistaken.

1- Using https://docs.nvidia.com/deeplearning/frameworks/tf-trt-user-guide/index.html (TF-TRT). This API is developed by the TensorFlow team and integrates TensorRT into TensorFlow; it is imported as:

from tensorflow.python.compiler.tensorrt import trt_convert as trt

This API can be applied to any TensorFlow model (both new and old versions) without conversion errors, because if it does not support some layers, it simply leaves those layers out of the TensorRT engines; they stay in the TensorFlow graph and run on TensorFlow. Right?
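For reference, here is a minimal TF-TRT conversion sketch in the TF 1.x style, assuming a frozen graph as input; the file path and output node names below are only placeholders:

import tensorflow as tf
from tensorflow.python.compiler.tensorrt import trt_convert as trt

# Load a frozen inference graph (placeholder path).
with tf.io.gfile.GFile('frozen_inference_graph.pb', 'rb') as f:
    frozen_graph = tf.compat.v1.GraphDef()
    frozen_graph.ParseFromString(f.read())

# Convert: supported subgraphs become TensorRT engines, the rest stays TensorFlow.
converter = trt.TrtGraphConverter(
    input_graph_def=frozen_graph,
    nodes_blacklist=['detection_boxes', 'detection_scores'],  # output nodes (placeholders)
    precision_mode='FP16',
    is_dynamic_op=True)
trt_graph = converter.convert()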

2- Using https://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html#overview (TensorRT). This API is developed by NVIDIA and is independent of the TensorFlow library (not integrated into TensorFlow); it is imported as:

import tensorrt as trt

If we want to use this API, we must first convert the TensorFlow graph to an intermediate format such as UFF or ONNX (e.g. with the UFF converter) and then parse that graph with this API.
In this case, if the TensorFlow graph has unsupported layers, we must implement them with a plugin or custom code, right?
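For context, a rough sketch of this second path with the standalone TensorRT Python API (TensorRT 5/6 era, implicit batch mode); the UFF file name, input/output names, and input shape are placeholders:

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

with trt.Builder(TRT_LOGGER) as builder, \
        builder.create_network() as network, \
        trt.UffParser() as parser:
    parser.register_input('Input', (3, 300, 300))   # NCHW input shape (placeholder)
    parser.register_output('MarkOutput_0')          # output node name (placeholder)
    parser.parse('model.uff', network)
    builder.max_batch_size = 1
    builder.fp16_mode = True                        # FP16 on GPUs that support it
    engine = builder.build_cuda_engine(network)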

3- When we work with TensorFlow models, why do we use the UFF/ONNX converter and then parse the result with TensorRT, when we could use the TF-TRT API directly? Have you compared models optimized with these two methods to see whether they give the same performance? What is the advantage of the UFF/ONNX converter method?

I have some questions about the two cases above:
4- I converted ssd_mobilenet_v2 using both cases. In case 1 I achieved only a slight speed improvement, but in case 2 I achieved a much larger improvement. Why?
My guess is that in case 1 the API only converts the precision (FP32 to FP16) and merges layers where possible, whereas in case 2 the graph is first cleaned up by the UFF converter (e.g. redundant nodes such as Asserts and Identity are removed) and then converted to a TensorRT graph. Right?

5- When we convert the trained model files (.ckpt, .meta, …) to a frozen inference graph (.pb file), the layers themselves are not removed from the graph, and only the loss states, optimizer states, etc. are removed, right?


Hi,

1. YES. TF-TRT converts the supported layers into TensorRT engines.
For the unsupported ones, it uses the original TensorFlow implementation instead.

2. The pipeline should look like .pb → .uff → TensorRT engine.
It’s recommended to check our support matrix first:
https://docs.nvidia.com/deeplearning/sdk/tensorrt-support-matrix/index.html
For the unsupported layers, yes, you will need to implement them with our plugin API.
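As a sketch of the .pb → .uff step (assuming NVIDIA's uff Python package, which the convert-to-uff tool wraps); the frozen-graph path and output node name are placeholders:

import uff

uff_model = uff.from_tensorflow_frozen_model(
    'frozen_inference_graph.pb',        # frozen TensorFlow graph (placeholder)
    output_nodes=['MarkOutput_0'],      # graph output names (placeholders)
    output_filename='model.uff')        # serialized UFF written to this file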

3. TensorFlow usually has poor performance on Jetson, especially because of its large memory requirement.
With TF-TRT, although part of the layers get TensorRT acceleration, the overall interface is still TensorFlow (data input/output, …).
It’s expected that pure TensorRT will give you much better performance.

4. Please check no.3.

5. YES.
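For reference, a minimal TF 1.x freezing sketch: convert_variables_to_constants keeps only the nodes needed to compute the listed outputs, so training-only parts such as the loss and optimizer subgraphs are pruned away. The checkpoint path and output node names are placeholders:

import tensorflow as tf

with tf.compat.v1.Session() as sess:
    # Restore the trained variables from the checkpoint (placeholder paths).
    saver = tf.compat.v1.train.import_meta_graph('model.ckpt.meta')
    saver.restore(sess, 'model.ckpt')

    # Convert variables to constants and prune to the inference outputs.
    frozen = tf.compat.v1.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, ['detection_boxes', 'detection_scores'])

    with tf.io.gfile.GFile('frozen_inference_graph.pb', 'wb') as f:
        f.write(frozen.SerializeToString())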

Thanks.
