Performance comparison of TensorRT-optimized models: (i) TF-TRT vs. (ii) TensorRT C++ API?

Hi,

If I have a TensorFlow model, I have two options to produce a TensorRT-optimized model: (i) via TF-TRT, which is relatively easy and simple, and (ii) using the TensorRT C++ API. For the same model on the same GPU, will both methods, (i) and (ii), give the same performance, i.e., the same FPS? Or will there be a performance difference? Can you provide a benchmark comparing them?
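
For concreteness, option (i) for me looks roughly like the sketch below. This assumes TF 1.x, where TF-TRT lives under tensorflow.contrib.tensorrt (it moved to tensorflow.python.compiler.tensorrt in later releases); the file names and the output node name are placeholders for your own model:

```python
# Minimal TF-TRT conversion of a frozen graph (option i).
# Assumes TF 1.x; "frozen_graph.pb" and "output_node" are placeholders.
import tensorflow as tf
from tensorflow.contrib import tensorrt as trt  # tensorflow.python.compiler.tensorrt in TF >= 1.14

with tf.gfile.GFile("frozen_graph.pb", "rb") as f:
    frozen_graph = tf.GraphDef()
    frozen_graph.ParseFromString(f.read())

trt_graph = trt.create_inference_graph(
    input_graph_def=frozen_graph,
    outputs=["output_node"],            # names of the model's output tensors
    max_batch_size=1,
    max_workspace_size_bytes=1 << 30,   # workspace TensorRT may use for tactic selection
    precision_mode="FP16")              # "FP32", "FP16" or "INT8"

with tf.gfile.GFile("trt_graph.pb", "wb") as f:
    f.write(trt_graph.SerializeToString())
```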

Thanks.

Hello,

We are currently working on TF-TRT vs. TRT benchmarks. Unfortunately, we are not sharing the results yet. Please stay tuned for future announcements.

regards,
NVIDIA Enterprise Support

Hi All,

After starting to try TensorRT optimization, I personally ran into difficulties here and there, so I decided to make a series of video tutorials on how to optimize deep learning models built with Keras and TensorFlow. I also demonstrate how to optimize YOLOv3. I hope it helps those who are just starting with TensorRT avoid the difficulties I experienced (a minimal FPS-measurement sketch follows the list below).

  1. Optimizing Tensorflow to TensorRT:
    01 Optimizing Tensorflow Model Using TensorRT with 3.7x Faster Inference Time - YouTube

  2. Visualizing model graph before and after TensorRT optimization:
    02 Visualizing Deep Learning Graph Before and After TensorRT Optimization - YouTube

  3. Optimizing Keras model to TensorRT:
    03 Optimizing Keras Model to TensorRT - YouTube

  4. Optimizing YOLOv3:
    06 Optimizing YOLO version 3 Model using TensorRT with 1.5x Faster Inference Time - YouTube

  5. YOLOv3 sample result, before and after TensorRT optimization:
    07 Another YOLOv3 Detection Result (Native Tensorflow vs TensorRT optimized) - YouTube
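
If you want to reproduce the "x-times faster inference" numbers from the videos yourself, here is a rough FPS-measurement sketch, assuming a TF 1.x session and a TF-TRT graph saved as trt_graph.pb; the input/output tensor names and the 416x416 input size are placeholders:

```python
# Rough FPS measurement for a (TF-TRT optimized) frozen graph, TF 1.x style.
# "trt_graph.pb", "input:0", "output:0" and the 416x416 input are placeholders.
import time
import numpy as np
import tensorflow as tf

graph_def = tf.GraphDef()
with tf.gfile.GFile("trt_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name="")
    inp = graph.get_tensor_by_name("input:0")
    out = graph.get_tensor_by_name("output:0")

dummy = np.random.rand(1, 416, 416, 3).astype(np.float32)

with tf.Session(graph=graph) as sess:
    for _ in range(10):                       # warm-up: engine build / allocations not timed
        sess.run(out, feed_dict={inp: dummy})
    n = 100
    start = time.time()
    for _ in range(n):
        sess.run(out, feed_dict={inp: dummy})
    print("Average FPS: %.1f" % (n / (time.time() - start)))
```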


Has there been any update on this benchmarking that can be shared?

I didn’t find any official benchmark, but the Dell whitepaper “Deep Learning Inference on PowerEdge R7425” includes a comparison of the native TensorRT API and TF-TRT.
In my own tests I got similar results, so I can confirm that section of the whitepaper.
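
For comparison, this is roughly what the native TensorRT path from the whitepaper looks like. I’m sketching it with the TensorRT Python API, which mirrors the C++ Builder/Network/OnnxParser classes; model.onnx is a placeholder and the exact builder flags vary between TensorRT versions:

```python
# Building a TensorRT engine directly from an ONNX model (option ii),
# using the Python API that mirrors the C++ one. Attribute/flag names
# may differ slightly between TensorRT versions (this follows TRT 7/8).
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

builder = trt.Builder(TRT_LOGGER)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, TRT_LOGGER)

with open("model.onnx", "rb") as f:           # placeholder model path
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("Failed to parse the ONNX model")

config = builder.create_builder_config()
config.max_workspace_size = 1 << 30           # 1 GB workspace
config.set_flag(trt.BuilderFlag.FP16)         # enable FP16 kernels if supported

engine = builder.build_engine(network, config)
with open("model.plan", "wb") as f:           # serialized engine for later reuse
    f.write(engine.serialize())
```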