GTC 2020: TensorRT inference with TensorFlow 2.0

GTC 2020 S22408
Presenters: Jonathan Dekhtiar, NVIDIA; Tamas Bela Feher, NVIDIA; Xuan Vinh Nguyen, NVIDIA
Abstract
NVIDIA TensorRT is a platform for high-performance deep learning inference. We’ll describe how TensorRT is integrated with TensorFlow and show how combining the two improves the efficiency of machine-learning models while retaining the convenience and ease-of-use of a TF Python development environment. We’ll provide updates for the TF 2.0 TRT interface, C++ API, dynamic shape support, and latest performance benchmarking.

Watch this session

Hi and thank you for the presentation.

At 17:00, the slide mentions TF-TRT inference in C++, where you can port a TF model to a C++ environment. However, the GitHub page contains an example using a TF 1.14 Docker container, and this is supposed to be a TensorRT for TF 2.x talk…
Is there any progress on doing TF2.0 → standalone C++ engine conversion?

link: https://github.com/tensorflow/tensorrt/tree/r1.14%2B/tftrt/examples/cpp/image-classification

Thanks

Thanks Raphael, that’s correct. The current C++ example is for 1.14. We will work on updating it to 2.x and update you soon. Stay tuned!
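In the meantime, the TF 2.x conversion itself can already be done from Python via TF-TRT's `TrtGraphConverterV2`, producing a SavedModel that a C++ application could then load through the TensorFlow C++ API. A minimal sketch (the directory names are hypothetical, and actually running `convert()` requires a TensorFlow build with TensorRT available):

```python
# Sketch: TF-TRT conversion of a TF 2.x SavedModel from Python.
# "saved_model_dir" / "trt_saved_model_dir" are hypothetical paths;
# convert() needs TensorRT libraries present at runtime.
from tensorflow.python.compiler.tensorrt import trt_convert as trt

def convert_to_trt(input_dir: str, output_dir: str) -> None:
    """Replace supported subgraphs with TensorRT engine ops and save the result."""
    params = trt.TrtConversionParams(precision_mode=trt.TrtPrecisionMode.FP16)
    converter = trt.TrtGraphConverterV2(
        input_saved_model_dir=input_dir,
        conversion_params=params)
    converter.convert()         # rewrites supported subgraphs as TRTEngineOp nodes
    converter.save(output_dir)  # resulting SavedModel is loadable from C++ as well

# Example usage (paths are placeholders):
# convert_to_trt("saved_model_dir", "trt_saved_model_dir")
```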

Great, thanks! Looking forward to it.

I would like to know if TF native (AMP + XLA) works for any type of deep learning architecture. In the presentation, natural language processing was mentioned, but what about computer vision?
Also, are AMP + XLA related to the TRAX project?

Thank you
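On the AMP/XLA part of the question: both are graph-level optimizations in native TF 2.x and are enabled with the same switches regardless of model type, so they apply to computer-vision networks as well. A minimal sketch of the standard TF 2.x flags, shown here on a small Keras CNN (the model itself is just an illustration):

```python
# Sketch: enabling AMP and XLA in native TF 2.x; both flags are
# model-agnostic, illustrated here with a toy CNN.
import tensorflow as tf
from tensorflow.keras import mixed_precision

mixed_precision.set_global_policy("mixed_float16")  # AMP: fp16 compute, fp32 variables
tf.config.optimizer.set_jit(True)                   # XLA: JIT-compile TF graphs

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, input_shape=(32, 32, 3), activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    # Keep the final layer in fp32 for numerically stable outputs under AMP.
    tf.keras.layers.Dense(10, dtype="float32"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```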