Import a TensorFlow model into C++ TensorRT on Windows

I’ve just started looking into TensorRT, so I don’t have much background on it.
What I’m trying to do is train a TensorFlow model in Python and use it in a C++ program.
Can somebody help me with the right workflow and an example?
From what I’ve figured out so far, I need to convert and save the TensorFlow model as a .uff file and load this file in my C++ code, but I also read that the TensorRT Python API (for the file-conversion part) doesn’t support Windows right now, so how am I supposed to do that?
In summary, I need help with the right workflow, libraries, etc. in order to use a Python TensorFlow model in C++.

My environment is:
Windows 10
TensorFlow 1.12
cuDNN 7.3.1
CUDA Toolkit 10.0
Quadro M2000 GPU

Thank you.


I’d suggest the following options:

  • Consider running TensorRT containers on Windows. They are available from NVIDIA GPU Cloud (NGC).

  • Or, if you’d like to use TRT natively on Windows, here’s one workflow approach using the C++ API:

  1. Import the TensorFlow model using the C++ UFF Parser API.

  2. Build an engine, then serialize it (store it) in C++.
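
  The two steps above could look roughly like the sketch below, written against the TensorRT 5.x C++ API (the generation matching CUDA 10 / TF 1.12). The input/output tensor names, dimensions, and file names are placeholders — substitute your own model's values.

  ```cpp
  #include "NvInfer.h"
  #include "NvUffParser.h"
  #include <fstream>
  #include <iostream>

  using namespace nvinfer1;
  using namespace nvuffparser;

  // Minimal logger required by the TensorRT builder.
  class Logger : public ILogger {
      void log(Severity severity, const char* msg) override {
          if (severity <= Severity::kWARNING) std::cout << msg << std::endl;
      }
  } gLogger;

  int main() {
      IBuilder* builder = createInferBuilder(gLogger);
      INetworkDefinition* network = builder->createNetwork();
      IUffParser* parser = createUffParser();

      // Step 1: parse the .uff file. "input"/"output" and the dims are
      // placeholders; they must match the node names in your graph.
      parser->registerInput("input", Dims3(1, 28, 28), UffInputOrder::kNCHW);
      parser->registerOutput("output");
      if (!parser->parse("model.uff", *network, DataType::kFLOAT)) {
          std::cerr << "Failed to parse UFF file" << std::endl;
          return 1;
      }

      // Step 2: build the engine and serialize it to disk so later runs
      // can skip the (slow) build step.
      builder->setMaxBatchSize(1);
      builder->setMaxWorkspaceSize(1 << 24);  // 16 MB scratch space
      ICudaEngine* engine = builder->buildCudaEngine(*network);
      IHostMemory* serialized = engine->serialize();
      std::ofstream out("engine.trt", std::ios::binary);
      out.write(static_cast<const char*>(serialized->data()),
                serialized->size());

      serialized->destroy();
      engine->destroy();
      parser->destroy();
      network->destroy();
      builder->destroy();
      return 0;
  }
  ```

  At inference time you would deserialize `engine.trt` with `createInferRuntime(...)->deserializeCudaEngine(...)` instead of rebuilding from the UFF file.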

NVIDIA Enterprise Support

Thanks for the answer.
Tell me if I’m wrong: before importing the model into TRT in C++, I need to convert it somehow to UFF format. I’ve already spent many hours on the web trying to find an example of how to do this, but without success.
I saw people talking about TF-TRT, the convert-to-uff script, and the end_to_end_tensorflow_mnist sample that comes with TRT, but I have none of these — I only have a zip file of TRT, no installation of anything.
I saw that some people use ‘import uff’ and/or ‘import tensorrt’, while some people just do ‘tf.train.write_graph(sess.graph, “”, “output.pb”)’.
I’m very confused here; please help me get through this process of Python TensorFlow -> C++ TRT.
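
The snippets people quote fit together roughly as follows — a hedged sketch only: it needs the `uff` package that ships with the TensorRT Python bindings (which, as the thread notes, aren’t supported on Windows, so this part would run on Linux), and the output node name and file names here are assumptions, not values from any particular model.

```python
import tensorflow as tf
import uff  # ships with the TensorRT Python bindings (Linux only)

with tf.Session() as sess:
    # ... build/restore your trained model here ...

    # Fold trained variables into constants so the GraphDef is
    # self-contained (a "frozen" graph). "output" is a placeholder
    # for your model's actual output node name.
    frozen = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, output_node_names=["output"])

# Convert the frozen GraphDef to UFF and write it to disk; this is
# the file the C++ UFF parser loads.
uff.from_tensorflow(frozen, output_nodes=["output"],
                    output_filename="model.uff")
```

The `convert-to-uff` command-line script does the same conversion from a saved frozen `.pb` file; `tf.train.write_graph` alone only dumps the graph structure without freezing the trained weights into it.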
Thank you