TensorFlow --> TensorRT on Windows

Description

If one has a trained neural network in TensorFlow (Python) and wants to run it with the C++ TensorRT engine on Windows, what are the options?

  1. Using TF-TRT on a Linux machine to generate the TensorRT model and then moving the model to Windows to build the engine?
  2. Using the UFF model converter on a Linux machine to generate the TensorRT model and then moving the model to Windows to build the engine?
  3. Converting to ONNX with a tool like tf2onnx and importing the ONNX model into TensorRT on Windows?
  4. Building the network layer by layer with the TensorRT network definition API?

Would any of the above work? Is there another method? What is the recommended method?
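For reference, the first step of option 3 could be sketched as below. This is a minimal sketch, assuming the tf2onnx package is installed; the function name and paths are hypothetical, not from any official example:

```python
# Minimal sketch of option 3, step 1: TensorFlow model -> ONNX file.
# Assumes the tf2onnx package is installed; names here are hypothetical.

def export_keras_to_onnx(model, onnx_path, opset=11):
    """Convert a tf.keras model to an ONNX file that TensorRT can parse."""
    # Import locally so this sketch can be loaded without TensorFlow present.
    import tf2onnx

    # tf2onnx.convert.from_keras writes the ONNX graph to output_path
    # and also returns the in-memory ModelProto.
    model_proto, _ = tf2onnx.convert.from_keras(
        model, opset=opset, output_path=onnx_path
    )
    return model_proto
```

Equivalently, tf2onnx has a command-line form, `python -m tf2onnx.convert --saved-model <dir> --output model.onnx`, for an exported SavedModel directory.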

Environment

TensorRT Version: 7
GPU Type: P5000
CUDA Version: 10.0
CUDNN Version: 7.6.5
Operating System + Version: Windows 10
Python Version (if applicable): 3.7
TensorFlow Version (if applicable): 2.1

Hi,
I think options 2, 3, and 4 will work, but I would recommend trying option 3: the UFF workflow is deprecated as of TensorRT 7, so the ONNX path is the best-supported route.
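Once you have the .onnx file, importing it on Windows could look like the following sketch. It uses the TensorRT 7 Python API for brevity (the C++ API mirrors these calls via nvinfer1::createInferBuilder and nvonnxparser::createParser); file names and the function name are placeholders:

```python
# Minimal sketch: parse an ONNX file with TensorRT 7 and serialize an engine.
# Assumes the tensorrt Python package is installed and a supported GPU is
# present; file names here are placeholders.

def build_engine_from_onnx(onnx_path, engine_path, workspace_bytes=1 << 30):
    """Parse an ONNX model and write a serialized TensorRT engine to disk."""
    # Import locally so this sketch can be loaded without TensorRT installed.
    import tensorrt as trt

    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)

    # ONNX models require an explicit-batch network in TensorRT 7.
    flags = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    network = builder.create_network(flags)
    parser = trt.OnnxParser(network, logger)

    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            # Surface the first parser error and bail out.
            raise RuntimeError(str(parser.get_error(0)))

    config = builder.create_builder_config()
    config.max_workspace_size = workspace_bytes  # TensorRT 7 builder config
    engine = builder.build_engine(network, config)

    with open(engine_path, "wb") as f:
        f.write(engine.serialize())
    return engine
```

Note that a serialized engine is specific to the GPU model and the TensorRT/CUDA versions it was built with, so build it on the Windows machine that will run inference.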

Thanks

OK, I will give this a try. I have another follow-up question. While training in Python with TensorFlow I used CUDA 10.1. It looks like the latest version of TensorRT (7) is prebuilt for Windows against CUDA 10.0 and CUDA 10.2, and as of now I have installed the 10.0 build. Would there be a conflict in this case? I would think that as long as the machine with the TensorRT runtime engine has CUDA 10.0, it would not matter which CUDA version the network was trained with, but I wanted to double-check.