TensorRT, Python and Windows

I have a TensorFlow model trained in Python on a Windows machine.
I plan to convert it to UFF and run inference through TensorRT to reduce execution time.

I read in other posts that the Python samples and the UFF converter are installed via a DEB package.
Does that mean TensorRT inference and the subsequent execution/deployment can only be done from Linux?

Or, if not, how can I install the UFF parser and the TensorRT Python API in a Windows environment?

Hello,

The UFF file is platform independent, so you can create it on Windows and use it on Linux.

The Windows installation of TensorRT includes a collection of parsers, which includes the UFF parser. It’s not a separate installation.

But according to the documentation, there seems to be no Python support for TensorRT on Windows…?

So to deploy a TensorFlow model on a TensorRT engine, I can either:

  1. use the C++ API on Windows, doing both the UFF conversion and the TensorRT inference in C++,

Or

  2. if I prefer Python, switch to a Linux OS, where the UFF converter and TensorRT inference are available through the Python API.
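A quick way to see which of these options a given environment supports is to check whether the relevant Python packages are importable at all. This is just an illustrative helper (the function names are my own, not part of TensorRT); on the Windows installs discussed here both checks come back False:

```python
import importlib.util

def has_tensorrt_python() -> bool:
    """Return True if the TensorRT Python bindings can be imported.

    Windows installs of TensorRT in this era ship only the C++
    libraries and parsers, so this reports False there; on a Linux
    install with the Python packages it reports True.
    """
    return importlib.util.find_spec("tensorrt") is not None

def has_uff_converter() -> bool:
    """Return True if the UFF converter package ('uff') is importable."""
    return importlib.util.find_spec("uff") is not None
```

`find_spec` only probes the import machinery, so the check is cheap and does not actually load the libraries.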

Correct?

Hello,

Yes. If you must use Python, you’d need to use Linux.

Apologies for the inconvenience.

Hello,

Do you plan to allow the installation of the Python API on Windows?

Thank you.

How is it that 1.5 years later there still isn’t Python support for TensorRT on Windows?

Here is a Windows compatible lib: https://github.com/KorovkoAlexander/tensorrt_models


May I ask if you have used this library? In my test, the pure inference time of this library was much slower than TensorRT C++, and even much slower than the original PyTorch model. However, the predicted results were correct, which left me very confused.
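One common cause of comparisons like this is that "pure infer time" accidentally includes one-off costs: engine deserialization, CUDA context creation, first-call autotuning, or host-to-device copies. A minimal, runtime-agnostic timing sketch (the warmup and repeat counts are arbitrary choices, not prescribed values) that discards warmup iterations and reports the median looks like this:

```python
import time
from statistics import median

def bench(fn, *args, warmup=10, repeat=50):
    """Median wall-clock time of fn(*args), in milliseconds.

    Warmup iterations are run first and discarded, so lazy
    initialization (engine loading, context setup, caching) does
    not count toward the reported inference time.
    """
    for _ in range(warmup):
        fn(*args)
    times = []
    for _ in range(repeat):
        t0 = time.perf_counter()
        fn(*args)
        times.append((time.perf_counter() - t0) * 1e3)
    return median(times)
```

Note that for GPU runtimes the timed callable must also synchronize (e.g. wait on the CUDA stream) before returning, otherwise the timer stops while the kernel is still running and the comparison is meaningless.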