Deploying Linux-built TensorRT model on Windows

Hi,

I have a trained network in PyTorch on Ubuntu.
I’d like to build its TensorRT version while still on Linux, and then deploy the produced engine on Windows.

I can’t find any references on whether such a use case is possible.
Can you please help or suggest a possible solution?

Environment details:

I’m using a workstation with dual boot, which means I’m using the same GPU card on both Linux and Windows.
GPU card: GeForce RTX 2070.
Linux: Ubuntu 18.04 (LTS)
Windows: Windows 10
Drivers: 440.xx (latest relevant, for both Linux and Windows).

Current versions: CUDA 10.2, PyTorch 1.5, TensorRT 7.

I’m open to changing my environment versions (CUDA, PyTorch, etc.) as needed, in order to produce the TRT model on Linux with the Python TRT interface and run it on Windows.
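For reference, this is roughly the build flow I have in mind on the Linux side — a minimal sketch assuming the usual PyTorch → ONNX → TensorRT path; the model, input shape, and file names are placeholders:

```python
import torch
import tensorrt as trt

# Export the trained PyTorch model to ONNX (placeholder model and input shape).
model.eval()
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy_input, "model.onnx", opset_version=11)

# Parse the ONNX file and build a serialized TensorRT 7 engine.
TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
EXPLICIT_BATCH = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)

with trt.Builder(TRT_LOGGER) as builder, \
        builder.create_network(EXPLICIT_BATCH) as network, \
        trt.OnnxParser(network, TRT_LOGGER) as parser:
    builder.max_workspace_size = 1 << 30  # 1 GiB of build workspace
    with open("model.onnx", "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError(parser.get_error(0))
    engine = builder.build_cuda_engine(network)
    with open("model.engine", "wb") as f:
        f.write(engine.serialize())
```

The question is whether the resulting model.engine can be copied over and used on the Windows side.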

Please help.
Thanks a lot!

Hi @sharon.hezy,
I’m afraid the generated engine files are not portable across platforms or TensorRT versions. They are specific to the exact GPU model they were built on (in addition to the platform and the TensorRT version).
Thanks!

Thanks for replying.

I’m building the model on exactly the same GPU that I want to run it on (it’s the same workstation, with dual boot), and the TensorRT version is the same too.
The only difference is the OS - I’m building on Ubuntu, but want to run it on Windows.
Is it expected to work?

Thank you for helping!

Hi @sharon.hezy,

Here, platform includes the OS.
The reason is that different features or kernels may be enabled on Linux vs. Windows, which could cause the engine to fail when moved between them.
Hence we recommend using the same OS, TRT version, and GPU model when building and deploying an engine.
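If helpful, a common pattern is to treat the ONNX file as the portable artifact: rebuild the engine on the Windows machine, then deserialize it there at deploy time. A minimal sketch of the deploy-side load, assuming a model.engine built on that same Windows setup (file name is a placeholder):

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Deserialize an engine that was built on this same machine, OS, and TRT version.
with open("model.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
    context = engine.create_execution_context()
```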
Thanks!

I see.

Thank you very much for the detailed explanation!