I have a trained network in PyTorch on Ubuntu.
I’d like to create its TensorRT version while still on Linux, and then deploy the produced engine on Windows.
I can’t find any references on whether such a use case is possible.
Can you please help or suggest a possible solution?
I’m using a dual-boot workstation, which means I use the same GPU card in both Linux and Windows.
GPU card: GeForce RTX 2070.
Linux: Ubuntu 18.04 (LTS)
Drivers - 440.xx (latest relevant, for Linux & Windows).
Current versions: CUDA 10.2, PyTorch 1.5, TensorRT 7.
I’m open to changing my environment versions (CUDA, PyTorch, etc.) as needed, so long as I can produce the TRT model on Linux with the Python TRT interface and run it on Windows.
Thanks a lot!