My environment is Windows 10 with TensorRT installed from the following zip file: TensorRT-8.4.2.4.Windows10.x86_64.cuda-11.6.cudnn8.4.zip
My GPU is an RTX 3090, and my PyTorch version is 1.12. I installed the CUDA toolkit and cuDNN via conda (making sure the versions match those specified in the zip file name), but I used pip to install uff, graphsurgeon, and onnx_graphsurgeon from the TensorRT installation files, as per the instructions.
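For reference, those pip installs came from the wheel files inside the extracted zip, roughly like this (the exact wheel filenames are from memory and may differ in your copy):

```shell
# Run from the extracted TensorRT-8.4.2.4 directory.
# Wheel filenames are approximate and depend on the package versions
# shipped in the zip -- check the subdirectories for the exact names.
pip install graphsurgeon/graphsurgeon-0.4.6-py2.py3-none-any.whl
pip install uff/uff-0.6.9-py2.py3-none-any.whl
pip install onnx_graphsurgeon/onnx_graphsurgeon-0.3.12-py2.py3-none-any.whl
```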
What could be the issue?
I should also point out that the same code runs without error on Ubuntu, where the versions differ slightly.
That’s pretty much the complete log of the error. If there’s a setting that produces more verbose logs, please let me know.
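In case it’s relevant: I’m creating the logger through the TensorRT Python API, roughly as below (a simplified sketch of my setup, not the actual code; everything except the tensorrt calls themselves is placeholder). If raising the severity to VERBOSE is the setting you mean, I can rerun with it:

```python
import tensorrt as trt

# trt.Logger.VERBOSE surfaces per-layer and tactic-selection
# messages that the default WARNING severity suppresses.
logger = trt.Logger(trt.Logger.VERBOSE)

builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)
# ... parse the ONNX model and build the engine as usual;
# the verbose output goes to stderr during the build.
```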
As for the code and model, I’m afraid they’re not immediately shareable. I was hoping that the error itself could point to identifiable causes of the problem.
The code and models work on Ubuntu but not on Windows, so I suspect the issue lies with my TensorRT installation on Windows. I also confirmed that the model gives the expected results when TensorRT is disabled.
Is there a reason why NVIDIA doesn’t provide a conda package for TensorRT on Windows when it does provide one for the CUDA toolkit?