Invalid resource handle when doing inference with TensorRT

Hello

I’m getting the following error when attempting to do inference with PyTorch on a model optimised with TensorRT:

[08/03/2022-17:42:31] [TRT] [E] 1: [reformatRunner.cpp::nvinfer1::rt::cuda::ReformatRunner::execute::603] Error Code 1: Cuda Runtime (invalid resource handle)

My environment is Windows 10 with TensorRT installed from the following zip file: TensorRT-8.4.2.4.Windows10.x86_64.cuda-11.6.cudnn8.4.zip

My GPU is an RTX 3090 and my PyTorch version is 1.12. I installed cuda-toolkit and cudnn via conda (making sure the versions match those specified in the zip file name), but used pip to install uff, graphsurgeon, and onnx_graphsurgeon from the TensorRT installation files, as per the instructions.

What could be the issue?

I should also point out that the same code runs without error on Ubuntu, where the installed versions differ slightly.

Hi,

Could you please share the complete logs and, if possible, a minimal repro model/script so we can debug this further?

Thank you.

That’s pretty much the complete log of the error. If there’s a setting to get more verbose logs, please let me know.
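If you mean the severity of the TensorRT logger, I can re-run with it raised to VERBOSE. This is roughly the change I have in mind (a minimal sketch, assuming the standard Python bindings; names are placeholders):

```python
import tensorrt as trt

# Default severity is WARNING; VERBOSE makes TensorRT emit much more detail.
TRT_LOGGER = trt.Logger(trt.Logger.VERBOSE)

# The same logger is passed wherever the runtime (or builder) is created.
runtime = trt.Runtime(TRT_LOGGER)
```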

As for the code and model, I’m afraid they’re not immediately shareable. I was hoping the error itself might point to an identifiable cause of the problem.

The code and models work on Ubuntu, but not on Windows, so I imagine there must be an issue with my TensorRT installation on Windows. I also confirmed that the model gives the expected results when TensorRT is disabled.
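For completeness, this is how I’m sanity-checking that the versions on the Windows machine line up with the zip file (a quick sketch):

```python
import tensorrt as trt
import torch

print(trt.__version__)     # expect 8.4.2.4
print(torch.__version__)   # expect 1.12.x
print(torch.version.cuda)  # expect 11.6, matching the TensorRT build
```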

Is there a reason why NVIDIA doesn’t provide a conda package for TensorRT on Windows when it does provide one for the CUDA toolkit?

Hi,

This error is usually related to a CUDA context issue.
Please make sure you’re handling the CUDA context/stream correctly: the stream passed at inference time must belong to the same CUDA context in which the engine and execution context were created.
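
As a rough illustration only (a minimal sketch, not your exact setup): when PyTorch and TensorRT share a process, deserializing the engine after PyTorch has initialized CUDA, and enqueueing on PyTorch’s current stream, keeps everything in a single context. Binding indices and shapes below are placeholder assumptions.

```python
import tensorrt as trt
import torch

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def load_engine(path):
    # Initialize CUDA through PyTorch first, so the engine is deserialized
    # in the same CUDA context the inference stream will belong to.
    torch.cuda.init()
    with open(path, "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
        return runtime.deserialize_cuda_engine(f.read())

def infer(engine, input_tensor):
    context = engine.create_execution_context()
    # Output allocated with PyTorch; binding 1 is assumed to be the single
    # output with a static shape (a placeholder assumption for this sketch).
    output = torch.empty(tuple(engine.get_binding_shape(1)),
                         dtype=torch.float32, device="cuda")
    bindings = [int(input_tensor.data_ptr()), int(output.data_ptr())]
    # Enqueue on PyTorch's current stream; passing a stream that belongs to a
    # different CUDA context is a classic cause of "invalid resource handle".
    stream = torch.cuda.current_stream()
    context.execute_async_v2(bindings, stream_handle=stream.cuda_stream)
    stream.synchronize()
    return output
```

If you create the CUDA context yourself (e.g. with pycuda), make sure it is pushed/popped so that it is current on the thread making every TensorRT call.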

A similar issue reported previously may help you.

Regarding the conda package question, we currently do not have any information on this.

Thank you.