cuDNN error: CUDNN_STATUS_EXECUTION_FAILED with CUDA 10.1 and PyTorch

I am using the following system specs:

GPU: RTX 2080 Ti
NVIDIA driver: 418.67
CUDA version: 10.1
OS: Ubuntu 18.04
PyTorch: 1.0

The problem: when attempting to run an RNN with PyTorch, I receive the error stated in the title. The error occurs when I send the encoder to the GPU with `.to(device)`. (Reducing the model size does not prevent the error.)
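A minimal sketch of the failing step, for context. The encoder here is a stand-in (an `nn.GRU` with illustrative sizes); the actual model from my code is larger, but the error appears at the same `.to(device)` call regardless of size:

```python
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Illustrative encoder; sizes are placeholders, not my real hyperparameters.
encoder = nn.GRU(input_size=128, hidden_size=128)

# On my machine this line raised:
#   RuntimeError: cuDNN error: CUDNN_STATUS_EXECUTION_FAILED
encoder = encoder.to(device)
```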

When installing the NVIDIA software I installed cuDNN 7.6, but installing PyTorch pulled in cuDNN 7.4.
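For anyone diagnosing a similar mismatch, PyTorch reports the CUDA and cuDNN versions it was actually built against, which can differ from what is installed system-wide:

```python
import torch

# Versions PyTorch itself was built/linked against -- these are what matter
# at runtime, not the system-wide toolkit or cuDNN install.
print("PyTorch:", torch.__version__)
print("CUDA (built against):", torch.version.cuda)
print("cuDNN (built against):", torch.backends.cudnn.version())
print("CUDA available:", torch.cuda.is_available())
```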

I cannot seem to find the right configuration to let my RNN run efficiently with cuDNN. I have worked around it for now by disabling cuDNN from within PyTorch, but my performance diminishes drastically.
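The workaround is a one-line global flag; the RNN then falls back to PyTorch's native kernels, which avoid the error but are much slower:

```python
import torch

# Disable cuDNN globally so RNN ops use PyTorch's native (slower) kernels
# instead of the cuDNN ones that were failing.
torch.backends.cudnn.enabled = False
```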

Is it possible to use PyTorch with CUDA 10.1?

Do I need to downgrade to CUDA 10.0 with cuDNN 7.5 to run PyTorch 1.0?

Since this is my work machine and I needed to start developing a model, I could not spend much longer on the issue.

I downgraded to CUDA 10.0 with driver 410.XX and cuDNN 7.5.

Installed PyTorch and everything is hunky-dory.

If anyone knows the answer to my original question, I am still interested. I doubt I am the only one who has run into this issue, and I would like to keep my deep learning environment as up to date as possible.