CUDA 10.1 on RTX 3090

I am trying to use CUDA 10.1 on an RTX 3090 (Ampere architecture). The following error is thrown when I try to load my model:

(env3.6) bhaskar@bhaskar:/media/bhaskar/Data_Disk/Bhaskar-system_backup/try_cuda$ python a.py
setting gpu on gpu_id: 0
using cuda
loading training data
loading validation data
Train Data size 992
Valid Data size 416
Traceback (most recent call last):
  File "a.py", line 139, in
    model.cuda()
  File "/media/bhaskar/Data_Disk/Bhaskar-system_backup/env3.6/lib/python3.6/site-packages/torch/nn/modules/module.py", line 260, in cuda
    return self._apply(lambda t: t.cuda(device))
  File "/media/bhaskar/Data_Disk/Bhaskar-system_backup/env3.6/lib/python3.6/site-packages/torch/nn/modules/module.py", line 187, in _apply
    module._apply(fn)
  File "/media/bhaskar/Data_Disk/Bhaskar-system_backup/env3.6/lib/python3.6/site-packages/torch/nn/modules/rnn.py", line 117, in _apply
    self.flatten_parameters()
  File "/media/bhaskar/Data_Disk/Bhaskar-system_backup/env3.6/lib/python3.6/site-packages/torch/nn/modules/rnn.py", line 113, in flatten_parameters
    self.batch_first, bool(self.bidirectional))
RuntimeError: cuDNN error: CUDNN_STATUS_EXECUTION_FAILED
[1]+ Killed python a.py

The problem continues even though cuDNN is already installed.

The code breaks at line 139 of my Python file a.py, where I move my model to the GPU with model.cuda().

This problem doesn't happen when I use CUDA 11.3 and PyTorch 1.9.
How can I use CUDA 10.1 on the RTX 3090? Other libraries in my setup depend on CUDA 10.1, so I need to stay on that version.
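
For completeness, a minimal check along these lines (nothing here is specific to my model or script) prints what the installed PyTorch build and driver actually report:

import torch

# Versions the installed PyTorch wheel was built against
print("PyTorch:", torch.__version__)
print("CUDA (build):", torch.version.cuda)
print("cuDNN (build):", torch.backends.cudnn.version())

# What the runtime sees
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))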

You can't. The first CUDA version to recognise the RTX 3090 was 11.1.
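
You can check this on your own install: recent PyTorch builds (it is not available on very old versions) expose the GPU architectures they ship kernels for via torch.cuda.get_arch_list(), and a CUDA 10.1 build will not include sm_86, which is what the RTX 3090 requires. A rough sketch:

import torch

# Architectures the installed PyTorch + CUDA build was compiled for,
# e.g. older CUDA 10.x builds stop at sm_75 (Turing)
archs = torch.cuda.get_arch_list()
print("Compiled for:", archs)
print("Ampere (sm_86) supported:", "sm_86" in archs)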


Could you please point to the official source for "The first CUDA version to recognise the RTX 3090 was 11.1"?

Sure. From the 11.1 Release Notes:
"Added support for NVIDIA Ampere GPU architecture based GA10x GPUs (compute capability 8.6), including the GeForce RTX-30 series."
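
As a quick sketch of what that means in practice, you can read the compute capability the card reports from PyTorch; an RTX 3090 should report (8, 6), which, per the release notes above, only CUDA 11.1+ toolchains can target:

import torch

# Compute capability as a (major, minor) tuple; the RTX 3090 reports (8, 6)
print(torch.cuda.get_device_capability(0))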
