RuntimeError: CUDA error: no kernel image is available for execution on the device on RTX 3060

I seem unable to run a CUDA project in a container on my 3060 due to some mismatch of the libs:

opt/python/venv/lib/python3.7/site-packages/torchvision/models/_utils.py:253: UserWarning: Accessing the model URLs via the internal dictionary of the module is deprecated since 0.13 and will be removed in 0.15. Please access them via the appropriate Weights Enum instead.
      "Accessing the model URLs via the internal dictionary of the module is deprecated since 0.13 and will "
    /opt/python/venv/lib/python3.7/site-packages/torchvision/models/_utils.py:209: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and will be removed in 0.15, please use 'weights' instead.
      f"The parameter '{pretrained_param}' is deprecated since 0.13 and will be removed in 0.15, "
    NVIDIA GeForce RTX 3060 with CUDA capability sm_86 is not compatible with the current PyTorch installation.
    The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_70.
    If you want to use the NVIDIA GeForce RTX 3060 GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/

        RuntimeError: CUDA error: no kernel image is available for execution on the device
    CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
    For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
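The check the warning performs can be sketched in plain Python: a CUDA kernel image only runs if the wheel was compiled for the GPU's compute capability. The architecture lists below are taken directly from the log (the old wheel supports sm_37/sm_50/sm_60/sm_70; the RTX 3060 is sm_86); on a real install, `torch.cuda.get_arch_list()` reports what the wheel was built for.

```python
def is_supported(gpu_sm: str, wheel_archs: list) -> bool:
    """Return True if the GPU's compute capability appears among the
    architectures the installed PyTorch wheel was compiled for."""
    return gpu_sm in wheel_archs

# Architectures from the log: the cu100 wheel vs. an RTX 3060 (sm_86).
wheel_archs = ["sm_37", "sm_50", "sm_60", "sm_70"]
print(is_supported("sm_86", wheel_archs))  # -> False, hence the RuntimeError
print(is_supported("sm_70", wheel_archs))  # -> True, e.g. a V100 would work
```

This is only an illustration of the mismatch, not how PyTorch implements the check internally (in practice it also considers PTX forward compatibility).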

The requirements file is here:

aiofiles==0.4.0
aniso8601==3.0.2
apispec==1.0.0b6
apistar==0.6.0
asgiref==2.3.2
async-timeout==3.0.1
certifi==2018.11.29
chardet==3.0.4
Click==7.0
docopt==0.6.2
graphene==2.1.3
graphql-core==2.1
graphql-relay==0.4.5
graphql-server-core==1.1.1
h11==0.8.1
httptools #==0.0.11
idna==2.8
itsdangerous==1.1.0
Jinja2==2.10
MarkupSafe==1.1.0
marshmallow==2.17.0
numpy==1.15.4
opencv-python==4.5.3.56
parse==1.9.0
Pillow==5.4.0
promise==2.2.1
pyspellchecker==0.5.4
python-multipart==0.0.5
PyYAML==3.13
requests==2.21.0
requests-toolbelt==0.8.0
responder
rfc3986==1.2.0
Rx==1.6.1
six==1.12.0
starlette==0.9.9
typesystem==0.2.4
urllib3==1.24.1
uvicorn==0.3.24
uvloop==0.11.3
websockets==7.0
whitenoise==4.1.2
scipy==1.4.1
imutils==0.5.3
scikit-image #==0.17.2
rapidfuzz
psutil
pyjwt
cryptography
prometheus_client
nvgpu
torch -f https://download.pytorch.org/whl/nightly/torch_nightly-1.2.0.dev20190731%2Bcu100-cp35-cp35m-linux_x86_64.whl
tensorflow -f https://github.com/marcossilva/tensorflow-cuda11-wheel/releases/download/v2.4.1/tensorflow-2.4.0-cp38-cp38-linux_x86_64.whl
torchvision
easyocr

From what I’ve grasped, the problem is that the CUDA/cuDNN versions used to compile the wheel files are older than the one required by my GPU. I did try to use:

torch -f https://download.pytorch.org/whl/nightly/torch_nightly-1.2.0.dev20190731%2Bcu100-cp35-cp35m-linux_x86_64.whl
tensorflow -f https://github.com/marcossilva/tensorflow-cuda11-wheel/releases/download/v2.4.1/tensorflow-2.4.0-cp38-cp38-linux_x86_64.whl

You’re using a really old PyTorch 1.2 built against CUDA 10; you need a more recent version built with CUDA 11.

How can I manage to do it?

Install a newer torch version, e.g. 1.12+cu116:
https://download.pytorch.org/whl/torch/
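A minimal install sketch, assuming pip and a driver new enough for CUDA 11.6 (the exact version pins are examples; pick the matching `torchvision` build for your `torch` version):

```shell
# Remove the old CUDA 10 build first
pip uninstall -y torch torchvision

# Install cu116 builds from the official PyTorch wheel index
pip install torch==1.12.1+cu116 torchvision==0.13.1+cu116 \
    --extra-index-url https://download.pytorch.org/whl/cu116

# Verify the new wheel ships kernels for sm_86 (Ampere / RTX 3060)
python -c "import torch; print(torch.cuda.get_arch_list())"
```

Note that cu116 wheels require Python 3.7+; the cp35 nightly pinned in the requirements file above cannot work here.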
