CUDA error: the provided PTX was compiled with an unsupported toolchain

Hello all.
I am running RoseTTAFold on Ubuntu and hit a CUDA error. The details are as follows:

Using backend: pytorch
Traceback (most recent call last):
  File "/home/ganjh/RoseTTAFold/network/predict_pyRosetta.py", line 199, in <module>
    pred = Predictor(model_dir=args.model_dir, use_cpu=args.use_cpu)
  File "/home/ganjh/RoseTTAFold/network/predict_pyRosetta.py", line 67, in __init__
    self.model = RoseTTAFoldModule(**MODEL_PARAM).to(self.device)
  File "/home/ganjh/.conda/envs/RoseTTAFold/lib/python3.8/site-packages/torch/nn/modules/module.py", line 852, in to
    return self._apply(convert)
  File "/home/ganjh/.conda/envs/RoseTTAFold/lib/python3.8/site-packages/torch/nn/modules/module.py", line 530, in _apply
    module._apply(fn)
  File "/home/ganjh/.conda/envs/RoseTTAFold/lib/python3.8/site-packages/torch/nn/modules/module.py", line 530, in _apply
    module._apply(fn)
  File "/home/ganjh/.conda/envs/RoseTTAFold/lib/python3.8/site-packages/torch/nn/modules/module.py", line 552, in _apply
    param_applied = fn(param)
  File "/home/ganjh/.conda/envs/RoseTTAFold/lib/python3.8/site-packages/torch/nn/modules/module.py", line 850, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
RuntimeError: CUDA error: the provided PTX was compiled with an unsupported toolchain.
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.

The CUDA and driver information is as follows:
NVIDIA-SMI 450.142.00
Driver Version: 450.142.00
CUDA Version: 11.0

Thanks


Update the GPU driver to the latest one for your GPU.


Thanks for your answer. I have fixed the problem by updating the driver to version 470.57. However, my previous driver version, 450.142.00, is already higher than 450.80, which is the minimum required driver version for CUDA 11.0, so I am not sure why it failed.

Torch can “bring along” its own CUDA runtime. Even though you installed CUDA 11.0 yourself, the Torch build you installed may have bundled a CUDA version higher than 11.0. If it bundled CUDA 11.1, for example, your 450.xx GPU driver would no longer be sufficient. That would be my guess as to what is happening here; since you haven’t provided the Torch version, how you installed it, etc., it’s just a guess. Torch generally does not use the CUDA toolkit you installed yourself, it uses its own bundled one. It does use the GPU driver you have installed, however.
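To make the version logic concrete, here is a minimal sketch of the check involved. The 450.80 minimum for CUDA 11.0 is the figure quoted in this thread; the 455.23 minimum for CUDA 11.1 is an assumption for illustration, so consult NVIDIA's CUDA compatibility table for authoritative values:

```python
def parse_ver(s):
    # "450.142.00" -> (450, 142, 0), so tuples compare numerically
    return tuple(int(p) for p in s.split("."))

# Minimum Linux driver per CUDA runtime version.
# 11.0 -> 450.80 comes from this thread; 11.1 -> 455.23 is an
# illustrative assumption, not an authoritative figure.
MIN_DRIVER = {"11.0": "450.80", "11.1": "455.23"}

def driver_supports(driver_version, cuda_version):
    """True if the installed driver meets the minimum for this CUDA runtime."""
    return parse_ver(driver_version) >= parse_ver(MIN_DRIVER[cuda_version])

# The driver in this thread is fine for CUDA 11.0...
print(driver_supports("450.142.00", "11.0"))  # True
# ...but not for a Torch build that bundles CUDA 11.1.
print(driver_supports("450.142.00", "11.1"))  # False
```

This is why comparing your driver only against the CUDA version you installed system-wide can mislead you: the version that matters is the one Torch bundles (see `torch.version.cuda`).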


Got it, thank you very much!

So to confirm which CUDA runtime your Torch build bundles, try running:

print(torch.version.cuda)


This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.