Hello all.
I am running RoseTTAFold on Ubuntu and hitting a CUDA error. The details are as follows:
Using backend: pytorch
Traceback (most recent call last):
File "/home/ganjh/RoseTTAFold/network/predict_pyRosetta.py", line 199, in <module>
pred = Predictor(model_dir=args.model_dir, use_cpu=args.use_cpu)
File "/home/ganjh/RoseTTAFold/network/predict_pyRosetta.py", line 67, in __init__
self.model = RoseTTAFoldModule(**MODEL_PARAM).to(self.device)
File "/home/ganjh/.conda/envs/RoseTTAFold/lib/python3.8/site-packages/torch/nn/modules/module.py", line 852, in to
return self._apply(convert)
File "/home/ganjh/.conda/envs/RoseTTAFold/lib/python3.8/site-packages/torch/nn/modules/module.py", line 530, in _apply
module._apply(fn)
File "/home/ganjh/.conda/envs/RoseTTAFold/lib/python3.8/site-packages/torch/nn/modules/module.py", line 530, in _apply
module._apply(fn)
File "/home/ganjh/.conda/envs/RoseTTAFold/lib/python3.8/site-packages/torch/nn/modules/module.py", line 552, in _apply
param_applied = fn(param)
File "/home/ganjh/.conda/envs/RoseTTAFold/lib/python3.8/site-packages/torch/nn/modules/module.py", line 850, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
RuntimeError: CUDA error: the provided PTX was compiled with an unsupported toolchain.
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
The CUDA and driver information is as follows:
NVIDIA-SMI 450.142.00
Driver Version: 450.142.00
CUDA Version: 11.0
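For reference, here is a minimal check I can run inside the same RoseTTAFold conda env to compare the CUDA toolkit my PyTorch build was compiled with against what the 450.142.00 driver supports (just a sketch, assuming torch is the same install that produced the traceback above):

# Minimal check of the CUDA toolkit this PyTorch wheel was built with,
# versus what the installed driver exposes (run in the RoseTTAFold env).
import torch

print("PyTorch version:", torch.__version__)
print("Built with CUDA:", torch.version.cuda)        # toolkit used to build this wheel
print("CUDA available: ", torch.cuda.is_available())
if torch.cuda.is_available():
    # queries the GPU through the installed 450.142.00 driver
    print("GPU:", torch.cuda.get_device_name(0))

If I understand the error correctly, a torch.version.cuda newer than 11.0 (e.g. 11.1 or 11.3) could explain the "PTX was compiled with an unsupported toolchain" message, since the 450.x driver only supports up to CUDA 11.0, but I would appreciate confirmation.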
Thanks