I am trying to run the UF NLP GatorTron model with PyTorch on a GPU, but model loading fails with an AssertionError saying Torch was not compiled with CUDA enabled, even though I have CUDA installed and my GPU is set up correctly. Below are the script I used and the full error trace.
Script:
from transformers import AutoModel, AutoTokenizer

# Load the model and tokenizer
model_name = "UFNLP/gatortron-medium"  # Replace with your chosen model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

# Move the model to the GPU
model.to('cuda')

# Example input text
input_text = "Bone scan: Negative for distant metastasis."

# Tokenize and encode the input text
encoded_input = tokenizer(input_text, return_tensors="pt")

# Move the encoded input to the GPU
encoded_input = encoded_input.to('cuda')

# Pass the encoded input through the model
output = model(**encoded_input)

# Use the output for your specific task
print(output)
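A quick check along these lines (torch.version.cuda and torch.cuda.is_available() are PyTorch's standard build diagnostics) should show whether the installed wheel is CPU-only; on a CPU-only build I would expect None/False/0 here:

import torch

# A CPU-only wheel reports a version string like "2.4.0+cpu" and
# torch.version.cuda is None; a CUDA wheel looks like "2.4.0+cu121".
print("torch:", torch.__version__)
print("compiled CUDA version:", torch.version.cuda)   # None on a CPU-only build
print("cuda available:", torch.cuda.is_available())   # False without a CUDA build
print("device count:", torch.cuda.device_count())     # 0 without a CUDA build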
Error:
warnings.warn(
Traceback (most recent call last):
  File "C:\Users\user\Desktop\soft.com_shehbaz\GatorTron\gatorTron_model\sample.py", line 9, in <module>
    model.to('cuda')
  File "C:\Users\user\AppData\Local\Programs\Python\Python39\lib\site-packages\transformers\modeling_utils.py", line 2883, in to
    return super().to(*args, **kwargs)
  File "C:\Users\user\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\nn\modules\module.py", line 1174, in to
    return self._apply(convert)
  File "C:\Users\user\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\nn\modules\module.py", line 780, in _apply
    module._apply(fn)
  File "C:\Users\user\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\nn\modules\module.py", line 780, in _apply
    module._apply(fn)
  File "C:\Users\user\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\nn\modules\module.py", line 805, in _apply
    param_applied = fn(param)
  File "C:\Users\user\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\nn\modules\module.py", line 1160, in convert
    return t.to(
  File "C:\Users\user\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\cuda\__init__.py", line 305, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
- Why am I getting the "Torch not compiled with CUDA enabled" error despite having CUDA installed?
- What steps should I take to ensure that PyTorch is correctly compiled with CUDA support?
CUDA version: 12.5
Torch version: 2.4.0
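My current assumption (based on the pytorch.org install selector, not verified) is that a plain pip install torch on Windows gives the CPU-only wheel, and the CUDA build has to be installed from PyTorch's own wheel index; since there is no cu125 index, cu124 would be the closest match for my CUDA 12.5:

pip install --force-reinstall torch --index-url https://download.pytorch.org/whl/cu124

Is that the right approach, or does the error really mean Torch must be compiled from source with CUDA enabled?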