Hello, I have installed CUDA version 11.2 on my machine, and it shows up in nvcc --version; however, nvidia-smi reports a different CUDA version.
Does anyone know how to fix that?
nvidia-smi is installed with the driver package, and the CUDA version it displays is the version that was used to compile the driver and nvidia-smi itself.
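To see the two numbers side by side, here is a quick sketch (assuming both tools are on PATH; the guards just keep it from erroring on machines where one of them is missing):

```shell
# Toolkit version: what nvcc was installed with and compiles against
command -v nvcc >/dev/null && nvcc --version | grep -i release \
  || echo "nvcc not found"

# Driver-side version: the highest CUDA version this driver supports,
# shown in the nvidia-smi header; it need not match the toolkit version
command -v nvidia-smi >/dev/null && nvidia-smi | head -n 4 \
  || echo "nvidia-smi not found"
```

A newer driver reporting, say, CUDA 11.4 alongside an 11.2 toolkit is normal; the driver only has to support at least the toolkit version you build with.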
The GPU name is truncated, but since the name starts with “GeForce”, it has 24GB of VRAM, and Pmax is 350W, it is probably an RTX 3090.
What does “nvidia-smi -q” show?
It is a 3090.
Yes, it looks like you have a normally functioning 3090.
I know nothing about Torch, so you will probably want to be more specific about the errors you are seeing from it, the version, etc., in order to get assistance there.
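If the errors do come from Torch, a common first check is whether Torch can see the GPU at all. A minimal probe, assuming python3 and torch are installed in the environment you run from (adjust the interpreter name if yours differs):

```shell
# Print the Torch version, whether CUDA is usable from Torch,
# and the device name if it is. Falls back to a message when
# torch is not importable in this interpreter.
python3 -c "
import torch
print('torch', torch.__version__)
print('cuda available:', torch.cuda.is_available())
if torch.cuda.is_available():
    print('device:', torch.cuda.get_device_name(0))
" 2>/dev/null || echo "torch not importable"
```

If this prints "cuda available: False" while nvidia-smi works, the mismatch is usually between the Torch build's CUDA version and the installed driver, which is worth stating when asking for help.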
I still do not know why it is not detected in nvidia-smi and apps.
In what way is it not detected in nvidia-smi? You have posted nvidia-smi output that shows that the GPU is detected and correctly identified.
The nvidia-smi command (first image sent) does not show the GPU model or any GPU usage (N/A).
The model name is truncated; that does not mean it was not detected. It is also correctly identified, as far as that goes. If you want to see a full identification, you should run the deviceQuery sample code, as you were already instructed, and as you already posted the output from, which shows your full GPU name.
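If deviceQuery is not handy, nvidia-smi itself can print the untruncated name through its query interface. A sketch, guarded so it degrades gracefully on machines without nvidia-smi:

```shell
# Full GPU name, total memory, and utilization in CSV form,
# without the column-width truncation of the default table view
command -v nvidia-smi >/dev/null \
  && nvidia-smi --query-gpu=name,memory.total,utilization.gpu --format=csv \
  || echo "nvidia-smi not found"
```
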
Regarding N/A for utilization: the GPU process utilization is displayed and spelled out. The GPU process memory utilization is not displayed in all circumstances for GeForce GPUs and/or for GPUs in WSL, when the process is reflected from the Windows side. This is all expected, normal behavior.
If you think something should be improved, you’re welcome to file a bug.
For what it's worth, I already filed a bug with the nvidia-smi team to have the GPU name display improved, and the bug was rejected.
Good luck!