How to use CUDA to train a model with PyTorch

Description

Hi,

I want to use the GPU of my system (GeForce RTX 3060 6GB) to run a Jupyter notebook that trains a model with PyTorch, but I am unable to execute the code on the GPU: the device is reported as CPU, not GPU.
For example, when I run device = "cuda" if torch.cuda.is_available() else "cpu", I get "cpu".
Can someone please help me?
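For reference, this is roughly the check I am running in the notebook (a minimal sketch; the extra diagnostic prints are added here only to show what I see, and the actual training code is omitted):

```python
import torch

# Report what this PyTorch build can see; on my machine the last line prints "cpu".
print("PyTorch version:", torch.__version__)
print("Built with CUDA:", torch.version.cuda)       # None would mean a CPU-only build
print("CUDA available:", torch.cuda.is_available())
print("Visible GPUs:", torch.cuda.device_count())

device = "cuda" if torch.cuda.is_available() else "cpu"
print("Selected device:", device)
```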

Environment

GPU Type: GeForce RTX 3060
Nvidia Driver Version:
CUDA Version:
CUDNN Version:
Operating System + Version:
Python Version (if applicable): 3.9.11
PyTorch Version (if applicable): 1.12.0
Baremetal or Container (if container which image + tag):

Hi,

Please make sure CUDA is installed correctly and that nvidia-smi lists your GPU.
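As a quick sanity check (a minimal sketch, assuming nvidia-smi is on your PATH and you are running the same Python environment as the notebook), the following verifies both the driver and the PyTorch build before touching the GPU:

```python
import subprocess
import torch

# Driver check: nvidia-smi should list the RTX 3060 if the NVIDIA driver is installed.
print(subprocess.run(["nvidia-smi"], capture_output=True, text=True).stdout)

# Build check: torch.version.cuda is None for CPU-only wheels, a common reason
# torch.cuda.is_available() returns False even on a machine with a working GPU.
print("Built with CUDA:", torch.version.cuda)
print("CUDA available:", torch.cuda.is_available())

if torch.cuda.is_available():
    device = torch.device("cuda")
    x = torch.randn(3, 3, device=device)  # allocate a small tensor on the GPU
    print("Tensor is on:", x.device)      # expected: cuda:0
```

If torch.version.cuda prints None, the environment has the CPU-only wheel installed, and reinstalling a CUDA-enabled PyTorch build from pytorch.org usually resolves this.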
This forum focuses on updates and issues related to TensorRT, so we recommend reaching out to the PyTorch or CUDA forums for better help.

Thank you.
