AssertionError: Torch not compiled with CUDA enabled

I'm asking in this community because I suspect that, since my graphics card is not very popular, it might not actually be supported by CUDA or the usual configurations, and there is no official documentation about its CUDA support. In my case, although CUDA is installed successfully, PyTorch cannot connect to the GPU.
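
For concreteness, a minimal check along these lines shows the state of the install; the torch.version.cuda line is an extra diagnostic I'm adding here for completeness (if it prints None, the installed wheel is a CPU-only build), not something whose output I recorded:

import torch

# Quick diagnostics for a CUDA-enabled PyTorch install
print(torch.__version__)           # installed PyTorch version, e.g. 1.12.1
print(torch.version.cuda)          # CUDA version the wheel was built against; None means a CPU-only build
print(torch.cuda.is_available())   # returns False in my environment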

I'm using an NVIDIA MX450 on Windows 10 Pro with CUDA 10.2 and PyTorch 1.12.1. This graphics card supports CUDA up to 12.0 and is compatible with this version of PyTorch. In addition to that, I have:

  1. added CUDA to the environment PATH: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.0\bin and C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.0\libnvvp.
  2. installed cudatoolkit 10.2 and torchvision 0.13.1 in conda (the install command I believe matches this combination is shown after this list).
  3. in the system environment variables, set CUDA_VISIBLE_DEVICES to 1, since Task Manager numbers the NVIDIA GPU as GPU 1.
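
For reference, the conda command I believe corresponds to this combination (PyTorch 1.12.1, torchvision 0.13.1, CUDA 10.2, as given by the pytorch.org previous-versions page) is the one below; I may have run a slightly different variant, so treat it as a sketch rather than an exact record of what I typed:

conda install pytorch==1.12.1 torchvision==0.13.1 cudatoolkit=10.2 -c pytorch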

torch.cuda.is_available() returns False, and below is the code along with the error it raises:

my_tensor = torch.tensor([[1, 2, 3], [4, 5, 6]], dtype=torch.float32, device="cuda")

AssertionError                            Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_13584\1593042090.py in <module>
----> 1 my_tensor = torch.tensor([[1, 2, 3], [4, 5, 6]], dtype=torch.float32, device="cuda")
      2 print(my_tensor)
      3 torch.cuda.is_available()

~\anaconda3\lib\site-packages\torch\cuda\__init__.py in _lazy_init()
    209                 "multiprocessing, you must use the 'spawn' start method")
    210         if not hasattr(torch._C, '_cuda_getDeviceCount'):
--> 211             raise AssertionError("Torch not compiled with CUDA enabled")
    212         if _cudart is None:
    213             raise AssertionError(

AssertionError: Torch not compiled with CUDA enabled