Hello friends, I’ve been trying to be self-sufficient in resolving this issue on my own, but I’ve now reached the point where I have to seek advice from people much smarter than I am (meaning you).
I just learned about CUDA and am interested in using it with PyTorch. Despite my efforts, torch.cuda.is_available() keeps returning False.
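Concretely, this is the check I keep running from a Python shell (the version prints are just extra diagnostics I've been looking at):

import torch
print(torch.__version__)          # installed PyTorch version
print(torch.version.cuda)         # None would mean this build was compiled without CUDA support
print(torch.cuda.is_available())  # returns False, which is the problem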
So here is what I have done thus far:
2012 MBP Retina
GeForce GT 650M
Updated to macOS 10.13.6 (17G6030) to be compatible with the latest Web Driver
Installed Web Driver 387.10.10.10.40.124 (released only about a week ago; it indicates support for Toolkit 10.1)
Installed CUDA Driver 418.105 (the latest version; its release notes say “Supports all NVIDIA products available on Mac HW.”)
Installed Xcode 10.1 and checked that it was running Apple LLVM 10.0.0, per the CUDA installation docs
Installed CUDA Toolkit 10.1
Ran $ kextstat | grep -i cuda, which returned:
159 0 0xffffff7f8162e000 0x2000 0x2000 com.nvidia.CUDA (1.1.0) 50F4AE08-3D20-3B13-B36D-439DEEB1D49C <4 1>
I believe this return value verifies that the CUDA toolkit can communicate correctly with the CUDA-capable hardware.
Ran the nbody simulation successfully
Ran deviceQuery and bandwidthTest; both returned PASS results
From cuDNN 7.5.0, placed cudnn.h in /Developer/NVIDIA/CUDA-10.1/include, and placed libcudnn_static.a, libcudnn.7.dylib, and libcudnn.dylib into /Developer/NVIDIA/CUDA-10.1/lib (this was my best guess of where these go, since there was no installer, just a compressed archive; see the sanity-check sketch just below)
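Here is the quick sanity-check sketch I mentioned. It assumes the paths above are where the toolkit actually looks, and it just confirms the files exist and that the dylib can be loaded (cudnnGetVersion() is part of the public cuDNN API and returns e.g. 7500 for 7.5.0):

import ctypes, os

# Check that the header and dylib ended up where I copied them
for p in ["/Developer/NVIDIA/CUDA-10.1/include/cudnn.h",
          "/Developer/NVIDIA/CUDA-10.1/lib/libcudnn.7.dylib"]:
    print(p, "exists" if os.path.exists(p) else "MISSING")

# Try to dlopen the library and ask for its version number
libcudnn = ctypes.CDLL("/Developer/NVIDIA/CUDA-10.1/lib/libcudnn.7.dylib")
print("cuDNN version:", libcudnn.cudnnGetVersion())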
It was all looking promising, but alas, it does not work. Any help at all in resolving this issue will be greatly appreciated. Thank you.