Hi, I have several scripts using TensorFlow, PyTorch, etc., leveraging CUDA/cuDNN. They all worked with my GTX 1080. I upgraded to a Quadro RTX 8000, and now the same scripts that previously loaded models and trained fine all cause CUDA out-of-memory errors (unless I set parameters to very small values, much smaller than even the GTX 1080's limits).
Specifically, the out-of-memory error always reports a failure to allocate 32 GB of GPU RAM. I have 48 GB, so why is CUDA always trying to allocate 32? These are trivial training scripts; the same scripts allocate about 2 GB when I run them on the 8 GB GTX 1080.
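One thing I can rule in or out on the TensorFlow side: by default, TF tries to reserve most of the GPU's free memory up front, so a large allocation at startup may come from TF's allocator rather than the model itself. A minimal sketch of enabling on-demand ("memory growth") allocation instead, using the TF 2.x config API (the TF 1.x equivalent via `ConfigProto` is shown in a comment; whether this applies depends on which TF version the scripts use):

```python
import tensorflow as tf

# TF 2.x: ask TF to allocate GPU memory incrementally as needed,
# instead of grabbing (nearly) all free GPU memory at startup.
gpus = tf.config.experimental.list_physical_devices('GPU')
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)

# TF 1.x equivalent (if the scripts still use sessions):
#   config = tf.ConfigProto()
#   config.gpu_options.allow_growth = True
#   sess = tf.Session(config=config)
```

If the OOM still reports a ~32 GB allocation with growth enabled, that would point away from TF's up-front reservation and toward something else (driver, cuDNN workspace selection, etc.).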
Could this be because I only have 32 GB of CPU RAM (less than the 48 GB of GPU RAM), and that mismatch is causing problems?
I just ordered 64 GB of CPU RAM and will receive and install it tomorrow. I'll keep you posted, but any other ideas?
I have tried too many NVIDIA graphics drivers to list, including the very newest. At the moment I'm on 441.22, but I've tried many versions, all of which claim to support the Quadro RTX 8000. VR, however, runs great!
Windows 10, 64-bit here. Very weird.