Quadro RTX 8000 runs out of memory with everything

Hi, I have several scripts using TensorFlow, PyTorch, etc. that leverage CUDA/cuDNN. They all worked with my GTX 1080. I upgraded to a Quadro RTX 8000, and now the same scripts that used to load models and train all throw CUDA out-of-memory errors, unless I set the parameters far smaller than even the GTX 1080's limits.

Specifically, the out-of-memory error always occurs while trying to allocate 32 GB of GPU RAM. I have 48 GB, so why is CUDA always trying to allocate 32? These are trivial training scripts; the same ones allocate about 2 GB when I run them on the 8 GB GTX 1080.
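One thing worth checking: TensorFlow 2.x by default reserves most of a GPU's memory up front, so on a bigger card the initial grab is correspondingly bigger, which can look like one enormous allocation at startup. A minimal sketch of turning that off (assuming TF 2.x; the function no-ops if TensorFlow isn't installed, and this may not help if the failure is elsewhere):

```python
def enable_memory_growth():
    """Ask TensorFlow to grow GPU allocations on demand.

    Returns the list of GPUs configured, or [] if TensorFlow is absent.
    """
    try:
        import tensorflow as tf
    except ImportError:
        # TensorFlow not installed in this environment; nothing to configure.
        return []
    gpus = tf.config.list_physical_devices("GPU")
    for gpu in gpus:
        # Allocate incrementally instead of mapping most of VRAM at startup.
        tf.config.experimental.set_memory_growth(gpu, True)
    return gpus

enable_memory_growth()
```

This must run before any op touches the GPU, since memory growth can't be changed once the device is initialized.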

Could this be because I only have 32 GB of CPU RAM (less than the card's 48 GB of GPU RAM), and that is causing problems?

I just ordered 64 GB of CPU RAM, which arrives tomorrow; I'll install it and keep you posted, but any other ideas?

I have tried too many NVIDIA graphics drivers to list, including the very newest. At the moment I'm on driver 441.22, but I've tried many versions, all of which claim to support the Quadro RTX 8000. VR, however, runs great!

Windows 10, 64-bit here.


PS: I updated to CUDA 10.2 and the latest cuDNN; the issue remains.

Update: I increased the pagefile to 32 GB, making physical + virtual memory 64 GB, which exceeds the card's 48 GB, and the error went away! Posting for anyone else who hits this.
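The arithmetic behind the fix above can be sketched as a quick check. This assumes the pattern observed here, where Windows needs enough system commit (physical RAM + pagefile) to back the GPU's full memory; the pagefile sizes below are illustrative numbers, not measured defaults:

```python
def commit_covers_vram(physical_gb, pagefile_gb, vram_gb):
    """True if RAM + pagefile together can back the card's full VRAM."""
    return physical_gb + pagefile_gb >= vram_gb

# Before: 32 GB RAM + a small pagefile (hypothetical 8 GB) < 48 GB VRAM.
print(commit_covers_vram(32, 8, 48))    # False -> OOM errors
# After raising the pagefile to 32 GB: 32 + 32 = 64 GB >= 48 GB.
print(commit_covers_vram(32, 32, 48))   # True -> errors gone
```

So the 64 GB RAM upgrade should also resolve it on its own, even with a small pagefile.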