Why is my GTX 460 idling at 303 MB? (~1/3 total RAM)

Hello All,

I’m not sure where this thread is supposed to go (which explains why my programs are so bad, ha), but the output of ‘nvidia-smi’ tells me that I’m at 303 MB of 1023 MB. The thing is, this is an idle system and it’s never been this high before.

I’m using Arch Linux, kernel 3.9.9-1, and my GPU is a GTX 460 with driver version 319.32.

I am using the nvidia-settings control panel to override anisotropic filtering and to enhance antialiasing. Both are at max, but this is nothing new for me; I’ve been doing this the whole time and my idle usage has never been so high.

I’m using the GNOME 3 shell with plug-ins. Could plug-ins honestly be eating up this much RAM? Because I can start up my system, use the plug-ins, and the usage won’t go up. It seems like over time the GPU RAM fluctuates between 300 and 400 MB. Are these memory leaks from poorly-written code? I’m not even sure how many of the programs I normally run are OpenGL-heavy.

Currently I have Chromium, psensor, gedit, terminator, and Clementine all running on top of the base processes, plus the shell, the plug-ins, and the nVidia enhancements.

Is the X server running on the card? If yes, that explains it. Displaying something requires a few bytes per pixel. I have two 22-inch displays at max resolution and it uses more than 400 MB. When I do some Matlab rendering it grows as well.

This is gonna sound stupid but how do I check that?

Open nvidia-settings and click on the name of the GPU. If your X server runs on it, you will see a monitor attached and an “X Screens” line indicating which screen it outputs to. Alternatively, open the file /etc/X11/xorg.conf and check which card is set in the Device section.

If you have only one video card in the system, the X server runs on it.
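For reference, the relevant part of /etc/X11/xorg.conf usually looks something like this (a minimal sketch; the Identifier and BusID below are made-up examples, and your file may differ):

```
Section "Device"
    Identifier "GPU0"
    Driver     "nvidia"
    # Example bus ID only -- find yours with: lspci | grep -i vga
    BusID      "PCI:1:0:0"
EndSection
```

Whichever card the Device section of the active layout points at is the one the X server (and its framebuffer memory) lives on.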

Ah, in that case then yes, I am running X off of the card. Is that where my high usage would be?

Yes. It depends on the resolution and size of the screens. In my case, with 2x 22-inch displays, it is 600 MB. If you have two slots, buy a very cheap card (new or used) and use it for display; I saw some GT 610s at about 40 euros. Or buy a Titan and use the 460 for display.

Or, if you can, shut down the X server, which will also give you the memory back. I typically SSH to my GPU workstation from another computer for this reason (as well as to avoid the watchdog timer).

Hmmm… Maybe a new GPU wouldn’t be so bad. But out of all the parts in my system, my GPU is by far the strongest. If anything, I really, really want a better CPU. I didn’t know it at the time but AMD chips aren’t my bag. They’re for the average common man and by that, I mean the kind that isn’t afraid his chip is going to melt. I feel Intel caters to the scaredy cats like me :P

So if anything, I’m buying some fancy Intel chip before another GPU. Thank you for telling me where all my RAM was going. I guess that’s just what happens as time goes by.

Okay, I do have one question though.

Let’s assume that I got two cards but they weren’t the same. Let’s say I got an even better card, a radically better one. I was looking, and a 660 Ti looks pretty sweet: 2 GB of memory, and was Newegg lying when they said the memory clock was 6.8 GHz? I’m assuming I must have read that wrong.

Nevertheless, could I designate certain tasks to run on certain cards? Or could I transfer loads at will? Like, it’s cool if the 660 is the default dominant card, but can I just transfer whatever it’s doing, in its entirety, to my 460? I’m talking about memory and everything.

I know for a fact that my CPU and HDD hook up well enough that I can do a RAM dump into my swap partition, which happens only when I’m running leaky programs at 100% CPU usage (a.k.a. Steam, and that one time I was like, “So, how big is too big for a structure?”).

Because, I want my 460 to do the grunt work of everyday tasks while I want the 660 for CUDA development and playing videogames. Man, an i7 would do great on that too…

I understand that trying to use both a 460 and 660 for the same task is dumb so I want to divide up tasks. Is there a way I can do that?

But seriously, DotA 2 on Linux actually forced my GPU to start using system RAM instead of its on-board pool. I went up to 970 MB last night, and I think it went up from there, so it caused a memory swap, which is fine by me. 1.6 GHz RAM vs. 1.4 GHz GPU memory.


I have a 640 GT as my primary card and a 660 Ti as my CUDA card. The 660 Ti appears as device 0 to CUDA programs, so in my programs I just call cudaSetDevice(0) and they run on the 660 Ti. The other card appears as device 1. My X server runs on the 640 GT. I can use both cards for CUDA, but if I run something on the card that runs the X server, the interface becomes almost unresponsive, with a few seconds of delay for every click.
So, in conclusion: if you have two cards, you can run the CUDA programs on a designated card and the X server (and all the other stuff) on the other card. For my motherboard, I had to put the card in the PCI slot that has id 1.
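To make the device-selection part concrete, here is a minimal sketch using the CUDA runtime API. It just enumerates the visible cards and pins all subsequent work to one of them; the device numbering shown in the comments is an assumption based on my setup and may differ on yours (it can’t be run without a CUDA-capable GPU and toolkit installed):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);  // how many CUDA-capable cards are visible
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // On my machine: device 0 = 660 Ti (compute), device 1 = 640 GT (display)
        printf("device %d: %s\n", i, prop.name);
    }
    cudaSetDevice(0);  // all later cudaMalloc/kernel launches in this thread go to device 0
    // ... allocate memory and launch kernels on the chosen card here ...
    return 0;
}
```

You can also steer an unmodified program to a particular card with the CUDA_VISIBLE_DEVICES environment variable (e.g. `CUDA_VISIBLE_DEVICES=1 ./my_app` makes only the second card visible, and it shows up as device 0 inside the program).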