Been having great fun learning to play with CUDA… Thanks NVIDIA for an AMAZING tool!
It takes a real switch of gears to see that your new limits are memory access patterns and branching structure!
I’ve played with many of the (very impressive) demo projects, and even modified a few.
I have tons of newbie questions though, ones that have clues scattered everywhere but aren’t so obvious.
#1 One confusion I have is a HARDWARE question. The “5 second timeout” is alluded to everywhere but never explained clearly. My (background, number-crunching) computations will take DAYS if not WEEKS to run, but it’s reasonably feasible to break them into, say, 50ms chunks.
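For what it’s worth, the chunked approach I have in mind is just a host-side loop over many short kernel launches, something like the sketch below. The kernel name, launch geometry, and per-chunk work here are all placeholders for my real code:

```cuda
// Sketch: break one multi-day computation into many short launches.
// step_kernel and the sizes are made-up placeholders.
__global__ void step_kernel(float *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        data[i] += 1.0f;  // stand-in for ~50ms of real work per chunk
    }
}

int main(void)
{
    const int n = 1 << 20;
    float *d_data;
    cudaMalloc((void **)&d_data, n * sizeof(float));
    cudaMemset(d_data, 0, n * sizeof(float));

    // Many short launches instead of one giant kernel; each launch
    // returns control to the host, so the watchdog should never trip.
    for (int chunk = 0; chunk < 1000; ++chunk) {
        step_kernel<<<(n + 255) / 256, 256>>>(d_data, n);
        cudaThreadSynchronize();  // wait for this chunk before the next
    }

    cudaFree(d_data);
    return 0;
}
```

My hope was that between launches the driver could service the display, but as described below, that doesn’t seem to be what happens in practice.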
But my experiments show that while a kernel runs, the machine is FROZEN. Not just a 100% pegged CPU, but the display itself is absolutely unchanging: no mouse pointer movement, nothing. Is this supposed to happen?
I get the feeling it’s correct, since there are references to using “non-display” cards for CUDA, and making sure SLI is off, and so on.
But it just seems so strange that the display will FREEZE. Is there no way to let CUDA work “in the background” somehow?
This issue may be because I’m using a laptop… a Thinkpad T61p with Quadro 570M. But it sure makes laptop CUDA terrible, even though everything else works fine.
On a desktop machine, is it best practice to do something like use one cheap card for display (no CUDA) and a second (or more) for CUDA? I would assume then that there’s no need to match the card types… I could use my old 7600GT or something for display.
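If the two-card setup is the way to go, I assume the program would enumerate the CUDA devices and pick the compute card with cudaSetDevice(). A sketch of what I mean (the idea that the compute card shows up as a separate device index is my assumption):

```cuda
// Sketch: list CUDA devices, then select the dedicated compute card.
#include <stdio.h>

int main(void)
{
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int d = 0; d < count; ++d) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, d);
        printf("device %d: %s (compute %d.%d)\n",
               d, prop.name, prop.major, prop.minor);
    }
    // Select whichever index the listing shows as the compute card
    // (index 0 here is just an example):
    cudaSetDevice(0);
    return 0;
}
```

(I’d guess the old 7600GT wouldn’t even appear in that list, since it isn’t CUDA-capable, which would be fine for a display-only card.)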
Since I will likely buy a new CUDA power-card now that I’ve had programming success, are there any hints about the new GTX 280 card coming out in two weeks? I don’t need that much compute power, BUT if the CUDA hardware capability model is updated (say, to support doubles or 64-bit ints), it would be smart to get the latest capabilities.