CUDA-Noob, Slackware 12.2, getting ready to buy card

Hi,

I’m looking to get started with CUDA, and before I pull out the credit card and order the GPU I’d like to make sure I’ve got everything covered. I’ve had a fair amount of experience with math programming, Unix, Linux, etc., but none with graphics programming. I won’t be doing any “traditional” graphics programming with this card, just number crunching. Truth is, about the only thing I really know about graphics cards is how to plug 'em into expansion slots … :">

The PC I’m planning to do this with uses an AMD Phenom 9600 quad core CPU (2.3 GHz). OS is Slackware 12.2, kernel is 2.6.27.7-smp. 2 GB RAM. The card I’m thinking of buying is the Sparkle GeForce GTX 260+ Core 216:

http://www.tigerdirect.com/applications/SearchTools/item-details.asp?EdpNo=4405946&CatId=3775

A bit on the pricey side, but I need double precision floats, which means I need “compute capability 1.3,” and from the list in App. A of “NVIDIA CUDA Programming Guide 2.3,” it looks like I’m stuck with that expense.
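
(For what it’s worth, from reading the guide it looks like you can confirm the compute capability at runtime once a card is in – a minimal sketch using the runtime API’s cudaGetDeviceProperties; obviously I haven’t been able to test it on real hardware yet:)

#include <cstdio>
#include <cuda_runtime.h>

// List each CUDA device and whether it supports double precision
// (compute capability 1.3 or higher).
int main()
{
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        int doubles = (prop.major > 1) || (prop.major == 1 && prop.minor >= 3);
        printf("Device %d: %s, compute capability %d.%d, doubles %s\n",
               i, prop.name, prop.major, prop.minor,
               doubles ? "supported" : "NOT supported");
    }
    return 0;
}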

Anyway, here’s what I’ve done so far: downloaded the NVIDIA driver, the CUDA Toolkit, and the CUDA SDK (latest versions of everything). Slackware wasn’t on the list of supported OSes, so I used the Red Hat Enterprise 5.x versions.

Everything installed fine. The NVIDIA driver installed even though there’s no NVIDIA card in the machine (right now) – I suppose that’s no problem. I ran make for the SDK with the emulation option (“make emu=1”). The test programs ran OK and reported that everything had passed.
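
(For code outside the SDK, the equivalent of the emu build appears to be nvcc’s -deviceemu flag – here’s the sort of trivial sanity check that runs fine in emulation with no card present; the file name and contents are arbitrary:)

// hello.cu - sanity-check the toolchain without a card.
// Build for device emulation with: nvcc -deviceemu hello.cu -o hello
#include <cstdio>
#include <cuda_runtime.h>

__global__ void fill(int *out)
{
    out[threadIdx.x] = threadIdx.x;   // each thread writes its own index
}

int main()
{
    int h[4];
    int *d;
    cudaMalloc((void **)&d, sizeof(h));
    fill<<<1, 4>>>(d);
    cudaMemcpy(h, d, sizeof(h), cudaMemcpyDeviceToHost);
    printf("%d %d %d %d\n", h[0], h[1], h[2], h[3]);   // expect: 0 1 2 3
    cudaFree(d);
    return 0;
}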

So … are there other things I need to check for my OS and hardware? Or can I go ahead and order the card? Many thanks,

SAM

I don’t know what you’re worrying about. CUDA should work as soon as you have a CUDA-enabled card installed. The only unusual thing in your setup is the OS, and that turns out to be fine.
So if your motherboard has the required slot for the GPU, just go ahead and kick off.

All seems fine to me too.
There are of course a thousand things that are possible to forget (like checking that the motherboard has an appropriate slot, as wico noted) that are considered “obvious”. But you seem to largely know your way around a computer :)

Since you’ve gotten the SDK stuff to run in emulation mode, I think you’re pretty much ready to go. And should you have any other issues, post back here :)

Just in case no one told you: double-precision floating point is approximately 10 times slower than single precision on these cards (as I understand it, the GT200 has one double-precision unit per multiprocessor versus eight single-precision ones, so peak double throughput is roughly an eighth of peak single). Just FYI.

Matt

I’ve been using CUDA with Slackware for 2 years, though with Quadro FX products in notebooks – no problems so far with either the drivers or the SDK (and I’ve used the Red Hat versions all along, too). The only thing I’m not sure about in your setup is whether the latest driver works with the 2.6.27 kernel – I track Slackware -current all the time, am now on 13.0 with the 2.6.29 kernel, and have had no problems that way. But you could certainly buy the card, try it first with your current Slackware installation, and then update if needed.

I just bought this card a few weeks ago. Just a few notes: a big power supply is needed, 600 W minimum. The card is long at 280 mm (11 in), so there isn’t much space left between the card and the disk bay.
Have fun

Since you need double precision, I’ll mention that a new architecture was revealed today, and speculation has it that double precision will be far faster.
Threads on “Fermi” are likely to be sprouting like weeds all day, so read more in those.

OK, got the card in, everything works. Thanks for the warnings. I had already tracked down the power usage and bought a 1 horsepower (750W) PSU. I knew the card would be a tight squeeze, but it looked like it would barely fit – which it did, barely. One of my hard drive bays is now inaccessible, but that’s ok, wasn’t planning on using it anyway.

Damn, the thing is fast! I did a few multiplications of big matrices, and got 200X - 300X improvement over the same program running on the CPU. I’m sure my CPU code wasn’t optimized for the best possible performance (I’m at best a mediocre programmer) … but then again, I’m sure my GPU code wasn’t optimized either.
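
For the curious: nothing clever on the GPU side, basically the textbook naive kernel. A rough sketch of the approach (not my exact code – names and the matrix size are just for illustration):

#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Naive matrix multiply, C = A * B, all N x N, row-major.
// One thread computes one element of C.
__global__ void matmul(const double *A, const double *B, double *C, int N)
{
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < N && col < N) {
        double sum = 0.0;
        for (int k = 0; k < N; ++k)
            sum += A[row * N + k] * B[k * N + col];
        C[row * N + col] = sum;
    }
}

int main()
{
    const int N = 1024;                      // three matrices, ~24 MB total
    size_t bytes = (size_t)N * N * sizeof(double);

    // Host matrices with trivial contents so the result is checkable.
    double *hA = (double *)malloc(bytes);
    double *hB = (double *)malloc(bytes);
    double *hC = (double *)malloc(bytes);
    for (int i = 0; i < N * N; ++i) { hA[i] = 1.0; hB[i] = 2.0; }

    double *dA, *dB, *dC;
    cudaMalloc((void **)&dA, bytes);
    cudaMalloc((void **)&dB, bytes);
    cudaMalloc((void **)&dC, bytes);
    cudaMemcpy(dA, hA, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB, bytes, cudaMemcpyHostToDevice);

    // 16x16 thread blocks tile the output matrix.
    dim3 block(16, 16);
    dim3 grid((N + block.x - 1) / block.x, (N + block.y - 1) / block.y);
    matmul<<<grid, block>>>(dA, dB, dC, N);
    cudaMemcpy(hC, dC, bytes, cudaMemcpyDeviceToHost);

    printf("C[0] = %f (expect %f)\n", hC[0], 2.0 * N);  // 1.0 * 2.0 summed N times

    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    free(hA); free(hB); free(hC);
    return 0;
}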

Now, I’m left with one small problem. When I installed the driver, toolkit, and SDK, everything worked fine. Per the instructions, I installed the driver and toolkit as root, and the SDK as an ordinary user. When I rebooted, deviceQuery (run as user) no longer recognized the CUDA card.

However, what I found was that if I run deviceQuery as root after rebooting, it does recognize the card, and somehow makes it available to the user account as well – i.e., once I’ve run deviceQuery as root, I can go ahead and use CUDA normally in the user account; deviceQuery, my code, etc. all work for the user.

I haven’t done it enough times to convince me that this is absolutely reliable, but so far it seems to work. If it comes down to it, I’ve got no problem with adding deviceQuery to rc.local and letting that be the end of it. I’m wondering if anyone else has seen this issue, though? – maybe a SUID bit isn’t set correctly somewhere on some program?

I track Slackware -current all the time, am now on 13.0 with the 2.6.29 kernel, and have had no problems that way.

That’s good to know. So far, with the exception of the above problem, everything seems to work, but I haven’t given it a really thorough shakedown yet. If I run into any problems that are more than trivial, that’s the first thing I’ll try.

a new architecture was revealed today, and speculation has it that double precision will be far faster.

LOL – Figures something like that would happen. They probably put out the press release five minutes after I placed the order. :wacko: Well, from what I’ve seen, this one will more than fill my needs for quite a while.

SAM

My first thought about your “user has no access to card” problem is:
Is the card owned by group “video”, and your user not a member?

Specifically, you need write access to /dev/nvidia* as whatever user is running the CUDA programs.

Ahh, that’s it. I’m not running X, so those devices weren’t being recreated on reboot; when I run deviceQuery as root, it creates the /dev/nvidia* nodes, which is why everything works for the ordinary user afterward. Thanks!
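
For anyone else who hits this without X running: the nvidia kernel module has to be loaded first (modprobe nvidia), and then the nodes need to exist. A minimal C sketch of what gets created, assuming a single card – 195 is the character major the NVIDIA driver registers, and minor 255 is nvidiactl (the driver README has an equivalent shell script, if I remember right):

/* make_nvidia_nodes.c - recreate the NVIDIA device nodes by hand.
 * Run as root; assumes one card (nvidia0). Add more minors for
 * additional cards. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <sys/sysmacros.h>   /* makedev() */

int main(void)
{
    /* Mode 0666 so ordinary users can open the devices; chmod after
     * mknod in case the umask strips the group/other bits. */
    if (mknod("/dev/nvidia0", S_IFCHR | 0666, makedev(195, 0)) != 0)
        perror("mknod /dev/nvidia0");
    chmod("/dev/nvidia0", 0666);

    if (mknod("/dev/nvidiactl", S_IFCHR | 0666, makedev(195, 255)) != 0)
        perror("mknod /dev/nvidiactl");
    chmod("/dev/nvidiactl", 0666);

    return 0;
}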

SAM

I have a Gentoo init script that does this on boot; I can post it on Monday when I’m back at work if it helps you.

I think I’m pretty much covered; so far all the code I’ve tried seems to work. The missing /dev files explain the problem I was having; as long as I know that things were working the way they were supposed to and that there’s an easy workaround, I’m happy. If it’s no effort for you to post it I’d appreciate the sanity check on what I’m doing, but at this point I already consider it a resolved problem.

I did stumble across something else weird, though. I can’t read the temperature off the card, at least under Linux. nvidia-smi (with either -lso or -lsa) gives this output:

GPU 0:
  Product Name: GeForce GTX 260
  PCI ID: 5e210de
  Failed to read GPU temperature!
  Temperature: 0 C

I’m not worried about it; I’ve got a Windows 7 partition on the PC, so I downloaded a few high-end graphics demos and ran them while monitoring the temperature. The worst case I found was running FurMark for two hours straight; the temp plateaued at 80 C, so I think my PC’s cooling is adequate. Still, it’s a little weird. There’s nothing wrong with the card; I used two different tools under Windows to read the temp and got reliable, consistent, and sensible results. So if that’s the worst thing that happens to me under Linux, I can live with it.

SAM

I also can’t read the temps on my GTX 295 cards with nvidia-smi. There are supposedly drivers now for lm_sensors to read the temperature devices on each card (one per card, not one per GPU). If you run the lm_sensors detection scan, you’ll see them come up. I wasn’t able to actually read them, though, because the old kernel in RHEL5 didn’t have the appropriate driver support yet. With kernel 2.6.29 you might have better luck. (lm_sensors can list the temperature devices even if you don’t have the appropriate kernel drivers, so give that a try first.)

There is an application called “NVIDIA X Server Settings” somewhere in your applications menu; it will show you the temp with a nice graph.

In KDE4 it’s under Configuration.