Choices of video cards for OS X: what are my options?

hello,

Currently I’m using an NVIDIA GeForce 8800 GT for my CUDA development under OS X Leopard. Since the Mac version of this card still goes for $300 (including tax) while the Windows version of the same card can be had for as little as $100, the question is obvious, I think:

Can the Windows/Linux cards be used under OS X for CUDA only? I don’t much care about the graphics part. It just seems ugly to spend $300 on one card at apple.com when I can get three cards with the same chip from Newegg or TigerDirect.

I’m sure somebody has tried this before…

Development system:

First generation Mac Pro
2 x 2.66 GHz Dual Core Intel Xeon
16 GB RAM
1 x NVIDIA GeForce 7300 GT
1 x NVIDIA GeForce 8800 GT

The other question is: in which order should I populate the PCIe slots to get the most speed out of the cards?

thx!

I believe you need the Apple version because of EFI, and the fact that OS X only deals with EFI. As I understand it, running Linux on your hardware would make it possible to use the “normal” version, but that is only what I’ve read. It has probably been mentioned somewhere on the CUDA forums, so search for EFI ;)

thank you,

As always, I found the answer to my question myself five minutes later. I was just not sure whether the Intel Macs still have the damn EFI problem.

I love Apple, but that’s just a shame.

Speaking of options, does anybody have any REAL experience with the Quadro FX 5600 for Mac in terms of CUDA use? (Keep in mind it retails for around $3K!)

According to the configuration at the Apple store, it reads:

NVIDIA Quadro FX 5600
Featuring a massive 1.5GB frame buffer of GDDR3 memory, the NVIDIA Quadro FX 5600 is the ultimate workstation-level graphics card. It’s ideal for industrial-strength 3D design work, modeling, animation, and stereo 3D visualization. One of the most advanced graphics cards available, it has an integrated stereo 3D port, so you can use stereo goggles for stereo-in-a-window visualization applications. With two dual-link DVI ports, you can connect two 30-inch Apple Cinema HD Displays.

OK, so there’s lots of memory, which may be good for my simulations since I can keep large data structures on the card. Is there any extra firepower on this chip, compared to the GeForce 8800 GT, that a CUDA programmer can exploit? In other words, is it worth the money, and if so, for what reasons?

Any input is appreciated.
-David

I’m not sure what you mean by CUDA utilization exactly; I’m pretty sure it works exactly as you’d expect. As for whether or not it’s worth it: do you need more than 512 MB of memory in CUDA? If so, the Quadro will be significantly faster than the 8800 GT if you can avoid swapping data across the PCIe bus. If you don’t, I would probably stick with the 8800 GT, because it’s a compute capability 1.1 device and supports PCIe 2.0 (I THINK the latest Mac Pros are PCIe 2.0, but I don’t have one to test).
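If it helps to compare the two cards concretely, a quick host-side query with the standard CUDA runtime API (nothing Mac-specific, just a sketch you’d compile with nvcc) will print exactly the figures that matter here: total global memory, compute capability, and multiprocessor count for each device in the box.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        // e.g. the 8800 GT should report ~512 MB and compute capability 1.1;
        // the Quadro FX 5600 ~1536 MB and compute capability 1.0.
        printf("Device %d: %s\n", dev, prop.name);
        printf("  Global memory:      %lu MB\n",
               (unsigned long)(prop.totalGlobalMem >> 20));
        printf("  Compute capability: %d.%d\n", prop.major, prop.minor);
        printf("  Multiprocessors:    %d\n", prop.multiProcessorCount);
    }
    return 0;
}
```

The deviceQuery sample in the CUDA SDK does the same thing in more detail, if you already have the SDK installed.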

Thanks for the input; I’ll probably hold off on the more expensive card.
-David