CUDA/Tesla for Mac OS X? Will CUDA for Tesla be developed?

Hello,
I have a Mac Pro running Mac OS X. I am interested in buying the Tesla C870 GPU multiprocessor board for parallel computing in the scientific field. From the hardware point of view there seems to be no problem using it, since Mac Pros now have 3 PCI-Express ports. However, there appears to be no CUDA development environment (the C compiler able to program this board) for Mac OS X. Is there any plan to release a Mac OS X version of the CUDA environment?
Sincerely,
Thierry

In another thread (which I can’t find now) the NVIDIA developers said that native OS X support is very low on the priority list.

However, you may be able to use CUDA on a Mac via Boot Camp running Linux or Windows XP. There was one reference to it here:

The Official NVIDIA Forums | NVIDIA

(although Mark Harris was using unreleased drivers, so this may not be supported outside NVIDIA.)

Of course, this isn’t much help if you don’t want to reboot to run CUDA applications…

If you’re going to buy a C870, you might as well get a whole PC with it too. The PC will be several times cheaper.

Dear Alex,

In principle you are right: PCs are cheaper. However, for a few months now Apple has been offering the Mac Pro with two 3 GHz quad-core processors at an extremely competitive price, so I am not convinced that PCs are the best platform for scientific computing anymore. And since Mac OS X is a Unix system, I do not see much trouble in porting CUDA to that platform, given that the Linux version already exists.

Sincerely

What I meant was that a second PC (or even a 2x quad-core workstation) will be cheaper than the C870 itself.

Anyway, I don’t quite see the point of Mac OS for scientific computing. There’s obviously no advantage if you’re just running the code, and I don’t think developing on Mac OS will be any better or easier. (Isn’t Xcode a joke? There’s Eclipse, but I don’t think it’s better than Visual Studio, especially when VS is extended with a few of its commercial plugins.)

Also, I don’t want to start a flamewar, but I find Mac OS X’s juxtaposition of the beautiful, elegant, but overly dumbed-down Mac interface and the overly complicated Unix core a bit odd. I’d leave Mac OS X either to people who don’t actually like using computers (artists, grandmas, etc.) or to those who must have Unix but hate GNOME/KDE. I don’t see how developing CUDA fits into either group.

Actually, to really start a flamewar, how about CUDA on Solaris? Hmm, NVIDIA?

Perhaps a more concrete (and useful) reason than Alex’s “Macs don’t appeal to me” argument is that while OS X takes much of its UNIX userspace from FreeBSD, at the kernel/driver level it is entirely different. So porting the tools from Linux, like nvcc, would probably not be too hard, but CUDA support in the graphics driver would require substantial new effort.

That is my theory for why the CUDA developers are not going to worry about OS X for a while.

Then again, the graphics driver is already written, and the extra code that supports CUDA concerns itself with talking to the G80 hardware and the rest of the driver much more than with talking to Mac OS’s kernel itself.

I mean, you’re still right that it’ll be much more involved than porting nvcc (which would be little more than a recompile), and you never said it’d be as hard as writing the driver from scratch, but I’d like to re-emphasize that it won’t be anywhere near as hard as writing the driver from scratch (or fixing nvcc’s many shortcomings). I’d say NVIDIA’s main reasoning is that Macs don’t appeal to nearly anyone concerned with CUDA. Heck, we don’t even have Vista support coming any time soon, and it’s for the same reason: what’s the point?

When/if CUDA starts becoming more mainstream and end-user applications that take advantage of it begin emerging, then we’ll certainly see good support for all platforms. While it is still just for experimenting and special-purpose projects, there’s little reason for CUDA to move away from trusty old XP and Linux.

Edit: I hate sounding anti-progress, like those folks who say “bah, vi is all you need.” I also understand that you want to do things besides CUDA on your new workstation. My original suggestion was that if you’re going to buy a C870, you might as well get a dedicated box to put it in and make it a dedicated CUDA dev environment. In that case, it shouldn’t be a problem to load up XP.

Also keep in mind that you don’t want to use an 8-core box with CUDA. CUDA doesn’t play well with multithreading (a CUDA context is bound to a single host thread, so the extra cores mostly sit idle), quad-core processors don’t reach clock frequencies as high as dual-cores, and FB-DIMMs don’t really offer better performance. You’ll be better off with a regular dual-core machine, and instead loading up some worthwhile software on the workstation.
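To make the multithreading point concrete, here is a minimal sketch (assuming CUDA 1.x semantics, where the runtime implicitly binds its context to the first host thread that touches the API): every CUDA call below is issued from one host thread, so additional CPU cores buy you nothing for the CUDA portion of the workload.

// Minimal single-host-thread CUDA sketch (assumes CUDA 1.x semantics:
// the context is implicitly created on, and bound to, this thread).
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

// Scale each element of the array by a constant factor.
__global__ void scale(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

int main(void)
{
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    float *h = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i)
        h[i] = 1.0f;

    float *d;
    cudaMalloc((void **)&d, bytes);                   // context created on this thread
    cudaMemcpy(d, h, bytes, cudaMemcpyHostToDevice);

    scale<<<(n + 255) / 256, 256>>>(d, 2.0f, n);      // all work queued from one thread

    cudaMemcpy(h, d, bytes, cudaMemcpyDeviceToHost);  // implicit synchronization
    printf("h[0] = %f\n", h[0]);                      // expect 2.0

    cudaFree(d);
    free(h);
    return 0;
}

Any other host threads you spin up can’t touch d or the context, so the remaining seven cores of an 8-core box would only help the non-CUDA parts of your application.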