NVIDIA GeForce GT 120 in new Mac Pros Looking for technical details

I can’t seem to find any technical specs for the NVIDIA GeForce GT 120 used in the new Mac Pros announced today. I need to know whether it has compute capability 1.3, how many multiprocessors (MPs) it has, etc. Can anyone point me in the right direction?
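(Normally I’d answer this myself with a device query; here’s a minimal sketch of the relevant CUDA runtime call, assuming device 0 — but of course I don’t have the hardware yet, hence the question.)

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess) {
        fprintf(stderr, "no CUDA device found\n");
        return 1;
    }
    // prop.major / prop.minor give the compute capability (e.g. 1.1),
    // prop.multiProcessorCount the number of multiprocessors (SMs)
    printf("%s: compute capability %d.%d, %d multiprocessors\n",
           prop.name, prop.major, prop.minor, prop.multiProcessorCount);
    return 0;
}
```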

They’re definitely not 1.3. Not sure how many SMs they have.

Hey, I just searched around, and it looks like the GT 120 is merely a rebranding of the 9500 GT. That implies it’s actually a downgrade from the 8800 GT that used to be standard in the Mac Pros. If you look at the new Apple product page, they seem to be touting the top ATI card now. It’s hard to tell before there are official releases, but I’m pretty sure it’s not a step in the right direction for Mac CUDA developers. :-/

Thanks - that’s rather disappointing. I wonder if this has anything to do with technical issues, e.g. driver support or perhaps OpenCL, or whether it’s just a case of trying to shave a few $$$ off the total price?

The new machines are iMacs, not Mac Pros. I believe the old ones had an 8800M, so this is definitely an upgrade.

Here’s the page for the GT 130M, which I assume is what’s included in the higher end ones, since there’s no page for anything called a “GT 130” on the Nvidia website:

http://www.nvidia.com/object/product_geforce_gt_130m_us.html

It appears to have 4 SMs. There’s no mention anywhere of a “GT 120” or “GT 120M”, so I don’t know what that is.

If anyone can find out what compute level they support, that would be great to know. It’s probably 1.1, though I suppose I can hope for 1.2.

Peter

There are both new Mac Pros and new iMacs. The Mac Pros have a “GT 120” card which, as far as I can tell, is rather pathetic considering the machine that it’s in.

You’re right, sorry. The news story I read only mentioned the new iMacs and Minis.

This makes me think that the “GT 130” must be different from the “GT 130M” described above. Surely Apple wouldn’t put a mobile GPU into a high end Mac Pro. Would they?

Do you know something more than I do about what exactly the “GT 120” and “GT 130” are? If so, please share your information. They appear to be completely new model numbers, which probably (though not definitely) means they’re completely new models.

Peter

I’m looking at the Apple Store right now. Here are the options via copy-paste:

new Mac Mini: NVIDIA GeForce 9400M graphics

new iMac: NVIDIA GeForce 9400M graphics, NVIDIA GeForce GT 120 with 256MB memory, or NVIDIA GeForce GT 130 with 512MB memory

new Mac Pro: NVIDIA GeForce GT 120 with 512MB

Judging by the 512 MB of VRAM, I’m guessing what I’ve heard about the GT 120 being a rebranded 9500 GT is correct. For comparison, the GTX 260 came with 896 MB of VRAM, and the more modern GTX 285 comes with 1 GB. The only cards under discussion here that ship with 512 MB are the low-end version of the GTS 250 and the high-end 9500 GT, and the only one that matches is the 9500 GT. Frankly, I’m not happy with either option; I was hoping for something more modern to program CUDA with.

If there were an option to upgrade to something more modern (as was available in the last revision of the Mac Pro), I would be less annoyed by this.

I was wondering the same thing. The only reference I can find to it on the nVidia web site is in a beta driver release for Windows 7.
It’s in the supported products here: http://www.nvidia.com/object/win7_x86_181.71_beta.html

It’s listed above the 9800 GX2 - does that mean it’s a better product or just a newer one? Maybe it doesn’t mean anything…

The Apple segment of the nVidia site still has the old graphic card listed. Maybe Apple didn’t tell them today was the day for a new spec!

The ATI Radeon specs are confusing too. On AMD’s website the Radeon 4870 has 256MB of memory, while the Radeon 4870 X2 is the one with 512MB of GDDR5. Does that mean Apple is using the X2 version?

Put another way, the Mac Pro that I just ordered (after waiting half a year) has the same CUDA compute capability and number of multiprocessors as my wife’s MacBook Pro, which was released June 5th, 2007. Blech.

I am really disappointed as well. I was really waiting for the Mac Pro update to buy one… 8-core is just too nice. The old Mac Pros couldn’t handle modern GPUs so I expected the new ones would fix that, especially with Apple’s move to Nvidia chipsets for all their products, and their OpenCL sponsorship.

Nope, I was wrong. Apple seems to have gone backwards. The new Mac Pros only support the GT 120… a $75 entry-level video card!

Apparently, according to one of the Mac review sites, the GT 120 is a new naming convention (too new for nVidia’s web site?) and the card is based on the 9500 GT chipset, which would mean PCI-E 2.0, 800 MHz GDDR3 and about 25 GB/sec of memory bandwidth.

Not sure that’s really the same as a 2007 Macbook Pro…

Why should I rewrite my code to run on the GPU if the CPU can run almost as many threads faster with no code changes? (16 logical cores in the new Mac, each with SSE; 4 multiprocessors in the GT 120)
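A rough peak-throughput sketch (my own numbers, so treat with caution: the base 2.26 GHz 8-core Nehalem Mac Pro at 8 flops/cycle/core with SSE, and the GT 120’s 32 stream processors at a 1.4 GHz shader clock, counting the dual-issue MAD+MUL as 3 flops/cycle):

```latex
\text{CPU:}\quad 8 \times 8 \times 2.26\,\text{GHz} \approx 145\ \text{GFLOPS} \\
\text{GT 120:}\quad 32 \times 3 \times 1.4\,\text{GHz} \approx 134\ \text{GFLOPS}
```

Theoretical peaks only, but the point stands: this GPU doesn’t even win on paper.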

Ah, I see. I thought you meant it was the same as the GRAPHICS in the ’07 MacBook Pro…!
They have been putting ‘desktop’-level graphics in the MacBook Pro in the latest revision, but I’m not sure how good they really are.

That’s really too bad. I’ve been waiting for the new iMacs, and was hoping they’d include a GTX 260 or something similar as an option. As it is, I think I’m going to go with the 4850, since it’s a far more powerful GPU for very little extra money. Not useful for CUDA, unfortunately. But with Snow Leopard and OpenCL coming soon, I think it’s probably the best long term choice.

It’s interesting that Apple is now using Nvidia for low end GPUs and ATI for high end. Isn’t that the opposite of what it used to be? This is pure speculation, but I wonder if it’s because of energy use? The 200 series are great GPUs, but they’re monsters when it comes to power consumption, and Apple is making a big deal about how environmentally friendly their new computers are.

Peter

I just noticed that the specs were posted:
http://www.nvidia.com/object/product_geforce_gt_120_us.html
http://www.nvidia.com/object/product_geforce_gt_130_us.html

Almost identical to the 9500 GT (as expected). I’m also taking a closer look at OpenCL because of this.

Thanks for that. Useful, if somewhat disappointing.

I wonder how much progress Apple has made with its OpenCL implementation? I haven’t heard any rumours about its status in the Snow Leopard betas, but Snow Leopard is due to be released in the next few months, so if OpenCL is going to be part of it, it should be in a somewhat “beta” state by now?

The OpenCL spec has been released (see Khronos), and I’ve heard that the headers are in the pre-release versions of 10.6. So I’d suspect the reason the new high-end option is the Radeon HD 4870 is that it has “800 stream processors”, compared to an effective 112 in the 8800 GT. It could also be a move by Apple to force the adoption of OpenCL in favor of a hardware-specific solution like CUDA.

I know someone on the OpenCL standards committee. While he hasn’t given me any specific inside information, he has certainly implied that it was on track to be released as part of Snow Leopard, and on other platforms not long after.

Those numbers can’t be directly compared, since Nvidia’s processing units are quite a bit faster than AMD’s. The GTX 280 with its 240 processing units and the Radeon HD 4870 with 800 are actually fairly close in terms of total processing power.
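Back-of-envelope, from the published shader clocks (counting Nvidia’s dual-issue MAD+MUL as 3 flops/cycle and AMD’s MAD as 2):

```latex
\text{GTX 280:}\quad 240 \times 1.296\,\text{GHz} \times 3 \approx 933\ \text{GFLOPS} \\
\text{HD 4870:}\quad 800 \times 0.750\,\text{GHz} \times 2 = 1200\ \text{GFLOPS}
```

Those are theoretical peaks; sustained throughput tends to favour Nvidia’s scalar units, which is why the two land fairly close in practice.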

I thought about that, but then decided it didn’t make sense. The last generation of iMacs had ATI GPUs by default, with Nvidia as an upgrade option. The new generation reversed that. So although the high-end GPUs can no longer run CUDA, the fraction of their GPUs that can run CUDA has actually increased.

Peter