High performance CUDA notebooks — GTX 285M in a 15 inch form factor, anyone?

Can anyone recommend any high performance CUDA notebooks on the market that have a business look and feel to them?
They should be no bigger than 15 inches, and the display should have full HD resolution (1280x768 or 1376x800 would be too low).

We have briefly looked at Alienware products, but the case just looked much too “gamer like”.

Some of the more professional Quadro based notebook “workstations” (e.g. from Dell) were simply priced too high and only came in 17 inch form factors.

We’ve recently found a company called Schenker notebooks which will put a core i7 (4 cores) and a GTX 285M in the same 15 inch case. We’ve just received one of these notebooks for testing purposes.

The idea is that we demo our engineering applications and demonstrators on a mobile platform. For our radio simulations CUDA can never be fast enough. It’s really hard though to find all that performance in a small package. If you have any more recommendations, please chip in.

Christian

That's not a laptop, but you might be able to run real stuff on it.

http://forums.nvidia.com/lofiversion/index.php?t89891.html

Look at the posts from DimitriRotow

Bulky boxes wouldn’t work for us: our clients often lug a set of three laptops around as carry-on baggage. The entire demo setup consists of three laptops, two of which will need to be CUDA capable very soon.

We need something like 96 CUDA cores and 512MB minimum to do our magic. Compute 1.1 devices or better.
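For what it's worth, on compute 1.x parts each multiprocessor carries 8 scalar processors, so the core-count requirement translates directly into a multiprocessor count. A minimal sketch of the check we apply (the function names are mine, not from any API):

```python
# Rough spec check for candidate notebook GPUs (compute 1.x era).
# On compute 1.x devices each multiprocessor (SM) holds 8 scalar cores.
SP_PER_SM = 8

def cuda_cores(sm_count, sp_per_sm=SP_PER_SM):
    """CUDA core count derived from the multiprocessor count."""
    return sm_count * sp_per_sm

def meets_requirements(sm_count, mem_mb, compute_capability):
    """Our minimum: 96 cores, 512 MB, compute 1.1 or better."""
    return (cuda_cores(sm_count) >= 96
            and mem_mb >= 512
            and compute_capability >= (1, 1))

# A 12-SM part (96 cores) with 1 GB and compute 1.2 passes:
print(meets_requirements(12, 1024, (1, 2)))  # True
# A GT 220 class part (6 SMs -> 48 cores) fails:
print(meets_requirements(6, 512, (1, 2)))    # False
```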

I may post a review of that Schenker notebook once we’re through with it.

Your best bet is probably a Quadro FX3800M or FX2800M based “mobile workstation” (so 96 or 128 core G92M derivatives), but I don’t think you will find them in anything smaller than a 17" panel form factor. I know Dell, Lenovo and HP all do 17" SKUs with those GPUs.

This company also packs a Core i7 and a GTX 285M into a 15 inch case. There is hope.

http://www.cyber-system.de/product.php?pro…6594&cat=39

However there’s no big brand name attached to it. Probably some imported modular OEM chassis + mainboard.

EDIT: happens to be the same chassis as the Schenker notebook I currently have on my desk.

Christian

Right now you can do really quite well with a CUDA laptop. I have not used them, but the $1080 i5 and the $1400 i7 Acer “gamer” laptops both come with the rather excellent GTX360M, which is a compute capability 1.2 GPU with 1GB of RAM and 96 SPs at 1.4GHz… very powerful indeed! It’s essentially a GT240 GPU (in fact it's really the SAME GT215 chip, whose compute per watt is quite good). I am rather amazed not just at the absolute GPU power, but also at the bang-per-buck of these laptops, especially the i5 version. You really can’t do better right now.
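Those numbers work out as follows, counting one multiply-add (2 flops) per scalar processor per shader clock and ignoring the chip's extra dual-issue MUL, so this is a conservative back-of-the-envelope figure, not a vendor spec:

```python
def peak_gflops(sp_count, shader_clock_ghz, flops_per_clock=2):
    """Peak single precision throughput, counting MAD = 2 flops
    per scalar processor per clock."""
    return sp_count * shader_clock_ghz * flops_per_clock

# 96 SPs at 1.4 GHz -> roughly 268.8 GFLOPS (MAD only)
print(peak_gflops(96, 1.4))
```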

That said, there are rumors (and rumors only!) that the next Fermi chip derivative (the cut down 256 SP version) will have a low-power process tweak allowing it to be used as a mobile chip as well… and we may see those chips in laptops as soon as June. But this is pure rumor and you shouldn’t depend on it.

Thanks SPWorley, this is excellent information. The chip should be able to handle GDDR5 memory, right?

The nVidia product page names it GTS 360M, by the way.

The GTX 285M we’re evaluating now tops out at 60 GB/sec with its GDDR3 memory. Our CUDA application is mostly memory bound.

The GT240 desktop chip tops out at 54.4 GB/s due to its 128 bit memory interface, so I don’t expect the GTS 360M to reach anywhere close to 50 GB/sec. If anyone has memory benchmarks for the GTS 360M (device to device mostly), please post here!
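The 54.4 GB/s figure follows directly from bus width times effective data rate; here is the quick arithmetic, assuming the GDDR5 variant of the GT240 (128-bit bus, 1700 MHz memory, 3400 MT/s effective):

```python
def mem_bandwidth_gb_s(bus_width_bits, effective_mt_s):
    """Theoretical memory bandwidth: bytes per transfer
    times effective transfer rate (in MT/s)."""
    return (bus_width_bits / 8) * effective_mt_s / 1000.0

# GT240 (GDDR5 variant): 128-bit bus at 3400 MT/s effective
print(mem_bandwidth_gb_s(128, 3400))  # 54.4
```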

Christian

Hi everybody.

WARNING: there is a BAD driver issue affecting GTX 260M … GTX 285M equipped laptops when you run CUDA on Linux.
According to this thread there is a bug in the nVidia drivers that prevents the card from clocking up to its full speed.
http://www.nvnews.net/vbulletin/showthread.php?t=141116

I am affected by this bug as well. PowerMizer stays in the middle performance levels (0, 1) and never reaches
the high clock speeds (2, 3). Nothing I did to the xorg.conf file was able to fix it.

nVidia have been sitting on this bug for about a year, but the recent BETA driver 256.25 finally tries to fix it
by making performance level 1 use the highest available clock speeds (600 MHz core, 1000 MHz memory).
This driver is just a few days old, and it may save the day.

EDIT: CUDA seems to be working again, but it appears the 600 MHz core / 1000 MHz memory setting
is not actually applied on the GPU, despite nvidia-settings highlighting that setting. Actual clock speeds remain
at 275 MHz core, 301 MHz memory.

Looks to me like a botched attempt at fixing the long-standing bug.

Christian

Christian, can you change the clock manually by setting Coolbits in xorg.conf and using nvidia-settings to override the clocks? That may let you boost the “1” runlevel clocks up to be where the full clock should be. Yes, it’d be a hack workaround, but perhaps you’d get the performance back.
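For the record, that hack would look roughly like this. The attribute names (`GPUOverclockingState`, `GPU3DClockFreqs`) are from the driver series of that era and may not be honoured at all on mobile parts, so treat the whole fragment as an assumption to be tested:

```shell
# In xorg.conf, Device section — enable manual clock controls:
#   Option "Coolbits" "1"
#
# Then, from the running X session, try forcing the 3D clocks
# (core MHz, memory MHz):
nvidia-settings -a GPUOverclockingState=1 \
                -a GPU3DClockFreqs=600,1000
```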

Added a screenshot of that dreaded nvidia-settings tool, showing the “good” clocks for Performance Level 1
but not actually applying them (see the actual GPU Clock and Memory Clock readings).

[attachment=22195:snapshot1.png]

Frequency override seems to be disabled in these drivers. No matter how I change the sliders, they jump right back to their default positions when I hit “Apply”.

These were just announced:

http://www.nvidia.com/object/product-gefor…tx-480m-us.html

352 CUDA cores, 850 MHz clock, 76 GB/sec memory bandwidth (!!). They're supposed to start appearing in mid-June. If these are still compute capability 2.0, then this will be the first mobile CUDA part with double precision, right?

Hmm, too bad this product is probably too late for us to pick up. We’ve got to deploy our software by the end of June on available hardware.

This GTX 285M issue is already throwing me off schedule; I am desperately searching for alternatives.

With the “handbrake” applied, the GTX 285M gives me 15 GB/sec of global memory throughput. Ouch. That’s like a GT 220 low cost desktop card.

BTW, I’d happily buy this GTX 480M as a desktop card. It seems to be a decent compromise between power consumption and performance.

Christian

There’s a similar hack for OS X. It may be useful to write a little OpenGL tool that does the bare minimum to keep the power mode at the right level.