NVIDIA GeForce 8300 mGPU CUDA performance

From the power consumption/price/space requirements point of view, this motherboard is well suited for small form factor CUDA projects. My question is: how does its CUDA performance scale against other low-end cards? This question was already asked on this forum, but no answer has been given…

Quoting from a Tech Report article:

Read the full text at


My take on this GPU is that it does not deliver the punch needed for serious CUDA crunching, since it has only 16 stream processors (typical G92-based mid-range cards feature 96, 112, or even more). For development and prototyping purposes, however, this board may be suitable.

This chipset also uses a shared-memory architecture, with typically 256 MB, at most 512 MB (with the latest BIOS), available to the graphics chip. Shared-memory architectures are notoriously slow, since the GPU competes with the CPU for system RAM bandwidth.
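To put the shared-memory penalty into rough numbers, here is a back-of-the-envelope sketch. The figures are nominal spec-sheet values (dual-channel DDR2-800 for the mGPU's system RAM, a 448-bit GDDR3 bus for a GTX 260), not measurements:

```python
# Peak memory bandwidth, nominal spec-sheet values (assumptions):
# the 8300 mGPU shares dual-channel DDR2-800 system RAM with the CPU,
# while a discrete card like the GTX 260 has its own wide GDDR3 bus.

# DDR2-800, dual channel: 800 MT/s * 8 bytes per transfer * 2 channels
ddr2_gbps = 800e6 * 8 * 2 / 1e9          # ~12.8 GB/s, shared with the CPU

# GTX 260: 448-bit bus at 999 MHz GDDR3 (1998 MT/s effective)
gtx260_gbps = 1998e6 * (448 / 8) / 1e9   # ~111.9 GB/s, dedicated

print(f"8300 mGPU system RAM: {ddr2_gbps:.1f} GB/s (shared)")
print(f"GTX 260 video RAM:    {gtx260_gbps:.1f} GB/s (dedicated)")
```

So even before the GPU has to share that bus with the CPU, a discrete card has roughly an order of magnitude more memory bandwidth, which matters for most CUDA kernels.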

I priced out a complete system with such a chipset at 236 Euros including German sales tax (USt.). In the Micro-ATX form factor I only found GeForce 8200 graphics, which is clocked a little lower.

61,11 €  Asus M3N78-EMH HDMI GF8200 AM2 FSB 2600MHz PCIe mATX

15,20 €  AMD Athlon64 3000+ 1800MHz 512kB 35W AM2 tray

57,27 €  mATX ZIGNUM SPHERE Desktop ZG-S-360B black (incl. 250 W PSU)

46,19 €  Kit 2x1024MB OCZ DDR2 800MHz CL4 Titanium EPP-Ready

 7,50 €  Xilence CPU cooler AM2

12,39 €  Samsung SH-D163B/BEBE 16/48x black bulk SATA

36,75 €  250GB Samsung HD250HJ 7200rpm 8MB

(incl. 19% USt.: 37,75 €)

Total: 236,41 €

I’d rather buy a GTX 260 ;)

UPDATE: Hong Kong manufacturer Zotac has a GeForce 8300 mATX offering, but it is hard to get in Europe.


The 8300 has pitiful speed; you can’t get any slower.

I’ve done development on my laptop many times, and its GPU is similar in power to the 8300. It’s just as quick to develop on, really, since development speed depends mostly on the CPU for compiling. Application performance, of course, is very noticeably slower than on a real card, but for development it was fine.

If you want to compare performance, it’s useful to look at spec charts like
and multiply the shader count by the shader clock. That’s the simplest and roughest way to start comparing speeds, though it ignores compute capability, memory size, and memory bandwidth.
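As a quick sketch of that shaders-times-clock heuristic (the specs below are approximate values for these cards and should be treated as assumptions; as noted above, the heuristic ignores compute capability and memory bandwidth):

```python
# Rough relative CUDA throughput estimate: stream processors x shader clock.
# Specs are approximate/assumed, not authoritative.

cards = {
    # name: (stream processors, shader clock in MHz)
    "GeForce 8300 mGPU": (16, 1500),
    "GeForce 8800 GT":   (112, 1500),
    "GeForce GTX 260":   (192, 1242),
}

base_sps, base_clock = cards["GeForce 8300 mGPU"]
base_score = base_sps * base_clock

for name, (sps, clock) in cards.items():
    ratio = sps * clock / base_score
    print(f"{name}: {ratio:.1f}x the 8300 mGPU")
```

By this crude measure the 8800 GT lands around 7x and the GTX 260 around 10x the 8300 mGPU, which matches the "fine for development, too slow for crunching" impression above.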

Thanks for the answers, guys. I will get the board and see what is possible with it. Any findings will be posted here.