24GB !!! — NVIDIA Quadro M6000

I’m sure many of you saw this morning’s Quadro M6000 24GB announcement.

Wow!

In theory, at 300 GB/sec it would still only take 80 milliseconds to crawl the entire 24GB. :)
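
If anyone wants to sanity-check that figure on their own card, a minimal sketch is to time a large device-to-device copy with CUDA events and scale up. The 1 GiB buffer size below is arbitrary, and achieved bandwidth will of course land somewhat below the paper spec:

```
// Back-of-the-envelope bandwidth check: time a 1 GiB device-to-device copy,
// derive achieved GB/s, and scale to a full 24 GB sweep.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    const size_t bytes = 1ull << 30;   // 1 GiB test buffer (illustrative size)
    void *src = nullptr, *dst = nullptr;
    cudaMalloc(&src, bytes);
    cudaMalloc(&dst, bytes);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    cudaMemcpy(dst, src, bytes, cudaMemcpyDeviceToDevice);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);

    // A device-to-device copy both reads and writes the buffer, so it moves 2x bytes.
    double gbps = 2.0 * bytes / (ms / 1000.0) / 1e9;
    printf("~%.0f GB/s achieved; reading 24 GB once would take ~%.0f ms\n",
           gbps, 24e9 / gbps * 1e3);

    cudaFree(src);
    cudaFree(dst);
    return 0;
}
```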

I am surprised that the forum software didn’t block this post as spam, despite three exclamation marks in the subject line :-)

Considering the introduction of this “mid-life kicker” product, and the lack of any news of a new CUDA version, one might reasonably guess that Pascal-based products are still a good while off. A brave new world, with rapidly diminishing traction from Moore’s Law.

The alternative post title was: “One weird trick to get a 10x performance boost from your GPU”.

:)

More seriously, I’m curious whether any of the apps in the announcement were explicitly managing GPU resources (swapping to host memory?) or whether the speedups mostly come from already separable problems that can now run much larger grids over much larger datasets.

I suspect it’s the latter, but the quote about interactively working on visual effects makes me unsure.

I think the speedup might be simply from avoiding data transfer across PCIe when processing frames at 4K resolution.
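
The rough numbers seem to support that. Here’s the arithmetic as a quick sketch; the PCIe throughput is my assumption (roughly what a Gen3 x16 link delivers in practice), and I’m assuming 8-bit RGBA frames:

```
// Quick host-side arithmetic: cost of moving one uncompressed 4K frame over PCIe
// versus sweeping it in on-card memory.
#include <cstdio>

int main() {
    const double frame_bytes = 3840.0 * 2160.0 * 4.0;   // 8-bit RGBA, ~33 MB
    const double pcie_gbps = 12e9;    // assumed effective PCIe 3.0 x16 throughput
    const double vram_gbps = 300e9;   // on-card bandwidth figure from earlier in the thread

    printf("Frame size:        %.1f MB\n", frame_bytes / 1e6);
    printf("PCIe transfer:     %.2f ms per frame, per direction\n",
           frame_bytes / pcie_gbps * 1e3);
    printf("On-card traversal: %.2f ms per frame\n",
           frame_bytes / vram_gbps * 1e3);
    return 0;
}
```

At roughly 2.8 ms per frame per direction over the bus versus about 0.1 ms on the card, keeping frames resident is easily a 10-25x swing before you do any actual processing.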

That makes sense.

What’s currently the largest memory capacity you can get on a consumer NVIDIA device?

The high-end Quadros and Teslas are completely out of my price range… ;)

Christian

Titan X is 12 GB at around $1100
GTX 980 Ti is 6 GB at around $700
GTX 970 is 4 GB at around $350
GTX 960 is 2 GB at less than $200

So around $100/GB
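
Spelling that out per card (using the rough street prices above):

```
// Approximate $/GB for the cards listed above, using the rough street prices quoted.
#include <cstdio>

int main() {
    struct Card { const char *name; double gb, usd; };
    const Card cards[] = {
        {"Titan X",    12, 1100},
        {"GTX 980 Ti",  6,  700},
        {"GTX 970",     4,  350},
        {"GTX 960",     2,  200},
    };
    for (const Card &c : cards)
        printf("%-11s %4.0f GB  $%5.0f  ->  ~$%.0f/GB\n",
               c.name, c.gb, c.usd, c.usd / c.gb);
    return 0;
}
```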

Titan X has 6 GB per GPU though, I presume? EDIT: oops, nope, that was the Titan Z.

I am eyeballing a 980 Ti for my next development/gaming PC.

The big applications that come to mind are, first, giant models in CAD, but more likely 4K video editing. Uncompressed 4K is huge, and the more GPU buffer you have, the more interactive it is to scrub and merge clips.
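
To put the 24 GB in perspective for that use case, here’s the same back-of-the-envelope math for how many uncompressed 4K frames stay resident on cards of different sizes (again assuming 8-bit RGBA frames):

```
// How many uncompressed 4K RGBA frames fit entirely in GPU memory?
#include <cstdio>

int main() {
    const double frame_bytes = 3840.0 * 2160.0 * 4.0;   // ~33 MB per 8-bit RGBA frame
    const double sizes_gb[] = {6, 12, 24};
    for (double gb : sizes_gb) {
        double frames = gb * 1e9 / frame_bytes;
        printf("%4.0f GB card: ~%4.0f frames resident (~%.1f s of 24 fps footage)\n",
               gb, frames, frames / 24.0);
    }
    return 0;
}
```

Roughly 30 seconds of footage fits entirely on a 24 GB card, versus about 7 seconds on a 6 GB card, which would go a long way toward explaining the interactive scrubbing claim.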