What happened to vertex precision? (attempted move from the OpenGL forum)

I’ve been sifting through old documentation for workstation graphics cards (specifically the 3Dlabs Wildcat Realizm 800, a monster of its time). It, and many other workstation cards of the era, bragged about higher-precision floating-point rendering capability that supposedly led to crisper, cleaner visuals. Is this still a thing? I never see it mentioned anywhere anymore, and I’m wondering now if it’s just an afterthought that can more or less be software-controlled in CUDA.

Graphics and CUDA are pretty much orthogonal.

Hmmm… ok… I guess then I’d wonder whether there’s a way to have CUDA do the math for the higher-precision rendering and then pass the result back to something that dumps it on the screen?

I have no hands-on exposure to such computations, but since CUDA offers interoperability with both DirectX and OpenGL, you should be able to perform high-precision rendering with CUDA. There may even be a starting point for your explorations among the many sample applications that ship with CUDA.

As far as I am aware, professional ray-tracing renderers that run on NVIDIA GPUs are in fact based on CUDA computation.

Yes, certainly graphics and CUDA can interoperate in various interesting ways. My “orthogonal” statement was meant to suggest that essentially no “graphics” knobs are directly exposed in CUDA. You cannot directly access graphics-oriented hardware or data, and to the extent that you can access graphics data, it must come through one of the interop methods.

As njuffa points out, there are various CUDA-graphics interop sample codes that demonstrate how to use CUDA to do some form of rendering and then pass the rendered data back to OGL/DirectX for further processing and display.
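Something along these lines is what those interop samples boil down to. A minimal sketch, assuming a valid OpenGL context and a vertex buffer object `vbo` created elsewhere (much like the simpleGL sample that ships with the CUDA toolkit); the kernel and the `updateVboFromCuda` helper are names I made up for illustration:

```cpp
#include <GL/gl.h>              // for GLuint; a live GL context is assumed at runtime
#include <cuda_runtime.h>
#include <cuda_gl_interop.h>

// CUDA kernel that writes vertex positions directly into a mapped GL buffer.
__global__ void writeVertices(float4* pos, int n, float t)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float x = i / (float)n;
    // Simple animated sine wave; any CUDA-computed geometry could go here.
    pos[i] = make_float4(x * 2.0f - 1.0f, sinf(x * 10.0f + t) * 0.5f, 0.0f, 1.0f);
}

// Fill an existing OpenGL vertex buffer object from CUDA, then hand it back
// to GL for drawing (e.g. glDrawArrays). Registration would normally happen
// once at startup; it is shown inline here for brevity.
void updateVboFromCuda(GLuint vbo, int numVerts, float t)
{
    cudaGraphicsResource* res = nullptr;
    cudaGraphicsGLRegisterBuffer(&res, vbo, cudaGraphicsRegisterFlagsWriteDiscard);

    cudaGraphicsMapResources(1, &res, 0);          // GL must not touch the VBO now
    float4* dPos = nullptr;
    size_t numBytes = 0;
    cudaGraphicsResourceGetMappedPointer((void**)&dPos, &numBytes, res);

    int block = 256;
    int grid = (numVerts + block - 1) / block;
    writeVertices<<<grid, block>>>(dPos, numVerts, t);

    cudaGraphicsUnmapResources(1, &res, 0);        // now GL can draw from the VBO again
    cudaGraphicsUnregisterResource(res);
}
```

The key point is that the buffer belongs to OpenGL; CUDA only borrows it between the map and unmap calls, and everything after that (rasterization, display) is ordinary graphics-API work.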

There are also several “pure software” 3D pipeline implementations out there.

Here is one research result: “High-Performance Software Rasterization on GPUs” (Laine and Karras, HPG 2011).
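To give a flavor of what a “pure software” rasterizer looks like, here is a minimal, self-contained CUDA sketch (far simpler than the pipeline in that paper, and all names are mine): one thread per pixel evaluates three edge functions to decide whether the pixel center is covered by a counter-clockwise triangle.

```cpp
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

// Signed area test: positive when p lies to the left of the directed edge a->b.
__device__ float edgeFunction(float2 a, float2 b, float2 p)
{
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

// One thread per pixel: cover the pixel if its center is inside the triangle.
__global__ void rasterizeTriangle(unsigned char* fb, int width, int height,
                                  float2 v0, float2 v1, float2 v2)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    float2 p = make_float2(x + 0.5f, y + 0.5f);    // sample at the pixel center
    if (edgeFunction(v0, v1, p) >= 0.0f &&
        edgeFunction(v1, v2, p) >= 0.0f &&
        edgeFunction(v2, v0, p) >= 0.0f)
        fb[y * width + x] = 255;                   // "shade" the covered pixel
}

int main()
{
    const int width = 256, height = 256;
    unsigned char* dFb = nullptr;
    cudaMalloc(&dFb, width * height);
    cudaMemset(dFb, 0, width * height);

    dim3 block(16, 16);
    dim3 grid((width + block.x - 1) / block.x, (height + block.y - 1) / block.y);
    rasterizeTriangle<<<grid, block>>>(dFb, width, height,
                                       make_float2(10.f, 10.f),
                                       make_float2(200.f, 30.f),
                                       make_float2(60.f, 220.f));

    std::vector<unsigned char> fb(width * height);
    cudaMemcpy(fb.data(), dFb, fb.size(), cudaMemcpyDeviceToHost);
    int covered = 0;
    for (unsigned char px : fb) covered += (px != 0);
    std::printf("covered pixels: %d\n", covered);
    cudaFree(dFb);
    return 0;
}
```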

Very interesting… all of it. I guess what I’m stuck on is… with these older graphics cards, they did all this stuff in hardware, so at some point, where the “rubber hits the road”, your low-precision geometry (from, let’s say, Quake 2) would get shoved into the card’s 36-bit geometry engine. It still renders, it still plays, it’s still Quake 2, but it’s living inside this 36-bit pipe; that’s the only way OpenGL geometry gets handled on this particular card.

I know newer cards (newer than about 2005, anyway) have things called ROPs, which I guess are kind of like more advanced geometry pipes, but when they’re mentioned it’s as a core count rather than a bit width. I’m wondering: what happened to the consideration of bit width? Do ROPs only handle so much precision per unit? Is that something variably handled in software now? I guess what I’m really saying is, it seemed like when a card advertised a 36-bit high-precision pipe, that was the ceiling for precision, and for that card it was the point all geometry eventually had to pass through. So no matter what, even if your math was of lower precision, it was still carried out in the same pipe, but it could be no higher than the preordained 36 bits or you’d have a stack overflow or something.

The sense I get from things now is that someone could write a shader, or some other kind of code, that says “I want to define a square, but I want to define it out to 64 bits of precision… here you go, video driver, pass this to the GPU for rendering”, and the GPU would just chew on the numbers until it produced the final image.

Is this correct? I can’t help but feel like I’m missing something…perhaps in my understanding of ROPs, but more so in the historical changes that have taken place within GPUs.

Thanks to all who replied, sorry if I sound insane.

Not sure why your original question was migrated from the OpenGL forum, because this is where it seems to belong. The graphics guys should be able to catch you up on how vertex and fragment processing happens with current high-end workstation GPUs such as NVIDIA’s Quadro line. I guess the mention of the word “CUDA” triggered the move, so I would suggest asking the questions above again, without mentioning CUDA at all.

I think first you would want to clarify what “36-bit” vertex precision actually meant in those old specs. What numeric representation was being used: fixed-point or floating-point? Also, I believe it was common marketing practice at the time to add up the precision of the components of an n-vector, so if the hardware could process 3-vectors of single-precision floating-point data, that was advertised as 96-bit vertex precision.

As far as I am aware (but I last dealt with 3D graphics ten years ago), processing of vertices happens in IEEE-754 single precision these days. The units used for this are freely programmable now, as opposed to the fixed-function units presumably used in those old workstation cards. The same programmable hardware is also used for compute-centric APIs like CUDA. As per earlier comments in this thread, you could presumably write a highly-accurate software renderer using the double-precision arithmetic capabilities of CUDA. Not sure whether OpenGL shaders support double-precision computation these days. They might, but I don’t know. Again something you would want to ask the graphics guys.
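As a purely illustrative sketch of that idea: a CUDA kernel could carry out the vertex transform in IEEE-754 double precision and round to single precision only at the very end. The matrix layout, the float4 output format, and all names here are my own assumptions, not taken from any sample.

```cpp
#include <cstdio>
#include <cuda_runtime.h>

// Transform vertices with a row-major 4x4 matrix entirely in double precision,
// rounding to single precision only for the final, rasterizer-facing output.
__global__ void transformVerticesFP64(const double4* in, float4* out, int n,
                                      const double* m /* 4x4, row-major */)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    double4 v = in[i];
    double x = m[0]  * v.x + m[1]  * v.y + m[2]  * v.z + m[3]  * v.w;
    double y = m[4]  * v.x + m[5]  * v.y + m[6]  * v.z + m[7]  * v.w;
    double z = m[8]  * v.x + m[9]  * v.y + m[10] * v.z + m[11] * v.w;
    double w = m[12] * v.x + m[13] * v.y + m[14] * v.z + m[15] * v.w;

    out[i] = make_float4((float)x, (float)y, (float)z, (float)w);
}

int main()
{
    // One test vertex and an identity matrix, just to exercise the kernel.
    double4 hIn = make_double4(0.1, 0.2, 0.3, 1.0);
    double hM[16] = { 1,0,0,0,  0,1,0,0,  0,0,1,0,  0,0,0,1 };

    double4* dIn;  float4* dOut;  double* dM;
    cudaMalloc(&dIn, sizeof(hIn));
    cudaMalloc(&dOut, sizeof(float4));
    cudaMalloc(&dM, sizeof(hM));
    cudaMemcpy(dIn, &hIn, sizeof(hIn), cudaMemcpyHostToDevice);
    cudaMemcpy(dM, hM, sizeof(hM), cudaMemcpyHostToDevice);

    transformVerticesFP64<<<1, 1>>>(dIn, dOut, 1, dM);

    float4 hOut;
    cudaMemcpy(&hOut, dOut, sizeof(hOut), cudaMemcpyDeviceToHost);
    std::printf("%f %f %f %f\n", hOut.x, hOut.y, hOut.z, hOut.w);
    cudaFree(dIn); cudaFree(dOut); cudaFree(dM);
    return 0;
}
```

The resulting positions would then have to be handed back through one of the interop paths discussed above for actual rasterization and display.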