GPU Info/Tutorial and OpenGL Process

I apologize if this isn’t the correct section. I’ve decided to use CUDA for a GPGPU implementation, but I realize I don’t know how modern graphics cards interpret and process data. I need to understand that better to know the limitations.

Can someone recommend a good online tutorial (or a book)? I don’t know much about vertex and texture buffers or how they are implemented. Is there any way to bypass the “rasterization” process and keep or extract the XYZ coordinates? I don’t know whether these stages are hardcoded (in an ASIC) or whether they’re just a section of assembly code.

I basically need to know which parts of the graphics card I can and can’t control, and, more generally, the basics of how modern graphics cards (particularly NVIDIA’s) work.

This is related to the first question. I know that OpenGL is an API that each vendor (in this case NVIDIA) implements on its own. Is the implementation mainly at the software or the hardware level? That is, does the software execute thousands of lines of code to draw a rectangle, or does the software execute a couple of lines and a circuit in the hardware does the rest?


CUDA is a platform that saves you from having to pretend you are programming a graphics card. I suggest you look at the examples in the SDK to see how to program with CUDA. I know how to program CUDA, but I have no clue about vertex & texture buffers or rasterization.
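To give a flavor of what those SDK samples look like, here is a minimal sketch of the classic vector-add example (my own illustration, not an actual SDK sample; error checking omitted for brevity). Note there are no vertex buffers, textures, or rasterization anywhere — you just write a C-like kernel and launch it over a grid of threads:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each thread computes exactly one output element.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Host-side buffers.
    float *ha = (float *)malloc(bytes);
    float *hb = (float *)malloc(bytes);
    float *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Device-side buffers; data moves over explicitly with cudaMemcpy.
    float *da, *db, *dc;
    cudaMalloc(&da, bytes);
    cudaMalloc(&db, bytes);
    cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    vecAdd<<<(n + 255) / 256, 256>>>(da, db, dc, n);

    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", hc[0]);  // expect 3.0

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}
```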

The other question is one for an NVIDIA person, if they are allowed to answer it.

Read the Programming Guide to learn about using CUDA and GPUs for computing.


Both the OpenGL and CUDA drivers basically just submit commands into the card’s FIFO; there is surprisingly little the CPU has to do except mangle some state flags and coordinates. The only reason these drivers still cause a lot of CPU usage in some cases is the busy-waiting that occurs when the device is not ready yet and the driver has to synchronize.
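You can see this model from the CUDA side with a small sketch (my own illustration; the `spin` kernel is hypothetical). A kernel launch merely enqueues a command and returns; it is the explicit synchronization call where the driver may sit polling the device, which is what shows up as CPU usage:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Kernel that burns GPU cycles for roughly the given number of clocks,
// just so the asynchrony is observable.
__global__ void spin(clock_t cycles) {
    clock_t start = clock();
    while (clock() - start < cycles) { /* keep the GPU busy */ }
}

int main() {
    // This call only submits a command into the card's FIFO and returns;
    // the CPU is free to do other work immediately.
    spin<<<1, 1>>>((clock_t)100000000);
    printf("kernel queued, CPU continues\n");

    // Here the driver waits for the GPU to drain the queued commands.
    // By default it polls (busy-waits), which is the CPU usage the
    // answer above refers to.
    cudaDeviceSynchronize();
    printf("GPU finished\n");
    return 0;
}
```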