Pixel plotting in CUDA

I was just about to begin the rite-of-passage Mandelbrot program in CUDA, when I thought about the best way to actually draw the thing.

Without CUDA, I usually use SDL (a wrapper around OpenGL) to draw the pixels, but it occurred to me that since the final data is already in the GPU’s global memory, and everything pixel-related ends up on the GPU as the final step anyway, how nice it would be to have code inside the kernel itself plot the pixels. Otherwise it means transferring the picture data back to the host and then, presumably, back to the GPU again. Such a waste of time.

Is this possible? Maybe I’m being naive here. I essentially have a 24-bit colour 2D array representing pixels in GPU memory, and I would like the GPU to display that memory as a picture (preferably in a window), and eventually as a real-time animation without any worries about slowdown.

Maybe I can still use SDL?

Oh hey, a fellow from fractalforums.com!

OpenGL or DirectX interoperability is what you need to use. Several SDK samples demonstrate how to bind a piece of global memory to a texture, write to that memory from a CUDA kernel, unbind it, and then render the texture onto the screen (usually by drawing a quad into a window). There is a bit of overhead associated with doing so, but it is usually much less than transferring the data back to the host.
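Roughly, the pixel-buffer-object flow looks like the sketch below. This is only a sketch using the runtime API's graphics interop calls: it assumes you already have a GL context (from GLUT or SDL), GLEW for the buffer-object entry points, and the kernel and helper names are just placeholders. The SDK samples like simpleGL show the full texture-plus-quad rendering.

```cpp
#include <GL/glew.h>            // assumed here for the buffer-object entry points
#include <cuda_runtime.h>
#include <cuda_gl_interop.h>

static GLuint pbo;                            // OpenGL pixel buffer object
static struct cudaGraphicsResource *pboCuda;  // CUDA's handle to the same memory

// Placeholder kernel: one thread per pixel, each writes a 32-bit RGBA value.
__global__ void plotKernel(uchar4 *pixels, int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;
    pixels[y * width + x] = make_uchar4(x & 0xff, y & 0xff, 128, 255);
}

void initPixelBuffer(int width, int height)
{
    // Create a PBO large enough for width*height RGBA pixels, then register
    // it with CUDA so a kernel can write into it directly.
    glGenBuffers(1, &pbo);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
    glBufferData(GL_PIXEL_UNPACK_BUFFER, width * height * 4, 0, GL_DYNAMIC_DRAW);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
    cudaGraphicsGLRegisterBuffer(&pboCuda, pbo, cudaGraphicsMapFlagsWriteDiscard);
}

void renderFrame(int width, int height)
{
    // Map the buffer, let the kernel fill it, unmap it again.
    uchar4 *devPixels = 0;
    size_t numBytes = 0;
    cudaGraphicsMapResources(1, &pboCuda, 0);
    cudaGraphicsResourceGetMappedPointer((void **)&devPixels, &numBytes, pboCuda);

    dim3 block(16, 16);
    dim3 grid((width + block.x - 1) / block.x, (height + block.y - 1) / block.y);
    plotKernel<<<grid, block>>>(devPixels, width, height);

    cudaGraphicsUnmapResources(1, &pboCuda, 0);

    // Simplest possible display: draw straight from the bound PBO. The SDK
    // samples instead copy it into a texture and draw a full-screen quad.
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
    glDrawPixels(width, height, GL_RGBA, GL_UNSIGNED_BYTE, 0);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
}
```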

For the best memory coalescing, you’d definitely want to use a 32-bit RGBA texture. 24-bit writes are much harder to align properly (if that is possible at all).
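To make that concrete, here is a toy comparison (made-up kernels, solid-colour fill, just to show the write pattern). With 3 bytes per pixel, adjacent threads store to addresses that are not 4-byte aligned, so the hardware cannot coalesce them nicely; with a uchar4 buffer every thread does one aligned 32-bit store.

```cpp
// 24 bits per pixel: three separate byte writes per thread, at addresses that
// are mostly not 4-byte aligned -- poor coalescing.
__global__ void plot24(unsigned char *pixels, int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;
    int i = (y * width + x) * 3;
    pixels[i + 0] = 255;   // R
    pixels[i + 1] = 128;   // G
    pixels[i + 2] = 0;     // B
}

// 32 bits per pixel (RGBA): one aligned 32-bit store per thread, so the
// threads of a warp write one contiguous, coalesced block of memory.
__global__ void plot32(uchar4 *pixels, int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;
    pixels[y * width + x] = make_uchar4(255, 128, 0, 255);  // alpha is just padding
}
```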

OpenGL interoperability is definitely more cross-platform (similar to SDL), so that is what I usually prefer.

When I write CUDA code to draw stuff onto the screen, I usually take one of the SDK samples as a starting point.

Hi too! Actually I said hello to you first in the older thread here :D

Thanks for the info. Some of those code samples look quite involved (my hope of simply dumping the pixel array from GPU memory to the display in one sweep was probably a little naive and optimistic, unfortunately! Maybe a future GPU will make that possible), but I’ll have a good look at them and see how I get on. Maybe in the meantime I can do it the ‘slow’ way, roughly as I sketch below.

Do you think I can still use the simpler SDL wrapper instead of OpenGL?
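For reference, this is roughly what I mean by the ‘slow’ way. It is only a sketch using the SDL 1.2-style API: the kernel is a dummy, and the channel order of the SDL surface may not match RGBA on every system, so the colours might need swapping.

```cpp
#include <SDL/SDL.h>       // or <SDL.h>, depending on how SDL is installed
#include <cuda_runtime.h>
#include <cstring>

// Dummy kernel standing in for the Mandelbrot colouring; one RGBA pixel per thread.
__global__ void plotKernel(uchar4 *pixels, int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;
    pixels[y * width + x] = make_uchar4(x & 0xff, y & 0xff, 128, 255);
}

int main()
{
    const int width = 800, height = 600;

    SDL_Init(SDL_INIT_VIDEO);
    SDL_Surface *screen = SDL_SetVideoMode(width, height, 32, SDL_SWSURFACE);

    uchar4 *devPixels;
    cudaMalloc(&devPixels, width * height * sizeof(uchar4));
    uchar4 *hostPixels = new uchar4[width * height];

    dim3 block(16, 16);
    dim3 grid((width + block.x - 1) / block.x, (height + block.y - 1) / block.y);
    plotKernel<<<grid, block>>>(devPixels, width, height);

    // The extra hop: device -> host, then host memory -> SDL surface.
    cudaMemcpy(hostPixels, devPixels, width * height * sizeof(uchar4),
               cudaMemcpyDeviceToHost);

    SDL_LockSurface(screen);
    for (int y = 0; y < height; ++y)            // copy row by row to respect the pitch
        memcpy((char *)screen->pixels + y * screen->pitch,
               hostPixels + y * width, width * 4);
    SDL_UnlockSurface(screen);
    SDL_Flip(screen);

    SDL_Delay(3000);                            // keep the window up briefly
    cudaFree(devPixels);
    delete[] hostPixels;
    SDL_Quit();
    return 0;
}
```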