rendering from textures

I am doing image processing on frames from a camera. After capturing a frame, I copy it into a CUDA array and use texture fetches in a kernel to write the processed result to a mapped pixel buffer object. Then I use glTexSubImage2D to upload the pixel buffer into the texture that gets rendered.
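Roughly, the whole thing looks like the sketch below (names like camTex, pbo, glTex, and processKernel are just placeholders; I'm assuming an 8-bit RGBA image, GLEW for the PBO entry points, and the cudaGL* interop calls, with error checking left out):

```cpp
#include <GL/glew.h>
#include <cuda_runtime.h>
#include <cuda_gl_interop.h>

// Texture reference bound to the CUDA array holding the camera frame.
texture<uchar4, 2, cudaReadModeElementType> camTex;

__global__ void processKernel(uchar4* out, int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    uchar4 pixel = tex2D(camTex, x, y);   // texture fetch from the CUDA array
    // ... image processing on `pixel` ...
    out[y * width + x] = pixel;
}

// `pbo` was registered once at setup with cudaGLRegisterBufferObject(pbo).
void processFrame(const void* cameraData, cudaArray* camArray,
                  GLuint pbo, GLuint glTex, int width, int height)
{
    // 1. Copy the captured frame into the CUDA array and bind the texture.
    cudaMemcpyToArray(camArray, 0, 0, cameraData,
                      width * height * sizeof(uchar4), cudaMemcpyHostToDevice);
    cudaBindTextureToArray(camTex, camArray);

    // 2. Map the pixel buffer object and let the kernel write into it.
    uchar4* d_out = 0;
    cudaGLMapBufferObject((void**)&d_out, pbo);
    dim3 block(16, 16);
    dim3 grid((width + block.x - 1) / block.x, (height + block.y - 1) / block.y);
    processKernel<<<grid, block>>>(d_out, width, height);
    cudaGLUnmapBufferObject(pbo);

    // 3. Upload the PBO contents into the GL texture that gets rendered.
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
    glBindTexture(GL_TEXTURE_2D, glTex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                    GL_RGBA, GL_UNSIGNED_BYTE, 0);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
}
```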

Since the original image is already in a CUDA array, and therefore in texture memory, is it possible to render directly from that location if I want to display the kernel's input and output side by side? It seems silly to copy the image into CUDA texture memory, then have a kernel copy the unmodified input into a pixel buffer, just to copy it back into an OpenGL texture.

Thanks in advance,
Mack

bump

I’ve never used these features, so I can’t offer any better help than pointing you to the manual. See section 3.2.7 of the version 2.3 programming guide for an introduction. The reference manual has the full documentation for all of the OpenGL interop functions.

No, there’s no way to render directly from a CUDA array in OpenGL. It’s annoying, I agree, but using pixel buffer objects is still pretty fast.
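For the side-by-side case, one way to stage the unmodified input is to copy the CUDA array straight into a second mapped PBO with cudaMemcpyFromArray, so you don't even need a pass-through kernel. Rough sketch, with the same headers, placeholder names, and registration caveats as the snippet in your first post:

```cpp
// Stage the unmodified camera frame into a second PBO for the "input" quad.
// `inputPbo` must also have been registered with cudaGLRegisterBufferObject().
void stageInputForDisplay(cudaArray* camArray, GLuint inputPbo,
                          GLuint inputTex, int width, int height)
{
    uchar4* d_dst = 0;
    cudaGLMapBufferObject((void**)&d_dst, inputPbo);

    // Device-to-device copy straight out of the CUDA array.
    cudaMemcpyFromArray(d_dst, camArray, 0, 0,
                        width * height * sizeof(uchar4),
                        cudaMemcpyDeviceToDevice);
    cudaGLUnmapBufferObject(inputPbo);

    // Upload into the GL texture used for the input side of the display.
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, inputPbo);
    glBindTexture(GL_TEXTURE_2D, inputTex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                    GL_RGBA, GL_UNSIGNED_BYTE, 0);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
}
```

Nothing goes back through host memory that way; the only extra step is the array-to-buffer copy, which is cheap compared to the upload you're already doing.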

Direct interoperability with OpenGL texture objects is on the CUDA roadmap, but I can’t promise when it will be released.