Image Rendering/OpenGL interop

Hi All,

I’m fairly new to CUDA, and I’m trying to figure out the best way to approach a research problem.

Specifically, I’m looking to do volume rendering from a series of position-tracked 2D image slices.

From my reading, it sounds like a good approach would be to store the image data in 2D texture arrays, scatter the pixels into a 3D CUDA array according to each slice’s tracked position, and display the result in OpenGL.

Ideally, I’d like a live-updating display, where the 3D volume is re-rendered each time a new image slice arrives (the slices are acquired with a DVI framegrabber). Additionally, what might be the best way to store the position data, which I have real-time access to in the form of a quaternion?

Does anyone have recommendations on the simplest and most efficient way to tackle this?

Any insight would be much appreciated.

in addition to being new to cuda, are you new to opengl as well?

how much time do you have available?
given possible time constraints, your relative newness, and the relative difficulty of the problem, i would suggest considering:
i) segmenting the problem,
ii) prioritizing functionality and features,
iii) incrementalizing - adding functionality and features in steps

“then insert them into a 3D CUDA array mathematically (depending on their position)”
you may already find this time-consuming to realize

“I’d like to be able to have a live-updating display, where I could re-render the 3D volume each time I add a new image slice”
this is really an extension of the primary problem, and may be time-consuming on its own
personally, i would tackle this only as a step/version 2, as it is an independent or orthogonal dimension of the original problem statement, in my mind

Hi, and thanks for the response.

To answer your question: yes, I’m relatively new to OpenGL. I’ve done some graphics work with it in the past, but not enough to have a thorough understanding of it. Fortunately, I have several weeks to devote my full attention to this particular problem and learn the necessary components.

I should mention that I already have the “insert 2D frames into a 3D array mathematically (depending on their position)” portion worked out in Matlab, so the math is sound. My goal is to transfer it to a C++/CUDA environment for performance and parallelization reasons.

I guess what I’m trying to figure out is a general roadmap of what CUDA/OpenGL features might be the most beneficial to my particular problem.

I assumed that storing the 2D images in texture arrays would be beneficial, since CUDA can manipulate them and OpenGL can display them. What I’d like to know is whether it’s possible to pull out parts of individual images on a per-pixel basis from a 2D texture array and map them into a 3D texture at various angles, which OpenGL could then bind and display quickly.
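For the display side, the usual interop pattern is to register the OpenGL 3D texture with CUDA once, then each frame map it, write the new slice data through a surface object, and unmap before OpenGL renders it. Below is a hedged sketch of that pattern, assuming a GL context and loader are already set up and the texture was created with glTexImage3D using GL_R32F; the function and variable names are mine, not from any SDK, and error checking is omitted:

```cuda
// Assumes your GL loader (e.g. GLEW) is included before this header.
#include <cuda_gl_interop.h>

__global__ void scatterKernel(cudaSurfaceObject_t volSurf /*, slice data... */) {
    // One thread per slice pixel: compute (x, y, z) from the tracked pose,
    // then write the intensity into the shared 3D texture, e.g.:
    //   surf3Dwrite(value, volSurf, x * sizeof(float), y, z);
    // Note surf3Dwrite takes the x coordinate in BYTES.
}

void updateVolume(GLuint volumeTex) {
    static cudaGraphicsResource* res = nullptr;
    if (!res)  // register once; the flag allows surface writes from CUDA
        cudaGraphicsGLRegisterImage(&res, volumeTex, GL_TEXTURE_3D,
                                    cudaGraphicsRegisterFlagsSurfaceLoadStore);

    cudaGraphicsMapResources(1, &res, 0);  // GL must not touch it while mapped
    cudaArray_t arr;
    cudaGraphicsSubResourceGetMappedArray(&arr, res, 0, 0);

    cudaResourceDesc desc = {};
    desc.resType = cudaResourceTypeArray;
    desc.res.array.array = arr;
    cudaSurfaceObject_t volSurf;
    cudaCreateSurfaceObject(&volSurf, &desc);

    // scatterKernel<<<blocks, threads>>>(volSurf /*, new slice... */);

    cudaDestroySurfaceObject(volSurf);
    cudaGraphicsUnmapResources(1, &res, 0);  // hand the texture back to OpenGL
    // ...then render the GL_TEXTURE_3D as usual (e.g. ray casting or
    // view-aligned slicing), which fits the live-updating display you describe.
}
```

This keeps the volume resident on the GPU the whole time, so adding a slice only uploads that slice's pixels plus its pose, which is what makes a live-updating display feasible.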

Thanks!