I’m a bit new to CUDA, and I’m trying to figure out the best way to tackle a research problem: volume rendering from a series of position-tracked 2D image slices.
From my reading, it sounds like a good approach would be to store the incoming image data in 2D texture arrays, resample each slice into a 3D CUDA array according to its tracked position, and then display the volume in OpenGL.
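To make the idea concrete, here’s a rough, untested sketch of the insertion step I have in mind: a kernel that scatters one tracked slice into a dense volume buffer. Everything here is my own assumption, not working code; in particular, the `pose` argument is a hypothetical 3x4 row-major matrix mapping slice pixel coordinates into voxel coordinates, and I’m doing a naive nearest-voxel write rather than proper interpolation or compounding.

```cuda
// Sketch (untested): scatter one tracked 2D slice into a dense 3D volume.
// 'pose' is assumed to be a 3x4 row-major matrix taking (u, v, 1) in slice
// pixel coordinates to (x, y, z) in volume voxel coordinates.
__global__ void insertSlice(float* volume, int3 volDim,
                            const float* slice, int2 sliceDim,
                            const float* pose /* 3x4 row-major */)
{
    int u = blockIdx.x * blockDim.x + threadIdx.x;
    int v = blockIdx.y * blockDim.y + threadIdx.y;
    if (u >= sliceDim.x || v >= sliceDim.y) return;

    // Transform this pixel's slice-space position into voxel space.
    float x = pose[0] * u + pose[1] * v + pose[3];
    float y = pose[4] * u + pose[5] * v + pose[7];
    float z = pose[8] * u + pose[9] * v + pose[11];

    // Naive nearest-voxel write; a real pipeline would interpolate
    // and blend with previously inserted slices.
    int xi = (int)roundf(x), yi = (int)roundf(y), zi = (int)roundf(z);
    if (xi < 0 || xi >= volDim.x || yi < 0 || yi >= volDim.y ||
        zi < 0 || zi >= volDim.z) return;

    volume[(zi * volDim.y + yi) * volDim.x + xi] = slice[v * sliceDim.x + u];
}
```

The result could then be copied into a `cudaArray` (via `cudaMemcpy3D`) or shared with an OpenGL 3D texture through the CUDA–OpenGL interop API for display, if I understand the docs correctly.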
Ideally, I’d like a live-updating display that re-renders the 3D volume each time a new image slice arrives (the slices are being acquired with a DVI framegrabber). Additionally, what might be the best way to store the position data, which I have real-time access to in the form of a quaternion?
Does anyone have recommendations on the simplest and most efficient way to approach this?
Any insight would be much appreciated.