Hi all,
I am wondering whether there is an efficient way to rotate 3D data and resample the rotated data onto the original image grid. One way to do it is to apply a rotation matrix to each point in the 3D data set. I guess rotation is probably implemented in the graphics card pipeline; is there any way to utilize the built-in rotation mechanism? Thanks.
The texture hardware does have built-in 2D linear interpolation capability. If you were working with a 2D set, you could load it as a texture, and then read back the interpolated values at the rotated grid coordinates. For 3D, you could maybe load your data set into several 2D texture planes, and the interpolation hardware would at least do the in-plane part of the rotation work for you. Your code would still have to manually handle the interpolation between planes, though.
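To make the slice-based idea concrete, here is a minimal sketch of a kernel that lets the hardware's bilinear filter handle the in-plane part of a rotated fetch and lerps between two adjacent slice textures by hand. Note that it is written against the modern texture-object API (tex2D with cudaTextureObject_t), not the texfetch() interface in the 0.8 beta, and all the names (rotateSliceKernel, texLo, texHi, zFrac, the matrix entries) are illustrative assumptions, not part of CUDA:

```cpp
// Sketch: one thread per output pixel of one output slice. texLo/texHi are
// 2D textures holding the two source z-slices that bracket the rotated z,
// both created with cudaFilterModeLinear; zFrac is the fractional distance
// between them. (m00..m11, cx, cy) describe the in-plane rotation about the
// image center.
__global__ void rotateSliceKernel(float* out, int width, int height,
                                  cudaTextureObject_t texLo,
                                  cudaTextureObject_t texHi,
                                  float zFrac,
                                  float m00, float m01, float m10, float m11,
                                  float cx, float cy)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    // Rotate the output coordinate about the image center to find where to
    // sample in the source slices (the in-plane part of the rotation).
    float dx = x - cx, dy = y - cy;
    float u = m00 * dx + m01 * dy + cx + 0.5f;   // +0.5f: sample at texel centers
    float v = m10 * dx + m11 * dy + cy + 0.5f;

    // The texture hardware does the bilinear interpolation within each plane...
    float lo = tex2D<float>(texLo, u, v);
    float hi = tex2D<float>(texHi, u, v);

    // ...but the interpolation between planes has to be done manually.
    out[y * width + x] = lo + zFrac * (hi - lo);
}
```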
Although the current beta 0.8 of CUDA doesn't provide variants of texfetch() with 3- or 4-element coordinates, if you read the CUDA header files you can see that the low-level texture routines referenced there, __utexfetch*(), __itexfetch*(), and __ftexfetch*(), all seem to accept 4-element coordinate types. I would have been surprised if it were otherwise, since the hardware clearly supports 3-D (volumetric) texturing and MIP mapping. So, while CUDA beta 0.8 doesn't expose this, I think there's hope we'll see it show up in a future version. This is just guesswork born of my own curiosity, since I have a significant need for 3-D texturing/filtering in my applications.
Check out texture_fetch_functions.h to see for yourself.
Well, you will have to apply the rotation matrix in your code anyway (to compute the texture coordinates), but the lookups into the 3D texture should be cached, whereas fetching at arbitrarily rotated coordinates from plain device memory will give no coalescing at all in most cases.
And if filtering is desired, you get trilinear interpolation (almost) for free with textures.
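For comparison, here is a minimal sketch of what that looks like once 3D texturing is exposed: the kernel applies the rotation matrix per voxel to get the texture coordinates, then does a single cached, trilinearly filtered fetch. Again, this is written against the modern texture-object API (tex3D with cudaTextureObject_t) rather than the beta-0.8 interface, and the names (rotateVolumeKernel, volTex, the row vectors r0/r1/r2) are my own assumptions:

```cpp
// Sketch: one thread per voxel of the rotated output volume. volTex is a 3D
// texture over the source volume, created with cudaFilterModeLinear so the
// hardware does the trilinear interpolation. r0/r1/r2 are the rows of the
// rotation matrix; the rotation is taken about the volume center.
__global__ void rotateVolumeKernel(float* out, int nx, int ny, int nz,
                                   cudaTextureObject_t volTex,
                                   float3 r0, float3 r1, float3 r2,
                                   float3 center)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    int z = blockIdx.z * blockDim.z + threadIdx.z;
    if (x >= nx || y >= ny || z >= nz) return;

    // Rotate the output grid point about the volume center to get the
    // source-space texture coordinates (+0.5f to sample at texel centers).
    float3 p = make_float3(x - center.x, y - center.y, z - center.z);
    float u = r0.x * p.x + r0.y * p.y + r0.z * p.z + center.x + 0.5f;
    float v = r1.x * p.x + r1.y * p.y + r1.z * p.z + center.y + 0.5f;
    float w = r2.x * p.x + r2.y * p.y + r2.z * p.z + center.z + 0.5f;

    // One cached, trilinearly interpolated fetch per voxel.
    out[(z * ny + y) * nx + x] = tex3D<float>(volTex, u, v, w);
}
```

Launched over a 3D grid covering (nx, ny, nz), each thread writes one output voxel; coordinates that rotate outside the source volume are handled by whatever address mode the texture was created with (e.g. clamp or border).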