I have a cudaArray in GPU memory and I bind it to a 2D texture. I want to create a mipmap of this texture so that I can work with it at different resolutions, and I suppose a mipmap is the most efficient way of doing that.
Unfortunately, there is no example of doing this in the SDK. Can some CUDA pro shed some light here?
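(Note for readers on newer toolkits: CUDA 5.0 and later do expose mipmapped arrays directly, although CUDA still won't generate the level contents for you — each level has to be filled by your own downsample kernel or via graphics interop. A minimal sketch, assuming a float4 image and a toolkit with `cudaMallocMipmappedArray`; error checking omitted:)

```cuda
// Sketch: allocate a mipmapped array and sample it with tex2DLod.
// Assumes CUDA 5.0+; the application must fill each level itself.
#include <cuda_runtime.h>

__global__ void sampleKernel(cudaTextureObject_t tex, float4* out,
                             int w, int h, float level)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= w || y >= h) return;
    // Read from an explicit mip level (hardware trilinear if level is fractional).
    out[y * w + x] = tex2DLod<float4>(tex, (x + 0.5f) / w, (y + 0.5f) / h, level);
}

cudaTextureObject_t makeMipmappedTexture(int width, int height, int numLevels,
                                         cudaMipmappedArray_t* mipArray)
{
    cudaChannelFormatDesc desc = cudaCreateChannelDesc<float4>();
    cudaExtent extent = make_cudaExtent(width, height, 0);
    cudaMallocMipmappedArray(mipArray, &desc, extent, numLevels);

    // Each level must be populated by the application, e.g. by copying into
    // the cudaArray returned by cudaGetMipmappedArrayLevel(&lvl, *mipArray, i).

    cudaResourceDesc resDesc = {};
    resDesc.resType = cudaResourceTypeMipmappedArray;
    resDesc.res.mipmap.mipmap = *mipArray;

    cudaTextureDesc texDesc = {};
    texDesc.normalizedCoords = 1;
    texDesc.filterMode = cudaFilterModeLinear;
    texDesc.mipmapFilterMode = cudaFilterModeLinear;  // interpolate between levels
    texDesc.maxMipmapLevelClamp = float(numLevels - 1);
    texDesc.addressMode[0] = cudaAddressModeClamp;
    texDesc.addressMode[1] = cudaAddressModeClamp;

    cudaTextureObject_t tex = 0;
    cudaCreateTextureObject(&tex, &resDesc, &texDesc, nullptr);
    return tex;
}
```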
You can’t mipmap from CUDA. However, if enough people ask, NVIDIA will take the effort to expose it. So register a bunch more usernames and keep asking ;)
I wonder what kinds of features are in the hardware that aren’t exposed to CUDA but in theory COULD be. Many of them would probably be ways of feeding data to the ROPs, but I’ve never seen a list of what COULD be done in theory.
It is likely a good idea to keep CUDA as generic as possible; if you expose a feature, it’s hard to withdraw support for it later, so I’m not pressuring NVIDIA to add everything tomorrow. I’m just curious what other kinds of power we’re missing (mipmap interpolation in the texture read units? polygon rasterization in the ROPs?).
But do you think that 60% doesn’t include smem, cmem, etc.? Or that the 30% for the cores doesn’t include aniso or mipmapping? Anyway, I guess the point is still the same: if we don’t include all the gfx functionality in CUDA, in the future we could have a CUDA-dedicated chip that’s 2-4x as fast. Perhaps CUDA should split into two different versions in anticipation, and the non-gfx version should also throw out the dumb texture cache (which barely does anything right now).
So, I’m asking as well. The proper way to process 2D pyramids with a graphics card is texture MIP mapping. Also, some algorithms might be simplified by hardware mipmap interpolation.
Reading a paper about efficient scale-space pyramid computation on the GPU, I found a solution to exactly this problem.
It’s possible to create a pixel buffer object (PBO) in OpenGL and link it to CUDA, so the graphics parts, like creating mipmaps, can be done with OpenGL, while further operations run in CUDA on the same buffer, without any data transfer.
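A minimal sketch of that PBO route, using the CUDA graphics interop API (the buffer and texture IDs are assumed to come from your own GL setup; error checking omitted):

```cuda
// Sketch: CUDA writes into an OpenGL pixel buffer object (PBO); OpenGL then
// uploads it to a texture and builds the mipmap chain in hardware.
// Assumes a current GL context and an RGBA8 texture.
#include <cuda_gl_interop.h>

cudaGraphicsResource_t pboRes;

void registerPbo(GLuint pbo)
{
    // One-time registration of the GL buffer for CUDA access.
    cudaGraphicsGLRegisterBuffer(&pboRes, pbo,
                                 cudaGraphicsRegisterFlagsWriteDiscard);
}

void processAndMipmap(GLuint pbo, GLuint texId, int width, int height)
{
    // 1. Let CUDA fill the buffer -- no copy through host memory.
    uchar4* devPtr = nullptr;
    size_t size = 0;
    cudaGraphicsMapResources(1, &pboRes);
    cudaGraphicsResourceGetMappedPointer((void**)&devPtr, &size, pboRes);
    // myKernel<<<grid, block>>>(devPtr, width, height);  // your CUDA processing
    cudaGraphicsUnmapResources(1, &pboRes);

    // 2. Upload PBO -> texture and let GL generate all mip levels.
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
    glBindTexture(GL_TEXTURE_2D, texId);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                    GL_RGBA, GL_UNSIGNED_BYTE, nullptr);  // reads from bound PBO
    glGenerateMipmap(GL_TEXTURE_2D);                      // hardware mip generation
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
}
```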
There really isn’t too much that isn’t exposed.
It’s true there are quite a few texture features that haven’t been implemented yet, but on current hardware there’s no way that graphics features like attribute interpolation, rasterization, or raster operations like blending or Z-test can be accessed directly from compute programs. Compute programs kind of run in a different mode.
Of course, there’s nothing to stop you building vertex buffers or textures in a CUDA program and then rendering them using OpenGL or Direct3D to get the best of both worlds.
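A sketch of that pattern for the vertex buffer case: a CUDA kernel computes the geometry in place, and GL draws it without the data ever leaving the GPU (the VBO is assumed to be registered beforehand with `cudaGraphicsGLRegisterBuffer`, as in the SDK's simpleGL sample; error checking omitted):

```cuda
// Sketch: a CUDA kernel writes vertex positions into a GL vertex buffer,
// which is then drawn directly.
#include <cuda_gl_interop.h>

__global__ void makeWave(float4* verts, int w, int h, float t)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= w || y >= h) return;
    float u = x / (float)w, v = y / (float)h;
    // Animated height field as a stand-in for real geometry generation.
    verts[y * w + x] = make_float4(u, sinf(u * 10.f + t) * 0.1f, v, 1.f);
}

void updateAndDraw(cudaGraphicsResource_t vboRes, GLuint vbo,
                   int w, int h, float t)
{
    float4* devVerts = nullptr;
    size_t size = 0;
    cudaGraphicsMapResources(1, &vboRes);
    cudaGraphicsResourceGetMappedPointer((void**)&devVerts, &size, vboRes);
    dim3 block(16, 16), grid((w + 15) / 16, (h + 15) / 16);
    makeWave<<<grid, block>>>(devVerts, w, h, t);
    cudaGraphicsUnmapResources(1, &vboRes);   // hand the buffer back to GL

    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glVertexPointer(4, GL_FLOAT, 0, nullptr); // fixed-function style, as in simpleGL
    glEnableClientState(GL_VERTEX_ARRAY);
    glDrawArrays(GL_POINTS, 0, w * h);
    glDisableClientState(GL_VERTEX_ARRAY);
}
```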
Interestingly, with DirectX 11, the opposite is now true - pixel shaders can do some compute-like operations, such as random-access writes and atomic operations, which opens some interesting possibilities.
-The instructions are already supported by the hardware!
-DX11 allows mipmap read in compute shader!
-It’s a very useful data structure, with applications beyond graphics.
-Cubemap mipmaps would be cool too!