How to use mipmaps in CUDA?

I have a cudaArray in GPU memory and I bind it to a 2D texture. I want to create a mipmap of this texture so that I can work with it at different resolutions, and I suppose a mipmap is the most efficient way of doing that.
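
For context, here's roughly the setup I have so far (simplified sketch, error checking omitted):

```cpp
// 2D texture reference bound to a cudaArray (declared at file scope in the .cu file).
texture<float, 2, cudaReadModeElementType> tex;

void setup(const float* hostData, int width, int height)
{
    cudaArray* array;
    cudaChannelFormatDesc desc = cudaCreateChannelDesc<float>();
    cudaMallocArray(&array, &desc, width, height);
    cudaMemcpyToArray(array, 0, 0, hostData,
                      width * height * sizeof(float), cudaMemcpyHostToDevice);
    tex.filterMode = cudaFilterModeLinear;  // bilinear filtering on fetch
    cudaBindTextureToArray(tex, array, desc);
}
```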

Unfortunately, there is no example of doing this in the SDK. Can some CUDA pro shed some light here?

Thanks in advance!

You can’t mipmap from CUDA. However, if enough people ask, NVIDIA will make the effort to expose it. So register a bunch more usernames and keep asking ;)


I wonder what kinds of features are in the hardware that aren’t exposed to CUDA but in theory COULD be. Most would probably be ways of feeding data to the ROPs, but I’ve never seen a list of what COULD be done in theory.

It is likely a good idea to keep CUDA as generic as possible; once you expose a feature, it’s hard to withdraw support for it later, so I’m not pressuring NVIDIA to add everything tomorrow. I’m just curious what other kinds of power we’re missing (mipmap interpolation in the texture read units? polygon rasterization in the ROPs?).

Texture compression. Anisotropic filtering. Alpha blending. Anti-aliasing. Vertex cache. Vertex interpolation. Buttloads and buttloads. (And that’s just stuff exposed from DX. Who knows what’s available internally.)

Better yet… how much die space could we free up if we threw out everything that’s not exposed from CUDA?

Very very roughly, 30% of the die is processing cores.

About 15% is used for ROPs.

About 20% for frame buffers.

About 25% for texture. (I don’t know if this is just texture cache or functional units too)

http://techreport.com/r.x/geforce-gtx-280/…hot-colored.jpg

The center unlabeled part I believe is the memory controller.

But do you think that 60% doesn’t include smem, cmem, etc.? Or that the 30% for cores doesn’t include aniso or mipmapping? Anyway, I guess the point is still the same: if we don’t include all the graphics functionality in CUDA, in the future we could have a CUDA-dedicated chip that’s 2-4x as fast. Perhaps CUDA should split into two different versions in anticipation, and the non-graphics version should also throw out the dumb texture cache (which barely does anything right now).

So, I’m asking as well. The proper way to process 2D pyramids with a graphics card is texture MIP mapping. Also, some algorithms might be simplified by hardware mipmap interpolation.

Me too. Would hardware mipmapping be faster than writing a downsampling kernel that makes use of texture interpolation?
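
For reference, the kernel approach I have in mind is something like this (untested sketch; assumes srcTex uses unnormalized coordinates with filterMode = cudaFilterModeLinear, so a single fetch at the corner of a 2x2 block averages all four texels in hardware):

```cpp
texture<float, 2, cudaReadModeElementType> srcTex;

__global__ void downsample2x(float* dst, int dstW, int dstH)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= dstW || y >= dstH) return;

    // Texel centers sit at (i + 0.5, j + 0.5), so sampling at
    // (2x + 1, 2y + 1) is equidistant from the four source texels
    // and the bilinear filter weights each of them by 1/4.
    dst[y * dstW + x] = tex2D(srcTex, 2.0f * x + 1.0f, 2.0f * y + 1.0f);
}
```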

Reading a paper about efficient scale-space pyramid computation on the GPU, I found a solution for exactly this problem.

It’s possible to create a pixel buffer object (PBO) in OpenGL and link it to CUDA, so graphics operations like mipmap generation can be done with OpenGL, while further processing can be done with CUDA on the same buffer, without any data transfer.

Here is how to link the buffers:
[CUDA, Supercomputing for the Masses: Part 15 | Dr Dobb's](http://www.drdobbs.com/open-source/222600097)
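
A rough sketch of the registration and mapping steps that article describes (CUDA graphics interop API, error checking omitted; the kernel is just illustrative):

```cpp
#include <GL/glew.h>           // or your platform's GL headers
#include <cuda_gl_interop.h>

GLuint pbo;
struct cudaGraphicsResource* pboRes;

// Illustrative kernel that writes into the mapped buffer.
__global__ void fillBuffer(uchar4* out, int w, int h)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < w && y < h)
        out[y * w + x] = make_uchar4(x % 256, y % 256, 0, 255);
}

void initInterop(int w, int h)
{
    // One-time setup: create the PBO in GL and register it with CUDA.
    glGenBuffers(1, &pbo);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
    glBufferData(GL_PIXEL_UNPACK_BUFFER, w * h * 4, 0, GL_DYNAMIC_DRAW);
    cudaGraphicsGLRegisterBuffer(&pboRes, pbo, cudaGraphicsMapFlagsWriteDiscard);
}

void frame(int w, int h, dim3 grid, dim3 block)
{
    // Per frame: map for CUDA, fill with a kernel, unmap, then use the
    // PBO from GL (e.g. glTexSubImage2D into a texture + glGenerateMipmap).
    uchar4* devPtr;
    size_t size;
    cudaGraphicsMapResources(1, &pboRes, 0);
    cudaGraphicsResourceGetMappedPointer((void**)&devPtr, &size, pboRes);
    fillBuffer<<<grid, block>>>(devPtr, w, h);
    cudaGraphicsUnmapResources(1, &pboRes, 0);
}
```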

There really isn’t too much that isn’t exposed.

It’s true there are quite a few texture features that haven’t been implemented yet, but on current hardware there’s no way that graphics features like attribute interpolation, rasterization, or raster operations like blending or Z-test can be accessed directly from compute programs. Compute programs kind of run in a different mode.

Of course, there’s nothing to stop you building vertex buffers or textures in a CUDA program and then rendering them using OpenGL or Direct3D to get the best of both worlds.
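
For example (just a sketch; the names are made up): register a GL vertex buffer with cudaGraphicsGLRegisterBuffer, map it the same way as the PBO example above, and let a kernel write the vertices that OpenGL then draws without any copy:

```cpp
// Writes animated point positions straight into a mapped VBO.
__global__ void writeVerts(float4* pos, int n, float t)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        pos[i] = make_float4(2.0f * i / n - 1.0f,   // x spread across [-1, 1]
                             sinf(0.1f * i + t),    // y animated over time
                             0.0f, 1.0f);
}

// After unmapping the resource:
//   glBindBuffer(GL_ARRAY_BUFFER, vbo);
//   glVertexPointer(4, GL_FLOAT, 0, 0);
//   glDrawArrays(GL_POINTS, 0, n);
```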

Interestingly, with DirectX 11, the opposite is now true - pixel shaders can do some compute-like operations, such as random-access writes and atomic operations, which opens some interesting possibilities.

Just talking to myself here :)

+1 for mipmaps in CUDA!

Why?

-The instructions are already supported by the hardware!
-DX11 allows mipmap read in compute shader!
-It’s a very useful data structure, with applications beyond graphics.
-Cubemap mipmaps would be cool too!
