I would like to try a particular displacement mapping approach explained here.
I intend to read in and store displacement maps as single-channel float textures, computing mipmaps with the max operator rather than the average. The plan is to first compute intersections with the base mesh, then add tiled surface detail in tangent space via algebraic ray-box intersections against the mipmapped box heights stored in a quadtree. All of this is to avoid treating the displacement itself as a mesh, which could not be tiled and would probably exceed my memory limits (I will be using multiple Megascans 8K displacements, and maybe even larger).
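For reference, the max-based mip build I have in mind is roughly the following (a host-side C++ sketch; `build_max_mip` and the flat `std::vector<float>` layout are just illustrative, not settled code):

```cpp
#include <algorithm>
#include <vector>

// Build one mip level from the previous one using max instead of average,
// so each texel conservatively bounds the heights of its 2x2 children.
std::vector<float> build_max_mip(const std::vector<float>& src, int w, int h) {
    const int dw = w / 2, dh = h / 2;
    std::vector<float> dst(static_cast<size_t>(dw) * dh);
    for (int y = 0; y < dh; ++y) {
        for (int x = 0; x < dw; ++x) {
            const float a = src[(2 * y)     * w + 2 * x];
            const float b = src[(2 * y)     * w + 2 * x + 1];
            const float c = src[(2 * y + 1) * w + 2 * x];
            const float d = src[(2 * y + 1) * w + 2 * x + 1];
            dst[y * dw + x] = std::max(std::max(a, b), std::max(c, d));
        }
    }
    return dst;
}
```

Applied repeatedly down to 1x1, this gives the quadtree of conservative box heights the traversal would descend through.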
So I have a few questions:
- Does this sound like a reasonable approach?
- Is there any way to access RTX ray-box intersections?
- What’s the best way of initialising, storing, and accessing the mipmap/quadtree? (Can CUDA do this?)
My configuration is OptiX 7.2, CUDA 11.2, driver version 460.27.04, Ubuntu, on an RTX 2060 Super GPU.
This does sound like a reasonable approach, and a fun project. If you are expecting to run out of memory and have very large datasets, you should be able to save a lot by writing a custom intersector that traces through the displacement map directly. It might end up being more complicated than it seems, though.
I would recommend also supporting explicitly tessellated triangles (in addition to your textured displacement) so that you can get hard data on the difference between the two methods, both in terms of memory and in terms of performance. As I’m sure you already expect, there will be a considerable hit to traversal performance when stepping through mipmap levels and multiple sub-primitives in your intersection program.
You do get RTX ray-box intersections when you submit bounding boxes for your custom primitives. Is that what you mean about getting access? Or are you looking for a ray-box function call you can use inside your intersection program? OptiX does not provide an interface for a standalone hardware ray-box test. For that kind of thing I would recommend writing your own, tailored to exactly what you need.
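If it helps, a standalone slab test is only a few lines; a minimal sketch (the names here are mine, not an OptiX API):

```cpp
#include <algorithm>

// Classic slab test: returns true if the ray segment [tmin, tmax] hits the
// axis-aligned box [lo, hi]; on a hit, tmin/tmax are clipped to the overlap.
// inv_dir is 1/dir per component (precompute once per ray; IEEE infinities
// from zero direction components work out correctly).
bool intersect_aabb(const float orig[3], const float inv_dir[3],
                    const float lo[3], const float hi[3],
                    float& tmin, float& tmax) {
    for (int i = 0; i < 3; ++i) {
        float t0 = (lo[i] - orig[i]) * inv_dir[i];
        float t1 = (hi[i] - orig[i]) * inv_dir[i];
        if (t0 > t1) std::swap(t0, t1);
        tmin = std::max(tmin, t0);
        tmax = std::min(tmax, t1);
        if (tmax < tmin) return false;
    }
    return true;
}
```

Since your boxes are axis-aligned in tangent space with only the height varying per texel, you can likely specialise this further, e.g. hoisting the horizontal slabs out of the per-texel loop.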
CUDA is able to do the texture sampling and use mipmapped textures, though I don’t know off the top of my head if you’ll run into any issues sampling max-pooled mip levels. I’m certain it’s possible to do what you want if you write it yourself, but I don’t know what complications you’ll have if you want to use the CUDA texture API and get the best hardware support - if you need any filtering, for example. There are some texturing samples here; perhaps look at the “Bindless Texture” sample first: https://docs.nvidia.com/cuda/cuda-samples/index.html#basic-keyconcepts__texture
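If the hardware filtering paths turn out to be awkward for max-pooled levels, a manual nearest-neighbour lookup into your own mip chain is straightforward. A plain C++ sketch of the idea (on the device this would just be an array or texel fetch; the level layout is my assumption):

```cpp
#include <algorithm>
#include <vector>

// Point-sample level `lod` of a square max-mip pyramid stored as a vector of
// levels, level 0 being the full-resolution displacement map. No filtering:
// for conservative max heights you want the raw texel, not an interpolated one.
float sample_max_mip(const std::vector<std::vector<float>>& levels,
                     int base_size, int lod, float u, float v) {
    const int size = base_size >> lod;       // resolution of this level
    int x = static_cast<int>(u * size);
    int y = static_cast<int>(v * size);
    x = std::min(std::max(x, 0), size - 1);  // clamp to the level bounds
    y = std::min(std::max(y, 0), size - 1);
    return levels[lod][y * size + x];
}
```

Point sampling also sidesteps any question of how hardware trilinear filtering would blend between max-pooled levels.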