OptiX 6.0 - Process each object one by one

Hello,

I have recently implemented a surface renderer with OptiX to visualize the VTK-HyperTreeGrid data structure.
This is a tree-based AMR structure, a grid of octrees, which represents a voxelized 3D scene.

To do this, I create one object per octree and perform a DFS to find the nearest visible node whenever the object's bounding box is intersected.
I then use the closest-hit program to shade that node.
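
To make the setup a bit more concrete, here is a simplified sketch of the kind of intersection program I mean. It is not my actual code: the flattened node layout, the buffer name and the attribute are only placeholders, and the DFS uses an explicit stack.

```cpp
#include <optix_world.h>

using namespace optix;

// Illustrative, flattened octree node layout (not the actual HyperTreeGrid layout).
struct OctreeNode
{
    float3 bboxMin;
    float3 bboxMax;
    int    children[8];   // indices into octree_nodes, -1 if absent
    int    isLeaf;        // non-zero for a filled leaf voxel
};

rtBuffer<OctreeNode> octree_nodes;                       // flattened octree of this object
rtDeclareVariable(optix::Ray, ray, rtCurrentRay, );
rtDeclareVariable(int, hit_node, attribute hit_node, );  // consumed by the closest-hit program

// Standard slab test against the current ray interval.
static __device__ bool hitAabb(const float3& lo, const float3& hi, float& tNear)
{
    const float3 t0   = (lo - ray.origin) / ray.direction;
    const float3 t1   = (hi - ray.origin) / ray.direction;
    const float3 tmin = fminf(t0, t1);
    const float3 tmax = fmaxf(t0, t1);
    const float  tN   = fmaxf(fmaxf(tmin.x, tmin.y), fmaxf(tmin.z, ray.tmin));
    const float  tF   = fminf(fminf(tmax.x, tmax.y), fminf(tmax.z, ray.tmax));
    tNear = tN;
    return tN <= tF;
}

// One custom primitive per octree: walk the whole tree with an explicit stack
// and report every intersected leaf; OptiX keeps the closest one.
RT_PROGRAM void octree_intersect(int /*primIdx*/)
{
    int stack[64];                 // assumes a limited octree depth
    int top = 0;
    stack[top++] = 0;              // root node index

    while (top > 0)
    {
        const int idx = stack[--top];
        const OctreeNode& node = octree_nodes[idx];

        float t;
        if (!hitAabb(node.bboxMin, node.bboxMax, t))
            continue;

        if (node.isLeaf)
        {
            if (rtPotentialIntersection(t))  // t inside the current valid interval?
            {
                hit_node = idx;
                rtReportIntersection(0);     // material index 0
            }
        }
        else
        {
            for (int i = 0; i < 8; ++i)      // children are not pushed front-to-back
                if (node.children[i] >= 0)
                    stack[top++] = node.children[i];
        }
    }
}
```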

Here is my problem: a DFS on the GPU (performed by a single thread per object) is quite expensive, and OptiX waits for all rtReportIntersection calls (or misses) from the intersected bounding boxes before determining the closest intersection and shading it. So a full DFS is performed for every object intersected by every ray.

Here is my question: is there a way to process objects one by one, from the nearest bounding box to the farthest, and to stop the traversal and return the result as soon as an intersection is found?

Best regards,
Antoine Roche

Not sure I get what the result should be.
Could you simply define only the filled voxels as primitives in an OptiX geometry acceleration structure (GAS), maybe per object under an instance acceleration structure (IAS) if required, let’s say by defining the surface between two voxel volumes as two triangles, and then shoot rays at that to find the nearest intersection?
That way both the BVH traversal and the triangle intersection would run fully in hardware on RTX boards.
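
For illustration only, a host-side sketch of how such a triangle soup could be generated from a dense per-object occupancy grid. The grid layout, the buildSurfaceTriangles name and the winding are assumptions, not a specific recommendation.

```cpp
#include <cstdint>
#include <vector>
#include <vector_types.h>       // float3
#include <vector_functions.h>   // make_float3 (host/device)

static void emitQuad(std::vector<float3>& v, float3 a, float3 b, float3 c, float3 d)
{
    // One exposed quad face -> two triangles (winding is illustrative only).
    v.push_back(a); v.push_back(b); v.push_back(c);
    v.push_back(a); v.push_back(c); v.push_back(d);
}

// Emit two triangles for every filled-voxel face bordering an empty cell.
// The resulting vertex buffer could be fed to an OptiX triangle GAS.
std::vector<float3> buildSurfaceTriangles(const std::vector<uint8_t>& filled,
                                          int nx, int ny, int nz, float s)
{
    auto at = [&](int x, int y, int z) -> bool {
        if (x < 0 || y < 0 || z < 0 || x >= nx || y >= ny || z >= nz) return false;
        return filled[(z * ny + y) * nx + x] != 0;
    };

    std::vector<float3> verts;
    for (int z = 0; z < nz; ++z)
      for (int y = 0; y < ny; ++y)
        for (int x = 0; x < nx; ++x)
        {
            if (!at(x, y, z)) continue;
            const float x0 = x * s, x1 = x0 + s;
            const float y0 = y * s, y1 = y0 + s;
            const float z0 = z * s, z1 = z0 + s;

            if (!at(x - 1, y, z)) emitQuad(verts, make_float3(x0,y0,z0), make_float3(x0,y0,z1),
                                                  make_float3(x0,y1,z1), make_float3(x0,y1,z0));
            if (!at(x + 1, y, z)) emitQuad(verts, make_float3(x1,y0,z0), make_float3(x1,y1,z0),
                                                  make_float3(x1,y1,z1), make_float3(x1,y0,z1));
            if (!at(x, y - 1, z)) emitQuad(verts, make_float3(x0,y0,z0), make_float3(x1,y0,z0),
                                                  make_float3(x1,y0,z1), make_float3(x0,y0,z1));
            if (!at(x, y + 1, z)) emitQuad(verts, make_float3(x0,y1,z0), make_float3(x0,y1,z1),
                                                  make_float3(x1,y1,z1), make_float3(x1,y1,z0));
            if (!at(x, y, z - 1)) emitQuad(verts, make_float3(x0,y0,z0), make_float3(x0,y1,z0),
                                                  make_float3(x1,y1,z0), make_float3(x1,y0,z0));
            if (!at(x, y, z + 1)) emitQuad(verts, make_float3(x0,y0,z1), make_float3(x1,y0,z1),
                                                  make_float3(x1,y1,z1), make_float3(x0,y1,z1));
        }
    return verts;
}
```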

See previous discussions about voxels here:
https://forums.developer.nvidia.com/t/best-practice-voxels-use-triangles-or-custom-geometry-for-calculations-not-rendering/115230
https://forums.developer.nvidia.com/t/using-rtx-acceleration-for-voxel-tracing/74007

Thank you for your answer.
The goal of my renderer is to use as little memory as possible to render a scene, because the scenes I try to render are massive and cannot fit in GPU memory. In that case I can't use triangles, which would make the data even larger. Moreover, I would like to turn the surface renderer into a volume renderer later.
That's why I use a tree-based AMR data structure instead of triangles.

Your answer made me think of another question: is OptiX suited to this kind of purpose? I must admit that I didn't look for the “best” API for my use case before diving into OptiX. Am I going in the wrong direction?

If there is actually no real geometry you’d need to intersect with, there isn’t really a need for a ray tracer specialized in only that.

If it should become a volume renderer and it's based on octrees, it's most likely easier to use a ray marching approach.
Given the natural organization into a hierarchical grid, you could implement an out-of-core approach over a working set of individual volume bricks that fits into your GPU memory, loading only the required bricks and marching through those.
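
Just to illustrate the per-brick step, a minimal CUDA sketch; the brick resolution, the density layout and the absorption-only model are arbitrary assumptions, and the caller would chain bricks along the ray and load them on demand.

```cpp
#include <vector_types.h>
#include <vector_functions.h>

#define BRICK_DIM 64   // assumed brick resolution

// Nearest-neighbor lookup in a BRICK_DIM^3 density brick, p in [0,1)^3.
__device__ float sampleBrick(const float* brick, float3 p)
{
    const int x = max(0, min(int(p.x * BRICK_DIM), BRICK_DIM - 1));
    const int y = max(0, min(int(p.y * BRICK_DIM), BRICK_DIM - 1));
    const int z = max(0, min(int(p.z * BRICK_DIM), BRICK_DIM - 1));
    return brick[(z * BRICK_DIM + y) * BRICK_DIM + x];
}

// Fixed-step march across one resident brick, accumulating Beer-Lambert
// absorption only (no scattering). Returns the transmittance of this segment.
__device__ float marchBrick(const float* brick,
                            float3 brickMin, float3 brickMax,  // world-space brick bounds
                            float3 origin, float3 dir,         // ray
                            float tEnter, float tExit,         // ray interval inside the brick
                            float stepSize, float sigma)       // step length, extinction scale
{
    float transmittance = 1.0f;
    for (float t = tEnter; t < tExit; t += stepSize)
    {
        // World-space sample position, then normalized brick-local coordinates.
        const float3 p  = make_float3(origin.x + t * dir.x,
                                      origin.y + t * dir.y,
                                      origin.z + t * dir.z);
        const float3 pl = make_float3((p.x - brickMin.x) / (brickMax.x - brickMin.x),
                                      (p.y - brickMin.y) / (brickMax.y - brickMin.y),
                                      (p.z - brickMin.z) / (brickMax.z - brickMin.z));
        transmittance *= expf(-sigma * sampleBrick(brick, pl) * stepSize);
    }
    return transmittance;
}
```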

The complexity of that depends on how the rays are scattered inside the volume. If it should render real volume scattering with all the frills (absorption, multi-scattering, in-scattering, emission, phase functions), the secondary rays will quickly diverge and require neighboring volume data all over the place.

Implementing this as an out-of-core algorithm is quite a challenge. You would need to limit the number of rays in flight to what the maximum working set can handle in the worst case, which is when every ray needs a different brick. So if there are to be many rays, the bricks would need to be smaller.
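
As a purely illustrative back-of-the-envelope example (the numbers are arbitrary): with an 8 GiB working-set budget and 128^3 bricks stored as float (8 MiB per brick), at most about 1024 bricks can be resident at once, so in the worst case only roughly a thousand rays with pairwise different brick requirements could be serviced without reloading. Halving the brick edge length to 64^3 (1 MiB per brick) raises that limit eightfold.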

This would be possible with a lot of work in native CUDA.
Doing something like that with OptiX 7 would be even harder; not saying it's impossible, but it adds another level of complexity, because it effectively means handling out-of-core geometry acceleration structures. I wouldn't even start thinking about doing that with OptiX 6, though.

I really appreciate your answer, thank you.
I will rethink the way I render the data structure, following your advice.

Best regards,
Antoine Roche

I have just found NVIDIA GVDB Voxels, a framework for sparse voxel rendering which uses OptiX.
I will do some research on it.