Volumetric scattering handling and 3D accumulation buffer

My group does research on physically accurate photon Monte Carlo simulations for biomedical optics and tissue imaging. We’ve written open-source CUDA and OpenCL codes (http://mcx.space, https://github.com/fangq/mcx - Monte Carlo eXtreme (MCX), a GPU-accelerated photon transport simulator) to perform fast 3D ray tracing of photon packets in voxelated and tetrahedral mesh spaces. The code outputs 3D accumulated light intensity buffers as well as individual photon data when photons are captured by a detector.

We are currently considering re-implementing our CUDA code in OptiX to take advantage of some of the hardware resources, such as RT cores. I have only taken a quick look at some of the example codes so far, and would like to ask two basic questions to help plan this project:

  1. The rays in my simulation not only change trajectory when hitting a surface, but also scatter randomly along their paths, following known distributions of scattering lengths and angles. Which program should handle scattering between objects? Can one specify a distribution of scattering lengths and a scattering phase function?

  2. My ray tracer needs to output a 3D light intensity array (4D if a time-resolved solution is needed) by accumulating the photon energy losses along each path. I know OptiX supports a customized per-ray data payload, so I suppose I can add a “weight” variable to track the remaining energy of a photon packet, but which program should I tap into to accumulate the energy loss in the voxels/tetrahedra along the photon’s path?

My reading on OptiX is very limited as of now, so forgive me if these questions are too basic.

I would also appreciate any examples that are related or close to my application. Thanks.

Welcome @FangQ,

I think it might help to study the OptiX programming model a bit. Here’s the section in the OptiX programming guide: https://raytracing-docs.nvidia.com/optix7/guide/index.html#basic_concepts_and_definitions#program-and-data-model

For the most part you can put any code you like in any of these programs, and decide where it makes the most sense for your application. There are a few constraints on which OptiX device functions you can call in each program type. These might affect your choice of where to put certain code. https://raytracing-docs.nvidia.com/optix7/guide/index.html#device_side_functions#device-side-functions

For #1, scattering along paths, you would probably do this in your raygen program. You can decide on your ray direction and length based on your distribution, then trace a short ray. If it misses everything in the scene, then you can start a new ray from the endpoint of the previous ray.

For #2, accumulating energy loss - you can choose where it makes the most sense. You might, for example, manage energy loss in several different places. You could store the energy loss of one path segment in your ray payload during the closest hit shader, and then accumulate the energy losses for the entire path and store the result to global memory in your raygen program.

It sounds to me like the optixPathTracer sample in the OptiX SDK might be a good one for you to study.
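To make that concrete, here is a very rough, untested sketch of what such a raygen-driven scattering loop could look like. The `Params` struct, the RNG, the homogeneous mu_s/mu_a constants, and the 2-slot payload convention are all placeholders invented for illustration (they are not SDK or MCX symbols), and the surface-hit branch is left as a comment:

```cpp
#include <optix.h>

struct Params
{
    OptixTraversableHandle handle;
    float3*                start_pos;   // initial photon positions
    float3*                start_dir;   // initial photon directions
    float*                 fluence;     // flattened 3D accumulation buffer
};
extern "C" __constant__ Params params;

// Tiny LCG just to keep the sketch self-contained; substitute your own RNG.
static __forceinline__ __device__ float rand_uniform( unsigned int& s )
{
    s = s * 1664525u + 1013904223u;
    return ( s >> 8 ) * ( 1.0f / 16777216.0f );
}

extern "C" __global__ void __raygen__photon()
{
    const unsigned int tid = optixGetLaunchIndex().x;
    const float mus = 10.0f, mua = 0.05f;   // placeholder homogeneous coefficients [1/mm]

    float3 pos    = params.start_pos[tid];
    float3 dir    = params.start_dir[tid];
    float  weight = 1.0f;                   // remaining packet energy
    unsigned int rng = tid * 9781u + 1u;

    for( int seg = 0; seg < 1000 && weight > 1e-4f; ++seg )
    {
        // Sample the next scattering length (exponential distribution here).
        const float seg_len = -logf( fmaxf( rand_uniform( rng ), 1e-12f ) ) / mus;

        unsigned int p_weight = __float_as_uint( weight );  // payload 0: packet weight
        unsigned int p_hit    = 0u;                         // payload 1: set by closest-hit

        optixTrace( params.handle, pos, dir,
                    0.0f, seg_len,                          // tmin, tmax = scattering length
                    0.0f, OptixVisibilityMask( 255 ),
                    OPTIX_RAY_FLAG_NONE,
                    0, 1, 0,                                // SBT offset / stride / miss index
                    p_weight, p_hit );

        weight = __uint_as_float( p_weight );

        if( p_hit )
        {
            // A surface was hit within the scattering length: the closest-hit
            // program would handle reflection/refraction and report the new
            // position/direction through additional payload slots (not shown).
            break;
        }

        // Missed everything: advance to the segment endpoint, attenuate the
        // packet, and sample a new direction from the phase function.
        pos = make_float3( pos.x + seg_len * dir.x,
                           pos.y + seg_len * dir.y,
                           pos.z + seg_len * dir.z );

        const float absorbed = weight * ( 1.0f - expf( -mua * seg_len ) );
        weight -= absorbed;
        // ... atomicAdd `absorbed` into params.fluence along this segment here.

        const float ct  = 1.0f - 2.0f * rand_uniform( rng );     // isotropic phase function
        const float st  = sqrtf( fmaxf( 0.0f, 1.0f - ct * ct ) );
        const float phi = 6.2831853f * rand_uniform( rng );
        dir = make_float3( st * cosf( phi ), st * sinf( phi ), ct );
    }
}
```

The important part is just the pattern: sample a scattering length, call optixTrace with tmax set to that length, and treat a miss as "keep propagating inside the medium".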


David.

@dhart, thank you so much for the pointers - that makes perfect sense now. Basically, each scattering path segment is treated as a new ray, and each ray can reflect/refract when intersecting triangles in the domain. For energy accumulation, I suppose I can back-track the losses in the closest-hit and miss programs, and distribute the total energy loss over the space between the current position and the previous one.
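For example, just to check my understanding (the grid metadata, the fixed sub-step count, and the equal split of the loss along the segment are simplifications I made up for this sketch), the per-segment deposition could look roughly like this:

```cpp
// Rough sketch: spread one segment's energy loss over the voxels it traverses.
// Assumes the grid origin is at (0,0,0); `inv_dx` is the inverse voxel size.
__device__ void deposit_segment( float* fluence, uint3 dims, float3 inv_dx,
                                 float3 p0, float3 p1, float energy_loss )
{
    const int   nstep = 16;                   // crude fixed sub-sampling per segment
    const float share = energy_loss / nstep;
    for( int i = 0; i < nstep; ++i )
    {
        const float  t = ( i + 0.5f ) / nstep;
        const float3 p = make_float3( p0.x + t * ( p1.x - p0.x ),
                                      p0.y + t * ( p1.y - p0.y ),
                                      p0.z + t * ( p1.z - p0.z ) );
        const int ix = (int)( p.x * inv_dx.x );
        const int iy = (int)( p.y * inv_dx.y );
        const int iz = (int)( p.z * inv_dx.z );
        if( ix < 0 || iy < 0 || iz < 0 ||
            ix >= (int)dims.x || iy >= (int)dims.y || iz >= (int)dims.z )
            continue;
        atomicAdd( &fluence[ ( iz * dims.y + iy ) * dims.x + ix ], share );
    }
}
```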

I would like to ask about building acceleration structures - my geometry is somewhat different from a typical triangular surface scene. My ray tracer deals with a voxelated grid or a tetrahedral mesh. In either case, at each step of the ray casting, a ray only needs to test intersections with the 6 rectangular facets of the voxel that encloses it, or the 4 triangular facets of the enclosing tetrahedron. I always keep track of the index of the voxel or tetrahedron that encloses the current position.

In the case of a tetrahedral mesh, for example, my element list may look like this (picking just two elements as examples):

1, 2, 3, 5  # nodes 1,2,3 and 5 make the first tet
2, 3, 7, 5  # nodes 2,3,7 and 5 make the second tet
...

Each row in the above element table represents a tetrahedron, listing the indices of the 4 nodes that belong to that tet.

When my ray’s starting position is in the first element, I only need to perform ray-triangle intersection tests against the following 4 triangles:

1,2,3
1,2,5
1,3,5
2,3,5

So, for every step, no more than 4 ray-triangle tests are needed. The mesh is predetermined, and so is the face list per element.
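In CUDA terms, the data I already have is essentially just two flat arrays, roughly:

```cpp
// Roughly what I already store on the GPU (0-based indices in the real code)
struct TetMesh
{
    float3* nodes;   // node positions
    int4*   elem;    // elem[e] = indices of the 4 nodes of tet e;
                     // the 4 facets of tet e are the 4 node-triples of elem[e]
};
```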

Is there an OptiX interface that lets me specify this type of data structure as an acceleration structure, and restrict any-hit tests to only the facets of the “current” tet?

Thanks again.

For voxel / tet meshes there are a couple of very different approaches you might take in OptiX.

One approach would be to write a custom intersection program for a voxel or tet – let’s call it a cell, where cell means either voxel or tet. This means that the cell is considered a primitive, and you would provide your own bounds for the cells. The advantage of this is you have complete control over what it means to intersect with a cell, as well as complete control over the data you use to define a cell. You can, for example, use your own compression & indexing schemes for the nodes, and you can restrict your hits to only the facets of the current cell. The downside is that it might not be as fast as using the built-in triangle intersection.
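As a rough, untested illustration of this first approach (the `Params` layout and attribute convention are made up for this sketch, and the triangle test is just standard Moller-Trumbore, which you could swap for your own routine), a per-tet intersection program might look like:

```cpp
#include <optix.h>

struct Params
{
    float3* nodes;      // node positions
    int4*   elements;   // 4 node indices per tet (one custom primitive per tet)
};
extern "C" __constant__ Params params;

static __forceinline__ __device__ float3 f3sub( float3 a, float3 b )
{ return make_float3( a.x - b.x, a.y - b.y, a.z - b.z ); }
static __forceinline__ __device__ float3 f3cross( float3 a, float3 b )
{ return make_float3( a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x ); }
static __forceinline__ __device__ float f3dot( float3 a, float3 b )
{ return a.x*b.x + a.y*b.y + a.z*b.z; }

// Standard Moller-Trumbore; returns the hit distance, or -1 for a miss.
static __device__ float intersect_triangle( float3 o, float3 d, float3 v0, float3 v1, float3 v2 )
{
    const float3 e1 = f3sub( v1, v0 ), e2 = f3sub( v2, v0 );
    const float3 p  = f3cross( d, e2 );
    const float  det = f3dot( e1, p );
    if( fabsf( det ) < 1e-9f ) return -1.0f;
    const float  inv = 1.0f / det;
    const float3 s  = f3sub( o, v0 );
    const float  u  = f3dot( s, p ) * inv;
    if( u < 0.0f || u > 1.0f ) return -1.0f;
    const float3 q  = f3cross( s, e1 );
    const float  v  = f3dot( d, q ) * inv;
    if( v < 0.0f || u + v > 1.0f ) return -1.0f;
    return f3dot( e2, q ) * inv;
}

// One custom primitive == one tet. Its AABB is supplied at AS build time via
// OptixBuildInputCustomPrimitiveArray::aabbBuffers; only the 4 facets of this
// cell are ever tested here.
extern "C" __global__ void __intersection__tet()
{
    const unsigned int cell = optixGetPrimitiveIndex();
    const int4   e  = params.elements[cell];
    const float3 n0 = params.nodes[e.x], n1 = params.nodes[e.y];
    const float3 n2 = params.nodes[e.z], n3 = params.nodes[e.w];
    const float3 o  = optixGetObjectRayOrigin();
    const float3 d  = optixGetObjectRayDirection();

    // The 4 facets of the tet, in a fixed local order
    const float3 tri[4][3] = { { n0, n1, n2 }, { n0, n1, n3 }, { n0, n2, n3 }, { n1, n2, n3 } };

    float best = optixGetRayTmax();
    int   face = -1;
    for( int f = 0; f < 4; ++f )
    {
        const float t = intersect_triangle( o, d, tri[f][0], tri[f][1], tri[f][2] );
        if( t > optixGetRayTmin() && t < best ) { best = t; face = f; }
    }
    if( face >= 0 )
        optixReportIntersection( best, 0u, (unsigned int)face );  // pass the local face index as attribute 0
}
```

The closest-hit program could then recover the cell index with optixGetPrimitiveIndex() and the local face via optixGetAttribute_0().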

Another approach would be to tessellate your cells, and provide the volume mesh to OptiX as an explicit triangle mesh. This way you could take advantage of the RTX hardware accelerated triangle intersections. This is likely to consume more memory than the custom intersection approach above, but could potentially give you much better performance.

In this case, you might not actually want to try to restrict triangle testing to the “current” cell; you might want to just let OptiX trace through the individual faces as fast as it can, and figure out which cell you hit later. For example, for best performance, you might consider not using an anyhit shader at all, but instead using your closesthit shader to convert an intersected triangle ID into your cell ID. (And you could consider whether you want to use single-sided triangles, which would map directly to a cell without any logic, or double-sided triangles, where you would need to compare the ray direction to the triangle normal in order to determine which cell was hit.)
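A minimal sketch of that last idea, assuming you precompute a per-triangle face-to-cell table offline (that table, the homogeneous absorption coefficient, and the payload convention are inventions for illustration, not part of the SDK):

```cpp
#include <optix.h>

struct Params
{
    int2*  face_to_cell;   // per triangle: the two adjacent cells (-1 = outside)
    float* cell_energy;    // per-cell accumulation buffer
};
extern "C" __constant__ Params params;

extern "C" __global__ void __closesthit__cell_face()
{
    const unsigned int tri   = optixGetPrimitiveIndex();
    const int2         cells = params.face_to_cell[tri];

    // Double-sided triangles: the hit side tells you which of the two adjacent
    // cells the ray was traveling through; the exact convention is up to you.
    const int cell = optixIsFrontFaceHit() ? cells.x : cells.y;

    // For simplicity this deposits the whole segment's loss into that one cell.
    const float t     = optixGetRayTmax();                        // distance to the hit
    const float w_in  = __uint_as_float( optixGetPayload_0() );   // payload 0: packet weight
    const float w_out = w_in * expf( -0.05f * t );                // placeholder homogeneous mu_a

    if( cell >= 0 )
        atomicAdd( &params.cell_energy[cell], w_in - w_out );

    optixSetPayload_0( __float_as_uint( w_out ) );        // updated weight back to raygen
    optixSetPayload_1( (unsigned int)( cell + 1 ) );      // payload 1: hit cell (0 = none)
}
```

With single-sided triangles you could drop the front-face check and store a single cell index per triangle instead.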

There might even be other options I haven’t imagined. I guess my main message is that you probably can do what you want; there are just some tradeoffs you’ll want to explore at a high level before going too far down the road.


David.