keeping track of multiple materials at a time

Hi everyone, I’m migrating a ray-tracing project to OptiX, and in this project I have meshes associated with up to two materials. One of the materials references the actual material of the mesh, and the other references the surrounding material. At the moment I’m associating GeometryInstances with the first Material only. Is there a way to add the second, “surrounding” material as a GI parameter?
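For clarity, this is roughly what I’d like to achieve (the variable name is just an example I made up):

[code]
// Host side: attach the surrounding material as a plain variable declared
// at GeometryInstance scope ("surroundingMaterialIndex" is a made-up name).
optix::GeometryInstance gi = context->createGeometryInstance();
gi->setGeometry(geometry);
gi->setMaterialCount(1);
gi->setMaterial(0, actualMaterial); // the first, "real" material
gi["surroundingMaterialIndex"]->setInt(surroundingIndex);

// Device side: hit programs would find it through the variable scoping
// (Program -> GeometryInstance -> Material -> Context).
rtDeclareVariable(int, surroundingMaterialIndex, , );
[/code]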

In addition to that, I need to keep track of the material the ray is currently traveling through. I thought of doing that within the ray payload, roughly as sketched below, but I don’t know if that’s possible.
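Something like this (placeholder names):

[code]
// The payload would carry the index of the material the ray is currently in.
struct PerRayData
{
    float3 result;               // shaded result for this ray
    int    currentMaterialIndex; // -1 = the ray is outside any volume
};

rtDeclareVariable(PerRayData, thePrd, rtPayload, );
[/code]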

Thanks in advance!

Please have a look at the OptiX Advanced Samples on GitHub.
All links here: [url]https://devtalk.nvidia.com/default/topic/998546/optix/optix-advanced-samples-on-github/[/url]

The material system I implemented inside the OptiX Introduction examples only needs a single index to define a material behaviour. The renderer supports a small material stack to handle nested volumes, which keeps track of the current and surrounding volumes to calculate the effective index of refraction during transmissions and to do proper absorption when hitting anything.
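As a rough sketch of that idea (simplified placeholder names, not the actual code from the examples), the per-ray data carries a small fixed-size stack that is pushed when a transmission enters a volume and popped when it leaves:

[code]
#define MATERIAL_STACK_SIZE 4 // maximum supported nesting depth of volumes

struct MaterialStackEntry
{
    float3 absorption; // absorption coefficient of the volume
    float  ior;        // index of refraction of the volume
};

struct PerRayData
{
    MaterialStackEntry stack[MATERIAL_STACK_SIZE];
    int    stackIdx; // top of the stack; -1 = ray travels in air/vacuum
    float3 radiance; // plus whatever else the integrator needs
};
[/code]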

This is the material stack used for the proper IOR and absorption handling of nested materials.
(That renderer doesn’t support volumes with coplanar or overlapping geometry, only disjoint and fully contained volumes.)
raygeneration.cu at master in nvpro-samples/optix_advanced_samples on GitHub
This is where the current volume information is used to calculate the effective eta:
bsdf_specular_reflection_transmission.cu at master in nvpro-samples/optix_advanced_samples on GitHub
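The gist of that calculation, sketched against the stack above (hypothetical names; the real code is in the file linked above):

[code]
// Effective eta at a transmission event: the ratio of the IOR of the volume
// the ray currently travels in to the IOR of the volume it is entering.
__device__ float effectiveEta(const PerRayData& prd, const float iorEntered)
{
    const float iorCurrent = (prd.stackIdx >= 0) ? prd.stack[prd.stackIdx].ior
                                                 : 1.0f; // air/vacuum
    return iorCurrent / iorEntered; // feeds into refract()
}
[/code]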

Thanks for the quick response! I’ll have a look at those samples now. One more question: is it possible to do ray marching in OptiX along with the typical ray casting?

Yes, you could trace rays to do ray marching in OptiX. But if the problem is to step through axis-aligned bricks of volume density data in one direction and there is no other geometry inside the scene, then OptiX would be slower than implementing that marching yourself natively in a compute API: you know where the data resides, so you could calculate the ray’s entry and exit points yourself and march through the volume data instead of shooting rays in OptiX.
If you used OptiX only to find the entry and exit points of these bricks, that would be super fast, and the marching could be done in the ray generation program, which is more of a compute kernel in that case.
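A sketch of what that marching could look like as a device function called from the ray generation program, once rtTrace has delivered the segment (all names hypothetical; sampleDensity() stands in for your own volume lookup, e.g. a 3D texture fetch):

[code]
#include <optix.h>
#include <optixu/optixu_math_namespace.h>
using namespace optix;

// Placeholder for whatever density lookup the volume uses (e.g. rtTex3D).
__device__ __forceinline__ float sampleDensity(const float3& p) { return 0.1f; }

// March the segment [tEntry, tExit] inside the ray generation program,
// accumulating Beer-Lambert transmittance along the way.
__device__ float marchTransmittance(const float3& origin, const float3& direction,
                                    const float tEntry, const float tExit,
                                    const float stepSize)
{
    float transmittance = 1.0f;
    for (float t = tEntry; t < tExit; t += stepSize)
    {
        const float3 p = origin + t * direction;             // sample position
        transmittance *= expf(-sampleDensity(p) * stepSize); // absorption step
    }
    return transmittance;
}
[/code]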

Great, I’ll try and look into it. Thank you very much for your answer! I’ll try to get those entry and exit points of every surface with OptiX and then do the marching on the ray gen.

If there is no other geometry inside the scene than axis-aligned blocks of density data in a regular grid, I would really recommend using native CUDA instead.
The calculations to find the intersections with an axis-aligned box are the same you would otherwise need to write in an intersection program (when not representing the volume’s extents with triangles); see the sketch below.
There would be no issues with coplanar faces, because you always calculate entry and exit points for a complete box and not per hit. You wouldn’t need to care about watertight intersection tests (which OptiX only provides for GeometryTriangles), and you can do all sorts of warp-sized calculations and use shared memory inside native CUDA, neither of which you can apply inside OptiX kernels.
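The slab test, as it would look in plain CUDA (a sketch; invDir is the precomputed reciprocal of the ray direction):

[code]
#include <cuda_runtime.h>

// Ray vs. axis-aligned box via the classic slab method; the same math an
// OptiX intersection program would use. Returns the entry and exit distances.
__device__ bool intersectAABB(const float3 o, const float3 invDir,
                              const float3 bmin, const float3 bmax,
                              float& tEntry, float& tExit)
{
    const float tx0 = (bmin.x - o.x) * invDir.x, tx1 = (bmax.x - o.x) * invDir.x;
    const float ty0 = (bmin.y - o.y) * invDir.y, ty1 = (bmax.y - o.y) * invDir.y;
    const float tz0 = (bmin.z - o.z) * invDir.z, tz1 = (bmax.z - o.z) * invDir.z;

    tEntry = fmaxf(fmaxf(fminf(tx0, tx1), fminf(ty0, ty1)), fminf(tz0, tz1));
    tExit  = fminf(fminf(fmaxf(tx0, tx1), fmaxf(ty0, ty1)), fmaxf(tz0, tz1));
    return tEntry <= tExit && tExit > 0.0f; // hit in front of the ray origin
}
[/code]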

I’m actually using GeometryTriangles everywhere, since all my Geometries are composed solely of triangles, so I don’t have intersection or bounding box programs. Do you still think that I should use native CUDA, given that?

I can only give recommendations. Finding the intersections in a regular grid of axis-aligned boxes is the least of the problems in a pure density-volume-based rendering algorithm.
I don’t know what would work best for your specific use case. As said, if there is more geometry in the scene than just the volume data boxes, maybe in a future version of your application, using a ray tracer will come in handy.
You could also go hybrid: determine the intersections with the ray tracer and, once you have the ray segments to march, do that processing in CUDA. There are so many possibilities.
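A minimal sketch of that handoff (hypothetical names): the OptiX launch writes one [tEntry, tExit] segment per ray into a buffer, then a native CUDA kernel, which is free to use shared memory and warp-level tricks, marches the segments:

[code]
#include <cuda_runtime.h>

// Placeholder for your own density lookup (e.g. a 3D texture fetch).
__device__ float sampleDensity(float x, float y, float z) { return 0.1f; }

// One thread per ray: march the segment the ray tracer found.
__global__ void marchSegments(const float2* segments,   // x = tEntry, y = tExit
                              const float3* origins,
                              const float3* directions,
                              float*        transmittance,
                              const int numRays, const float stepSize)
{
    const int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numRays) return;

    float T = 1.0f;
    for (float t = segments[i].x; t < segments[i].y; t += stepSize)
    {
        const float px = origins[i].x + t * directions[i].x;
        const float py = origins[i].y + t * directions[i].y;
        const float pz = origins[i].z + t * directions[i].z;
        T *= expf(-sampleDensity(px, py, pz) * stepSize); // Beer-Lambert
    }
    transmittance[i] = T;
}
[/code]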

If your scene isn’t the typical medical data case but uses irregular geometry, then you’d need a ray tracer anyway. In that case the volume absorption or volume scattering calculations can happen inside the ray generation program, or, with some more overhead, maybe even in the closest hit program when entering a volume, but that adds recursions you possibly want to avoid. Volume scattering is also not really ray marching; it is more like a random walk through the scattering volume according to a volume distribution function, if that is your use case. But there are many different algorithms to do that as well.
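To illustrate the difference, one step of such a random walk could look like this (a sketch assuming a homogeneous medium; rng() and samplePhaseFunction() are hypothetical stand-ins for your own helpers):

[code]
#include <optixu/optixu_math_namespace.h>
using namespace optix;

__device__ float  rng(unsigned int& seed);                                   // uniform in [0, 1)
__device__ float3 samplePhaseFunction(const float3& wi, unsigned int& seed); // new direction

// One random walk step: sample a free-flight distance from the transmittance
// and compare it against the distance to the next surface hit.
__device__ void randomWalkStep(float3& position, float3& direction,
                               const float tSurface, const float sigma_t,
                               unsigned int& seed)
{
    const float tScatter = -logf(1.0f - rng(seed)) / sigma_t; // sampled distance

    if (tScatter < tSurface) // scattering event happens inside the volume
    {
        position  = position + tScatter * direction;
        direction = samplePhaseFunction(direction, seed);
    }
    // else: the walk leaves the volume at the surface hit at tSurface
}
[/code]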

That’s OK, I really appreciate all your advice! I think I’ll try to go for the hybrid approach and see how it works. Again, thanks!!