Changing materials from within shaders in OptiX 6

Using the old intersection programs, it is possible to pick the intersection material based on arbitrary parameters:

rtReportIntersection(someFunction() ? mat_a : mat_b);

For instance, I often want to select a material based on the ray_type.

What is the equivalent when using RTgeometrytriangles? Can a material be changed in an attribute program? Or by some combination of ray flags? Or would it be necessary to have a separate geometry instance for each ray type?

Not sure what your intended use case is with that approach, but I would not architect a renderer that way.

1.) any_hit and closest_hit programs are per ray type already, so they behave differently if you have different program implementations per ray type.

Doing different things inside the intersection or attribute program "per ray type" is not going to work. That would require identifying the ray type dynamically, which would need to be done with data on the rtPayload, which in turn is not accessible in the intersection or attribute program domains.
EDIT: I missed the rtCurrentRay field ray_type. See below.

2.) Depending on your material system, it could be possible to have only one Material per GeometryInstance while still supporting arbitrarily many BSDFs.

Take a look at my OptiX Introduction examples which demonstrate how to do that for a simple case.
Links to the presentation video, slides and source code are here: [url]https://devtalk.nvidia.com/default/topic/998546/optix/optix-advanced-samples-on-github/[/url]
The block diagram at the end of the README page is really the whole renderer's architecture.
(The OptiX Advanced Samples are not updated to support OptiX 6.0.0, yet. The CMake version detection needs changes due to the new library naming scheme using the fully qualified major.minor.micro version now.)

A single material index selects the BSDF and its parameters. While I store that material index at the GeometryInstance to reduce the number of unique Material nodes in the whole program to two (not counting the area light), that material index could be calculated any way you like, as late as required, inside the single any_hit or closest_hit program per ray type, or, if you're adventurous, even per ray.

This would completely remove the need for rtGeometryTrianglesSetMaterialIndices required to support multiple materials with GeometryTriangles. I’ve never used more than one Material per GeometryInstance.
From experience with my MDL-capable path tracer implementation using the same mechanisms, I can say that this is also applicable to much more complex material systems than shown in the OptiX Introduction samples.
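To illustrate the idea, here is a minimal sketch of a single shared closest_hit program dispatching to per-BSDF bindless callable programs via a material index. The struct layouts and the names sysMaterialParameters, sysSampleBSDF and parMaterialIndex are hypothetical illustrations, not taken from the OptiX Introduction samples verbatim:

```cuda
#include <optix.h>
#include <optixu/optixu_math_namespace.h>

struct PerRayData
{
  optix::float3 radiance;
  // ... whatever else the integrator needs
};

struct MaterialParameter
{
  int           indexBSDF; // selects a bindless callable program below
  optix::float3 albedo;
};

rtDeclareVariable(PerRayData, thePrd, rtPayload, );
rtDeclareVariable(int, parMaterialIndex, , ); // stored per GeometryInstance

// One entry per material index; one callable program per BSDF implementation.
rtBuffer<MaterialParameter> sysMaterialParameters;
rtBuffer< rtCallableProgramId<void(MaterialParameter const&, PerRayData&)> > sysSampleBSDF;

// A single closest_hit program shared by all materials for this ray type.
RT_PROGRAM void closest_hit_radiance()
{
  // The material index could also be computed right here (e.g. from
  // attributes or the hit distance) instead of read from a variable.
  MaterialParameter const& parameters = sysMaterialParameters[parMaterialIndex];
  sysSampleBSDF[parameters.indexBSDF](parameters, thePrd);
  // Execution re-converges here after the possibly divergent callable.
}
```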

Sidenote from your other thread:
Please do not use variables for the ray type IDs. Replace them with hardcoded defines since they should never change.
Unfortunately the OptiX SDK examples did this unnecessarily in the past, and especially in OptiX 6.0.0 it should be avoided.
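For example, instead of rtDeclareVariable'd ray type IDs, a pair of defines shared between host and device code is enough (the values here are just illustrative choices):

```cuda
// Hardcoded ray type IDs, shared by host and device code.
#define RAY_TYPE_RADIANCE 0
#define RAY_TYPE_SHADOW   1
```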

I could use rtMaterialSet*HitProgram to specify a different material program for each ray type. However, I want to use the same program with different parameters (i.e. a different material index depending on the ray type). Creating multiple copies of the same hit program for this seems wasteful.

The ray type is stored in rtCurrentRay, which is (or at least was) accessible within the intersection program.

Actually, ray type is just one of the factors I consider to dynamically change what material index is used. I also consider the ray t parameter in some cases.

Your MaterialParameter buffer could be a neat way to get around the assignments I’m doing in the intersection program. I will look into implementing this.

I’ve removed the RTvariable, but I notice that there are still some messages in the verbose output where a ternary operator chooses between several ray types. Potentially the MaterialParameter buffer can solve this in some cases by reducing the number of ray types.

Ok, my bad on the rtCurrentRay. I hadn’t used the ray type field in such a way.
The different ray types know how to handle things in their individual anyhit and closesthit programs in my renderers.

If it’s the same program object, the code will be there only once.
But that is really the whole point of different ray types. If you use the same anyhit and closesthit programs for multiple ray types, then they don't need to be different ray types; separate ray types are only warranted when there are different sets of anyhit and/or closesthit programs.
E.g. my radiance ray for opaque materials has only a closesthit program, and the shadow ray only has an anyhit program. The anyhit programs are different for cutout opacity materials, where the radiance ray needs an anyhit program as well. But that's it: two ray types, and one of two Materials assigned at the respective GeometryInstance for all material instances in the scene.

You could simply define the behavior of the anyhit and closesthit programs by setting the necessary information inside the rtPayload before calling rtTrace and then have that information available in the shared anyhit and closesthit programs.
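A minimal sketch of that payload-driven approach follows. The PerRayData layout, the variable names, and the define are hypothetical, and the ray origin/direction are placeholders:

```cuda
#include <optix.h>
#include <optixu/optixu_math_namespace.h>

#define RAY_TYPE_RADIANCE 0

struct PerRayData
{
  int behavior; // e.g. which material index or BSDF handling to use
};

rtDeclareVariable(rtObject, sysTopObject, , );
rtDeclareVariable(PerRayData, thePrd, rtPayload, );

RT_PROGRAM void ray_generation()
{
  PerRayData prd;
  prd.behavior = 0; // decide the desired handling before tracing

  // Placeholder camera ray; a real ray generation program would compute
  // origin and direction from the launch index and camera parameters.
  optix::float3 origin    = optix::make_float3(0.0f);
  optix::float3 direction = optix::make_float3(0.0f, 0.0f, 1.0f);

  optix::Ray ray = optix::make_Ray(origin, direction, RAY_TYPE_RADIANCE, 0.0f, RT_DEFAULT_MAX);
  rtTrace(sysTopObject, ray, prd);
}

// The shared closesthit program then branches on the payload data.
RT_PROGRAM void closest_hit_shared()
{
  if (thePrd.behavior == 0)
  {
    // ... default handling
  }
}
```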

If you are able to calculate the desired material index inside the intersection or attribute program, you could also simply write out that material index as an attribute, which again can be used to select the necessary behavior defined by a table of bindless callable programs inside the resp. anyhit and closesthit programs.
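A sketch of that attribute-based variant, assuming the hit distance is accessible via the rtIntersectionDistance semantic in this program domain; the attribute name, the defines and the distance threshold are all hypothetical:

```cuda
#include <optix.h>
#include <optixu/optixu_math_namespace.h>

#define RAY_TYPE_SHADOW 1
#define MAT_A 0
#define MAT_B 1

rtDeclareVariable(int, varMaterialIndex, attribute MATERIAL_INDEX, );
rtDeclareVariable(optix::Ray, theRay, rtCurrentRay, );
rtDeclareVariable(float, theIntersectionDistance, rtIntersectionDistance, );

RT_PROGRAM void triangle_attributes()
{
  // Derive the material index from the ray type and the hit distance,
  // then pick it up again in the anyhit/closesthit programs via the
  // attribute, e.g. as an index into a bindless callable program table.
  varMaterialIndex = (theRay.ray_type == RAY_TYPE_SHADOW || theIntersectionDistance > 10.0f)
                   ? MAT_B
                   : MAT_A;
}
```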

It took me quite a few iterations to arrive at that renderer architecture, and so far I have not encountered a rendering algorithm which couldn't be implemented that way. Additionally, the kernel size is really small compared to the amount of features (in a bigger version of that), and using one closesthit program for all materials re-converges the code execution after possibly divergent bindless callable program calls, which means this is also good for performance.

It all boils down to the question of whether you can handle that with one Material per GeometryInstance and a minimal number of ray types.