I think the better question topic would have been “How do I implement explicit sampling of arbitrary mesh lights?” and that’s what I’m going to explain.

Simply forget about traversing the scene graph for that. Even if that could be done with Selector nodes, it wouldn’t be fast. Definitely don’t do that.

Let’s assume global illumination in a uni-directional path tracer with next event estimation.

The lighting evaluation happens for two conditions:

1.) Implicit light hits, meaning a sampled ray happens to hit an emissive surface.

The emission distribution function (EDF) evaluation for that case happens inside the closest hit program of that material. There could also be a BSDF in that material, so the EDF is normally evaluated first and initializes the returned radiance.

Inside the closest hit program you have everything available that is needed to calculate the vertex attributes in world space coordinates.

2.) Explicit light sampling, and this is where it gets a little involved.

You would store all lights inside a buffer of structs. The fields in that struct would need to contain all information to uniformly (or importance-) sample a point on the light source.

You need to be able to calculate a light sample point in world space and evaluate the EDF for that.

If you want to transform and instance lights multiple times, then you need the following:

- The object to world transformation.
- If you want to support non-uniform scaling (not recommended, breaks the CDF over triangle areas!), you also need the inverse transpose matrix to properly transform the normals. (Note that tangents are not transformed with the inverse transpose. They are in the same plane as the geometry.)
- The object coordinates of the geometry. This is the data in the attribute buffers assigned to the Geometry in your OptiX scene graph. You can access that explicitly via bindless buffer IDs. That is one reason why I’m putting all attributes into one buffer.
- If your mesh is indexed, the indices defining the topology of your mesh. Same thing, use bindless buffer IDs.

That means, except for two integers, this doesn’t need additional memory, and there is no traversal to access any of the light geometry.
- To be able to uniformly sample the light surface you need a cumulative distribution function (CDF) over the areas of the individual triangles.

(If the EDF is not uniform over the area but is static, it would be possible to do importance sampling. If the EDF is not static (procedurally animated, a movie, etc.), forget about importance sampling.)

That CDF also automatically takes care that triangles of zero area are never sampled.

(If you want to support non-uniform scaling, that pre-calculated CDF wouldn’t work, so I would highly recommend using only uniform scaling; then you don’t need the inverse transpose matrix either.)
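To make the data layout above concrete, a light definition struct could look roughly like this. This is only a sketch in plain C++; the field names are mine, and the plain integers stand in for the bindless buffer IDs (rtBufferId) into the attribute, index, and CDF buffers:

```cpp
#include <cassert>

// Sketch of a mesh light definition (field names are illustrative only).
// The three IDs stand in for bindless buffer IDs, so apart from these
// integers no geometry data is duplicated for the light.
struct MeshLightDefinition
{
  float objectToWorld[12]; // 3x4 object-to-world transformation
  // An inverse transpose matrix would only be required for non-uniform
  // scaling, which is not recommended (it breaks the area CDF).

  int idAttributes; // buffer ID: vertex attributes in object space
  int idIndices;    // buffer ID: triangle indices (topology)
  int idAreaCdf;    // buffer ID: CDF over world-space triangle areas

  unsigned int numTriangles;
  float area;        // total world-space surface area of the light
  float emission[3]; // constant emitted radiance, for a simple diffuse EDF
};
```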

Now you would need to implement a function which explicitly samples an arbitrary mesh light.

You need four random numbers in the range [0.0f, 1.0f) for the following.

- Let’s assume you sample one of many lights per next event estimation. The first random number simply picks one light definition inside your buffer of light structs.

Note that when picking one of many lights, the emitted radiance needs to be multiplied by the reciprocal of the probability of sampling that one light. That is, multiply the emission by the number of lights. Do not merge that probability into the light sample’s probability density function (PDF) result! That would be incorrect in the final lighting calculation. (Find example code in my OptiX Introduction examples.)
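As a minimal sketch of that selection step (plain C++ for illustration; in OptiX this would be device code, and the function name is mine), assuming uniform selection among the lights:

```cpp
#include <cassert>

// Uniformly pick one of numLights lights with a random number u in [0.0f, 1.0f).
// The returned index selects the struct in the light definition buffer.
// weight is the reciprocal selection probability (== numLights here); it
// multiplies the emitted radiance and must NOT be folded into the PDF.
inline unsigned int pickLight(const float u, const unsigned int numLights, float& weight)
{
  unsigned int index = static_cast<unsigned int>(u * static_cast<float>(numLights));
  if (numLights <= index) // guard against rounding up to numLights
  {
    index = numLights - 1;
  }
  weight = static_cast<float>(numLights);
  return index;
}
```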

- The second random number is used to pick a triangle.

This is done by sampling the CDF over the triangle areas. The resulting index is the triangle index in your index buffer accessible via that bindless buffer ID in the light definition from step one.

(That CDF is only needed if the triangles are not all the same area.) Note that the light sample’s PDF also contains the ratio between triangle area and whole light area.
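The CDF construction and lookup can be sketched like this (plain C++; the build would run on the host, the binary search would be device code; the function names are mine):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Host side: build a normalized CDF over world-space triangle areas.
// Zero-area triangles get zero probability mass, so the search below
// can never return them.
inline std::vector<float> buildAreaCdf(const std::vector<float>& areas)
{
  std::vector<float> cdf(areas.size());
  float sum = 0.0f;
  for (size_t i = 0; i < areas.size(); ++i)
  {
    sum += areas[i];
    cdf[i] = sum;
  }
  for (size_t i = 0; i < cdf.size(); ++i)
  {
    cdf[i] /= sum; // the last entry becomes 1.0f
  }
  return cdf;
}

// Device-side idea, written as plain C++: binary-search the CDF with a
// random number u in [0.0f, 1.0f) and return the sampled triangle index,
// i.e. the smallest index i with u < cdf[i].
inline unsigned int sampleCdf(const float* cdf, const unsigned int count, const float u)
{
  unsigned int lo = 0;
  unsigned int hi = count - 1;
  while (lo < hi)
  {
    const unsigned int mid = (lo + hi) / 2;
    if (u < cdf[mid])
      hi = mid;
    else
      lo = mid + 1;
  }
  return lo;
}
```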

Now with that index you can load the vertex coordinates and other attributes from the attribute buffer via its buffer ID in the light definition.

Since you want to have transformed instances, you need to transform the object space coordinates into world coordinates.

Now you can calculate the effective area of that triangle, which is needed to convert between area and solid-angle measure. You also need the geometric normal to get the cosine theta for the foreshortening of the area.

Both are available from the cross product of two triangle edges.
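A sketch of those two calculations, assuming a row-major 3x4 object-to-world matrix and a minimal stand-in for CUDA’s float3 (all names here are illustrative):

```cpp
#include <cassert>
#include <cmath>

struct float3v { float x, y, z; }; // stand-in for CUDA's float3

inline float3v sub(const float3v a, const float3v b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
inline float3v cross(const float3v a, const float3v b)
{
  return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}
inline float length(const float3v v) { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }

// Transform an object-space point with a row-major 3x4 object-to-world matrix.
inline float3v transformPoint(const float* m, const float3v p)
{
  return { m[0] * p.x + m[1] * p.y + m[2]  * p.z + m[3],
           m[4] * p.x + m[5] * p.y + m[6]  * p.z + m[7],
           m[8] * p.x + m[9] * p.y + m[10] * p.z + m[11] };
}

// World-space geometric normal and area of a triangle from one cross product.
// The caller must reject degenerate triangles (len == 0); the area CDF
// already guarantees they are never sampled.
inline float3v triangleNormalAndArea(const float3v v0, const float3v v1,
                                     const float3v v2, float& area)
{
  const float3v n   = cross(sub(v1, v0), sub(v2, v0));
  const float   len = length(n);
  area = 0.5f * len;
  return { n.x / len, n.y / len, n.z / len };
}
```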

Depending on the complexity of your light (EDF only on the front face, or on both, possibly with different EDFs on both sides, etc.) you calculate the cosine theta first and reject the light sample when looking at a back side. (Set the pdf to 0.0f.)

Now with the next two random numbers, you sample that triangle uniformly and get the barycentric coordinates.
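One common mapping for that step is the square-root parameterization, sketched here in plain C++ (other uniform mappings exist; the function name is mine):

```cpp
#include <cassert>
#include <cmath>

// Uniformly sample barycentric coordinates on a triangle with two random
// numbers u1, u2 in [0.0f, 1.0f). The sampled point is then
// p = alpha * v0 + beta * v1 + gamma * v2 with alpha + beta + gamma == 1.
inline void sampleTriangleUniform(const float u1, const float u2,
                                  float& alpha, float& beta, float& gamma)
{
  const float s = std::sqrt(u1); // the sqrt warps the unit square onto the triangle

  beta  = 1.0f - s;
  gamma = u2 * s;
  alpha = 1.0f - beta - gamma;
}
```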

With that you can interpolate all necessary attributes needed to evaluate the EDF. The code already exists in your intersection and the closest hit programs doing the EDF evaluation. Copy the minimal necessary parts into your explicit mesh light sampling function.

That function needs to return the light sample world position, direction from surface point to light, distance, emitted radiance, and pdf.
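Note that uniform sampling over the whole light surface gives an area-measure PDF of 1 / totalArea, which needs to be converted to the solid-angle measure used by the direct-lighting estimator. A hedged sketch of the result struct and that conversion (plain C++; names are mine):

```cpp
#include <cassert>

// Illustrative result struct; the field set matches the list above.
struct LightSample
{
  float position[3];  // world-space sample point on the light
  float direction[3]; // normalized direction from surface point to light
  float distance;
  float radiance[3];  // emitted radiance, already weighted by numLights
  float pdf;          // in solid-angle measure; 0.0f marks an invalid sample
};

// Convert the uniform area PDF (1 / totalArea) to solid-angle measure:
//   pdf_omega = distance^2 / (cosTheta * totalArea)
// cosTheta is the cosine between the light's geometric normal and the
// direction back to the surface point (the foreshortening term).
inline float pdfAreaToSolidAngle(const float totalArea, const float distance, const float cosTheta)
{
  if (cosTheta <= 0.0f || totalArea <= 0.0f)
  {
    return 0.0f; // back side or degenerate light: reject the sample
  }
  return (distance * distance) / (cosTheta * totalArea);
}
```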

You would then need to check the visibility of the light sample and do your usual direct lighting calculations.

The OptiX Introduction examples I wrote contain a “LightDefinition” structure which is using bindless buffer IDs for CDFs for importance sampled HDR environment maps. The attribute and index buffers of mesh lights would need to be added in the same way.

[url]https://devtalk.nvidia.com/default/topic/998546/optix/optix-advanced-samples-on-github/[/url]

To give you an impression of possible complexity, I implemented arbitrary triangle mesh lights in a renderer supporting the NVIDIA Material Definition Language (MDL), which supports four different EDFs, mixing of EDFs, different EDFs on the front and back face of thin-walled geometry, cutout-opacity(!), and any procedural evaluation of the emission color and intensity. The explicit light sampling for that extremely complex case alone is just 200 lines of CUDA C++ device code, thanks to elegant use of bindless callable programs (similar to the handling of BSDFs).

For simpler material systems this should be even less.

The EDF evaluation happens in a per-material bindless callable program generated at runtime when loading the MDL material into the scene. No need to do it like that if you have only one kind of EDF (normally diffuse).

If the explicit light sampling function is implemented as a bindless callable program, you can call it in either the closest hit or ray generation programs. That comes in handy when calculating direct lighting for in-scattering of sub-surface scattering materials or more deferred light transport algorithms.

Some words of warning:

- Since the triangle area is used in the denominator when calculating the light sample PDF, you need to make sure that the floating point precision isn’t exceeded for very tiny triangles. The CDF makes sure to not sample zero area triangles.
- If the distance between the light and the surface point gets too small, the normalization of the direction vector fails.
- Depending on the mesh topology (e.g. closed concave mesh) more than half of the explicit samples will be invalid.
- Mind that the default triangle intersection routine is not watertight. When putting EDFs only on the inside of geometry, you will get light leaking between triangles. (Well, that’s a rather esoteric case.)