Rendering artifacts with 2D plane

I’ve been playing around with the new introduction samples and have hit a strange problem.

I’m trying to switch at runtime between a ray tracer (optixIntro_03) and a path tracer (optixIntro_04). I switch between the renderers using two entry points, and I’m using unmodified versions of the 03 and 04 sample programs, except that I made the miss program render blue for the ray tracer. Both renderers share the same bounding box and intersection programs. When the render type changes, I swap the miss programs and update each GeometryInstance with the appropriate material before rendering again.
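
Roughly, my switching code looks like this (just a sketch; member names like m_missBlue and m_materialPathtracer are placeholders from my own app, using the optix::Context C++ wrapper of the samples):

```cpp
// Switch between the two renderers before the next launch.
void Application::setRenderer(const bool usePathtracer)
{
  // Ray type 0 is the radiance ray; swap its miss program.
  m_context->setMissProgram(0, usePathtracer ? m_missEnvironment : m_missBlue);

  // Assign the matching material to every GeometryInstance.
  for (optix::GeometryInstance& gi : m_geometryInstances)
  {
    gi->setMaterial(0, usePathtracer ? m_materialPathtracer : m_materialRaytracer);
  }

  // The entry point index selects the ray generation program at launch time.
  m_entryPoint = usePathtracer ? 1 : 0;
}
```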

Each render type works correctly if it’s the first one used, but when I switch from one to the other there are rendering artifacts.

Here’s a short video: OptixRenderSwitching - YouTube

These artifacts show up only if the ground plane is 2D. When I make the ground a 3D box, switching back and forth between renderers works perfectly.

These artifacts are typical self-intersection patterns.

The sysSceneEpsilon is used to offset the continuation ray’s start interval value t_min in the optixIntro_04 example. That epsilon offset is scene-size dependent.
If you run any of the original examples 04 or higher and decrease the scene epsilon factor in the GUI (System → Scene Epsilon) to 0.0, you’ll see these artifacts on all geometries.
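
For reference, the continuation ray inside the ray generation program of optixIntro_04 uses that epsilon roughly like this (a sketch; prd.pos and prd.wi stand for the hit position and the sampled continuation direction in the per ray data):

```cpp
rtDeclareVariable(rtObject, sysTopObject, , );
rtDeclareVariable(float,    sysSceneEpsilon, , );

// Inside the path tracing loop:
// Starting the continuation ray at t_min = sysSceneEpsilon ignores hits
// closer than that epsilon, i.e. the surface the ray just left.
// With an epsilon of 0.0 the ray can hit the same primitive again at t == 0.
optix::Ray ray = optix::make_Ray(prd.pos, prd.wi, 0, sysSceneEpsilon, RT_DEFAULT_MAX);
rtTrace(sysTopObject, ray, prd);
```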

Also note that the default settings of that 04 example show ambient occlusion, by limiting the maximum path length to 2 and using all-white Lambert materials in a white environment.
But it’s actually a full global illumination, brute force path tracer, just using only a Lambert material.
Try setting the floor plane material number 0 to some other color and then increasing the maximum path length (hmm, I shortened the GUI names too much) and you’ll see the proper color bleeding from the floor onto the objects.

Now, optixIntro_03 does nothing like that, because it only shoots primary rays.
The GUI is empty except for the Mouse Ratio factor.
It’s also not doing antialiasing through jittering of the primary rays’ sub-pixel positions; it just shoots primary rays through the center of each pixel and is done after one launch.

That 03 example is only meant to show how to get Geometry nodes into the scene and how to connect GeometryInstances with Materials and the closest hit program. I would not “combine” that example with any of the following ones. I’m not sure what you did there, but you possibly forgot to reset the sysIterationIndex to restart the accumulation.
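
A minimal sketch of that reset on the host side (m_iterationIndex is a placeholder name for the host mirror of sysIterationIndex):

```cpp
// Call this whenever something invalidates the accumulated image,
// e.g. a renderer switch or a camera change.
void Application::restartAccumulation()
{
  m_iterationIndex = 0;
}

void Application::render()
{
  // The device reads sysIterationIndex to decide whether to start a fresh
  // accumulation or to blend the new sample into the accumulation buffer.
  m_context["sysIterationIndex"]->setUint(m_iterationIndex++);
  m_context->launch(m_entryPoint, m_width, m_height);
}
```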

The tutorial examples were meant to show the progression from starting with OptiX from scratch to a very elegant and easy-to-extend uni-directional path tracer. Each of the examples completely replaces the previous one architecturally until 07; examples 08 and 09 just add motion blur and denoising on top.

If you want to integrate the resulting path tracer into your own application, I would start with optixIntro_07. Everything before that is just leading up to that final architecture.

If you want to have a normal vector visualization mode in that path tracer, you would simply add a new BSDF implementation to the buffer of bindless callable program IDs, maybe with a “mode” parameter which selects the value the sampling function returns, and terminate the ray there.
That wouldn’t be a directly lit material, so evaluation would do nothing. Do not set FLAG_DIFFUSE and it will not be evaluated. Use the dummy evaluation program from the specular reflection material, which just returns null.
If you let that visualization output go into the radiance of the per ray data, it actually behaves like an implicit emissive material.
Anyway, it’s one way to handle Arbitrary Output Variables (AOVs; name it bsdf_aov maybe).
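
A sketch of what that sampling callable could look like, assuming the callable signature and struct fields of the example BSDFs (MaterialParameter, State, PerRayData, FLAG_TERMINATE); check the names against your version of the samples:

```cpp
RT_CALLABLE_PROGRAM void sample_bsdf_aov_normal(MaterialParameter const& parameters,
                                                State const& state,
                                                PerRayData& prd)
{
  // Map the shading normal from [-1, 1] into a displayable [0, 1] color and
  // write it into the radiance, behaving like an implicit emissive material.
  prd.radiance = state.normal * 0.5f + 0.5f;
  prd.flags |= FLAG_TERMINATE; // End the path at this surface.
}
```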

Some notes on adding geometry dynamically to the scene.
The examples currently don’t show instancing of identical Geometry via Transform nodes, as described on slides 8 to 11 of my GTC 2018 presentation.
For example, optixIntro_06 contains two spheres. Two separate Geometry nodes wouldn’t have been necessary there, because the spheres have exactly the same geometry; the scaling is done via the Transform.
If your application can add arbitrary numbers of these basic geometries, it’s highly recommended to reuse the Geometry and Acceleration nodes, if you’re not already doing that.
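
A sketch of that sharing with the scene graph nodes of the C++ wrapper (createSphere() and material are placeholders for your own geometry and material setup):

```cpp
optix::Geometry     geoSphere = createSphere(context);                       // Built once.
optix::Acceleration accSphere = context->createAcceleration("Trbvh", "Bvh"); // Shared, built once.

optix::Group topGroup = context->createGroup();
topGroup->setAcceleration(context->createAcceleration("Trbvh", "Bvh"));

for (int i = 0; i < 2; ++i)
{
  optix::GeometryInstance gi = context->createGeometryInstance();
  gi->setGeometry(geoSphere); // The same Geometry node in every instance.
  gi->setMaterialCount(1);
  gi->setMaterial(0, material);

  optix::GeometryGroup gg = context->createGeometryGroup();
  gg->addChild(gi);
  gg->setAcceleration(accSphere); // The same Acceleration node as well.

  // Different sizes and positions are expressed via the Transform (row-major matrix).
  const float s = (i == 0) ? 1.0f : 2.0f;
  const float matrix[16] = { s,    0.0f, 0.0f, i * 5.0f,
                             0.0f, s,    0.0f, 0.0f,
                             0.0f, 0.0f, s,    0.0f,
                             0.0f, 0.0f, 0.0f, 1.0f };

  optix::Transform trafo = context->createTransform();
  trafo->setMatrix(false, matrix, nullptr);
  trafo->setChild(gg);

  topGroup->addChild(trafo);
}
```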