I’m trying to combine the scene logic from sutil with the optixPathTracer rendering algorithms from the OptiX examples.
So far I have succeeded in loading my own .glb file through a slightly modified version of the scene component and building everything into a working optixMeshViewer example. After this I modified the module, program group, and pipeline creation functions in sutil’s scene.cpp to work like the ones in the optixPathTracer example.
While these changes were fairly easy, I am struggling quite a bit with the SBT creation. I can’t figure out how to bring together the two approaches to creating the hitgroup_records. It seems to me that, because of the differences in acceleration structure creation (from meshes vs. from raw vertices), the two examples create them in different ways.
The sutil approach (works with the Whitted rendering algorithms):
std::vector<HitGroupRecord> hitgroup_records;
for( const auto mesh : m_meshes )
{
    for( size_t i = 0; i < mesh->material_idx.size(); i++ )
    {
        // create record
        // pack header for radiance
        // add mesh specific data to record
        // push rec to hitgroup_records
        // pack header for occlusion
        // push rec to hitgroup_records
    }
}
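Filled in a little, this is roughly what I understand that loop to be doing (a sketch only; the exact HitGroupRecord fields in sutil’s scene.cpp differ, so the data assignments and the makeGeometryData() helper are placeholders):

std::vector<HitGroupRecord> hitgroup_records;
for( const auto mesh : m_meshes )
{
    for( size_t i = 0; i < mesh->material_idx.size(); ++i )
    {
        HitGroupRecord rec = {};

        // Radiance record: pack the radiance program header and attach the
        // per-mesh geometry plus the material referenced by this sub-mesh.
        OPTIX_CHECK( optixSbtRecordPackHeader( m_radiance_hit_group, &rec ) );
        rec.data.geometry_data = makeGeometryData( mesh, i );           // placeholder helper
        rec.data.material_data = m_materials[ mesh->material_idx[i] ];  // assumed field names
        hitgroup_records.push_back( rec );

        // Occlusion record: same per-mesh data, only the program header changes.
        OPTIX_CHECK( optixSbtRecordPackHeader( m_occlusion_hit_group, &rec ) );
        hitgroup_records.push_back( rec );
    }
}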
The optixPathTracer example (works with the path tracer algorithms from optixPathTracer.cu):
HitGroupRecord hitgroup_records[RAY_TYPE_COUNT * MAT_COUNT];
for( int i = 0; i < MAT_COUNT; i++ )
{
    // pack header for radiance
    // add "global" mesh geometry data to record
    // set data in hitgroup_records using index for radiance
    // set data in hitgroup_records using index for occlusion
    // pack header for occlusion
}
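Filled in for comparison (again only a sketch of my reading of the host-side code in optixPathTracer.cpp; names like RAY_TYPE_RADIANCE, RAY_TYPE_OCCLUSION and the data fields are assumptions):

HitGroupRecord hitgroup_records[ RAY_TYPE_COUNT * MAT_COUNT ];
for( int i = 0; i < MAT_COUNT; ++i )
{
    // Radiance record for material i: program header, the per-material colors,
    // and the one global vertex buffer that every record shares.
    const int radiance_idx = i * RAY_TYPE_COUNT + RAY_TYPE_RADIANCE;
    OPTIX_CHECK( optixSbtRecordPackHeader( radiance_hit_group, &hitgroup_records[ radiance_idx ] ) );
    hitgroup_records[ radiance_idx ].data.emission_color = g_emission_colors[i];  // assumed field names
    hitgroup_records[ radiance_idx ].data.diffuse_color  = g_diffuse_colors[i];
    hitgroup_records[ radiance_idx ].data.vertices       = d_vertices;            // same buffer in all records

    // Occlusion record for material i: only the program header is needed.
    const int occlusion_idx = i * RAY_TYPE_COUNT + RAY_TYPE_OCCLUSION;
    OPTIX_CHECK( optixSbtRecordPackHeader( occlusion_hit_group, &hitgroup_records[ occlusion_idx ] ) );
}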
So while the first approach creates a vector whose size is the number of materials per mesh, summed over all meshes, times the ray type count, the second one creates an array of just material count * ray type count. If I understand correctly, the Whitted example passes per-mesh data to the CUDA programs, while the optixPathTracer example passes all the triangle data and has to use optixGetPrimitiveIndex() inside the program to find out which triangle was hit.
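For example, this is roughly how I read the closest-hit program in optixPathTracer.cu (simplified; the HitGroupData layout below is an assumption on my part):

#include <optix.h>
#include <sutil/vec_math.h>    // cross(), normalize() helpers used below

struct HitGroupData           // assumed layout of the per-record data
{
    float3  emission_color;
    float3  diffuse_color;
    float3* vertices;         // global vertex buffer shared by all records
};

extern "C" __global__ void __closesthit__radiance()
{
    const HitGroupData* rt_data = reinterpret_cast<HitGroupData*>( optixGetSbtDataPointer() );

    // The record only carries the global vertex buffer, so the hit triangle
    // must be looked up through its primitive index.
    const int    prim_idx        = optixGetPrimitiveIndex();
    const int    vert_idx_offset = prim_idx * 3;
    const float3 v0 = rt_data->vertices[ vert_idx_offset + 0 ];
    const float3 v1 = rt_data->vertices[ vert_idx_offset + 1 ];
    const float3 v2 = rt_data->vertices[ vert_idx_offset + 2 ];
    const float3 N  = normalize( cross( v1 - v0, v2 - v0 ) );
    // ... shading with rt_data->diffuse_color / emission_color continues here
}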
My questions regarding this are:
Are those assumptions correct?
What are the benefits of creating one record for each object?
How are the records mapped to the execution of a program (e.g. __closesthit__radiance())?
The main difference between the two applications with respect to the shader binding table (SBT) is that optixPathTracer uses only a single geometry acceleration structure (GAS) with multiple SBT hit record entries to handle the different colors of the polygons, while optixMeshViewer uses a two-level acceleration structure, i.e. a single instance acceleration structure (IAS) as the root traversable in which each mesh’s GAS is referenced by one instance. Therefore the SBT layout needs to be different.
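To make the two-level case concrete: with an IAS over per-mesh GASes, each OptixInstance carries an sbtOffset that points at the first hit record belonging to its mesh. A rough sketch (not copied from the samples; the MeshEntry struct and the ray_type_count parameter are assumptions):

#include <optix.h>
#include <cstring>
#include <vector>

// Assumed per-mesh data produced earlier during the GAS builds.
struct MeshEntry
{
    OptixTraversableHandle gas_handle;
    size_t                 num_materials;  // number of SBT records this mesh's GAS expects per ray type
};

std::vector<OptixInstance> buildInstances( const std::vector<MeshEntry>& meshes, unsigned int ray_type_count )
{
    std::vector<OptixInstance> instances;
    unsigned int sbt_offset = 0;

    for( size_t i = 0; i < meshes.size(); ++i )
    {
        OptixInstance instance = {};
        const float identity[12] = { 1,0,0,0,  0,1,0,0,  0,0,1,0 };
        std::memcpy( instance.transform, identity, sizeof( identity ) );
        instance.instanceId        = static_cast<unsigned int>( i );
        instance.visibilityMask    = 255;
        instance.flags             = OPTIX_INSTANCE_FLAG_NONE;
        instance.traversableHandle = meshes[i].gas_handle;
        instance.sbtOffset         = sbt_offset;  // first hit record of this mesh inside the SBT

        instances.push_back( instance );

        // The next mesh's records follow directly: one record per material per ray type.
        sbt_offset += static_cast<unsigned int>( meshes[i].num_materials ) * ray_type_count;
    }
    return instances;
}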
Please read this chapter of the OptiX Programming Guide https://raytracing-docs.nvidia.com/optix7/guide/index.html#shader_binding_table#shader-binding-table
and concentrate on the SBT index calculation formula in chapter 7.3.
That formula is crucial for understanding how the traversal uses the values from the instances and the optixTrace arguments to decide which hit or miss record inside your SBT gets called.
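For reference, the hit record index computed during traversal is essentially:

sbt-index = sbt-instance-offset
          + sbt-GAS-index * sbt-stride-from-trace-call
          + sbt-offset-from-trace-call

where sbt-instance-offset is the sbtOffset of the OptixInstance that was hit (0 when a GAS is used directly as the root traversable), sbt-GAS-index is the SBT record index of the geometry inside its GAS (determined by the build inputs and their optional per-primitive SBT index offset buffers), and the stride and offset are the SBTstride and SBToffset arguments of the optixTrace call.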
Work through the examples given there. The SBT is very flexible, and there are several possible layouts when using instancing.
Thanks a lot for clarifying this! Also thanks for pointing me to exactly the section of the programming guide that answers my question. I will read through the chapters again and use them to understand the samples. I feel it would help to learn this faster if the samples had more elaborate comments or references to the relevant sections of the programming guide. I’m quite new to the ray tracing domain though, so it may just be me getting a little confused.