The optixPathTracer is too involved for you as a first example then.
Please work through the simpler OptiX 7 SDK examples first. Those give a basic introduction to how to get from a solid colored background (optixHello) to your first triangle (optixTriangle). Play with the code, change things like vertex coordinates to see what happens, and only then move on to more advanced examples.
The OptiX 7 SIGGRAPH course you’ve found shows the same things step by step in a different application framework.
OK, some 3D graphics fundamentals to get you started:
The optixPathTracer builds a hardcoded scene where the five surrounding walls (front is open), the geometry of the two boxes and the single area light are basically all rectangles.
The coordinate system is right-handed, with the positive x-axis pointing to the right, the positive y-axis up, and the positive z-axis pointing to the front. That means, with the origin at (x, y, z) = (0, 0, 0), any point with y == 548.8f, like in the ceiling example, lies 548.8f units above the origin.
The coordinate system in your image is also right-handed, but with the z-axis up. Converting between the two is a rotation of 90 or -90 degrees around the x-axis.
OptiX doesn’t care about y-axis or z-axis being up. That’s all your decision to make.
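As a small sketch of that conversion (the function names here are mine, not from the SDK; this assumes the CUDA float3 type and the make_float3 helper):

// Convert a point from the right-handed z-up system (like in your image)
// to the right-handed y-up system of the optixPathTracer scene.
// This is the -90 degree rotation around the x-axis.
float3 z_up_to_y_up(const float3 p)
{
  return make_float3(p.x, p.z, -p.y);
}

// The inverse, +90 degrees around the x-axis, goes back to z-up.
float3 y_up_to_z_up(const float3 p)
{
  return make_float3(p.x, -p.z, p.y);
}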
(Sidenote: Care needs to be taken with the triangle face winding, i.e. the order of the vertices in a triangle, because that defines which side is the front face. The default in a right-handed coordinate system is counter-clockwise winding. Rasterizers often use this to cull back-facing triangles; that is also possible in OptiX, but normally not used in raytracers. This is also why you always need to be aware of the handedness of your coordinate system, because changing it also changes the vertex ordering.)
Because ray-triangle intersection is supported by the RTX hardware and much faster than implementing a custom intersection program for rectangles (which is possible), the program splits each rectangle into two independent triangles.
That’s why there are six vertex coordinates for the two independent triangles per rectangle. Note that two of the rectangle’s corner coordinates appear twice in that array, for the ceiling as well as for all the other rectangles.
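To make that concrete, here is a hypothetical rectangle (made-up unit coordinates, not the values from the SDK) split into two independent triangles the same way, using the Vertex structure shown further below (needs #include &lt;array&gt;):

// A unit rectangle in the xz-plane as two independent triangles:
// six Vertex entries, with the corners c0 and c2 duplicated.
// Both triangles start at the same corner and share the diagonal c0-c2.
const std::array<Vertex, 6> rectangle =
{{
  { 0.0f, 0.0f, 0.0f, 0.0f }, // c0, triangle 1
  { 1.0f, 0.0f, 0.0f, 0.0f }, // c1
  { 1.0f, 0.0f, 1.0f, 0.0f }, // c2
  { 0.0f, 0.0f, 0.0f, 0.0f }, // c0 again, triangle 2
  { 1.0f, 0.0f, 1.0f, 0.0f }, // c2 again
  { 0.0f, 0.0f, 1.0f, 0.0f }  // c3
}};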
(Personally, I would have ordered these vertices differently, because it’s better style not to define an edge connection through the diagonal of the rectangle. Note how both triangles start at the same corner? But I digress.
This duplication of vertices with independent triangles can be avoided by storing only the four rectangle vertices and building indexed triangles which index into that vertex pool. That is generally more efficient in memory size and access performance, especially for big meshes; a small indexed sketch follows right after this sidenote.
Example code using 12 indexed triangles to build a simple closed box here:
https://github.com/NVIDIA/OptiX_Apps/blob/master/apps/intro_runtime/src/Box.cpp#L176)
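For comparison, here is the same hypothetical rectangle as indexed triangles; the four corners are stored once and reused through an index buffer (uint3 is the CUDA vector type, holding one index triplet per triangle):

// The same rectangle with indexed triangles: four unique vertices
// plus two index triplets; c0 and c2 are stored only once.
const std::array<Vertex, 4> corners =
{{
  { 0.0f, 0.0f, 0.0f, 0.0f }, // c0
  { 1.0f, 0.0f, 0.0f, 0.0f }, // c1
  { 1.0f, 0.0f, 1.0f, 0.0f }, // c2
  { 0.0f, 0.0f, 1.0f, 0.0f }  // c3
}};

const std::array<uint3, 2> indices =
{{
  { 0, 1, 2 }, // triangle 1
  { 0, 2, 3 }  // triangle 2 reuses c0 and c2 by index
}};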
All 3D coordinates of the 32 triangles in the scene are hardcoded inside the code, like the ceiling example you cite.
The coordinates use the Vertex structure defined at the beginning of the code, which looks like this:
struct Vertex
{
float x, y, z, pad;
};
The fourth component is not used and always set to 0.0f.
Actually that data is accessed as the CUDA float4 vector type on the device side (search for float4* vertices;), which is why the struct is padded to 16 bytes. It would have been better to define it as float4 on the host side as well, but only the xyz-components of these vertices are used inside the closest hit program. Search for this code:
const float3 v0 = make_float3( rt_data->vertices[ vert_idx_offset+0 ] );
const float3 v1 = make_float3( rt_data->vertices[ vert_idx_offset+1 ] );
const float3 v2 = make_float3( rt_data->vertices[ vert_idx_offset+2 ] );
(Sidenote: When defining points as 4D homogeneous coordinates to allow for easier matrix transformations, the fourth w-component is normally set to 1.0f. In this case, though, only the xyz-components are ever used inside the device programs.)
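Purely as an illustration of that convention (nothing from the SDK), a row-major 4x4 matrix times a homogeneous coordinate shows why the w-component matters: the translation entries only contribute when w == 1.0f and drop out for directions with w == 0.0f.

// Row-major 4x4 matrix times a homogeneous coordinate (x, y, z, w).
// With w == 1.0f the translation entries m[3], m[7], m[11] apply;
// with w == 0.0f they cancel, which is the behavior wanted for directions.
float4 transform(const float m[16], const float4 p)
{
  return make_float4(m[ 0] * p.x + m[ 1] * p.y + m[ 2] * p.z + m[ 3] * p.w,
                     m[ 4] * p.x + m[ 5] * p.y + m[ 6] * p.z + m[ 7] * p.w,
                     m[ 8] * p.x + m[ 9] * p.y + m[10] * p.z + m[11] * p.w,
                     m[12] * p.x + m[13] * p.y + m[14] * p.z + m[15] * p.w);
}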
I consider the following a special case; most scenes are set up differently!
There are four different materials inside the Cornell Box scene: Red, green, grey and the light.
Look for g_emission_colors and g_diffuse_colors in the code.
To be able to assign a different material to each individual triangle in a single GAS, there need to be multiple shader binding table (SBT) entries for that GAS.
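As a rough sketch of how that looks with the OptiX 7 API (the device pointers d_vertices and d_mat_indices are placeholders; the SDK sample does essentially this), the triangle build input declares four SBT records and a per-triangle index buffer selecting among them:

// One GAS whose triangles use four different SBT records (materials).
// d_mat_indices holds one uint32_t in [0, 3] per triangle.
const uint32_t input_flags[4] =
{
  OPTIX_GEOMETRY_FLAG_NONE, OPTIX_GEOMETRY_FLAG_NONE,
  OPTIX_GEOMETRY_FLAG_NONE, OPTIX_GEOMETRY_FLAG_NONE
};

OptixBuildInput build_input = {};
build_input.type = OPTIX_BUILD_INPUT_TYPE_TRIANGLES;
build_input.triangleArray.vertexFormat        = OPTIX_VERTEX_FORMAT_FLOAT3;
build_input.triangleArray.vertexStrideInBytes = sizeof(Vertex); // 16 bytes incl. padding
build_input.triangleArray.numVertices         = 96;             // 32 triangles * 3
build_input.triangleArray.vertexBuffers       = &d_vertices;
build_input.triangleArray.flags               = input_flags;    // one entry per SBT record
build_input.triangleArray.numSbtRecords       = 4;              // red, green, grey, light
build_input.triangleArray.sbtIndexOffsetBuffer        = d_mat_indices;
build_input.triangleArray.sbtIndexOffsetSizeInBytes   = sizeof(uint32_t);
build_input.triangleArray.sbtIndexOffsetStrideInBytes = sizeof(uint32_t);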
Another method would be to have multiple GAS, each containing the triangles with the same material, but that requires instances inside an instance acceleration structure (IAS) on top, referencing these GAS to place them into the scene. That top-level IAS is then the root traversable for the raytracing launch.
You should use such a structure later when you build more complex scenes like the one in your picture. The chair, for example, could be defined only once and then instanced four times into the scene. That saves memory, is faster to set up, and is similarly efficient to ray trace.
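A minimal sketch of one such instance (chair_gas_handle is a placeholder for the traversable handle of the one chair GAS); the transform is the upper 3x4 part of a row-major 4x4 matrix, here just translating the chair along the x-axis:

// Place the single chair GAS another time, shifted 100 units in x.
const float transform[12] =
{
  1.0f, 0.0f, 0.0f, 100.0f, // row 0
  0.0f, 1.0f, 0.0f,   0.0f, // row 1
  0.0f, 0.0f, 1.0f,   0.0f  // row 2
};

OptixInstance instance = {};
memcpy(instance.transform, transform, sizeof(float) * 12);
instance.instanceId        = 1;
instance.sbtOffset         = 0;   // first SBT record used by this GAS
instance.visibilityMask    = 255;
instance.flags             = OPTIX_INSTANCE_FLAG_NONE;
instance.traversableHandle = chair_gas_handle;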
More information about how the materials are assigned per primitive in this optixPathTracer example can be found here:
https://forums.developer.nvidia.com/t/how-to-index-stb-record-in-each-optixbuildinput-in-optix7/181300/2
Just to set expectations straight, it would take me much longer to explain how 3D graphics and OptiX work, than to implement what you’ve asked for.
If there are specific OptiX questions which are not answered by working through the OptiX Programming Guide and all the examples you can find, then feel free to ask these specific questions here. Everything else, like the coordinate space explanations above, is too fundamental for this developer forum.