Ray tracing with OptiX: help and guide

Hi

I am trying to work on a project to implement path tracing using OptiX. My project has a 3D scene, a light source, and a light signal receiver; both the source and the receiver are approximated as single points inside the scene. The source emits light rays and some of the rays arrive at the receiver. The goal is to record all the ray paths between the source and the receiver. The picture below illustrates a similar application.

[Screenshot of a similar application scene]

This idea differs from many of the ray tracing applications discussed here. Most ray tracing applications aim to render the lighting effects of the objects inside the scene, while this one cares only about certain points.

How can I set a light source inside the scene with a fixed location?

How can I set a receiver point inside the scene? Instead of the final rendered scene, the final results focus on only a single point.

How can I save the traced path information in OptiX? The information includes the distances, angles, and hit points of all the paths. This can be a huge amount of data when using many rays.

I am not from a coding background and have no experience with OptiX. At this stage, I have only gotten the tutorial examples in the ‘OptiX 7 course’ and the OptiX 7 SDK examples running. Working out every detail of OptiX and writing the code appears daunting to me.

Is it possible to start with an example from the tutorials and add the source point and the receiver point?
For example, by modifying an example from the OptiX 7.2 SDK.

How can I set a light source inside the scene with a fixed location?

You define a light structure with all the information required to describe your light type.
If that is an omnidirectional point light with constant power, then a float4 per light, with the .xyz components as the light position in world space and the .w component as the power, would be enough.
Then you allocate a device buffer with all the lights in your scene, fill that with your light information, and set it as CUdeviceptr on your launch parameters to be able to access that from your OptiX device programs.
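The layout described above could be sketched on the host like this. All names here (float4_t, makePointLight, LaunchParamsSketch) are illustrative stand-ins, not SDK types, and the actual device buffer allocation (cudaMalloc/cudaMemcpy) is omitted:

```cpp
#include <cstdint>

struct float4_t { float x, y, z, w; };   // stand-in for CUDA's float4

// One omnidirectional point light: .xyz = world-space position, .w = power.
inline float4_t makePointLight(float px, float py, float pz, float power)
{
    return { px, py, pz, power };
}

// Host-side sketch of the launch parameters: in a real application, `lights`
// would be a CUdeviceptr to a device buffer filled with the float4 lights.
struct LaunchParamsSketch
{
    uint64_t lights;     // device pointer to an array of float4 lights
    int      numLights;  // number of entries in that array
};
```

The device programs would then read the light positions and powers directly through that pointer inside the launch parameters.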

How can I set a receiver point inside the scene? Instead of the final rendered scene, the final results focus on only a single point.

That works just the same way. You define a structure with all the information about your receiver, then you allocate a device buffer and fill in all the receiver information. Again, if that is just a point in space, use a float4 (because that loads faster than a float3), put the position into the .xyz components, and set .w to 1.0f. The latter can be ignored inside your device programs.
If you only have a single receiver which needs to be handled per launch, then you can also just put that float4 directly into the launch parameters.

How can I save the traced path information in OptiX? The information includes the distances, angles, and hit points of all the paths. This can be a huge amount of data when using many rays.

In OptiX, all input and output always happens through pointers to device memory buffers you need to allocate up front. You cannot allocate memory dynamically during the launch! If the result vector can be differently sized per launch index, it is possible to build lists inside pre-allocated memory, though. Given the level of your question, that would be a more advanced topic which could be discussed when needed.
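The "lists inside pre-allocated memory" idea can be illustrated with a shared atomic counter that reserves slots in a fixed-size result buffer. In OptiX device code this would use atomicAdd() on a device buffer; std::atomic is used here only to show the pattern on the host, and all names are assumptions for illustration:

```cpp
#include <atomic>
#include <vector>

// Example per-path result data: what gets appended by each launch index.
struct PathRecord { float distance; float hitX, hitY, hitZ; };

struct ResultPool
{
    std::vector<PathRecord> records;   // pre-allocated up front, fixed size
    std::atomic<int>        count{0};  // next free slot

    explicit ResultPool(int capacity) : records(capacity) {}

    // Reserves a slot atomically and writes the record into it.
    // Returns the reserved index, or -1 if the pool is full (record dropped).
    int append(const PathRecord& r)
    {
        int idx = count.fetch_add(1);
        if (idx >= (int)records.size()) return -1;
        records[idx] = r;
        return idx;
    }
};
```

After the launch, only the first `count` entries of the buffer are valid and would be copied back to the host.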

If your per launch workload or results are getting too big to handle, you would normally split the work into multiple launches.
For path tracers, you would normally follow one path per launch in an iterative progressive Monte Carlo algorithm (e.g. an algorithm producing one sample per pixel per launch.)
You could also structure your algorithm as a wavefront approach where each launch shoots the next path segment and adds the connection to the light source. That means with each step of your algorithm, you would get zero or one more path segment's information and can save that.

You define how much work you do inside each launch, though to saturate modern GPUs you shouldn't use too small a launch dimension. For a path tracer like that, I wouldn't start with anything under 65k launch indices.
You will most likely be limited by the amount of data you need to write per launch though.

I am not from a coding background and have no experience with OptiX. At this stage, I have only gotten the tutorial examples in the ‘OptiX 7 course’ and the OptiX 7 SDK examples running. Working out every detail of OptiX and writing the code appears daunting to me.

OK, that is going to be a steep learning curve then.

What you’re asking for is a path tracer with direct lighting (next event estimation) and that is comparably simple for an experienced OptiX developer given the point light and point receiver types.
It’s more a question of how you get your scene data into the render graph, and how to calculate and, especially, store all the result components you want. Also, are there any specific reflection behaviors in that scene, or is it plain diffuse reflection?

Is it possible to start with an example from the tutorials and add the source point and the receiver point?
For example, by modifying an example from the OptiX 7.2 SDK.

First of all, if you’re not limited to specific older driver versions, it’s always recommended to use the latest OptiX version which is 7.3.0 at this time. (Could be that the OptiX 7 SIGGRAPH course is not updated to that, yet. There have been some small API changes in 7.3.)

The optixPathTracer is doing something like that. It shoots rays from a pinhole camera into a scene, then implements a unidirectional path tracer with only diffuse reflections (Lambert) and direct lighting from an area light, and stores the resulting radiance.

You could change the pinhole camera to your receiver point. If that point always lies on a geometry surface, like the table in the image above, then the ray generation program would need to be changed to shoot into the upper hemisphere above that surface point, and, to make sure the rays are not blocked by that surface itself, to offset the ray origin by some small value (epsilon) above the surface.
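A minimal sketch of those two steps, assuming a y-up frame like the optixPathTracer scene; the function names and the uniform hemisphere distribution are illustrative choices, not SDK code (a real implementation would transform the sample into the frame of the actual surface normal):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Uniform sampling of the hemisphere around the +y axis.
// u1, u2 are uniform random numbers in [0, 1); u1 is used as cos(theta).
inline Vec3 sampleHemisphereY(float u1, float u2)
{
    const float phi  = 2.0f * 3.14159265f * u2;
    const float sinT = std::sqrt(1.0f - u1 * u1);
    return { sinT * std::cos(phi), u1, sinT * std::sin(phi) };
}

// Ray origin: the receiver point nudged along the surface normal to avoid
// self-intersection with the surface it lies on.
inline Vec3 offsetAboveSurface(Vec3 p, Vec3 n, float eps = 1.0e-3f)
{
    return { p.x + n.x * eps, p.y + n.y * eps, p.z + n.z * eps };
}
```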

Then you would need to change the area light to a point light, which also makes sampling the light a lot simpler.
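To show why point light sampling is simpler: the light direction and distance are fully deterministic, and the contribution falls off with the squared distance. This is a hedged sketch with illustrative names, not the SDK example's code:

```cpp
#include <cmath>

struct Vec3f { float x, y, z; };

struct LightSample
{
    Vec3f direction;   // unit vector from the surface point to the light
    float distance;    // used as the shadow-ray tmax
    float attenuation; // 1 / distance^2 falloff of a point light
};

// No random numbers needed: a point light has exactly one sample.
inline LightSample samplePointLight(Vec3f lightPos, Vec3f surfacePos)
{
    Vec3f d = { lightPos.x - surfacePos.x,
                lightPos.y - surfacePos.y,
                lightPos.z - surfacePos.z };
    float dist = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    return { { d.x / dist, d.y / dist, d.z / dist },
             dist,
             1.0f / (dist * dist) };
}
```

A shadow ray from the surface point along `direction` with tmax slightly below `distance` then decides whether the connection contributes at all.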

The biggest change would be to the actual results. Since that OptiX SDK example only produces a single radiance result for the whole path, you would need to change the per-ray payload to a structure containing your per-segment data, and store all required information for each additional connecting path segment inside the ray generation program.
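Such a payload structure could look like the following sketch. All names and fields are assumptions for illustration; note the fixed upper bound on segments, since no dynamic allocation is possible inside an OptiX launch:

```cpp
// Per-bounce data recorded along one path (illustrative fields).
struct PathSegment
{
    float hitX, hitY, hitZ;   // world-space hit point of this bounce
    float distance;           // length of this path segment
    float cosTheta;           // angle information at the hit (cosine)
};

enum { MAX_SEGMENTS = 8 };    // fixed capacity chosen up front

// The per-ray payload carried from the ray generation program through
// the closest hit programs.
struct RayPayload
{
    PathSegment segments[MAX_SEGMENTS];
    int         numSegments;  // how many entries are currently valid
};
```

After the path terminates, the ray generation program would copy the valid segments into the pre-allocated output buffer for that launch index.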

Please have a look into the following two threads as well, where similar questions have been discussed.
(Ignore that “angular derivative” I mentioned in the first one. That also works with fully diffuse reflections.)
https://forums.developer.nvidia.com/t/sphere-intersection-with-ray-distance-dependent-radius/60405/6
https://forums.developer.nvidia.com/t/reflection-in-optix-prime/170265/2
This would be important for implementing visibility tests most efficiently:
https://forums.developer.nvidia.com/t/anyhit-program-as-shadow-ray-with-optix-7-2/181312

Also have a look into the sticky posts of this sub-forum which link to more OptiX 7 examples.
My OptiX 7 advanced introduction examples also implement unidirectional path tracers with or without next event estimation, but they are a little more complicated due to their material implementation.

Thank you for such a patient and detailed answer. As you said, it is a steep learning curve for me.

I am running OptiX 7.2 and I am looking into the ‘optixPathTracer’ example in the SDK. I am trying to figure out how I can build my implementation based on the example code. As a first stage, I am trying to understand the ‘optixPathTracer’ example code.

If I use the simple scene in the code, I would try to add a source point and a receiver point inside that same scene. But I already find that I don’t understand the geometry and the coordinate system implemented in the code. For example, the following code is taken from the example:

const int32_t TRIANGLE_COUNT = 32;
const int32_t MAT_COUNT = 4;

const static std::array<Vertex, TRIANGLE_COUNT * 3> g_vertices =
{ {
    // Floor -- white lambert
    {    0.0f,    0.0f,    0.0f, 0.0f },
    {    0.0f,    0.0f,  559.2f, 0.0f },
    {  556.0f,    0.0f,  559.2f, 0.0f },

    {    0.0f,    0.0f,    0.0f, 0.0f },
    {  556.0f,    0.0f,  559.2f, 0.0f },
    {  556.0f,    0.0f,    0.0f, 0.0f },

    // Ceiling -- white lambert
    {    0.0f,  548.8f,    0.0f, 0.0f },
    {  556.0f,  548.8f,    0.0f, 0.0f },
    {  556.0f,  548.8f,  559.2f, 0.0f },

    {    0.0f,  548.8f,    0.0f, 0.0f },
    {  556.0f,  548.8f,  559.2f, 0.0f },
    {    0.0f,  548.8f,  559.2f, 0.0f },

I understand this code implements the floor and the ceiling in the scene. But I don’t understand how the array std::array<Vertex, TRIANGLE_COUNT * 3> identifies a plane, and I don’t know what a single entry such as { 0.0f, 548.8f, 0.0f, 0.0f } specifies. Is the first value a vertex, followed by three points of a triangle? So does the floor have 6 triangles connected to 2 vertices? What is a vertex in this case?
Because without understanding the coordinate system, I am not able to set the locations of the source and receiver points.

Many thanks for the help.

The optixPathTracer is too involved for you as a first example then.
Please work through the simpler OptiX 7 SDK examples first. Those give a basic introduction to how to get from a solid colored background (optixHello) to your first triangle (optixTriangle). Play with the code, change things like vertex coordinates to see what happens, and only then move on to more advanced examples.
The OptiX 7 SIGGRAPH course you’ve found shows the same things step by step in a different application framework.

OK, some 3D graphics fundamentals to get you started:

The optixPathTracer builds a hardcoded scene where the five surrounding walls (the front is open), the geometry of the two boxes, and the single area light are basically all rectangles.
The coordinate system is right-handed, with the positive x-axis to the right, the positive y-axis up, and the positive z-axis pointing to the front. That means, with the origin at (x, y, z) = (0, 0, 0), any point with y == 548.8f, like in the ceiling example, is 548.8 units above the origin.

The coordinate system in your image is also right-handed but with the z-axis up. Converting between the two is a 90 resp. -90 degree rotation around the x-axis.
OptiX doesn’t care about y-axis or z-axis being up. That’s all your decision to make.
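The conversion mentioned above boils down to swapping and negating components. A -90 degree rotation around the x-axis maps a z-up point (x, y, z) to (x, z, -y) in the y-up system, and the +90 degree rotation maps it back (function names here are illustrative):

```cpp
struct P3 { float x, y, z; };

// z-up to y-up: -90 degree rotation around the x-axis.
inline P3 zUpToYUp(P3 p) { return { p.x,  p.z, -p.y }; }

// y-up to z-up: +90 degree rotation around the x-axis (the inverse).
inline P3 yUpToZUp(P3 p) { return { p.x, -p.z,  p.y }; }
```

For example, the z-up "up" vector (0, 0, 1) maps to the y-up "up" vector (0, 1, 0).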

(Sidenote: Care needs to be taken with the triangle face winding (the order of vertices in a triangle), because that defines which side is the front face; the default in a right-handed coordinate system is counter-clockwise winding. Rasterizers often use this to omit the back-facing triangles, which is also possible in OptiX but normally not used in ray tracers. That’s also why you always need to be aware of the handedness of your coordinate system, because changing it also changes the vertex ordering.)

Because ray-triangle intersection is supported by the RTX hardware and is much faster than implementing a custom intersection program for rectangles (which would be possible), the program splits each rectangle into two independent triangles.
That’s why there are six vertex coordinates for the two independent triangles per rectangle. Note that two of the rectangle corner coordinates appear twice in that array, for the ceiling and all other rectangles.

(Personally, I would have ordered these vertices differently, because it’s better style not to define an edge connection through the diagonal of the rectangle. Note that both triangles start at the same corner? But I digress.
This duplication of vertices when using independent triangles can be avoided by storing only the four rectangle corners and building indexed triangles which index into that vertex pool. This is generally more efficient in memory size and access performance, especially for big meshes.
Example code using 12 indexed triangles to build a simple closed box here:
https://github.com/NVIDIA/OptiX_Apps/blob/master/apps/intro_runtime/src/Box.cpp#L176)
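Applied to the ceiling rectangle cited above, the indexed variant would store only the four unique corners plus six indices instead of six duplicated vertices (the coordinate values are taken from the optixPathTracer ceiling; the array names are illustrative):

```cpp
#include <array>

struct Vtx { float x, y, z, pad; };

// The four unique corners of the ceiling rectangle.
static const std::array<Vtx, 4> kCeilingVertices = { {
    {   0.0f, 548.8f,   0.0f, 0.0f },   // corner 0
    { 556.0f, 548.8f,   0.0f, 0.0f },   // corner 1
    { 556.0f, 548.8f, 559.2f, 0.0f },   // corner 2
    {   0.0f, 548.8f, 559.2f, 0.0f },   // corner 3
} };

// Two triangles referencing the vertex pool: (0, 1, 2) and (0, 2, 3).
static const std::array<unsigned int, 6> kCeilingIndices = { 0, 1, 2, 0, 2, 3 };
```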

All 3D coordinates of the 32 triangles in the scene are hardcoded, like the ceiling example you cite.
The coordinates use the Vertex structure defined at the beginning of the code, which looks like this:

struct Vertex
{
    float x, y, z, pad;
};

The fourth component is not used and is always set to 0.0f.
That data is actually accessed as the CUDA float4 vector type on the device side (search for float4* vertices;) and would better have been defined as float4 on the host as well, but only the xyz-components of these vertices are used inside the closest hit program. Search for this code:

    const float3 v0   = make_float3( rt_data->vertices[ vert_idx_offset+0 ] );
    const float3 v1   = make_float3( rt_data->vertices[ vert_idx_offset+1 ] );
    const float3 v2   = make_float3( rt_data->vertices[ vert_idx_offset+2 ] );

(Sidenote: When defining points as 4D homogeneous coordinates to allow for easier matrix transformations, the fourth w-component is normally set to 1.0f, but in this case only the xyz-components are ever used inside the device programs.)

I consider the following a special case; most scenes are set up differently!

There are four different materials inside the Cornell Box scene: red, green, grey, and the light.
Look for g_emission_colors and g_diffuse_colors in the code.

To be able to assign a different material to each individual triangle in a single geometry acceleration structure (GAS), there need to be multiple shader binding table (SBT) entries for that GAS.
Another method would be to have multiple GAS with each containing the triangles with the same material, but that requires instances inside an instance acceleration structure (IAS) on top referencing these GAS to place them into the scene. Then that top level IAS is the root traversable for the raytracing launch.
You should use such a structure later when you build more complex scenes like the one in your picture. The chair, for example, could be defined only once and then instanced four times into the scene. That saves memory, is faster to set up, and is similarly efficient to ray trace.

More information about how the materials are assigned per primitive in this optixPathTracer example can be found here:
https://forums.developer.nvidia.com/t/how-to-index-stb-record-in-each-optixbuildinput-in-optix7/181300/2

Just to set expectations straight, it would take me much longer to explain how 3D graphics and OptiX work, than to implement what you’ve asked for.
If there are specific OptiX questions which are not explained by working through the OptiX Programming Guide and all examples you can find, then feel free to ask these specific questions here. Everything else like the coordinate space things above are too fundamental for this developer forum.