Reflection in OptiX Prime

Hi, first of all I am a total amateur, so my apologies if my questions sound dumb. I am trying to make a simple reflection program using OptiX Prime from OptiX 6.5. I shoot rays from a spot within a cube model that I load into the program as a .obj file. I want to bounce the rays within the cube and see if the reflected rays would intersect with a certain spot (which is also set to be within the cube).

My idea was to get the X, Y, and Z coordinates from my hit points and generate rays in the reflected direction. Is there a function to get these parameters? If so, please guide me to it. Thank you.

Let’s answer this from bottom to top.

My idea was to get the X,Y and Z coordinates from my hit points and generate rays towards the reflected direction. Is there a function to get these parameters? If so please guide me to it.

In a ray tracer there are two ways to get the hit surface position:

  • Use the standard float3 hitPosition = ray.origin + tIntersectionDistance * ray.direction formula in world space, or
  • calculate the hit position in object space from the vertex positions and barycentric coordinates and transform it into world space.

Most ray tracing examples use the first method.

I am trying to make a simple reflection program

If you are talking about specular reflections, the formula calculating the new outgoing direction from an incoming direction and the surface normal at the hit point is really simple. The normal doesn’t even need to be in the same hemisphere as the ray, because it appears twice in the formula, so flipping it cancels out.
Search for reflect inside the OptiX SDK header (*.h) and CUDA source (*.cu) files.
In OptiX 6.5.0 it’s defined in OptiX SDK 6.5.0\include\optixu\optixu_math_namespace.h
In OptiX 7.2.0 it’s defined in OptiX SDK 7.2.0\SDK\sutil\vec_math.h

using OptiX Prime from OptiX 6.5.

I would generally not recommend using OptiX 6.5.0 or the OptiX Prime API for new projects. The latter API doesn’t support the hardware functionality on the RTX boards and has been discontinued with OptiX 7, which in turn uses a much more modern API than any previous OptiX version.
There are optixRaycasting examples inside later SDKs which use the full OptiX API (not Prime) for ray-triangle intersection testing if you need only that. This is more flexible and will be faster on RTX boards.

I shoot rays from a spot within a cube model that I load into the program as a .obj file.

There is no need to load a cube model from a file for that. You could simply define the 12 triangles required for that manually. E.g. something like this: https://github.com/NVIDIA/OptiX_Apps/blob/master/apps/intro_runtime/src/Box.cpp
This defines the front faces on the outside (counter-clockwise winding in a right-handed coordinate system).
Doesn’t matter for specular reflections.

If you’re planning to use different models in the future, then OK, but if this is about specular reflections connecting singular points, that whole idea won’t work anyway.

I want to bounce the rays within the cube and see if the reflected rays would intersect with a certain spot (which is also set to be within the cube).

If you mean that your ray origin and that certain spot inside the cube volume are points, i.e. infinitely small with no area (“singular” points), this method will not work at all. You would need actual geometry to hit something randomly inside a ray tracer.
With specular reflections there is zero probability of hitting a point in space exactly when randomly sampling the ray directions. The integral over the specular distribution is a Dirac delta, meaning the probability is zero everywhere except for the single case where the points actually connect.
Or in other words, there is exactly only one direction which will connect two points in space for each specular reflection.
On the other hand, inside a perfectly reflecting cube, there are infinitely many specular paths connecting two points due to infinitely many reflections being possible on all six surrounding walls.

Now the fun part. These initial directions of the specular connecting paths inside a perfectly mirroring cube can be calculated directly, without the need for any ray tracer!
For that you would only need to mirror the position of the “spot” across the faces of the initial cube, then across the faces of the mirrored cubes, and repeat…
Your initial cube is 1x1x1; the first mirror operation generates one layer of cubes around it, giving a total of 3x3x3 cubes, then 5x5x5, and so on. That can go on indefinitely, meaning that with each layer of cubes, the mirrored spot positions get farther away from your ray origin in the center cube.

Each initial outgoing ray direction of these paths would then simply be this formula:
For every spot coordinate in any of the mirrored cubes: ray.direction = normalize(spot_mirrored - ray.origin);.
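Assuming a unit cube [0,1]^3, this image method can be sketched directly in C++ (the helper names below are made up for illustration; along each axis, the mirror images of a coordinate p across walls at 0 and 1 are all values 2n + p and 2n - p for integer n):

```cpp
#include <cmath>
#include <vector>

struct float3 { float x, y, z; };

// Mirror images of coordinate p in [0,1] across the walls at 0 and 1,
// out to `layers` layers of mirrored cubes: 2n + p and 2n - p.
static std::vector<float> images1D(float p, int layers)
{
    std::vector<float> out;
    for (int n = -layers; n <= layers; ++n) {
        out.push_back(2.0f * n + p);
        out.push_back(2.0f * n - p);
    }
    return out;
}

// Initial directions of the specular paths connecting `origin` to `spot`
// inside a unit mirror cube, up to `layers` layers of mirrored cubes.
static std::vector<float3> connectingDirections(float3 origin, float3 spot, int layers)
{
    std::vector<float3> dirs;
    for (float x : images1D(spot.x, layers))
        for (float y : images1D(spot.y, layers))
            for (float z : images1D(spot.z, layers)) {
                const float dx = x - origin.x, dy = y - origin.y, dz = z - origin.z;
                const float len = std::sqrt(dx * dx + dy * dy + dz * dz);
                if (len > 0.0f)
                    dirs.push_back({dx / len, dy / len, dz / len});
            }
    return dirs;
}
```

Each returned direction corresponds to one specular bounce path; images in farther layers correspond to paths with more reflections.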

Now if you meant diffuse or glossy reflections, then forget that specular special case above.
That would work similarly to a path tracer, where your spot can be thought of as a light, and the connecting paths could then be handled like in a unidirectional path tracer with direct lighting (next event estimation), for example.
I’m not going to explain light transport algorithms here. You’ll find these in the literature and in ray tracing examples everywhere. Even in my own examples:
OptiX 5/6: https://github.com/nvpro-samples/optix_advanced_samples/tree/master/src/optixIntroduction
OptiX 7: https://github.com/NVIDIA/OptiX_Apps


Thank you for your reply. It was really helpful and guided me a lot on my path to learning ray tracing. I have another question: is there any way to show that I am actually doing CUDA parallel programming for my rays? Is there a function to show which threads of my graphics card are working on ray tracing?

Yes, you can use the Nsight tools for that. Start with Nsight Systems to see the timing of CUDA API calls and kernel launches in relation to your CPU functions. Then use Nsight Compute to investigate the thread-level and instruction-level performance of an individual kernel. Here are links to those tools:

https://developer.nvidia.com/nsight-systems
https://developer.nvidia.com/nsight-compute

–
David.


The OptiX API abstracts many of the underlying CUDA mechanisms (threads, warps, blocks) with a “single ray programming model”.
OptiX intentionally doesn’t give you any information about which ray is running on which thread.
The abstraction is intentionally formulated in a way that the scheduling is completely internal and free to change at any time.
Please read the Programming Guide: https://raytracing-docs.nvidia.com/optix7/guide/index.html#introduction#2006

This means you, as a developer, only need to be concerned with what a single ray should do in your device programs.

As long as the launch dimensions inside the OptiX 7 optixLaunch() call (resp. the rtContextLaunch1|2|3D() calls in earlier OptiX versions) describe not just a single ray but many more, they will all be parallelized by OptiX internally.
There is no CPU fallback inside the full OptiX API, meaning you always use CUDA parallel programming internally.

The Nsight Systems and Nsight Compute tools will show the overall performance behavior of your application and of individual kernels.

Thank you for the help. Really helped me out a lot.