How to Emit a Single Ray from a Light Source and Set the Camera as a Separate Observer in OptiX?

I’m new to OptiX and encountering some difficulties. I hope the community can guide me. Here are my two questions:

  1. Can I emit a single 1D ray from a light source, like a laser? Right now, I can only use if statements to make rays emit only from specific points, which I think is a poor implementation.

  2. How can I make the camera just an observer while setting the light source at another coordinate? I want the “eye” position to differ from the ray emission point.

Thanks for any help!

Hi @ffofocus, welcome!

For the first question, I’m not sure I understand. What does 1D mean in this context? Are you asking whether you can cast multiple rays that all have the same ray origin and ray direction, as if simulating a laser?

There’s nothing wrong with starting many ray paths using the exact same initial ray, but that does bring up a couple of things to think about. First, casting the same ray repeatedly will yield the same intersection point repeatedly. So an alternative, to avoid recomputing the same initial hit point over and over, is to first cast a single ray to find where the laser hits a surface, and only then cast multiple scattered rays from there that bounce around the scene. Second, you’ll need random scattering to ensure your rays don’t all go the same direction after the first bounce. To be more physically accurate, you also might want to perturb the initial ray’s position randomly by a small amount. I’ll stop here though, since if I misinterpreted your question, I’m just rambling.
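
If that’s roughly the setup, here’s a rough sketch of the “cast one probe ray first, then scatter from its hit point” idea as a raygen program. Everything here is hypothetical (the `Params` layout, field names, and the tiny RNG are my assumptions, not SDK code); the point is only that every thread starts at the same precomputed laser hit point and scatters in a randomized direction:

```cuda
#include <optix.h>

// Hypothetical launch parameters: the laser's first hit point and surface normal are
// found ahead of time (e.g. with a single probe ray in a 1-thread launch, or on the
// host) and passed in here.
struct Params
{
    OptixTraversableHandle handle;
    float3                 laserHitPoint;
    float3                 laserHitNormal;
};
extern "C" __constant__ Params params;

extern "C" __global__ void __raygen__scatter_from_laser_hit()
{
    // One scattered path per thread; all paths share the same precomputed origin.
    unsigned int seed = optixGetLaunchIndex().x * 9781u + 1u;
    auto rnd = [&seed]() {                          // tiny LCG, illustrative only
        seed = seed * 1664525u + 1013904223u;
        return (seed >> 8) * (1.0f / 16777216.0f);  // float in [0,1)
    };

    // Uniform direction on the sphere, flipped into the hemisphere of the surface normal.
    float  z   = 1.0f - 2.0f * rnd();
    float  r   = sqrtf(fmaxf(0.0f, 1.0f - z * z));
    float  phi = 2.0f * 3.14159265f * rnd();
    float3 dir = make_float3(r * cosf(phi), r * sinf(phi), z);
    const float3 n = params.laserHitNormal;
    if (dir.x * n.x + dir.y * n.y + dir.z * n.z < 0.0f)
        dir = make_float3(-dir.x, -dir.y, -dir.z);

    unsigned int p0 = 0;  // payload slot for whatever your closest-hit program reports
    optixTrace(params.handle, params.laserHitPoint, dir,
               1e-3f,               // tmin: small offset to avoid self-intersection
               1e16f, 0.0f,         // tmax, ray time
               OptixVisibilityMask(255), OPTIX_RAY_FLAG_NONE,
               0, 1, 0,             // SBT offset, SBT stride, miss SBT index
               p0);
}
```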

I’m also not sure about the if statements part: what do you mean that you can only use if statements? With OptiX specifically, and GPUs in general, the main problem with if statements occurs when the condition differs from thread to thread within a warp or wave. This causes the problem known as ‘execution divergence’, which reduces performance since threads that make conditional choices often have to stall and wait for neighboring threads that make different choices. It’s recommended to organize your code and data so that all threads within a warp execute the exact same code whenever possible. There are clever options that can help in some cases, but sometimes this is tricky or impossible (neighboring threads often hit different objects, for example), so take this only as a vague high-level goal, and don’t worry too much if you don’t see a solution. OptiX does offer a tool to help here, called Shader Execution Reordering.
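
To make the divergence point a bit more concrete, here’s a tiny plain-CUDA illustration (nothing OptiX-specific, and the “material” branch is just made up). The condition varies per thread, so a warp whose threads disagree executes both branches one after the other; if the work were sorted so that each warp sees only one material, every warp would execute a single branch:

```cuda
__global__ void shade(const int* materialId, float* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    // Per-thread condition: threads in the same warp may take different paths,
    // which the hardware handles by running the paths serially (divergence).
    if (materialId[i] == 0)
        out[i] = 0.1f;   // pretend this is an expensive "diffuse" shader
    else
        out[i] = 0.9f;   // pretend this is an expensive "specular" shader
}
```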

For the second question, first keep in mind that OptiX has no notion of a camera or an observer. Those concepts are up to the application to provide. OptiX lets you test visibility along lines, and control what happens when rays hit or miss objects in your scene, but you get to put multiple lines (rays) together and decide where they start and end, and what the endpoints mean. While it’s extremely common in rendering to define a linear perspective camera somewhere with an “eye” point at the center of projection, there’s nothing stopping anyone from defining their ‘camera’ very differently, and indeed some people do. You have complete control over what shape and where the emitters and collectors are, and how you connect them using rays.

Some of our SDK samples use this perspective camera convention: they trace ray paths starting at the eye/camera, let them bounce around the scene, and trace rays toward the light sources at each bounce to look for light or shadow. Just note that this eye+camera setup belongs to the SDK sample, not to the core OptiX library. You can very easily trace rays from your emitter (lights) toward the collector (camera), trace rays from an emitter into the scene and see which ones hit the collector, or trace rays outward from both the emitters and collectors and link the paths up in the middle (aka “bidirectional path tracing”). The way to use a ray origin that differs from the eye point is to write code in your raygen program that decides where each ray starts and, for example, places it on an emitter rather than on the eye/camera image plane. Some applications like to do texture “baking”, where they launch rays from the surfaces of objects in the scene and collect the results into texture maps.
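
For example, where an SDK-style camera raygen computes the ray origin from the eye point and the ray direction through a pixel, a raygen that launches from an emitter just computes those two values differently. Here’s a minimal sketch with a made-up `Params` layout (`lightPosition` / `lightDirection` are my assumptions, not SDK fields):

```cuda
#include <optix.h>

struct Params   // hypothetical layout for this sketch
{
    OptixTraversableHandle handle;
    float3                 lightPosition;
    float3                 lightDirection;   // e.g. a fixed laser direction
};
extern "C" __constant__ Params params;

extern "C" __global__ void __raygen__from_emitter()
{
    // An SDK-style camera raygen would do something like:
    //   origin    = params.eye;
    //   direction = normalize(d.x * U + d.y * V + W);   // through pixel d
    // To launch from the light instead, simply choose different values:
    float3 origin    = params.lightPosition;
    float3 direction = params.lightDirection;   // or sample a direction for area lights

    unsigned int p0 = 0;   // payload
    optixTrace(params.handle, origin, direction,
               0.0f, 1e16f, 0.0f,
               OptixVisibilityMask(255), OPTIX_RAY_FLAG_NONE,
               0, 1, 0, p0);
}
```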

I hope I understood your questions correctly, but feel free to correct me or elaborate if I’m not giving you any helpful info yet.


David.

Thank you so much for your help! I realize my initial description of the problem wasn’t clear enough. To clarify, I’ve attached a simple diagram—the “1-dimensional ray” I mentioned is actually just that single ray in the image. What I’m trying to achieve is ray tracing where a single-point light source emits a ray that hits just one pixel.

Regarding the camera being independent of the light source: I understand your explanation now, but I’d love to know if there are any reference materials or code examples to help me grasp how this is implemented.


Regardless, you’ve already helped me a ton. Thank you once again for your time and guidance!

It’s definitely possible to shoot a single ray and have it return results in a single pixel. Are you thinking of shooting a single ray, like just conceptually, or a single ray per thread, or are you thinking of using only one thread and shooting a single ray for your entire workload (i.e. the launch)? I’ll mention a couple of thoughts about this, but I think I’m still not quite understanding the overall setup, so again let me know if I’m wandering off into the weeds.

I think what you’re asking about is how to do what’s known as “forward ray tracing”. Is that correct? Here are diagrams of forward and backward ray tracing: https://cs.stanford.edu/people/eroberts/courses/soco/projects/1997-98/ray-tracing/types.html

I’m wondering if perhaps the question you’re getting at is how to map the hit point of the ray onto a pixel? Since your ray emitter is not the camera eye point but a light source in the scene, the ray is not initially associated with a specific pixel. Once you hit a point in the scene and want to connect the result to the camera, you’ll need to figure out which pixel the hit point is visible through. Am I getting closer? If this is part of the question, then there are a couple of possible approaches.

One possibility is to shoot another ray toward the camera, and then test it for intersection with an image plane object in your scene. You could use the resulting UV coordinates of the hit point on the image plane to easily calculate which pixel the ray belongs to.

Another possibility is, instead of tracing any rays at all, to use a little math to map the hit point onto a pixel. For example, plug the hit point directly into your camera’s inverse projection transform, and what comes out should be a point in the local space of your camera, which you can then map to a pixel coordinate the same way you might map a UV location to a pixel. If your camera is linear (such as orthographic or pinhole perspective), then you can represent the world-to-camera transform multiplied by the camera-to-pixel transform as a single matrix that converts a world hit point into pixel coordinates (with a divide by Z, if you’re doing perspective).
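
As a sketch of that last “little math” option for a pinhole camera: assuming the common SDK-style eye/U/V/W camera basis (U and V span the image plane, W points from the eye to the image plane center, and the three are mutually orthogonal), a world-space hit point maps to a pixel roughly like this. The function name and conventions are mine, not anything from the OptiX API:

```cuda
__device__ inline float dot3(float3 a, float3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Map a world-space point to a pixel of a pinhole camera defined by eye/U/V/W.
// Returns false if the point is behind the camera or outside the frame.
__device__ bool worldToPixel(float3 p, float3 eye, float3 U, float3 V, float3 W,
                             int width, int height, int* px, int* py)
{
    // Express (p - eye) in the camera's U/V/W basis (assumes U, V, W are orthogonal).
    float3 d  = make_float3(p.x - eye.x, p.y - eye.y, p.z - eye.z);
    float  du = dot3(d, U) / dot3(U, U);
    float  dv = dot3(d, V) / dot3(V, V);
    float  dw = dot3(d, W) / dot3(W, W);
    if (dw <= 0.0f) return false;              // behind the camera

    // Perspective divide: where the eye->p line crosses the image plane (W coefficient 1).
    float u = du / dw;                         // in [-1, 1] when visible
    float v = dv / dw;
    if (fabsf(u) > 1.0f || fabsf(v) > 1.0f) return false;   // outside the frame

    *px = (int)((u * 0.5f + 0.5f) * width);    // flip v here if your image origin differs
    *py = (int)((v * 0.5f + 0.5f) * height);
    return true;
}
```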

Hopefully it goes without saying, but the performance benefits of using OptiX or any GPU ray tracing method depend on shooting large batches of rays; for best throughput you’ll typically want to cast hundreds of thousands of rays at least.

I don’t know of any examples of forward ray tracing that use OptiX 9 specifically. We used to have a texture baking example in OptiX Prime (which has been deprecated). You could read that to get a sense of the algorithm, but the code won’t run directly in the current version of OptiX. Here’s an article about that, with pointers to the code repository (note you will need to do a small amount of spelunking in the code history to see the actual code).

And here’s an old forum thread about baking that might have some relevant suggestions: Baking to Texture


David.

Hi @dhart, thanks so much for the detailed answers to @ffofocus’s question. I want to continue the discussion by asking about the atomicity of pixel updates in forward path tracing. What’s the usual approach in OptiX, with minimal overhead, for handling multiple threads writing to the same pixel and avoiding race conditions?

Hi! Good question. It really depends on how much contention there is (how many threads, how synchronized they are, etc.) and how your renderer is organized. I’ve seen a few people use straightforward atomic intrinsics and say they were surprised the perf wasn’t worse. It’s also pretty common to organize the work to be warp-coherent, do a warp reduction, and have only one elected thread in the warp handle the I/O. That way you reduce the contention by up to 32x (the width of a warp) before using atomics on global memory. Some people will also let threads write into unique locations and then do the reduction afterward, so that no atomics or any other kind of locking is needed, but this of course increases the memory footprint considerably and so isn’t always appropriate or even practical.
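
A rough sketch of the warp-reduction idea, shown here as plain CUDA (e.g. for a separate accumulation kernel, and assuming the work has already been organized so that all 32 lanes of a warp are active in a 1D thread block and contribute to the same pixel):

```cuda
__device__ void addToPixel(float* framebuffer, int pixel, float contribution)
{
    // Tree reduction across the warp using shuffles: after the loop, lane 0 holds
    // the sum of all 32 contributions.
    float sum = contribution;
    for (int offset = 16; offset > 0; offset >>= 1)
        sum += __shfl_down_sync(0xffffffffu, sum, offset);

    // One elected thread performs a single atomic per warp instead of 32.
    if ((threadIdx.x & 31) == 0)
        atomicAdd(&framebuffer[pixel], sum);
}

// The "start easy" baseline is simply one atomic per thread:
//     atomicAdd(&framebuffer[pixel], contribution);
```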

I’d say start easy and measure the results. See if atomic access to the pixels doesn’t destroy your perf, and if it does, then start considering the more clever approaches.


David.