OptiX basic visualization problem

There are two ways to approach the problem of finding ray intersections between that antenna and the plane.

1.) If you want to do that inside your current implementation, which renders the above scene screenshot, then you would shoot rays from your camera into the scene, and inside the closest-hit program assigned to your plane, you would shoot a visibility test (“shadow”) ray from that surface hit point to the antenna world position.
That means the ray's tmax stops at that distance, so the test only covers the segment between the two points.
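A minimal sketch of that visibility test inside a closest-hit program could look like the following. All names here (`params`, `antennaPosition`, `topObject`, the `RAY_TYPE_*` SBT indices) are assumptions for illustration, not taken from the linked examples, and the usual CUDA vector-math helpers (`length`, operators on `float3`) are assumed to be included:

```cuda
// Hypothetical closest-hit program on the plane. Assumes a launch-parameter
// block "params" with the antenna world position and the top-level traversable,
// and a shadow-ray miss program that sets the payload to 1 ("visible").
extern "C" __global__ void __closesthit__plane()
{
    // Reconstruct the world-space hit point from the incoming camera ray.
    const float3 origin    = optixGetWorldRayOrigin();
    const float3 direction = optixGetWorldRayDirection();
    const float3 hitPoint  = origin + optixGetRayTmax() * direction;

    // Direction and distance from the hit point to the antenna.
    float3 toAntenna     = params.antennaPosition - hitPoint;
    const float distance = length(toAntenna);
    toAntenna            = toAntenna / distance;

    // Visibility ("shadow") ray: tmin is offset to avoid self-intersection,
    // tmax stops just before the antenna position.
    unsigned int isVisible = 0; // The shadow miss program sets this to 1.
    optixTrace(params.topObject,
               hitPoint, toAntenna,
               1.0e-3f,            // tmin: small offset from the surface
               distance - 1.0e-3f, // tmax: stop at the antenna
               0.0f,               // ray time
               OptixVisibilityMask(0xFF),
               OPTIX_RAY_FLAG_TERMINATE_ON_FIRST_HIT | OPTIX_RAY_FLAG_DISABLE_CLOSESTHIT,
               RAY_TYPE_SHADOW, NUM_RAY_TYPES, RAY_TYPE_SHADOW, // assumed SBT layout
               isVisible);

    if (isVisible)
    {
        // Direct connection: evaluate distance-dependent data here.
    }
}
```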

If nothing is blocking the visibility, then there is a direct connection between these two points, and you can evaluate any required information, like the distance, etc. If the antenna has a specific distribution function, you would also be able to calculate things like signal strength from some additional antenna orientation plus the connecting ray's direction and distance.
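As a sketch of such an evaluation: for an isotropic antenna with total radiated power P, the power density at distance d falls off as P / (4 π d²), and a directional pattern adds a gain term. The cosine-lobe model and all names below are assumptions for illustration; `dot` is the usual CUDA vector-math helper:

```cuda
// Hypothetical device helper: received power density at a visible surface
// point, for an antenna with total radiated power "power" and a simple
// cosine-lobe gain around its main axis. Purely illustrative model.
__forceinline__ __device__ float receivedPowerDensity(const float3 antennaAxis, // normalized main direction
                                                      const float3 toSurface,  // normalized, antenna -> hit point
                                                      const float  distance,
                                                      const float  power)
{
    // Isotropic inverse-square falloff: P / (4 * pi * d^2).
    const float isotropic = power / (4.0f * 3.14159265f * distance * distance);
    // Directional gain: zero outside the forward hemisphere of the antenna.
    const float gain = fmaxf(0.0f, dot(antennaAxis, toSurface));
    return isotropic * gain;
}
```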

Doing it like this will only care about direct connections of hit surface points to the antenna position.

Just think of the antenna as a point light and all these calculations as direct lighting.
Think of it as a spot light if there is some specific distribution function.

Example code doing exactly that kind of direct lighting can be found in all my examples.
This would, for example, be the direct lighting calculation of a diffuse BRDF:
https://github.com/NVIDIA/OptiX_Apps/blob/master/apps/rtigo12/shaders/brdf_diffuse.cu#L180
and if you assume that your antenna position is a point light, then this explicit light sampling routine would be used to fill the appropriate light sample fields of a singular point light:
https://github.com/NVIDIA/OptiX_Apps/blob/master/apps/rtigo12/shaders/light_sample.cu#L301
Since the antenna is not part of the scene geometry, it cannot be hit implicitly, so only explicit direct lighting makes this work.
There is no need to implement that sampling as a direct callable program when you have only that one type of antenna.

2.) If you wanted to implement the antenna distribution function (let's say it's spherical) and capture the hits arriving on the plane, you would need to represent the plane as some output buffer. Think of a 2D texture of a discrete resolution (e.g. 1024 x 1024) mapped onto your plane.
You could of course shoot random rays from the antenna into the world according to the distribution function, check if they hit the plane geometry, and then use atomics to add each hit to the respective texture cell, because that is a scatter algorithm (multiple rays could hit the same cell). But that doesn't only sound inefficient, it really is.
Instead, you would again generate rays starting on the plane, let's say at the center of each texel, shoot them all toward the antenna position, and check whether there is a direct connection (the visibility test succeeded), plus whatever other information you need to calculate. This is a gather algorithm, because each output buffer cell maps to specific rays.
Now you would have a buffer or texture with the data and could look that up at the surface hit points on your plane inside the renderer which displays the above scene.
That means this approach requires two different optixLaunch calls with completely different ray generation programs but the same geometry.
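The gather pass could be sketched like this: one launch index per texel, one visibility ray per texel, no atomics. Again, every name (`params.plane*`, `outputBuffer`, the `RAY_TYPE_*` indices) and the plane parametrization are assumptions for illustration, with the usual CUDA vector-math helpers assumed:

```cuda
// Hypothetical ray generation program for the gather pass: launched with
// dimensions equal to the output buffer resolution (e.g. 1024 x 1024).
extern "C" __global__ void __raygen__gather()
{
    const uint3 idx = optixGetLaunchIndex();
    const uint3 dim = optixGetLaunchDimensions();

    // Texel center in [0, 1]^2, mapped onto a plane rectangle spanned by
    // the (assumed) origin and edge vectors in the launch parameters.
    const float u = (idx.x + 0.5f) / (float) dim.x;
    const float v = (idx.y + 0.5f) / (float) dim.y;
    const float3 texelCenter = params.planeOrigin + u * params.planeU + v * params.planeV;

    float3 toAntenna     = params.antennaPosition - texelCenter;
    const float distance = length(toAntenna);
    toAntenna            = toAntenna / distance;

    unsigned int isVisible = 0; // The shadow miss program sets this to 1.
    optixTrace(params.topObject, texelCenter, toAntenna,
               1.0e-3f, distance - 1.0e-3f, 0.0f, // offset tmin and tmax
               OptixVisibilityMask(0xFF),
               OPTIX_RAY_FLAG_TERMINATE_ON_FIRST_HIT | OPTIX_RAY_FLAG_DISABLE_CLOSESTHIT,
               RAY_TYPE_SHADOW, NUM_RAY_TYPES, RAY_TYPE_SHADOW, // assumed SBT layout
               isVisible);

    // Gather: exactly one ray writes to this cell, so a plain store suffices.
    params.outputBuffer[idx.y * dim.x + idx.x] =
        isVisible ? 1.0f / (distance * distance) : 0.0f; // example: inverse-square value
}
```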

When shooting rays from a surface, always take care to prevent self-intersections of the ray with the geometry it started on. That is usually done by offsetting the ray origin or the tmin value a little from the start point on the surface. The same applies at the other end of the ray (tmax) when using geometry lights (see the comments in my direct lighting example code).

So in summary: Prefer gather over scatter algorithms. Handle these problems similar to direct lighting.

Have a look at these related posts:
https://forums.developer.nvidia.com/t/electromagnetic-wave-simulation-using-optix/221893
https://forums.developer.nvidia.com/t/sphere-intersection-with-ray-distance-dependent-radius/60405/6
https://forums.developer.nvidia.com/t/closest-approach-of-ray-to-a-point/231750
