# Sphere intersection with ray-distance dependent radius

Hi all,
I need to use sphere intersection. Ideally, the radius of each sphere would depend on the distance to the origin of the ray that hits it, say, proportional to that distance. I need to take reflections on other objects into account too.
I am not sure if this can be done at all: obviously, I need a hit in order to check the distance traveled by the ray, but that hit only occurs if the ray has already intersected a sphere with some given radius.

I am new to these topics, so I am quite clueless. Is it possible at all? Any ideas? Can you point me to some papers/resources on similar problems?

Thanks a lot

That sounds strange. It would be helpful to understand what algorithm this should solve or what the expected result data would be.

I really hope you mean that the spheres should be scaled depending on the distance to the camera position (the primary rays’ origin) or any other fixed point, because that would be super simple.
Just calculate the radii of your spheres up front (e.g. in a CUDA kernel when there are many), fill them into a geometry primitive buffer, and let OptiX build the acceleration structure.
Rendering that should be comparably fast, depending on the number of spheres, even if the camera position changes each frame.
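To illustrate the "calculate the radii up front" idea, here is a minimal host-side sketch. It assumes the radius is simply proportional to the distance from a fixed reference point (e.g. the camera position); the `Sphere` struct, the `computeRadii` helper, and the `scale` constant are all hypothetical names, not OptiX API:

```cpp
#include <cmath>
#include <vector>

struct Sphere { float x, y, z, radius; };

// Hypothetical helper: set each sphere's radius proportional to its
// distance from a fixed reference point (e.g. the camera position).
std::vector<Sphere> computeRadii(const std::vector<Sphere>& centers,
                                 float refX, float refY, float refZ,
                                 float scale)
{
    std::vector<Sphere> out = centers;
    for (Sphere& s : out)
    {
        const float dx = s.x - refX;
        const float dy = s.y - refY;
        const float dz = s.z - refZ;
        s.radius = scale * std::sqrt(dx * dx + dy * dy + dz * dz);
    }
    // Upload the result into the geometry primitive buffer and let
    // OptiX rebuild the acceleration structure afterwards.
    return out;
}
```

With many spheres the same loop would naturally move into a CUDA kernel, as suggested above, followed by an AS rebuild.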

If that’s not what you mean and the radii should really change dynamically while shooting rays through the scene (meaning different radii for different rays, or radii depending on the distance along a path, plus constraints about previously hitting such a sphere), I can follow up with an explanation of how one could possibly go about this. “Slow”, “weird”, and “inconsistent” will be adjectives in that description. ;-)

Hi,
Thanks a lot for your reply. I still have a couple of doubts.
I am simulating radio propagation. The receiver is modeled as a sphere. I should use a radius which depends not on the actual ray distance (I was confused), but at least on the distance between transmitter and receiver. For reflections, I can use the total distance up to the last hit plus the distance between that hit and the receiver.
Apparently this could be implemented in the sphere intersection program, as long as I have access to the ray payload, where I can accumulate the distances from previous hits. I would leave the bounding box program untouched, with a large radius, but I guess I can live with that inefficiency. Unfortunately, according to the documentation, I cannot access the payload in intersection programs, not even read-only.
Can it be done somehow? Or should I give up and use a constant radius as an approximation?

This leads me to the second question: as an approximation, I can use as radius some fraction of the distance between transmitter and receiver. But now I have multiple transmitters and receivers. I can do as follows: I replicate the receiver geometry for each transmitter and group the copies, I assign the radii correspondingly, and then I put a selector on top of the groups. So I have something like this:
          selector
         /    |    \
       g1     g2     g3
      /  \   /  \   /  \
    r1   r2 r1   r2 r1  r2

In the visit program of the selector I redirect the rays to the correct receiver group according to the transmitter, which is encoded in one rtLaunchIndex dimension.
I expect to have around 100 transmitters, which results in 100 × 100 = 10,000 receiver spheres. But since the geometry is just a sphere, this does not seem like too much.
Or is it? As I said, I am new to this. Is this an efficient way of doing it?
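The selection logic in the visit program would essentially be index math: one replicated receiver group per transmitter, chosen by the transmitter index taken from the launch index. A minimal sketch of just that logic (the function name is hypothetical; it is not the OptiX visit-program signature):

```cpp
// Hypothetical sketch of the selector's child choice: with one replicated
// receiver group per transmitter, the visit program selects the child whose
// index equals the transmitter index encoded in one launch-index dimension.
unsigned int selectChild(unsigned int transmitterIndex,
                         unsigned int numTransmitters)
{
    // One group g[t] per transmitter t; clamp defensively to a valid child.
    return transmitterIndex < numTransmitters ? transmitterIndex
                                              : numTransmitters - 1u;
}
```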

Correct, the intersection program has no access to the per ray payload, so that method isn’t possible.
http://raytracing-docs.nvidia.com/optix/guide/index.html#programs#4043
Chapter 4.1.3 Table 5 Semantic variables.

I’m out of time for today.
What I still haven’t understood is what the relation between path distance and receiver radius is.
What’s the exact formula for the receiver radius in relation to the traveled distance?

It would be quite simple to implement in the intersection program if we had access to the payload. Without that, I may implement this using just the distance between the sphere and the ray origin (something we do have), even for reflections. But then think about a receiver close to the transmitter, so that it has a small radius: if you consider a ray going down at an angle, reflecting on a wall and then going up, it is probably going to miss the small-radius receiver. But that is wrong, because you do not really have a ray, but a wavefront hitting the wall and coming back.

If I understand that correctly you shoot a limited number of rays which should actually represent a continuous wavefront. The intersection with the receiver is basically tracking the run length of that wavefront. For obvious reasons you cannot shoot an infinite amount of rays to represent the solid wavefront to make sure you hit the receiver antenna which has a limited size in the real world.
Instead you’re trying to register a wavefront hit at a receiver after some run length, where the wavefront is spanned between neighbouring rays, by scaling the receiver according to some angular derivative (which would need to be tracked along the ray).

I do not see an elegant way to do this receiver scaling within OptiX’ acceleration structures. In the worst case it would basically always require looking at all receivers, which would be similar to using NoAccel for them, or to not putting them into the scene at all and intersecting all rays with all of them (which is actually what I’m going to describe below).

But if I understood the problem correctly, you should be able to solve that wavefront propagation through a scene from transmitter to receiver more directly.

You know where your transmitters and receivers are located inside a scene.
For simplicity I’m assuming the transmitters and receivers are infinitely small points and for a beginning let’s also assume there is only one transmitter and one receiver to simplify the following explanation.

The transmitter sends a wavefront into the scene. The main question here would be how it would do that.

Possible special case:
The shortest connection between the transmitter and the receiver would be if they can see each other.
That direction would be one specific ray of a continuous wavefront.
This visibility condition can be checked very quickly with one ray type (same as a shadow ray) by shooting such a ray from transmitter to receiver. All it needs is an anyhit program which terminates if anything lies between the two endpoints. If the visibility check succeeds, the wavefront from your transmitter to the receiver gives your first wavefront registration, with the shadow ray’s distance as the run length.
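For intuition, here is the visibility test in plain C++ against a list of spherical occluders (a stand-in for whatever geometry blocks the scene). It is only a sketch of the shadow-ray idea, not OptiX code; in OptiX the anyhit program terminates the ray on the first hit instead of looping:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }

// Sketch of the shadow-ray visibility check: the segment from transmitter
// to receiver is blocked if any occluder sphere intersects it. The early
// return mirrors an anyhit program terminating on the first hit.
bool isVisible(Vec3 transmitter, Vec3 receiver,
               const Vec3* blockerCenters, const float* blockerRadii,
               int numBlockers)
{
    const Vec3  d    = sub(receiver, transmitter);
    const float len2 = dot(d, d);
    for (int i = 0; i < numBlockers; ++i)
    {
        // Closest point on the segment to this blocker's center.
        float t = dot(sub(blockerCenters[i], transmitter), d) / len2;
        if (t < 0.0f) t = 0.0f;
        if (t > 1.0f) t = 1.0f;
        const Vec3 p = { transmitter.x + t * d.x,
                         transmitter.y + t * d.y,
                         transmitter.z + t * d.z };
        const Vec3 q = sub(blockerCenters[i], p);
        if (dot(q, q) < blockerRadii[i] * blockerRadii[i])
            return false; // Blocked: terminate, like an anyhit program would.
    }
    return true; // Nothing between the two endpoints.
}
```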

Otherwise other objects inside the scene blocked that visibility ray.
Now you could actually shoot that or any other ray from your transmitter with a different ray type to capture closest hits.
If it hits an object, that hitpoint would need to be evaluated similarly for a connection to the receiver.
You would need to track an angular derivative along your ray. This means that for each surface hit point, you would be able to calculate a cone of directions which represents a bundle of infinitely many rays, representing the wavefront again.
Now you would need to calculate the reflection direction and the angular derivative gives you that cone spread angle around that new reflection direction.
Check if the receiver is in that cone of directions (simple dot product).
If yes, pick that direction and shoot a visibility ray to see if there is a direct connection.
If yes, that would be your second registration of a wavefront hit at a longer run length.
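The reflection-plus-cone test described in the steps above can be sketched in a few lines of plain C++. This is an assumed formulation (the struct, helpers, and `cosHalfAngle` parameter are illustrative names): the receiver lies inside the cone around the reflection direction exactly when the cosine of the angle to it, obtained via the dot product, is at least the cosine of the cone half-angle.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }

static Vec3 normalize(Vec3 v)
{
    const float len = std::sqrt(dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// Mirror the incoming direction 'i' (pointing toward the surface) at the
// unit surface normal 'n' to get the reflection direction (the cone axis).
static Vec3 reflect(Vec3 i, Vec3 n)
{
    const float d = dot(i, n);
    return { i.x - 2.0f * d * n.x, i.y - 2.0f * d * n.y, i.z - 2.0f * d * n.z };
}

// The "simple dot product" test: the receiver is inside the cone around
// 'coneAxis' (unit vector) if the cosine of the angle between the axis and
// the direction toward the receiver is at least cos(half-angle).
static bool receiverInCone(Vec3 hitPoint, Vec3 receiver,
                           Vec3 coneAxis, float cosHalfAngle)
{
    const Vec3 toReceiver = normalize(sub(receiver, hitPoint));
    return dot(toReceiver, coneAxis) >= cosHalfAngle;
}
```

If the test succeeds, the direction toward the receiver is the one to verify with a visibility ray.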

If your simulation end condition is not reached yet, pick some random ray inside the cone of valid directions and repeat.
This is actually also how you start at the transmitter for any other direction than the special case of direct visibility.
The angular derivative of the primary rays depends on the number of rays you start from your transmitter: the more rays the whole simulation uses, the smaller that initial cone angle.
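One way to quantify that relation, under the assumption that the N primary rays are spread uniformly over the full sphere of directions: each ray covers a solid angle of 4π/N, and a cone with half-angle θ subtends 2π(1 − cos θ), so solving 2π(1 − cos θ) = 4π/N gives the initial per-ray cone half-angle. This derivation is my own reading of the statement above, not something stated in the thread:

```cpp
#include <cmath>

// Initial cone half-angle per primary ray, assuming N rays uniformly
// distributed over the full sphere: 2*pi*(1 - cos(theta)) = 4*pi/N.
float initialConeHalfAngle(unsigned int numRays)
{
    return std::acos(1.0f - 2.0f / static_cast<float>(numRays));
}
```

As expected, the half-angle shrinks monotonically as the number of rays grows.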

This is a progressive Monte Carlo path tracing algorithm and as such you can scale this to fit your simulation needs in different ways.
For example handle each transmitter individually, handle each receiver individually, handle all at once, shoot different numbers of rays per launch, etc.

Does that make sense?

Please have a look at this GTC 2014 Presentation which sounds similar:
S4359 “Real-Time Electromagnetic Wave Propagation Using OptiX for Simulation of Car-to-Car-Communication” from Manuel Schiller.
Search for the session ID here: http://on-demand-gtc.gputechconf.com/gtcnew/on-demand-gtc.php