OptiX Prime - Inconsistencies in ray intersection computation with secondary rays.

I’ve been having issues with OptiX Prime when computing intersections with secondary rays originating from multiple sources. I am computing the start position of these rays from the hit position of the previous ray. The direction of each ray is toward a point receiver in my scene (I can have multiple of both).

The issue I am having is that the rays are being inconsistently intersected by my triangle mesh. Sometimes they get all the way through to my receiver; other times they do not, even when the visualization suggests they should. This appears to be occurring at a roughly 50% rate. It occurs even if I place my source and receivers far above the triangle mesh, where the triangles should be unlikely to obstruct their paths. When checking whether a ray is received at the point receiver, I am simply checking each hit result for a miss (tri_id = -1).

I have also verified that my rays are normalized (or as close as possible).

I will note that my triangle coordinates are in the ECEF coordinate frame, which tends to produce relatively large single-precision magnitudes.

I am also visualizing the results using a MATLAB script which draws the rays from each source to each hit point, then a secondary ray from each hit position to each point receiver and where it hits (or not). For visualizing the triangles I am using the trimesh function.

What could be causing these issues to occur in Optix Prime?

Hi Timothy,

You should expect that using the barycentric coordinates to compute your hit point will yield higher precision than computing it with start + raydir * t. I recommend trying that first; the conversion back to Cartesian is trivial.
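To make the two approaches concrete, here is a minimal sketch (plain Python, not the OptiX Prime API; the function names and the (u, v) convention are illustrative). Given a hit record carrying the triangle index and barycentric coordinates, the hit point is interpolated directly from the triangle’s vertices instead of being marched along the ray:

```python
# Illustrative sketch: reconstructing a hit point two ways.
# v0, v1, v2 are the triangle's vertices; (u, v) are the barycentric
# coordinates reported for the hit.

def hit_from_barycentrics(v0, v1, v2, u, v):
    # p = (1 - u - v) * v0 + u * v1 + v * v2
    # Interpolating the vertices keeps the result on (or very near)
    # the triangle's plane, independent of the ray origin's magnitude.
    w = 1.0 - u - v
    return tuple(w * a + u * b + v * c for a, b, c in zip(v0, v1, v2))

def hit_from_ray(origin, direction, t):
    # p = origin + t * direction -- accumulates more rounding error
    # when |origin| is large (e.g. ECEF coordinates).
    return tuple(o + t * d for o, d in zip(origin, direction))
```

The conversion back to Cartesian is exactly that one weighted sum, so switching over is a small change.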

Since you’re using single-precision floating point in both cases, you will in general never have an exact point on your surface, only a point that is within some error tolerance. Even the barycentric coordinates will give you a point that, due to rounding error, lands below your exact surface roughly 50% of the time.

How far below the surface are the hit points you’re currently getting?

Certainly using geocentric coordinates for your mesh vertices is not making this problem any easier; that definitely causes loss of precision in the ray calculations. If you can transform your scene to have an origin that is nearer to your mesh or your camera, you will have a lot more precision to work with, even when using barycentric coordinates to calculate your hit point. If you use geocentric coordinates in meters, the best possible rounding error of the least significant bit is on the order of a quarter meter, and in practice I would expect to see much worse than that. I came up with that number by visiting this float calculator: https://www.h-schmidt.net/FloatConverter/IEEE754.html, entering 6371000 (Earth’s radius in meters) in the “decimal” field, and then flipping the lowest bit in the binary field.
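You can reproduce that number without the web calculator. A quick sketch of the single-precision spacing (ulp) at a given coordinate magnitude, using only the standard library (the helper name is mine):

```python
import math

def ulp32(x):
    """Spacing between adjacent single-precision floats near x."""
    # math.frexp returns (m, e) with x = m * 2**e and 0.5 <= m < 1,
    # so the float32 ulp (24-bit significand) is 2**(e - 24).
    _, e = math.frexp(abs(x))
    return 2.0 ** (e - 24)

print(ulp32(6371000.0))  # spacing near Earth's radius: 0.5 m
print(ulp32(400000.0))   # spacing 400 km from the origin: 0.03125 m
```

The best-case rounding error is half the spacing, which is where the quarter meter at Earth’s radius comes from.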

Also note that this means the smallest feature of your mesh you can even represent is ~0.5 meters across (2x the rounding error). If you’re tracing or rendering human-sized objects, you will see a lot of quantization error. At this scale you’d want to make sure your field of view is at least several kilometers wide in order to not see any artifacts in your geometry, if you have a typical output image size like 1k or 2k pixels across.

For what it’s worth, the watertight option is intended to guarantee that rays can’t accidentally sneak through a mesh when rays are close to edges in the mesh. It is related to precision problems in triangle intersection, but is not intended to nor necessarily going to improve the precision of your results.

My values are extremely close to the surface of the mesh. I’m not sure of the exact values yet (I’d probably have to compute in double precision to figure that out), but they’re likely within the 0.5 m range you specified.

I’m not tracing anything that small, though (at least not yet). I am tracing DTED level 2 terrain data with a resolution of 30 meters. I could use lower-resolution data as well.

Also, I’ve thought about moving to a coordinate system centered at a non-geocentric origin, although I’d have to rewrite a large amount of code in our model to do that. The model I am working on is built to simulate radars, so the transmitter and receiver objects can be several hundred kilometers apart (say, 400 km) with terrain loaded in between. If I were 400 km out, that would give me about 0.03125 m of precision. Do you think that would be enough?

I also tried the barycentric coordinates method and got slightly more rays received; it made a difference of about 5-10%. It still appears to be limited to about 50% of the rays that hit the terrain, though.

I wouldn’t know how much precision is enough for your sim; that’s more a question of what needs to happen with the output and whether you need to quantify your tolerances and sources of error. I just wanted to help out with some specifics on the limits of single-precision floats, since it can be quite surprising how limited they can be when your world has a range of scales that seems reasonable. Modeling artists run into this problem in 3d modeling software all the time, and they have rules of thumb for how big their worlds should be. For example, do some Googling on the terms “z-buffer precision” or “logarithmic z-buffer”. You will even find a few articles on planetary and terrain rendering that might help a bit. Even though you’re not using a z-buffer, the floating point precision issues are more or less the same.

I’m not sure I understand: how do you determine whether a ray is received? What does it mean to hit the terrain or not? Are you talking about the hit point being close to or above the surface, versus far away from or beneath it?

If you are testing whether your hit points are below the surface with high precision, you are always going to find ~50% of them below the surface (the other ~50% will be above the surface, not on it). There are (generally speaking) no points exactly on the surface when you use floats, so you need to define points to be on the surface when they are within a reasonable epsilon of distance from the surface, regardless of whether they’re above or below the actual surface. Typically in ray tracing, this epsilon shows up in the form of a positive non-zero t_min value that is used for the next ray, the one reflected or refracted from your hit point. This way when you have a hit point that is slightly below the surface, you avoid intersecting the same surface again when you send a reflected ray.
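A minimal sketch of that idea (the epsilon value and helper name are mine, not OptiX Prime API; 1.0 m is just a placeholder you’d tune to your scene’s error magnitude). When spawning the secondary ray toward the receiver, start the ray parameter at a positive t_min so a hit point that rounded to slightly below the surface cannot re-intersect the same triangle:

```python
EPSILON = 1.0  # meters; scene-dependent, chosen larger than the hit-point error

def make_secondary_ray(hit_point, receiver_pos):
    # Direction from the hit point toward the point receiver.
    d = [r - h for r, h in zip(receiver_pos, hit_point)]
    length = sum(c * c for c in d) ** 0.5
    d = [c / length for c in d]
    # t_min = EPSILON skips intersections closer than EPSILON to the
    # origin (i.e. the surface we just left); t_max stops the ray at
    # the receiver so geometry behind it is ignored.
    return {"origin": list(hit_point), "dir": d,
            "t_min": EPSILON, "t_max": length}
```

An equivalent alternative is to nudge the origin along the surface normal by the same epsilon and keep t_min at 0.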

BTW, to see how much the barycentrics improve things over origin + direction * t, measure the average distance of all hit points from the surface, in double precision.
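One way to do that measurement (a sketch, not tied to any particular API): promote the hit point and the triangle’s vertices to double precision and take the signed distance to the triangle’s plane, then average the absolute values over all hits:

```python
def signed_distance_to_plane(p, v0, v1, v2):
    """Signed distance from point p to the plane of triangle (v0, v1, v2),
    computed in double precision. Positive means p lies on the side the
    triangle normal points toward; negative means below the surface."""
    e1 = [b - a for a, b in zip(v0, v1)]
    e2 = [b - a for a, b in zip(v0, v2)]
    # Plane normal n = e1 x e2.
    n = [e1[1] * e2[2] - e1[2] * e2[1],
         e1[2] * e2[0] - e1[0] * e2[2],
         e1[0] * e2[1] - e1[1] * e2[0]]
    nlen = sum(c * c for c in n) ** 0.5
    w = [b - a for a, b in zip(v0, p)]
    return sum(wc * nc for wc, nc in zip(w, n)) / nlen
```

Comparing that average for barycentric hit points versus origin + direction * t hit points quantifies exactly how much the interpolation buys you.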

What I mean by a ray being received is that, after the original transmitted ray hits the terrain, the secondary ray doesn’t hit anything else on its path from that terrain point back to the receiver. Sorry if that was confusing! The receiver has no geometry other than being a point in space; it basically generates signal parameters such as relative power, Doppler, phase shift, and other factors for each ray, and then combines those into a time-domain response.

I’m not bouncing these rays multiple times when they intercept something, as the received power would be negligible at the distance scales being used (EM signals attenuate at a rate of R^2 per bounce). I may do primary rays from the terrain as well, then spawn more secondary (diffuse) rays from the intersected points to the receivers if those hit something.

Now I understand why you can provide t_min values for the rays. I’m fairly new to ray-tracing environments, so I wasn’t sure how to set up an epsilon for the intersection tolerance. I believe that would fix my issue with rays intersecting the same triangle they just left. Using 1 meter here would likely be sufficient for my needs and avoid most of the issues I was noticing with intersections happening in the 0-1 m range. I’ll try this tomorrow and let you know if my problem is solved.

Thank you for your help!

Setting a tolerance of 1.5-2.0 m does indeed appear to have fixed my problem. Thank you for your help again! Apart from a few expected outliers, the rays no longer intersect the triangle they were reflected from. This should be good enough for our purposes.

Excellent, I’m glad that worked out!

It sounds like you might not need anything trickier; most people get by just fine with a static epsilon. But if you have any more issues with your remaining outliers, there are more options. One thing you can check is whether the rays you send from a hit point leave at a grazing angle; you could use a dynamic next t_min value that increases for grazing-angle rays. A second option is to move the hit point from below the surface to the nearest point above the surface that is representable in single precision. You’d probably have to do that calculation in double precision, but then you could set your next t_min to 0. The main reason to get that tricky would be if you ever need strong guarantees, or if you have legitimate ray lengths that could be shorter than your epsilon. If accuracy is more important than performance, there are ways to improve it.
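For instance, a dynamic t_min could scale the base epsilon by the reciprocal of the cosine between the outgoing ray and the surface normal, since the self-intersection distance grows as the ray flattens toward the surface. Purely a sketch of one possible heuristic (the function name, cap, and clamp values are mine, not anything OptiX provides):

```python
def dynamic_t_min(base_epsilon, ray_dir, surface_normal, max_scale=100.0):
    """Grow t_min as the outgoing ray approaches a grazing angle.

    ray_dir and surface_normal are assumed unit length. As the angle
    becomes grazing, cos_theta -> 0 and the offset grows; max_scale
    caps it so near-parallel rays don't get an unbounded t_min."""
    cos_theta = abs(sum(d * n for d, n in zip(ray_dir, surface_normal)))
    scale = min(1.0 / max(cos_theta, 1e-6), max_scale)
    return base_epsilon * scale
```

A ray leaving perpendicular to the surface keeps the base epsilon, while a near-grazing ray gets a much larger offset.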

Good luck, and let us know if there’s anything else. We’re always interested in seeing the results: if you publish a paper or end up with interesting visualizations from your simulations, and you’re willing to share, we’d love to see them.