I’m having trouble with self-intersection when I use tmin values that are reasonably small.
For instance,
Ray newRay = Ray(hitPoint + newRayDirection * 0.01f, newRayDirection, rayType, 0.01f, RT_DEFAULT_MAX);
vs
Ray newRay = Ray(hitPoint + newRayDirection * 0.001f, newRayDirection, rayType, 0.001f, RT_DEFAULT_MAX);
The latter seems to give me self-intersection problems. Is there possibly something wrong inside OptiX, for instance too much padding added to the AABBs, which makes it self-intersect and causes problems? I would expect to get a few more digits of precision. Or maybe I’m doing something wrong.
That make_Ray tmin parameter is highly scene dependent. That’s the reason why it’s filled with a variable “scene_epsilon” in most examples.
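For reference, this is roughly what the SDK samples do. This is only a sketch using the sample conventions: scene_epsilon and top_object are host-declared variables from the samples, and hitPoint, newRayDirection, rayType and prd are taken from your snippet.

// scene_epsilon is set from the host and tuned per scene.
rtDeclareVariable(float, scene_epsilon, , );
rtDeclareVariable(rtObject, top_object, , );

// Inside a closest-hit program: no manual origin offset, just the tmin epsilon.
optix::Ray newRay = optix::make_Ray(hitPoint, newRayDirection, rayType, scene_epsilon, RT_DEFAULT_MAX);
rtTrace(top_object, newRay, prd);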
The tmin offset from the new ray.origin already acts along ray.direction, so there is no need to additionally shift the origin the way you do with “hitPoint + newRayDirection * 0.01”. You can omit that calculation and simply pick a bigger epsilon.
One method to make self-intersections less dependent on ray.direction is to offset the origin along the face normal on the front side (or the negative face normal on the back side) instead of offsetting only along the new ray.direction, as sketched below. That offset is still dependent on the scene geometry, and it won’t work generally for all surfaces since it can introduce gaps.
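A minimal sketch of that idea, assuming geometric_normal is an attribute written by your intersection program and kOffset is a scene-dependent magnitude you still have to pick:

rtDeclareVariable(float3, geometric_normal, attribute geometric_normal, );

// Flip the face normal to the side the new ray leaves from:
// +Ng when leaving the front side, -Ng when leaving the back side (e.g. refraction).
float3 Ng = optix::normalize(rtTransformNormal(RT_OBJECT_TO_WORLD, geometric_normal));
float3 offsetDir = optix::faceforward(Ng, newRayDirection, Ng);
float3 origin = hitPoint + offsetDir * kOffset;  // kOffset is still scene dependent

// The origin offset, not tmin, keeps the ray off the surface here.
optix::Ray newRay = optix::make_Ray(origin, newRayDirection, rayType, 0.0f, RT_DEFAULT_MAX);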
A more robust method for planar surfaces like triangles would be to track a geometry ID in the per-ray-data payload and ignore intersections with that same geometry, for example:
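Something along these lines (a sketch only; the PerRayData layout and the primIdx attribute are assumptions, not SDK code):

struct PerRayData
{
  float3 result;
  int    lastPrimID;   // primitive that spawned this ray, -1 for camera rays
};

rtDeclareVariable(PerRayData, prd, rtPayload, );
rtDeclareVariable(int, primIdx, attribute primIdx, );  // written by the intersection program

RT_PROGRAM void any_hit()
{
  // Reject hits on the primitive that spawned the ray and keep traversing.
  if (primIdx == prd.lastPrimID)
    rtIgnoreIntersection();
}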
As Detlef mentions, you can never rely fully on the shading normal for offsetting the origin. Using the geometric normal instead (or as well) is by far the most common solution and fixes most problem scenarios. It is my personal preference, since during a triangle intersection test you obtain the plane normal anyway, so the offset direction has already been calculated for you.
Remember, though, that the offset is scene dependent, as Detlef mentioned. It is also tied to float accuracy, which is not linear/constant over distance. I’m sure there is a better way (?), but I have often resorted to a variable epsilon value that increases with distance to the intersection. This accommodates the floating-point accuracy decreasing at larger values.
I have always wondered if there was a better way to determine what ‘accuracy’ a float has at a given value/distance. For example, for an intersection at t_hit = 1,000 the accuracy could be 0.01 and at 100,000 it might be 0.5, but I am not sure where you can easily find any actual numbers.
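For what it’s worth, the spacing between adjacent float values at a given magnitude (one ULP) can be queried directly, so the actual numbers are easy to print. A small host-side sketch:

#include <cfloat>   // FLT_EPSILON
#include <cmath>    // nextafterf, INFINITY
#include <cstdio>

int main()
{
  const float samples[] = { 1.0f, 1000.0f, 100000.0f };
  for (float t : samples)
  {
    // Exact gap to the next representable float above t, and the usual approximation.
    float ulp = nextafterf(t, INFINITY) - t;
    std::printf("t = %10.1f  spacing = %g  (~ t * FLT_EPSILON = %g)\n", t, ulp, t * FLT_EPSILON);
  }
  return 0;
}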
UPDATE: It turns out this is much simpler than I had envisioned. Initially I had attempted to use the log of the distance etc. to derive the epsilon, but the solution I have found that works when the view is thousands of units away, or up close, is below:
I had tried this before but used FLT_EPSILON * t_hit. Since there are two values in which numeric precision can be lost (the previously calculated t_hit for the intersection and the ray’s new t_hit), it turns out that doubling this gives pretty good results. This adjustment allows you to use as small a sceneEpsilon as possible without getting noisy shadows at distance.
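A minimal sketch of that, as I would write it (how exactly it is combined with sceneEpsilon is my reading of the above, not a verified formula):

// Double FLT_EPSILON (from <cfloat>) because precision is lost twice: in the parent
// ray's t_hit and in the new ray's own intersection distances.
float tmin = fmaxf(sceneEpsilon, 2.0f * FLT_EPSILON * t_hit);
optix::Ray newRay = optix::make_Ray(hitPoint, newRayDirection, rayType, tmin, RT_DEFAULT_MAX);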
This could mean you might not get self-shadowing on the geometry if used naively. If done at the primitive level you may get artifacts at polygon edges where the neighboring polygon is intersected. However, if you use the knowledge that you hit the same geometry to introduce a secondary (i.e. larger) tmin in your intersection program, these issues may be avoidable.