Okay, so a few thoughts here, but no super easy answer.
Speaking generally, there should always be an offset from the hit point to the origin of a shadow ray (or any other secondary ray). Out of curiosity, did you use an offset or a non-zero t_min value for the pictures above, or were you starting the shadow ray exactly at the hit point with t_min=0? I just looked, and in one of our internal curve samples we use a hard-coded t_min value of 2e-4. This is a hacky and less-than-ideal solution, but the code has survived that way for a couple of years without needing to change, so YMMV.
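Just to be concrete about what I mean by the hard-coded t_min hack, here is a minimal sketch. Vec3 and traceShadowRay() are placeholders standing in for whatever your renderer or API actually provides, not a specific interface:

```cpp
struct Vec3 { float x, y, z; };

// Placeholder: returns true if any occluder is hit in (tMin, tMax).
bool traceShadowRay(const Vec3& origin, const Vec3& dir, float tMin, float tMax);

// The hacky version: start the shadow ray exactly at the hit point and rely
// on a small hard-coded tMin (2e-4 in our internal curve sample) to skip
// self-intersections near the ray origin.
constexpr float kShadowTmin = 2e-4f;

bool litBySample(const Vec3& hitPoint, const Vec3& unitDirToLight, float distToLight)
{
    return !traceShadowRay(hitPoint, unitDirToLight, kShadowTmin, distToLight);
}
```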
There are two ideas that should probably both be employed to create a better offset in the case of cubic curves. First, the offset should move in the direction of the surface normal at the hit point; the goal is to place the new origin close to the hit point while guaranteeing it is some epsilon outside the curve. (I’m sure you know that the calculated hit point can end up inside the curve, or slightly under the surface, due to floating-point error and the limited precision of the intersector and of the hit-point calculation.) Second, the offset should be proportional to the curve radius, since the intersector errors that cause self-intersection problems are likely proportional to the radius.
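A sketch of those two ideas combined, reusing the Vec3 placeholder from the sketch above; kOffsetScale is just an illustrative constant I made up, not a recommended value:

```cpp
Vec3 scaleAdd(const Vec3& p, const Vec3& n, float s)   // p + s * n
{
    return { p.x + s * n.x, p.y + s * n.y, p.z + s * n.z };
}

constexpr float kOffsetScale = 1e-3f;   // tuning factor, scene dependent

Vec3 shadowRayOrigin(const Vec3& hitPoint,
                     const Vec3& unitNormal,   // surface normal at the hit point
                     float curveRadius)        // curve radius at the hit parameter
{
    // Offset along the normal, proportional to the curve radius, so the new
    // origin sits some epsilon outside the curve even if the computed hit
    // point landed slightly under the surface.
    return scaleAdd(hitPoint, unitNormal, kOffsetScale * curveRadius);
}
```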
With those two things, a small hard-coded factor should be reasonable. Mainly, you will want to make sure the factor is large enough to eliminate the self-intersection problems while staying small enough to avoid visible light leaks. It really depends on your scene, but with curves, light leaks are sometimes less of a problem than with triangle meshes, since curves are less likely to overlap and form corners where leaks become visible.
Now, if you do these things, your new image will end up looking different from your 2nd image above (the reference image without the early-terminate flag). This is because adding the offset introduces (physically correct) self-shadowing of the curves; it will put into shadow the larger specular highlights that sit on the back side of the curves relative to the light. You probably know this already, but for completeness and for the benefit of others reading, I’ll mention one small point about casting shadow rays you might want to consider, if you aren’t doing it already: most people call a surface shadowed if the surface normal points away from the light-source sample. In other words, you wouldn’t even cast a shadow ray in the first place if the dot product between the vector toward the light and the surface normal at your hit point is negative. If that trick were employed in the images above, you might not have noticed any issues with the early-terminate flag, since it looks like most or all of the artifacts are on the side that should be in shadow.
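In code, that trick is just a dot-product test before you bother tracing, again reusing the Vec3 placeholder and with names of my own choosing:

```cpp
float dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

bool needsShadowRay(const Vec3& unitNormal, const Vec3& unitDirToLight)
{
    // If the normal faces away from the light sample (dot <= 0), the point
    // shades itself; skip the shadow ray entirely and treat it as shadowed.
    return dot(unitNormal, unitDirToLight) > 0.0f;
}
```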
I hope that helps. Please let me know whether adding an offset actually solves the problem, or whether I’m speculating and ranting about the wrong issue. If the offset does work, we would still be interested to hear whether you had any trouble using it, and whether you would prefer that the precision of the shadow-ray curve intersector exactly matched the camera-ray curve intersector. We believe that wouldn’t eliminate the need for an offset between the camera-ray hit point and the shadow-ray origin, but it would probably reduce the magnitude of the offset needed. The tradeoff for the extra precision is slightly higher run time for the intersector: we can improve the precision of the curve intersector’s shadow-ray calculations at the cost of a little performance, and we are open to tuning that balance based on user priorities like yours.
–
David.