Hey all,
Lurking on this forum has given me a wealth of knowledge about OptiX - time for me to ask my own questions! I have a few, so let me try to break down my application:
I have a need to intersect large numbers of rays (100,000 to 10,000,000) with complex surfaces to very high precision. I have a very low number of surfaces in my scene (<100). The surfaces are defined by complex spline shapes, so there is no closed-form intersection algorithm. In my current CPU version I iteratively solve for a zero crossing between my ray parametrization (ray distance T) and my parametric surface.
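For concreteness, the core of the CPU solve looks roughly like this (a simplified, Newton-style sketch; surface_f / surface_df are stand-ins for my actual spline evaluation and its derivative along the ray, and the real solver differs in the details):

```cpp
#include <cmath>

struct Ray { double ox, oy, oz, dx, dy, dz; };

// Stand-ins for my spline evaluation: signed distance from the point at
// ray parameter t to the surface, and its derivative with respect to t.
double surface_f(const Ray& ray, double t);
double surface_df(const Ray& ray, double t);

// Newton-style search for a zero crossing of f(t) along the ray.
bool intersect_surface(const Ray& ray, double t_guess, double& t_hit)
{
    const int    max_iters = 32;
    const double tol       = 1e-12;

    double t = t_guess;
    for (int i = 0; i < max_iters; ++i)
    {
        const double f = surface_f(ray, t);
        if (std::fabs(f) < tol) { t_hit = t; return true; }

        const double df = surface_df(ray, t);
        if (std::fabs(df) < 1e-300) break;   // nearly flat, give up
        t -= f / df;                         // Newton step
    }
    return false;   // did not converge
}
```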
This is not a real-time application, but I’m hoping to see better throughput on the GPU than on the CPU. I’d like to see if I can leverage OptiX, but I’m not quite sure what the right formulation is. Is there prior art for solving an intersection like this on a per-ray basis? I’m worried about significant performance degradation from iterating inside the intersection program.
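To make that worry concrete, the naive formulation I have in mind is a custom primitive whose intersection program runs the same per-ray iteration, something like this (hypothetical OptiX 7-style sketch; splineSolve stands in for the iterative root find):

```cuda
#include <optix.h>

// Stand-in for the iterative solver (not shown): returns true and the hit
// distance if the ray crosses the spline surface within [tmin, tmax].
__device__ bool splineSolve(float3 orig, float3 dir,
                            float tmin, float tmax, float* t_hit);

extern "C" __global__ void __intersection__spline()
{
    // Ray in object space, single precision as provided by OptiX.
    const float3 orig = optixGetObjectRayOrigin();
    const float3 dir  = optixGetObjectRayDirection();
    const float  tmin = optixGetRayTmin();
    const float  tmax = optixGetRayTmax();

    float t_hit;
    if (splineSolve(orig, dir, tmin, tmax, &t_hit))
    {
        // Report the hit; surface parameters (u,v) could also be passed
        // along as attribute registers here if the shading needs them.
        optixReportIntersection(t_hit, /*hitKind=*/0);
    }
}
```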
Another option I’ve considered is to transform my surface into an approximate triangular mesh, intersect with the mesh, and then refine the intersection on the true surface, either on the CPU or the GPU. This would hopefully allow good convergence with a small, fixed number of zero-search iterations. I’ve yet to quantify the performance delta here.
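The refinement I’m picturing would take the proxy-mesh hit distance as the starting guess and run a small, fixed number of double precision iterations, e.g. in a post-process kernel over the hit results (again a hypothetical sketch; surfaceF / surfaceDF stand in for my spline evaluation):

```cuda
// Post-process refinement kernel (sketch). hits[] holds the float hit
// distances from the proxy-mesh trace (< 0 means miss); rays[] are the
// original rays kept in double precision on the side.
struct DRay { double3 o; double3 d; };

__device__ double surfaceF (const DRay& r, double t);
__device__ double surfaceDF(const DRay& r, double t);

extern "C" __global__ void refineHits(const DRay* rays, const float* hits,
                                      double* tRefined, int numRays)
{
    const int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numRays || hits[i] < 0.0f) return;   // skip misses

    double t = static_cast<double>(hits[i]);      // start from the mesh hit
    for (int k = 0; k < 4; ++k)                   // small fixed iteration count
        t -= surfaceF(rays[i], t) / surfaceDF(rays[i], t);

    tRefined[i] = t;
}
```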
Question 1: Any advice on how to approach formulating this (admittedly nonstandard) problem? Or am I barking up the wrong tree with OptiX?
Question 2: I’ve demonstrated that single-precision float is insufficient for the final output of this application. Prior posts have indicated that while double precision math is possible in the CUDA programs for intersection, etc., the ray formulation itself is always single precision. Can I work around this by adding a double precision origin and direction to the arbitrary per-ray data, and updating those separately from the single-precision values (perhaps in a refinement stage of the mesh approximation I proposed above)?
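To make Question 2 concrete, this is the kind of arrangement I’m imagining (hypothetical sketch): keep the full precision origins/directions in side buffers indexed by launch index, trace with the truncated float ray, and only consult the doubles in a later refinement pass. The launch parameter names here are mine, not anything from the SDK.

```cuda
#include <optix.h>

// Launch parameters (sketch): alongside the accel handle and an output
// buffer, carry pointers to per-ray double precision origin/direction.
struct Params
{
    OptixTraversableHandle handle;
    const double3*         rayOriginD;   // full precision origins
    const double3*         rayDirD;      // full precision directions
    float*                 tOut;
};
extern "C" __constant__ Params params;

extern "C" __global__ void __raygen__rg()
{
    const unsigned int i = optixGetLaunchIndex().x;

    // Truncate to float for traversal; OptiX only accepts float rays.
    const double3 od = params.rayOriginD[i];
    const double3 dd = params.rayDirD[i];
    const float3  o  = make_float3((float)od.x, (float)od.y, (float)od.z);
    const float3  d  = make_float3((float)dd.x, (float)dd.y, (float)dd.z);

    unsigned int p0 = __float_as_uint(-1.0f);   // hit distance, set by CH/miss
    optixTrace(params.handle, o, d, 0.0f, 1e16f, 0.0f,
               OptixVisibilityMask(255), OPTIX_RAY_FLAG_NONE,
               0, 1, 0, p0);

    // A refinement pass would re-read the double precision origin/direction
    // for this same launch index i and polish the hit distance.
    params.tOut[i] = __uint_as_float(p0);
}
```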