I’m new to OptiX and CUDA. In my computation I’m using SPH methods and do not need rendering or triangulation. My scene consists only of a group of points with known coordinates and radii, plus the initial positions and directions of the rays. The reflection normal is not the geometric sphere normal but is defined by the neighboring particle positions and the coordinates of the incident point, so before each reflection I need the incident point’s coordinates to calculate the reflection normal for that hit. Is there an OptiX method that can return the positions of the incident points of each hit without doing the whole trace? Or how should I design the pipeline of OptiX API calls?
Thanks very much!

My scene consists only of a group of points with known coordinates and radii, plus the initial positions and directions of the rays.

I’m assuming SPH stands for Smoothed-Particle Hydrodynamics.

No, OptiX does not provide any method to calculate that specific reflection normal.
OptiX is a general-purpose GPU ray casting SDK, and you’re responsible for implementing the desired algorithms.

OptiX only provides the initial ray-primitive intersection result, which requires some geometric primitive to intersect with.
For example, if you represent the “point coordinates with radius” as sphere primitives and intersect rays with them, that provides an initial hit point.

Determining the reflection normal from the incident point (I assume you mean the point where the ray hit the particle?) and its neighbors would require a neighbor search algorithm you would need to implement yourself.

That would most likely require a spatial lookup structure over your particles to find the neighbors quickly.
Things like a kD-tree or a spatial hash map for a kNN search, as used in photon-mapping final-gather passes, come to mind.

Thanks for your suggestion.
One more question: for the dynamically updated reflection normals, I need the coordinates of the hit points between the rays and the particles. Does OptiX have an API that returns the spatial location of the hit point when each intersection happens? Once I have those coordinates, I can determine the new reflection normal and then continue the reflection.

Yes, the whole purpose of ray tracing is determining the intersection distance along a ray where a geometric primitive surface was hit.

In OptiX 7 device code you shoot a ray with the optixTrace call. It takes the ray origin, direction, and t_min and t_max values for the interval on which to test intersections, plus some more arguments and payload registers to exchange per-ray data between OptiX program domains.

If the ray hits something, OptiX invokes your closest hit program, in which you can calculate the hit coordinate as you like, for example with:
float3 world_position = optixGetWorldRayOrigin() + optixGetWorldRayDirection() * optixGetRayTmax();

Which closest hit program inside the pipeline is used is determined by the Shader Binding Table (SBT) you construct. What else you calculate inside it, like shading normals, is completely your responsibility.

Continuing from the determined hit point would normally be done by returning the next ray’s origin and direction to the ray generation program, which can then continue the ray path through the scene by shooting the next ray with that information until it misses all geometry or a pre-defined path length is exceeded. That’s an iterative path tracer, which is preferable to recursive algorithms for memory and performance reasons.

If you want to use OptiX’s built-in sphere primitives to represent the particles, have a look at the optixSphere example inside the OptiX SDK 7.6.0, which shows how to render a single sphere. It won’t get easier than that.

Inside the optixSphere.cu file, line 144 calculates the closest hit point in world space exactly as I described above.

Thanks very much for your guidance! I have implemented this method, but the program gets killed automatically after running several times. I’m still analyzing that.