I could try to explain the background:
I need to see a ray as a “ray tube”, so the ray / ray tube has a cross section. A pinhole camera launches “diverging” rays, so the cross section of each launched ray tube increases with distance from the “eye” of the camera. In contrast, the rays of an orthographic camera do not diverge (“pencil beam rays”).
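To make the two camera models concrete, here is a minimal CPU-side sketch (function names are my own, not OptiX API): a pinhole ray tube subtends a small solid angle, so its cross section grows quadratically with distance, while an orthographic pencil beam keeps its initial cross section.

```cpp
#include <cmath>

// Pinhole camera: each ray tube subtends a small solid angle dOmega,
// so its cross-sectional area grows quadratically with distance d
// from the eye point.
double pinholeTubeArea(double dOmega, double d) {
    return dOmega * d * d;
}

// Orthographic camera: parallel "pencil beam" rays, the cross section
// stays constant regardless of distance.
double orthoTubeArea(double initialArea, double /*d*/) {
    return initialArea;
}
```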
When a ray hits a planar surface, its divergence does not change. But when a ray hits a convex surface (a sphere, for example), the reflected ray diverges: the cross section of the reflected ray tube increases with distance from the hit point.
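The standard geometrical-optics way to track this is a divergence (spreading) factor built from the two principal wavefront radii of the ray tube. A sketch, written with curvatures kappa = 1/rho instead of radii so the planar case (rho going to infinity) is simply kappa = 0:

```cpp
#include <cmath>

// Geometrical-optics divergence (spreading) factor of a ray tube a
// distance s past a reference point, given the two principal wavefront
// curvatures kappa1 = 1/rho1 and kappa2 = 1/rho2 at that point.
// A planar wavefront (kappa1 = kappa2 = 0) yields 1 at every distance;
// positive curvatures (diverging tube) yield a factor that decays
// with s, which is what scales the field amplitude along the tube.
double divergenceFactor(double kappa1, double kappa2, double s) {
    return 1.0 / std::sqrt((1.0 + s * kappa1) * (1.0 + s * kappa2));
}
```

For example, a tube with both principal radii equal to 1 m has divergence factor 0.5 after travelling 1 m.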
For electromagnetics, I do not use the camera to generate an image. The camera launches rays, and we compute the scattered field by integrating over the footprints of the rays at the hit points (the projected cross-sectional areas of the ray tubes at the hit points). So I think the camera launches rays and copies / adds the scattered-field contributions of the rays (from the “PerRayData”) into a buffer.
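A hedged sketch of what that payload and accumulation step could look like (the names `PerRayData`, `scatteredField` and `footprintArea` are my assumptions for illustration, not OptiX API; on the GPU the sum would live in the ray-generation program):

```cpp
#include <complex>
#include <cstddef>
#include <vector>

// Hypothetical per-ray payload: each ray carries its complex scattered
// field contribution and the projected footprint area at the hit point
// back to the ray-generation program.
struct PerRayData {
    std::complex<double> scatteredField; // field contribution of this ray
    double footprintArea;                // projected tube cross section at the hit
};

// Instead of writing a pixel color, ray generation coherently adds the
// complex contribution into an output buffer (CPU-side sketch).
void accumulate(std::vector<std::complex<double>>& buffer,
                std::size_t index, const PerRayData& prd) {
    buffer[index] += prd.scatteredField;
}
```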
As mentioned, I need the face curvature radii at the hit point in order to determine the divergence of the reflected ray. Examples of face curvatures: the face radii of a planar surface are infinite; the radii of a spherical surface equal the radius of the sphere.
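One practical way to avoid the infinite radii of the planar case is to store curvatures kappa = 1/rho, so a plane is kappa = 0. A minimal sketch of the curvature update on reflection, restricted to normal incidence on a sphere where the scalar mirror equation applies (oblique incidence couples the two principal directions and needs the full wavefront-matrix update, omitted here):

```cpp
// Curvature update for a ray tube reflected at normal incidence off a
// sphere of radius R, using kappa = 1/rho so a planar incident
// wavefront is just kappaIncident = 0:
//   kappa_reflected = kappa_incident + 2 / R   (mirror equation)
// e.g. a plane wave reflected off a sphere leaves with wavefront
// radius R / 2 (the familiar mirror focal length).
double reflectedCurvature(double kappaIncident, double sphereRadius) {
    return kappaIncident + 2.0 / sphereRadius;
}
```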
Yes, and I want to use a triangular mesh with vertex normals to describe the geometry. I think OptiX already has the necessary information internally, because OptiX is able to interpolate the vertex normals.
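For reference, the interpolation itself is a one-liner over the triangle barycentrics that OptiX reports for a hit (in OptiX 7 via `optixGetTriangleBarycentrics()` in the hit program); a CPU-side sketch of that computation, with my own `Vec3` type:

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Barycentric interpolation of the three vertex normals of a triangle,
// given the barycentric coordinates (u, v) of the hit point; the result
// is renormalized because a blend of unit vectors is generally not unit
// length.
Vec3 interpolateNormal(const Vec3& n0, const Vec3& n1, const Vec3& n2,
                       double u, double v) {
    const double w = 1.0 - u - v;
    const Vec3 n{ w * n0.x + u * n1.x + v * n2.x,
                  w * n0.y + u * n1.y + v * n2.y,
                  w * n0.z + u * n1.z + v * n2.z };
    const double len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    return Vec3{ n.x / len, n.y / len, n.z / len };
}
```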