I am using OptiX 7 as a non-rendering computation tool in a robotics application. After finding the first hit point of a ray, I get the (x, y, z) of the hit point, and then I want to feed it into a calibrated projection function (x, y, z) -> (u, v) onto the imaging plane. It's an affine transform plus a rational projection function. Is there any way I can accomplish this on the GPU?
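For concreteness, the kind of projection I mean can be sketched like this. The 3x4 affine matrix layout and the simple perspective divide below are placeholders for my actual calibration data; a plain function like this could be compiled into an OptiX device program, with the coefficients passed through the launch parameters:

```cpp
#include <cassert>
#include <cmath>

// Hypothetical calibrated projection: an affine transform of the hit point
// followed by a rational (perspective-style) division. The coefficient
// layout is a placeholder for real calibration data.
struct Projection {
    float a[12];   // 3x4 affine: rows map (x, y, z, 1) -> (px, py, pz)
};

// Returns true and writes (u, v) when the projected depth is positive.
bool projectToImagePlane(const Projection& P, float x, float y, float z,
                         float& u, float& v) {
    const float* a = P.a;
    float px = a[0]*x + a[1]*y + a[2]*z  + a[3];
    float py = a[4]*x + a[5]*y + a[6]*z  + a[7];
    float pz = a[8]*x + a[9]*y + a[10]*z + a[11];
    if (pz <= 0.0f) return false;      // behind the image plane
    u = px / pz;                       // rational part: divide by depth
    v = py / pz;
    return true;
}
```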

I am closely following the examples in the rtx_compute_examples repository on GitHub.

Thanks for any help!

After finding the first hit point of a ray, I get the (x, y, z) of the hit point

Could you take one step back and explain how you’re calculating the primary ray origin and direction which leads to this first hit point?

If this depends on the affine transform and rational projection function (effectively a camera implementation with orientation and projection), then this is trivial, because the primary ray generation would already map to the 2D image plane by construction of the rays.
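That pattern looks roughly like the sketch below: the 2D launch index is the image-plane coordinate, so every hit found by that ray already belongs to pixel (u, v). A simple pinhole camera at the origin stands in here for the calibrated model, and it's shown as a host function; inside an OptiX raygen program, optixGetLaunchIndex() would provide (u, v):

```cpp
#include <cassert>
#include <cmath>

// Minimal primary-ray generation: the pixel index (u, v) defines the ray,
// so hit results map back to the image plane by construction.
struct Ray { float ox, oy, oz, dx, dy, dz; };

Ray generatePrimaryRay(int u, int v, int width, int height) {
    // Pixel center in normalized device coordinates, [-1, 1]
    float sx = 2.0f * (u + 0.5f) / width  - 1.0f;
    float sy = 2.0f * (v + 0.5f) / height - 1.0f;
    // Hypothetical camera: at the origin, looking down +z, unit focal length
    float dx = sx, dy = sy, dz = 1.0f;
    float len = std::sqrt(dx*dx + dy*dy + dz*dz);
    return { 0.0f, 0.0f, 0.0f, dx/len, dy/len, dz/len };
}
```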

But if instead you mean you have a random point cloud of world-space hit results from some ray distribution independent of the orientation and projection of the image plane, then you would need to have these (x, y, z) results inside some device buffer and implement the projection onto that image plane starting from the hit points.
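That projection step itself is straightforward. On the GPU it would be a CUDA kernel with one thread per hit point; the host loop below stands in for that kernel, and the 3x4 affine matrix plus perspective divide are again placeholders for the calibrated model:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Project a buffer of world-space hit points onto the image plane.
// On the GPU this loop body would be one CUDA thread per point.
struct PixelCoord { float u, v; bool valid; };

std::vector<PixelCoord> projectHits(const std::vector<float>& pts, // xyz-interleaved
                                    const float a[12]) {           // 3x4 affine
    std::vector<PixelCoord> out(pts.size() / 3);
    for (size_t i = 0; i < out.size(); ++i) {
        float x = pts[3*i], y = pts[3*i + 1], z = pts[3*i + 2];
        float px = a[0]*x + a[1]*y + a[2]*z  + a[3];
        float py = a[4]*x + a[5]*y + a[6]*z  + a[7];
        float pz = a[8]*x + a[9]*y + a[10]*z + a[11];
        bool ok = pz > 0.0f;                 // discard points behind the plane
        out[i] = { ok ? px/pz : 0.0f, ok ? py/pz : 0.0f, ok };
    }
    return out;
}
```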

Since that would be a scatter algorithm, where multiple hit positions could write to the same image-plane (u, v) coordinates, this could get more or less complicated depending on the requirements.

For example, if it should just accumulate all hits, that's a simple atomicAdd; if it should only track the nearest hit, that requires some depth sorting; and if it should only track the hits actually visible from the image plane, then another intersection test against the scene geometry would need to be done.
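The "nearest hit wins" case, for instance, amounts to keeping a per-pixel minimum depth. The host loop below sketches the logic; on the GPU the std::min would become an atomicMin on a 32-bit buffer (non-negative IEEE floats compare correctly when reinterpreted as unsigned int, the usual workaround since CUDA has no float atomicMin):

```cpp
#include <cassert>
#include <algorithm>
#include <limits>
#include <vector>

// Scatter projected hits into a depth buffer, keeping the nearest hit per
// pixel. On the GPU: atomicMin on the uint32 view of the depth buffer.
struct ProjectedHit { int u, v; float depth; };

void scatterNearest(const std::vector<ProjectedHit>& hits,
                    std::vector<float>& depthBuf, int width) {
    for (const ProjectedHit& h : hits) {
        float& d = depthBuf[h.v * width + h.u];
        d = std::min(d, h.depth);   // multiple hits may target the same pixel
    }
}
```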

If you provide a more complete description of your algorithm and requirements, it would be easier to answer how to do that with OptiX and CUDA.