Most of the examples use a pinhole camera description consisting of the camera position (“eye”) and three vectors U, V, W which span the upper-right (mathematically the first) quadrant of a left-handed coordinate system, where U points to the right, V points up, and W points forward.
That upper-right quadrant spans the normalized device coordinate range [0.0, 1.0] for U and V.
For the full camera plane, all four quadrants, the normalized device coordinate range is [-1.0, 1.0], and that is where the * 2.0f - 1.0f in the normalized device coordinate calculation comes from.
That UVW coordinate system doesn’t need to use normalized vectors. (The vectors don’t even need to be perpendicular, so this could even define a sheared viewing frustum, which can come in handy for stereoscopic view frusta with less foreshortening difference between the eyes. I digress.)
The launch index (0, 0) is then usually at the bottom-left corner of the camera plane, which means the lower-left origin starts at the smaller addresses of the resulting image memory, matching the OpenGL texture image orientation.
Looks like this (with the legacy OptiX API terms):
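A minimal sketch of that ray generation, assuming the legacy OptiX variable semantics and hypothetical sysCamera* variable names set from the host:

rtDeclareVariable(uint2, theLaunchIndex, rtLaunchIndex, );
rtDeclareVariable(uint2, theLaunchDim, rtLaunchDim, );

rtDeclareVariable(float3, sysCameraPosition, , ); // the "eye"
rtDeclareVariable(float3, sysCameraU, , ); // right
rtDeclareVariable(float3, sysCameraV, , ); // up
rtDeclareVariable(float3, sysCameraW, , ); // forward

RT_PROGRAM void raygeneration()
{
  // Sample at the pixel center, then map [0, 1] to the NDC range [-1, 1].
  const float2 pixel = make_float2(theLaunchIndex) + make_float2(0.5f);
  const float2 ndc = (pixel / make_float2(theLaunchDim)) * 2.0f - 1.0f;

  const float3 origin    = sysCameraPosition;
  const float3 direction = optix::normalize(ndc.x * sysCameraU + ndc.y * sysCameraV + sysCameraW);

  // ... shoot the primary ray with rtTrace() from here.
}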
Maybe this code is clearer:
https://github.com/NVIDIA/OptiX_Apps/blob/master/apps/intro_runtime/shaders/raygeneration.cu#L52
https://github.com/NVIDIA/OptiX_Apps/blob/master/apps/intro_runtime/shaders/lens_shader.cu#L40
It’s unclear what your raysCoords actually are: positions or directions?
Also, really double precision?!
Let’s assume these are actually float. Then I would first change the definition to use float3 types to make the rest of the code easier.
You would upload them into a buffer on the GPU device and put the device pointer to that buffer into the launch parameters to make them accessible to any program in your pipeline, here the raygen program.
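A minimal sketch of that, assuming the CUDA runtime API and hypothetical names (LaunchParams, raysCoords, and the xs/ys/zs double arrays stand in for your actual data):

#include <cuda_runtime.h>
#include <vector>

// Shared between host and device code, e.g. in the launch parameters:
struct LaunchParams
{
  float3* raysCoords;          // device pointer to the 10,000 ray coordinates
  float3  my_ray_coord_origin;
  // ... the usual fields (output buffer, traversable handle, etc.)
};

// Host side: convert the double data to float3 once and upload it.
std::vector<float3> coords(10000);
for (size_t i = 0; i < coords.size(); ++i)
{
  coords[i] = make_float3(float(xs[i]), float(ys[i]), float(zs[i]));
}

float3* d_coords = nullptr;
cudaMalloc(reinterpret_cast<void**>(&d_coords), coords.size() * sizeof(float3));
cudaMemcpy(d_coords, coords.data(), coords.size() * sizeof(float3), cudaMemcpyHostToDevice);

launchParams.raysCoords = d_coords; // uploaded to the device with the rest of the launch parameters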
If these are normalized directions originating from a single point in world space (doesn’t look like it, they aren’t normalized), then you would simply put these raysCoords into the direction of each primary ray.
const unsigned int linear_launch_index = optixGetLaunchIndex().x; // e.g. with a 1D launch dimension

origin = params.my_ray_coord_origin;
direction = params.raysCoords[linear_launch_index]; // This assumes launch dimension == raysCoords size (here 10,000).
But if the raysCoords are actually positions in world space, what exactly do you want to do with them?
Do you want to shoot rays from a single world position to these world coordinates?
Then it looks like this:
origin = params.my_ray_coord_origin;
direction = normalize(params.raysCoords[linear_launch_index] - origin); // Assumes none of the raysCoords match the origin, otherwise the normalize() produces NaN.
Or do you need to project the raysCoords world positions into an existing camera setup?
That would require a projection into the camera coordinate system, then onto the camera plane’s normalized device coordinates, to determine if, and then which, pixel is hit. After that it depends on how you size your launch dimension, i.e. whether you need individual results per ray or accumulated results per pixel. The latter would be a scatter algorithm requiring atomics.
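For the orthogonal (but not necessarily normalized) UVW case, a minimal sketch of that projection inside a device program; a non-orthogonal basis would need the inverse of the 3x3 matrix [U V W] instead of the three dot products, and all parameter names here are hypothetical:

// Vector from the camera position to the world position to project.
const float3 d = params.raysCoords[linear_launch_index] - params.camera_eye;

// Decompose d = a * U + b * V + c * W; with an orthogonal basis these are plain projections.
const float a = dot(d, params.camera_u) / dot(params.camera_u, params.camera_u);
const float b = dot(d, params.camera_v) / dot(params.camera_v, params.camera_v);
const float c = dot(d, params.camera_w) / dot(params.camera_w, params.camera_w);

if (0.0f < c) // The position lies in front of the camera.
{
  const float2 ndc = make_float2(a / c, b / c); // Perspective divide onto the camera plane.

  if (-1.0f <= ndc.x && ndc.x < 1.0f && -1.0f <= ndc.y && ndc.y < 1.0f)
  {
    // Map NDC [-1, 1] back to integer pixel coordinates (lower-left origin).
    const unsigned int px = static_cast<unsigned int>((ndc.x * 0.5f + 0.5f) * float(params.width));
    const unsigned int py = static_cast<unsigned int>((ndc.y * 0.5f + 0.5f) * float(params.height));

    // Scatter: multiple rays can land on the same pixel, hence the atomic accumulation.
    atomicAdd(&params.accumBuffer[py * params.width + px], result); // result = whatever float value this ray contributes
  }
}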