Camera origin and direction in Cartesian coordinates

I am trying to set the camera origin per launch index, because I need to start rays at different origins.

Looking at the optixTriangle example, I tried to set origin.x and origin.y in this way:

origin.x = static_cast<float>( idx.x ) / static_cast<float>( dim.x ) - 0.5f;
origin.y = static_cast<float>( idx.y ) / static_cast<float>( dim.y );

I assume the range to cover is a box with range X [-0.5f, 0.5f] and Y [0.0f, 1.0f], but the final plot shows the box rotated.

I guess origin.x and origin.y need to be multiplied by some vectors to get the Cartesian coordinates?



The launch index [0, 0] in the OptiX examples normally starts at the lower-left corner of the rendered image.
That matches what OpenGL glTex(Sub)Image2D() calls expect, which means the texel data can be uploaded straight from an OptiX output buffer because it has the same linear memory layout.

Please have a look at page 18 of this GTC 2018 presentation which shows a picture of the usual pinhole camera layout in OptiX examples: OptiX Introduction

Do you mean you want to have a parallel projection of the rays?
But using the original pinhole camera origin and U,V,W vector description?

In that case you would need to change this routine inside optixTriangle

static __forceinline__ __device__ void computeRay( uint3 idx, uint3 dim, float3& origin, float3& direction )
{
    const float3 U = params.cam_u;
    const float3 V = params.cam_v;
    const float3 W = params.cam_w;
    const float2 d = 2.0f * make_float2(
            // Mind that this is the lower left corner of each pixel.
            // This should better be (static_cast<float>( idx.x ) + 0.5f) and (static_cast<float>( idx.y ) + 0.5f) to shoot through the pixel center.
            static_cast<float>( idx.x ) / static_cast<float>( dim.x ),
            static_cast<float>( idx.y ) / static_cast<float>( dim.y )
            ) - 1.0f;

    origin    = params.cam_eye; // All rays start at the camera position.
    direction = normalize( d.x * U + d.y * V + W ); // The direction of each ray per launch index goes through the pixel center (when changing "d" as noted above).
}

to something like this:

    origin    = params.cam_eye + d.x * U + d.y * V; // Shift the origin parallel to the pinhole camera projection plane.
    direction = normalize( W ); // Shoot parallel rays in view direction.

Note that the U,V,W vectors are not normalized and not necessarily orthogonal.
That pinhole camera representation would allow sheared views as well, but that is normally not used inside the OptiX examples.


Thanks! This seems to work for me.
If I get the intersection point on a triangle in barycentric coordinates, how can I convert it to Cartesian coordinates?

Take the example of optixTriangle.
The triangle is:
vertices = {{-0.5, -0.5}, {0.5, -0.5}, {0, 0.5}} -> v0, v1, v2
If the barycentric coordinates are a float2 {a, b}
The final coordinate is:
a * {-0.5, -0.5} + b * {0.5, -0.5} + (1 - a - b) * {0, 0.5} <- is this order correct?
My test seems to show that the order is:
a * v1 + b * v2 + (1 - a - b) * v0

In the example, the payload is set to {a, b, 1.0f}; not sure why the third dimension is 1.0f?

Edit: I guess I found some explanation here. Please confirm the case?
" These two equations are perfectly similar but it’s hard to understand why we usually write (1−u−v)∗A+u∗B+v∗C instead of u∗A+v∗B+(1−u−v)∗C without the above explanation."

Yes, that second formula is the correct one.
The barycentric coordinates reported by OptiX’s built-in triangle intersection are beta and gamma.
Please compare with the OptiX SDK examples using optixGetTriangleBarycentrics().
That was the same for all previous OptiX versions’ triangle intersection routines as well.

In the example, the payload is set to be {a, b, 1.0f}, not sure why the third dimension is 1.0f?

That is just a hardcoded blue value for the output result.z component, stored as a color to match the behavior of the miss program returning a color. See line 93 inside the optixTriangle example.