OptiX ray direction

I find that in several examples the ray direction is defined as below.

float2 d = make_float2(launch_index) / make_float2(launch_dim) * 2.f - 1.f;
float3 ray_direction = normalize(d.x*U + d.y*V + W); // U=(1,0,0); V=(0,1,0); W=(0,0,-1)

What I understand is:

d is a point corresponding to the thread index in the region [-1,1]*[-1,1], and that makes ray_direction one of the many directions of a lower hemisphere.

But my intuition says ray_direction = pixel position - eye. In the above case (actually from OptiX SDK sample 2), how is the ray_direction dependent on the eye position?

I’m quite sure I’m wrong. But could you help me understand ray tracing correctly? My misunderstanding most probably comes from the Wikipedia ray tracing (graphics) page.

Thanks!

In sample2 the camera used is pinhole_camera.cu, which defines the rays like this:

float2 d = make_float2(launch_index) / make_float2(launch_dim) * 2.f - 1.f;
float3 ray_origin = eye;
float3 ray_direction = normalize(d.x*U + d.y*V + W);

Yes, and launch index (0, 0) is the lower-left pixel in that 2D range (which matches OpenGL’s texture coordinate origin, so that the final display with a texture blit works automatically).

Not really.
The vectors UVW define a plane with its center being length(W) away from the origin.
With the eye set as ray origin that builds a pyramid or view frustum.
The vectors UVW don’t need to be normalized or orthogonal as they are in your example; you can build any sheared view frustum with this.
The ray direction is normalized because that’s required by all following calculations.
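
To make that concrete, here is a minimal sketch of such a ray generation program in the classic (pre-OptiX 7) API, loosely following the SDK’s pinhole_camera.cu. Treat it as an illustration rather than the exact SDK source; the declarations of scene_epsilon, radiance_ray_type and top_object follow the SDK sample conventions and may differ in your project.

#include <optix.h>
#include <optixu/optixu_math_namespace.h>

using namespace optix;

rtDeclareVariable(float3, eye, , );
rtDeclareVariable(float3, U, , );
rtDeclareVariable(float3, V, , );
rtDeclareVariable(float3, W, , );
rtDeclareVariable(float, scene_epsilon, , );
rtDeclareVariable(unsigned int, radiance_ray_type, , );
rtDeclareVariable(rtObject, top_object, , );
rtDeclareVariable(uint2, launch_index, rtLaunchIndex, );
rtDeclareVariable(uint2, launch_dim, rtLaunchDim, );

RT_PROGRAM void pinhole_camera()
{
  // Map the launch index to [-1, 1] x [-1, 1] on the UV plane.
  float2 d = make_float2(launch_index) / make_float2(launch_dim) * 2.f - 1.f;

  // The ray starts at the eye; the direction is a pure vector built from UVW.
  float3 ray_origin    = eye;
  float3 ray_direction = normalize(d.x*U + d.y*V + W);

  optix::Ray ray = optix::make_Ray(ray_origin, ray_direction,
                                   radiance_ray_type, scene_epsilon, RT_DEFAULT_MAX);
  // ... initialize a per-ray data struct and call rtTrace(top_object, ray, prd);
}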


I’m not clear about ray tracing basics.

ray_direction = normalized(pixel position - eye) ?
in Optix, one thread treats one pixel?
in UVW, W determines ‘center’ of a ‘rectangle’ (not plane)?
U and V determine the range of the rectangle?
the rectangle covers the area from corner (min(d.x)*u, min(d.y)*v) to (max(d.x)*u, max(d.y)*v)?
If all the above is right, d.x*U + d.y*V + W is one pixel position, so ray_direction should be normalized(d.x*U + d.y*V + W - eye). Where is my mistake?

Thanks!

By ‘origin’, you mean ‘eye’, not the origin of the coordinate system? That makes sense.

Mind that OptiX is fully programmable in all this!
There are many ways to define a camera coordinate system.

ray_direction = normalized(pixel position - eye) ?<<

Whatever you define the pixel position to be, yes, this is one way to define some camera.

in Optix, one thread treats one pixel?<<

If you programmed it to do that, yes, normally you’d let one launch_index handle one primary ray when generating images.
(OptiX is a generic ray casting SDK, it doesn’t necessarily need to synthesize images, check the collision example)

in UVW, W determines ‘center’ of a ‘rectangle’ (not plane)?<<

UVW spans an arbitrary parallelogram and W points from its local coordinate system origin to the center of that parallelogram. (That also defines a plane if you do not limit the region with the extents of UV.)
A rectangle is a special case of that. Mind that W doesn’t need to be perpendicular to that parallelogram, so UVW could define a sheared view frustum. It’s a simple but mighty construct.
Your example uses orthonormal vectors, so you just have a square of 2x2 units at 1 unit distance along the negative z-axis, similar to what OpenGL does, in right-handed world coordinates looking down the negative z-axis. (The UVW camera coordinate system itself (projection) is actually left-handed.)
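
As a quick standalone check of that last point (plain C++, not OptiX code; the small V3 struct is only for this example), evaluating d.x*U + d.y*V + W at the four corners of [-1,1] x [-1,1] with those orthonormal vectors lands exactly on a 2x2 square centered 1 unit down the negative z-axis:

#include <cstdio>
#include <initializer_list>

struct V3 { float x, y, z; };

// d.x*U + d.y*V + W, written out component-wise.
static V3 plane_point(float dx, float dy, V3 U, V3 V, V3 W)
{
    return { dx*U.x + dy*V.x + W.x,
             dx*U.y + dy*V.y + W.y,
             dx*U.z + dy*V.z + W.z };
}

int main()
{
    const V3 U = { 1.f, 0.f,  0.f };
    const V3 V = { 0.f, 1.f,  0.f };
    const V3 W = { 0.f, 0.f, -1.f };

    for (float dy : { -1.f, 1.f })
        for (float dx : { -1.f, 1.f })
        {
            V3 p = plane_point(dx, dy, U, V, W);
            // Prints (+-1, +-1, -1): the corners of a 2x2 square centered on (0, 0, -1).
            std::printf("d = (%+.0f, %+.0f) -> (%+.1f, %+.1f, %+.1f)\n",
                        dx, dy, p.x, p.y, p.z);
        }
    return 0;
}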

U and V determine the range of the rectangle?<<

Yes, the size of the upper right quadrant of the parallelogram.

the rectangle covers the area from corner (min(d.x)*u, min(d.y)*v) to (max(d.x)*u, max(d.y)*v)?<<
Right.

If all the above is right, d.x*U + d.y*V + W is one pixel position, so ray_direction should be normalized(d.x*U + d.y*V + W - eye). Where is my mistake?<<

You’re mixing positions and vectors.

That -eye is implicit by using the eye position(!) as ray origin when defining the ray.
It starts at the eye and points into the ray direction which is a vector(!).

Vectors do not define positions in space, they just point into some direction.
(If you know what homogeneous coordinates are, vectors have w == 0.0, positions have w != 0.0.)
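
Spelled out with the pinhole camera above (just rearranging terms that are already there, nothing new): the point on the image plane that belongs to a pixel is

pixel_position = eye + d.x*U + d.y*V + W

so your own formula

ray_direction = normalize(pixel_position - eye) = normalize(d.x*U + d.y*V + W)

reduces to exactly the code in pinhole_camera.cu. The eye cancels out of the direction and re-enters as the ray origin.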

Let’s make it more figurative: you define a camera coordinate system (projection) with those UVW coordinates. It’s like holding an image frame and looking through it. The UVW vectors define how(!) you hold it, not where. Only once you place it relative to your eye position, which becomes the root point (origin) of that local camera coordinate system, does it become a fixed view into the world.

If that is not clear maybe grab some standard computer graphics books first.

Thanks for the detailed explanation. You’ve made the image very clear!

Hello,
In general ray tracing books it is mentioned that, in the case of a pinhole camera, U can be found by the cross product of W and the up vector, but in OptiX it is obvious that this is not the case. How are the U and V vectors calculated in OptiX for the pinhole camera implementation?

Typically, the direction of U can be found using the cross product of up and W. However, the length of U (and V) depends on the field of view you want in your image. For instance, if your horizontal field of view is theta, then choose U so that tan(theta/2) = length(U)/length(W).
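
If it helps, here is a rough sketch of that recipe in plain C++. The helper name camera_uvw and the minimal vector type are made up for illustration; if I remember correctly, the classic OptiX SDK’s sutil code has a similar helper.

#include <cmath>

struct V3 { float x, y, z; };

static V3    sub(V3 a, V3 b)      { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static V3    scale(V3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }
static V3    cross(V3 a, V3 b)    { return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; }
static float length(V3 a)         { return std::sqrt(a.x*a.x + a.y*a.y + a.z*a.z); }
static V3    normalize(V3 a)      { return scale(a, 1.f / length(a)); }

// Build U, V, W from eye, lookat, up, the full horizontal field of view in degrees,
// and the image aspect ratio (width / height).
static void camera_uvw(V3 eye, V3 lookat, V3 up, float hfov_deg, float aspect,
                       V3& U, V3& V, V3& W)
{
    W = sub(lookat, eye);                 // length(W) is the distance to the image plane

    U = normalize(cross(W, up));          // camera right
    V = normalize(cross(U, W));           // camera up, orthogonal to U and W

    const float wlen = length(W);
    const float ulen = wlen * std::tan(0.5f * hfov_deg * 3.14159265f / 180.f); // tan(hfov/2) = |U| / |W|
    const float vlen = ulen / aspect;     // the vertical extent follows from the aspect ratio

    U = scale(U, ulen);
    V = scale(V, vlen);
}

Plugged into pinhole_camera.cu, eye becomes the ray origin and these U, V, W give the normalize(d.x*U + d.y*V + W) directions shown above.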

Thank you for your response! In the pinhole camera implementation we have to give the vertical field of view angle; how can we find the angle of the horizontal field of view? I know it’s a basic question, but I am stuck on it.

The field of view you use is your choice as an artist. A typical value would be around 60 degrees. Smaller values will look “zoomed in” and larger values will look “zoomed out”.

Thank you for your reply.