Pinhole Camera Model from the tutorial examples has a bug

I would like to pick a fight with the pinhole_camera function shipped with the OptiX SDK examples ;)

Round 1:

Let’s have a look at the function:

RT_PROGRAM void pinhole_camera()
{
  size_t2 screen = output_buffer.size();

  float2 d = make_float2(launch_index) / make_float2(screen) * 2.f - 1.f;
  float3 ray_origin = eye;
  float3 ray_direction = normalize(d.x*U + d.y*V + W);

  optix::Ray ray(ray_origin, ray_direction, radiance_ray_type, scene_epsilon );

  PerRayData_radiance prd;
  prd.importance = 1.f;
  prd.depth = 0;

  rtTrace(top_object, ray, prd);

  output_buffer[launch_index] = make_color( prd.result );
}

Especially, these lines are my target:

size_t2 screen = output_buffer.size();
float2 d = make_float2(launch_index) / make_float2(screen) * 2.f - 1.f;

The variable “d” should go from -1 to +1 to cover the whole horizontal/vertical FoV. The issue is that this calculation does not return the range -1 to +1, but -1 to +0.9XXX, depending on the screen width/height; the larger the screen, the smaller the error. The reason is that the divisor is the full “width/height”, while launch_index only ranges from “0” to “width-1 / height-1”. As a result, the whole FoV gets shifted by roughly one pixel, since +1 can never be reached.
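For example, with a width of 800 the maximum launch_index.x is 799, so the horizontal extremes become asymmetric:

(799/800) * 2.0f - 1.0f =  0.9975f
(  0/800) * 2.0f - 1.0f = -1.0f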

The correct calculation should go like this in my opinion:

float2 screen = make_float2(output_buffer.size()) - make_float2(1.0);
float2 d      = make_float2(launch_index) / screen * 2.f - 1.f;

With this change, we get a perfect distribution from “-1” to “+1”.
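For an 800 pixel wide screen, the divisor becomes 799 and both extremes are hit exactly:

(799/799) * 2.0f - 1.0f =  1.0f
(  0/799) * 2.0f - 1.0f = -1.0f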
Are my calculations correct or did I miss something?

That calculates the coordinate of the lower left corner of the pixel.

You would normally add 0.5f to the pixel coordinate to hit the pixel center, or add a two-dimensional offset in the range [0.0f, 1.0f) to the pixel coordinate for progressive sampling over the pixel area, which then fills the whole screen coordinate range.

Examples for both cases here:
optix_advanced_samples/raygeneration.cu at master · nvpro-samples/optix_advanced_samples · GitHub
optix_advanced_samples/lens_shader.cu at master · nvpro-samples/optix_advanced_samples · GitHub
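Not the exact code from those files, but a minimal sketch of the two cases, reusing the variables from the pinhole_camera program above (jitter stands for a random offset in [0.0f, 1.0f) x [0.0f, 1.0f) from whatever per-pixel random number generator you use, changing every accumulation frame):

// Case 1: a single ray through the pixel center.
size_t2 screen = output_buffer.size();
float2 pixel   = make_float2(launch_index) + make_float2(0.5f, 0.5f); // pixel center
float2 d       = pixel / make_float2(screen) * 2.0f - 1.0f;           // symmetric, inside (-1, 1)
float3 ray_direction = normalize(d.x * U + d.y * V + W);

// Case 2: progressive sampling over the whole pixel area.
float2 pixel_aa = make_float2(launch_index) + jitter;                 // anywhere inside the pixel
float2 d_aa     = pixel_aa / make_float2(screen) * 2.0f - 1.0f;       // covers the full [-1, 1) range over time
float3 ray_direction_aa = normalize(d_aa.x * U + d_aa.y * V + W);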

The documentation in your example states a range of [-1, 1]. When I add the 0.5f offset, it still won’t reach +1, since the fragment coordinate never reaches the screen’s maximum value. At least it will be symmetric, since the positive and negative extremes reach the same absolute value.

Example for an 800x600 screen with the 0.5f offset, at launch_index.x = 799 and launch_index.x = 0:

(799.5/800) * 2.0f - 1.0f =  0.99875f
(0.5/800)   * 2.0f - 1.0f = -0.99875f

So my suggestion is:

  1. Maybe update the documentation, since -1/+1 is never reached in the raygeneration.cu example.
  2. The basic example, which uses “make_float2(launch_index) / screen * 2.f - 1.f;”, is therefore not perfectly correct, since it goes from [-1, 0.9XXX] and covers a slightly different angle to the right than to the left - right? :)

Of course, sampling a single ray through the center of a pixel does not reach the screen rectangle coordinates at (0, 0) and (width, height), because the samples sit at fragment locations half a pixel inside the full rectangle, which is the right thing to do when shooting only a single ray per pixel.
This is the same as when rasterizing that screen area: the center of the pixel is where your attributes get interpolated for filled primitives.

You will reach the outermost edges only when sampling the whole pixel area, as shown in the progressive renderer of the second example link.
That automatically does antialiasing over the progressive accumulation of the rendered image, with the quality growing with the number of samples per pixel.
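As a rough sketch of that accumulation (assuming a float4 accum_buffer and a frame_number launch variable, which are not part of the basic tutorial), the end of the ray generation program could look like this:

// Hypothetical accumulation step: the per-pixel jitter changes every frame,
// so the running average converges to the antialiased image.
float4 accum = make_float4(prd.result, 1.0f);
if (frame_number > 1)
  accum += accum_buffer[launch_index];  // sum of all samples so far, sample count in .w
accum_buffer[launch_index] = accum;
output_buffer[launch_index] = make_color(make_float3(accum) / accum.w); // average = sum / count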

The OptiX SDK example samples at the bottom-left corner of each pixel. It should have added 0.5f to sample at the pixel center. That’s all.

Perfect, thank you, this is the answer I was expecting!
Since the example samples only the bottom-left corner, you get an offset of half a pixel, which makes the result “incorrect” by this small amount.

Case closed!

All the best,
Jakub