Simulating The Sun Using An SDK Example

Hello, I am developing a project which finds the footprint of the sun on a certain object. I started with the optixBoundValues example in the SDK examples of OptiX 7.2.
In my simple scenario there is a light source and a flat plane. The light is directly above the center of the plane along the y-axis, and this generates a footprint on the plane with v1 = (75, 0, 0) and v2 = (0, 0, 75).

When I move the light along the x-axis only, I expect an elliptical-like footprint. However, the initial footprint just moves along with the light. As far as I understand, this happens because I didn’t change the v1 and v2 vectors defined in the light struct. Example output of my code is below.

My main goal is to simulate the sun. In real scenarios, as you may guess, when the sun moves, its footprint changes from elliptical to circular and then from circular back to elliptical; it does not move with the sun. As I understand it, this is controlled by the v1 and v2 vectors in the SDK example. If so, I would first like to ask how I can calculate these v1 and v2 between the light source (the sun) and the plane. Otherwise, I am open to any suggestions.

Also, is there a reference paper in the literature for this calculation in the SDK example?

I hope I was able to explain the problem clearly.

Thanks a lot in advance.

Hello @gokkayae and welcome back.

If you don’t mind, I am moving your post into the OptiX forums; they have more experience with these specific SDK examples.

Thanks!


Thanks. I actually understand how the light struct works. The code constructs the parallelogram light source using corner, v1 and v2.
Now I just wonder about a reference for the calculation in the .cu file, for further reading.

To your screenshots:
First of all that small light shape looks like a triangle, not like a parallelogram.
Then the lighting effect on your surface is not at all where it should be if that parallelogram light is the only light in your scene.

Looking into the optixBoundValues example: it implements a very simple path tracer with a Lambert surface material (seen in the cosine_sample_hemisphere() function in the device code) and a single parallelogram light source using direct lighting, plus four different material colors and emissions.

If you look through the C++ sources, you’ll see that the light is represented in one part by the parallelogram data structure which is used for explicit light sampling only.
This is the definition of the parallelogram light inside the host code:

    state.params.light.emission = make_float3( 15.0f, 15.0f, 5.0f );
    state.params.light.corner   = make_float3( 343.0f, 548.5f, 227.0f );
    state.params.light.v1       = make_float3( 0.0f, 0.0f, 105.0f );
    state.params.light.v2       = make_float3( -130.0f, 0.0f, 0.0f );
    state.params.light.normal   = normalize( cross( state.params.light.v1, state.params.light.v2 ) );

and this is the code doing the explicit light sampling for the direct lighting:

        ParallelogramLight light = params.light;
        const float3 light_pos = light.corner + light.v1 * z1 + light.v2 * z2;

        // Calculate properties of light sample (for area based pdf)
        const float  Ldist = length( light_pos - P );
        const float3 L     = normalize( light_pos - P );
        const float  nDl   = dot( N, L );
        const float  LnDl  = -dot( light.normal, L );
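For context, z1 and z2 are uniform random numbers in [0, 1] selecting a point on the parallelogram. The sample then turns these quantities into an area-based light sample weight roughly along these lines (a paraphrased sketch of the following lines, not a verbatim quote; `occluded` stands for the result of the shadow ray):

        float weight = 0.0f;
        if( nDl > 0.0f && LnDl > 0.0f && !occluded )
        {
            const float A = length( cross( light.v1, light.v2 ) ); // area of the parallelogram
            // cosine terms, light area, distance falloff, and the 1/pi of the Lambert BRDF
            weight = nDl * LnDl * A / ( M_PIf * Ldist * Ldist );
        }
        result += light.emission * weight;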

Now that is not all. To be able to hit the light geometry implicitly (when the ray randomly goes into the direction of the light after sampling the continuation ray of the BRDF), it is also defined as two hardcoded triangles inside the geometry data here:

    // Ceiling light -- emissive
    {  343.0f,  548.6f,  227.0f, 0.0f },
    {  213.0f,  548.6f,  227.0f, 0.0f },
    {  213.0f,  548.6f,  332.0f, 0.0f },

    {  343.0f,  548.6f,  227.0f, 0.0f },
    {  213.0f,  548.6f,  332.0f, 0.0f },
    {  343.0f,  548.6f,  332.0f, 0.0f }

You’ll see that these hardcoded coordinates match the parallelogram exactly.
Both of these places must be changed for the explicit light sampling and the implicit light geometry hits to work together correctly! I would guess you only changed one of the two.

Mind that when changing geometry positions inside the scene, the geometry acceleration structure must be updated or rebuilt! You cannot simply change the parametric representation of the light alone if it’s represented with geometry inside the scene.
https://raytracing-docs.nvidia.com/optix7/guide/index.html#acceleration_structures#dynamic-updates

I wouldn’t have hardcoded the geometry triangles but would have calculated the resulting triangles from the parallelogram parameters, as shown in this more advanced example: https://github.com/NVIDIA/OptiX_Apps/blob/master/apps/nvlink_shared/src/Application.cpp#L540
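As a minimal sketch (the vertex container is left out, and the SDK actually stores these as float4 with w == 0), the two triangles could be derived from the same parallelogram parameters so that both representations always stay in sync:

// Derive the two light triangles from the parametric parallelogram definition.
const float3 p0 = light.corner;                        // corner
const float3 p1 = light.corner + light.v1;             // corner + v1
const float3 p2 = light.corner + light.v1 + light.v2;  // opposite corner
const float3 p3 = light.corner + light.v2;             // corner + v2

// Triangle 1: p0, p1, p2. Triangle 2: p0, p2, p3.
const float3 light_vertices[6] = { p0, p1, p2, p0, p2, p3 };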

Note that the implicit light hits will return the colors and emission values from the material index 3:

static std::array<uint32_t, TRIANGLE_COUNT> g_mat_indices = {{
    0, 0,                          // Floor         -- white lambert
    0, 0,                          // Ceiling       -- white lambert
    0, 0,                          // Back wall     -- white lambert
    1, 1,                          // Right wall    -- green lambert
    2, 2,                          // Left wall     -- red lambert
    0, 0, 0, 0, 0, 0, 0, 0, 0, 0,  // Short block   -- white lambert
    0, 0, 0, 0, 0, 0, 0, 0, 0, 0,  // Tall block    -- white lambert
    3, 3                           // Ceiling light -- emissive
}};

and the emission of that is not white but RGB (15, 15, 5):

const std::array<float3, MAT_COUNT> g_emission_colors =
{ {
    {  0.0f,  0.0f,  0.0f },
    {  0.0f,  0.0f,  0.0f },
    {  0.0f,  0.0f,  0.0f },
    { 15.0f, 15.0f,  5.0f }
} };

and weirdly enough, the surface color of the light geometry (the last entry again) is not black. (I would change that as well.)

const std::array<float3, MAT_COUNT> g_diffuse_colors =
{ {
    { 0.80f, 0.80f, 0.80f },
    { 0.05f, 0.80f, 0.05f },
    { 0.80f, 0.05f, 0.05f },
    { 0.50f, 0.00f, 0.00f }
} };

Now with all that explained, if you want to simulate lighting by the sun there are better ways to implement that.

I wouldn’t use a geometric primitive to represent the sun, simply because you wouldn’t actually place it at its physical size and distance inside the scene; you would need to scale it and place it nearer in order not to run out of floating-point precision bits in your scene units.

Instead, the sun is usually implemented as a directional light with a normalized direction vector from the surface point to the light and a proper cone spread angle to simulate the roughly 0.53 degree angular diameter of the sun “disk” when seen from earth.
This means explicit light sampling would need to sample directions inside that cone only.

Changing that direction vector would then directly represent the elevation and direction of the sun relative to the earth surface point.

There would be no need for geometry inside the scene for the sun, which means implicit light hits of the sun would be implemented inside the miss program by checking if the angle between the ray direction and the sun direction is smaller than the sun’s angular radius (that’s a dot product and a comparison).
That effectively places the sun infinitely far away. Because there wouldn’t be any geometry representing the sun, no rebuild of the acceleration structures would be required! You could simply change the sun direction inside your OptiX launch parameters and restart the rendering.
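A minimal sketch of such a miss program (the launch parameter names sun_direction, sun_emission, cos_sun_radius, background_color and the setPayloadRadiance helper are assumptions, not fields of the SDK sample):

extern "C" __global__ void __miss__radiance()
{
    const float3 dir = optixGetWorldRayDirection(); // assumed to be normalized

    // dot(dir, sun_direction) is the cosine of the angle between the ray and the sun axis.
    // The ray points into the sun disk when that cosine exceeds cos(angular radius).
    float3 radiance = params.background_color;
    if( params.cos_sun_radius < dot( dir, params.sun_direction ) )
    {
        radiance = params.sun_emission;
    }

    setPayloadRadiance( radiance ); // whatever payload mechanism your renderer uses
}

Here params.cos_sun_radius would be cosf( 0.5f * 0.53f * M_PIf / 180.0f ), computed once on the host.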

The sun would always result in a perfectly circular shape when seen directly (except for projection distortions of the camera implementation).
The effect on the surface, meaning the change from more circular lighting shapes (when the sun and view directions are perpendicular to the surface, i.e. when the surface normal, the light direction and the view direction coincide) to more spread-out elliptical shapes, is a matter of the bidirectional reflectance distribution function (BRDF) and of the angles between the normal and the view direction (to the observer) and between the normal and the light direction (to the sun). This gets much more pronounced when using glossy reflections instead of purely diffuse materials.

In addition to the simple sun disk light, there are much more elaborate sun and sky models which also simulate the atmospheric scattering, if you want to implement more physically correct sun and sky lighting. Search for articles about Mie and Rayleigh scattering.


Sorry for the late response. There was something else I needed to do.

I implemented the code as far as I understood your suggestions. The scenario is shown below, where the black line is the surface normal, blue is the incident ray, cyan is the specular reflected ray (with a 0.53 cone angle) and yellow is the light direction.

The code is simple. If the light direction is inside the cone, that is, if the acos of the dot product of the light direction and the reflected ray is less than 0.53, then I add the weighted light emission. Else I set the diffuse color.

I calculate the weight as below.

I select a random point on a sphere (Sphere Point Picking -- from Wolfram MathWorld) and then add it to the light center. So I find a random light position on the sphere, like the OptiX example does for the parallelogram light position. If that is wrong or there is a better way, please tell me. Also, I am not sure about my assumption that LnDl is 1.0. I assumed so because I thought I did not need to rotate the sun towards the surface; in other words, I thought it did not matter how much a sphere was rotated. If that is wrong, what should the light normal be in order to calculate LnDl?

The result of this implementation is below.

The shape strictly depends on how the light position is selected. Also, even though there is an elliptical bright spot in the center, the whole shape still looks circular. As I change the observer (camera) location, I get different combinations of sun-surface-observer positions. But still, the shape on the surface is circular.

Could you suggest any solution or refer me to a reference which mentions how to handle this?

What light transport are you implementing?

Let’s name your vectors.
N == surface shading normal (black)
L == normalized direction vector to light, sun (yellow)
I == continuation ray of your BRDF sampling (cyan)
O == direction vector from surface to observer, negative ray direction (blue)

Two cases:
1.) Implicit lighting.
That’s when your continuation ray hits the sun. That condition can be checked very easily.

The dot product of two normalized vectors is the cosine of the angle between them.
If that value is 1.0, the angle between them is zero; if it’s 0.0 the angle is 90 degrees; if it’s -1.0 the angle is 180 degrees.
This means if dot(L, I) == 1.0 then L == I and you hit the center of the sun with your continuation ray.

(Note that mathematically, the probability of randomly hitting an infinitely small point or a fixed 3D vector exactly is zero (if floating-point precision weren’t limited). These are singular lights (point and directional lights), which do not exist in the physical world and can only be handled with direct lighting.)

If the sun has an angular diameter of 0.53 degrees when viewed from earth, you just need to check if the continuation ray is inside the cone with that angular diameter.

Now to open that dot(L, I) condition up to a cone, you need to compare its result against a threshold slightly smaller than 1.0, which means a bigger opening angle than 0.0.
Since the cosine decreases as the angle grows, the ray lies inside the cone exactly when dot(L, I) is larger than the cosine of the sun’s angular radius. The threshold value is therefore the cosine of half of the sun’s angular diameter, means
threshold = cosf(degrees_to_radians(0.53f * 0.5f)); // == cos(radius), just below 1.0
That’s a constant which can be calculated up front.

The full check if the continuation ray hit the sun would be this:

prd.radiance = make_float3(0.0f);
// Sample surface and get continuation ray "I".
// For a pure diffuse distribution (Lambert), this is a cosine weighted hemisphere.
// ...
// Implicit light hit? (continuation ray points into the sun cone)
if (threshold < dot(I, L))
{
  prd.radiance += sun_emission;
  return; // End of path when hitting the sun.
}

When not implementing direct lighting, this will eventually converge to the correct result when shooting enough rays. This would be a brute-force path tracer with no direct lighting. Because the sun’s solid angle is very small, this will need an enormous number of rays though.
This would be your reference against which you would need to compare any additional light calculations.
You could make the sun radius bigger for tests to see that this works correctly.

Else I set the diffuse color.

That would imply the material is pure diffuse (Lambert) and there is another white environment light around your scene with no occluders. Don’t do that when debugging the sun’s contribution alone.

Also, I am not sure about my assumption that LnDl is 1.0.

Not sure what that is, but the normal comes into play during the continuation ray sampling and when contributing direct lighting.
The rendering equation contains a cosine factor describing the falloff of the light contribution depending on the incident angle of the incoming light direction.
https://www.pbr-book.org/3ed-2018/Light_Transport_I_Surface_Reflection/The_Light_Transport_Equation

2.) Explicit lighting (“direct lighting”, “next event estimation”)
Here the light vector L needs to be sampled. To do that for the sun, you would need to generate vectors which are uniformly distributed inside the cone around the direction vector to the sun, with the sun’s angular diameter.
That’s effectively sampling directions on a spherical cap spanning the angular diameter of the sun.

I would recommend reading the Physically Based Rendering book and its source code for light sampling:
https://www.pbr-book.org/3ed-2018/Light_Transport_I_Surface_Reflection/Sampling_Light_Sources
Sampling cone directions is explained here:
https://www.pbr-book.org/3ed-2018/Monte_Carlo_Integration/2D_Sampling_with_Multidimensional_Transformations#SamplingaCone
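A rough sketch of that cone sampling, following the PBRT formulas (the helper name, the basis construction, and the float3 operators from the SDK’s vec_math.h are assumptions, not SDK code):

// Uniformly sample a direction inside a cone of half-angle theta_max around 'axis'.
// u1, u2 are uniform random numbers in [0, 1). Returns a normalized world-space direction.
__device__ float3 sample_cone( const float3& axis, const float cos_theta_max, const float u1, const float u2 )
{
    const float cos_theta = 1.0f - u1 * ( 1.0f - cos_theta_max ); // uniform in [cos_theta_max, 1]
    const float sin_theta = sqrtf( fmaxf( 0.0f, 1.0f - cos_theta * cos_theta ) );
    const float phi       = 2.0f * M_PIf * u2;

    // Build an orthonormal basis around the cone axis.
    const float3 w = normalize( axis );
    const float3 a = ( fabsf( w.x ) > 0.9f ) ? make_float3( 0.0f, 1.0f, 0.0f ) : make_float3( 1.0f, 0.0f, 0.0f );
    const float3 u = normalize( cross( a, w ) );
    const float3 v = cross( w, u );

    return normalize( u * ( cosf( phi ) * sin_theta ) + v * ( sinf( phi ) * sin_theta ) + w * cos_theta );
}

// The matching pdf (solid angle measure) is constant over the cone:
// pdf = 1.0f / ( 2.0f * M_PIf * ( 1.0f - cos_theta_max ) );

With cos_theta_max = cosf( degrees_to_radians( 0.53f * 0.5f ) ) this gives a sampled direction L towards the sun disk for the direct lighting estimate.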

If you have the sampled direction vector to the sun, you can then do direct lighting calculations for each surface hit point with it if the continuation ray isn’t already hitting the sun.
