Shadow Ray from Closest Hit

Hello all,

Please forgive me if this is a simple question, but I would like to trace a shadow ray with OptiX 7.2 from the closest-hit shader.

For example, I have some value (say, solar illumination) that determines whether the shadow ray is traced. I would also like the result to show the geometry beneath a plane that may have hole(s) in it - perhaps with multiple closest-hit calls (I am not sure).

In other words, a cut-out that shows the geometry below (if it exists) as well as the shadow(s).

Can this be done?

Snippet of pseudo-code:

extern "C" __global__ void __closesthit__ch( ) { 
    ...
    // calculate solar illumination here
    float solar = dot(normalDir, sunDir);
    if(solar > 0.0f) {
        // Call trace with "shadow" ray(s) somehow
    }
    ...
}

Thanks for any assistance, maybe I’m a little slow but this seems to have me stumped :)

Hi @picard1969,

Yes, you can trace a shadow ray from your closest-hit program. To do this, you call optixTrace() inside your if(solar) block, and you pass your shadow ray type to optixTrace. Be aware that this counts as a recursive trace call, so you will need to adjust your maximum trace depth and stack size appropriately. You will also need to understand ray types and adjust your SBT to allow for two types: primary rays and shadow rays. I would recommend opening the OptiX Programming Guide (NVIDIA OptiX 7.5 - Programming Guide) and searching for a few of these terms using the search box at the top right. For example, search for “ray type” and for “recur”, and click on and read through all the results you get.
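For illustration, here is a minimal host-side sketch of that depth/stack adjustment. This is a sketch under assumptions, not your exact setup: pipeline, programGroups, and the traversable-graph depth of 2 are placeholders, and the exact helper signatures can differ slightly between OptiX versions.

#include <optix.h>
#include <optix_stubs.h>
#include <optix_stack_size.h>  // optixUtilAccumulateStackSizes / optixUtilComputeStackSizes
#include <vector>

// One extra recursion level (shadow ray traced from closest hit) on top of the
// primary ray means maxTraceDepth = 2. The same value must also be set in
// OptixPipelineLinkOptions::maxTraceDepth when creating the pipeline.
void configureDepthAndStack(OptixPipeline pipeline,
                            const std::vector<OptixProgramGroup>& programGroups)
{
    const unsigned int maxTraceDepth = 2;

    OptixStackSizes stackSizes = {};
    for (OptixProgramGroup pg : programGroups)
        optixUtilAccumulateStackSizes(pg, &stackSizes);

    unsigned int dssTraversal = 0, dssState = 0, css = 0;
    optixUtilComputeStackSizes(&stackSizes, maxTraceDepth,
                               0 /*maxCCDepth*/, 0 /*maxDCDepth*/,
                               &dssTraversal, &dssState, &css);

    // maxTraversableGraphDepth = 2 assumes a single IAS over GASes; adapt to your scene graph.
    optixPipelineSetStackSize(pipeline, dssTraversal, dssState, css, 2);
}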

Whether your shadow ray needs to use closest hit is up to you. Often you don’t need closest hit calls, because all you need to know is whether an occlusion occurred. You can do that using only a miss program for the shadow ray type.
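To make that concrete, here is a minimal sketch of a visibility (shadow) ray traced from closest hit that relies only on a miss program. This is an illustrative sketch for a fully opaque scene, not your exact code: the helper name isSunVisible, the ray-type enum, and the handle/epsilon parameters are assumptions to be adapted to your own launch parameters and SBT layout.

#include <optix.h>

// Assumed ray-type layout; adapt to your own SBT.
enum { RAY_TYPE_RADIANCE = 0, RAY_TYPE_SHADOW = 1, NUM_RAY_TYPES = 2 };

// Hypothetical helper, called from the closest-hit program of the radiance ray
// inside the if(solar > 0.0f) branch. A single payload register carries the result.
static __forceinline__ __device__ bool isSunVisible(OptixTraversableHandle handle,
                                                    const float3& hitPoint,
                                                    const float3& sunDir,
                                                    float         sceneEpsilon)
{
    unsigned int visible = 0u;  // assume occluded; the miss program flips it to 1

    optixTrace(handle,
               hitPoint, sunDir,
               sceneEpsilon,                           // t_min: skip self-intersection
               1e16f,                                  // t_max: effectively infinite
               0.0f,                                   // ray time
               OptixVisibilityMask(0xFF),
               OPTIX_RAY_FLAG_DISABLE_ANYHIT |         // opaque scene: no anyhit needed
               OPTIX_RAY_FLAG_DISABLE_CLOSESTHIT |     // the closest hit is never needed
               OPTIX_RAY_FLAG_TERMINATE_ON_FIRST_HIT,  // any hit means "in shadow"
               RAY_TYPE_SHADOW,                        // SBT offset
               NUM_RAY_TYPES,                          // SBT stride
               RAY_TYPE_SHADOW,                        // miss SBT index
               visible);

    return visible != 0u;
}

extern "C" __global__ void __miss__shadow()
{
    optixSetPayload_0(1u);  // nothing was hit: the light is visible
}

If the scene contains cutout-opacity materials, the DISABLE_ANYHIT flag must be dropped and an anyhit program added, as discussed further down in this thread.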

Also, you don’t necessarily need to trace your shadow ray in your primary ray type’s closest-hit program. You could pass the primary ray’s hit results back to raygen via the payload, and then trace your shadow ray in raygen. This is optional; either way is okay. Calling optixTrace() from your primary ray’s closest-hit program is easier, as in your pseudocode. The advantage of moving it to raygen is that you can eliminate the recursive call and reduce your stack size and maximum trace depth. Sometimes this “iterative” formulation is also more performant than the recursive formulation.

For cutouts, you can study the SDK sample called optixCutouts. This is an example of using an any-hit program to dynamically cut holes in a surface. You can evaluate holes any way you like; simply call optixIgnoreIntersection() wherever you want there to be a hole in the surface, and OptiX will not treat it as a hit, will not call your closest-hit program, and will continue tracing looking for another hit.

–
David.

Thanks @dhart for the response and information. There is some great stuff in there; however, I still have some issues. I have been racking my tiny brain for a couple of days on this one, and I must be overlooking something obvious.

Currently I handle the shadow ray in the __anyhit__ program. It looks like it properly renders a cutout in the image, but nothing shows underneath even when there is valid geometry beneath, which should not be happening. I suspect this is because I am not allowing the ray to recurse as it should (just a guess).

The __closesthit__ shader snippet pseudo code follows:

extern "C" __global__ void __closesthit__ch( ) {
    ...
    // Calculate solar illumination here
    float solar = dot(normalDir, sunDir);
    if(solar > 0.0f) {
        ...
        RayDataShade shd;
        // Store attenuation in shd
        shd.attenuation = 1.0f;
 
       uint2 pay = splitShade(shd);   // acts like example split and merge functions from OptiX 7.2

        optixTrace(topObject, hitPt, sunDir, fmaxf(epsilon,0.0f), DEFAULT_MAX, 0.0f, 
                           OptixVisibilityMask(0xFF), OPTIX_RAY_FLAG_DISABLE_CLOSESTHIT,
                           RAYTYPE_SHADOW, NUM_RAYTYPES, RAYTYPE_SHADOW, pay.x, pay.y);

        if(shd.attenuation > 0.0f) {
            // Render some stuff 
            ...
        }
    }
}

The associated __anyhit__ shader pseudo code follows:

extern "C" __global__ void __anyhit__shd( ) {
    RayDataShade *shd = mergeShade(optixGetPayload_0( ), optixGetPayload_1( ));
    if(shd->attenuation == 1.0f) {
        shd->attenuation = 0.0f;
        optixTerminateRay();
    }
}

I apologize if this is something stupid I am not seeing, but it has been a long day :) - any help would be great.

You’re not actually handling the cutout opacity in your any hit program.

If you want to support materials with cutout opacity, then both the radiance and the shadow ray for those materials need to have an anyhit program attached.
In both anyhit programs you need to calculate the current opacity, and if it's a cutout (a hole in the surface, usually for opacity < 1.0f), then you must call optixIgnoreIntersection() to let the ray continue through the surface.

If it's opaque, you need to do nothing in the anyhit program on the radiance ray, to let the traversal keep searching for the closest hit.
But inside the shadow ray anyhit program you should call optixTerminateRay() to let the opaque surface parts block the visibility. There is no need to keep traversing the BVH once the visibility is known to be false due to any surface inside the shadow ray's [t_min, t_max] interval.
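As a minimal sketch of those two anyhit programs (reusing the RayDataShade struct and mergeShade() helper from the snippets above; isCutout() is a hypothetical stand-in for whatever opacity evaluation the material uses, e.g. a texture lookup):

// Hypothetical opacity test; replace with your own evaluation, e.g. sampling an
// opacity texture using the triangle barycentrics and optixGetPrimitiveIndex().
static __forceinline__ __device__ bool isCutout()
{
    return false;  // placeholder
}

extern "C" __global__ void __anyhit__radiance_cutout()
{
    if (isCutout())                 // the ray hit a hole in the surface
        optixIgnoreIntersection();  // not a hit; keep searching for the closest hit
    // opaque: do nothing, the closest-hit program will run as usual
}

extern "C" __global__ void __anyhit__shadow_cutout()
{
    if (isCutout())
    {
        optixIgnoreIntersection();  // the shadow ray passes through the hole
    }
    else
    {
        RayDataShade* shd = mergeShade(optixGetPayload_0(), optixGetPayload_1());
        shd->attenuation = 0.0f;    // opaque hit: the light is blocked
        optixTerminateRay();        // no need to keep traversing the BVH
    }
}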

Please read this thread which explains exactly the same things and also touches on light attenuation pitfalls in Whitted renderers:
https://forums.developer.nvidia.com/t/anyhit-program-as-shadow-ray-with-optix-7-2/181312
(This forum has a search field in the top right!)

Please follow the links to my OptiX 7 example code in there as well. They contain everything to support stochastic cutout opacity in a progressive path tracer.

Thanks @droettger for the information and the link.

I will make better use of the search option next time.

Thanks again @droettger for the assistance.

I am calling an AnyHit shader from a ClosestHit shader with the following code:

...
uint2 payload = splitPointerShade(&shade_prd);

optixTrace(topObject, hit_point, sunDirWorld, fmaxf(sceneEpsilon,0.0f), RT_DEFAULT_MAX,
                 0.0f, OptixVisibilityMask(255), OPTIX_RAY_FLAG_DISABLE_CLOSESTHIT,
                 RAY_TYPE_SHADOW, RAY_NUM_TYPE, RAY_TYPE_SHADOW, payload.x, payload.y);

Does anything look obviously off with this code?

Thank you again for all your help and patience with my OptiX 7 struggles.

You mean that you shoot a new recursive visibility-testing ray inside a closest-hit program, which should invoke some anyhit program when there is an intersection.

No, that looks perfectly normal if all you have on your shadow ray is an anyhit program which would only be required to handle cutout opacity or Whitted-style attenuation of transparent materials.

For that to work you’d need to set the number of recursions to the correct value for maxTraceDepth inside the OptixPipelineLinkOptions, and to have set up the shader binding table correctly so that the respective shadow ray programs sit at the effective SBT indices calculated per ray.

You should also initialize all per-ray payload fields that are affected by the anyhit program to the miss case before the optixTrace call, in case there is no miss program.

What exactly is not working?

Thank you for the response. It appears that shadows are not being rendered.

The OptixPipelineLinkOptions are set to have max trace depth of 5.

Maybe I don’t have the SBT set up correctly?

Questions of the kind “I did something and it’s not working” aren’t going to get you solutions on a developer forum. :-)

This would need at least the complete code of the closest hit program doing the optixTrace, the anyhit program for the shadow ray which isn’t working or isn’t called, the per ray payload structure used for that ray and the split and merge functions, and the code doing the SBT setup. (ZIP the source code files together and attach the archive.)

Thanks @droettger, you are absolutely correct “I did something and it’s not working” is never good for getting solution(s) on any developer forum. I will get the files together and attach the archive.

I have included the relevant files that I am calling to try and render the shadows - some details I couldn’t release, but I don’t think they have anything to do with the shadowing per se. I don’t know, maybe I am completely wrong.

The SBT for the hitGroup is being created in a function called createIAS(..) for instance acceleration objects and has the following pseudo code:

HitGroupData hg_sbt;
std::vector<float3j> h_vertices;
for(int i = 0; i < 4; ++i) {
    float3j v0(node[3*i], node[3*i+1], node[3*i+2]);
    h_vertices.push_back(v0);
}
cudaMalloc(reinterpret_cast<void**>(&hg_sbt.d_vertices), sizeof(float3j)*h_vertices.size());
cudaMemcpy(reinterpret_cast<void*>(hg_sbt.d_vertices), h_vertices.data(), sizeof(float3j)*h_vertices.size(), cudaMemcpyHostToDevice);

SbtRecordHGData hg_record_sbt;
hg_record_sbt.data = { 0 };
memcpy(&hg_record_sbt.data, &hg_sbt, sizeof(hg_sbt));
res_optix = optixSbtRecordPackHeader(ogl->hitgroupPGs_[0], &hg_record_sbt);

Please note that the above SBT is just dummy data at this point and that the associated code will not compile as is. It is only so that I can hopefully better illustrate the issue. My real concern is why I am not seeing any shadows which I am hoping is something simple in the closest-hit or any-hit shaders.

Thanks again for any help.
anyhit.cu (268 Bytes)
closest_hit.cu (1.3 KB)
common.h (202 Bytes)
hitgroup_data.h (125 Bytes)
launch_param.h (191 Bytes)
per_ray_data.h (547 Bytes)
sbt_record.h (237 Bytes)

The solar direction calculations look confusing.

  // Calculate solar direction
  float3 sdirection = make_float3(...);

  // Calculations to determine illumination
  float thi = ....;
 
  if(thi > 0.0f) {
    float3 sdirection_world = normalize(optixTransformNormalFromWorldToObjectSpace(sdirection)); 

The ray is in world space inside the ray generation, closest hit and miss programs, and in object space in intersection and anyhit (that is, during traversal.)

1.) What coordinate space is sdirection in?
If that is in object coordinates then you need to transform from object to world space. You do the opposite.

2.) Then you transform the direction vector as a normal.
A normal is transformed with the inverse transpose matrix to handle non-uniform scaling of objects. But a light direction is not a normal, that is a vector. Transforming that from object to world space would require optixTransformVectorFromObjectToWorldSpace.

But neither of these makes sense unless sdirection had been calculated inside the object space of the currently hit object.
Instead you should calculate the light-related things directly in world coordinates. They shouldn’t depend on the transform hierarchy of the currently hit object’s transformation list.
If I want light geometry to be defined in object space, I provide the necessary transformation to world space along with the light definition, which can scale and place the light anywhere in the scene. For environment lights that would contain some rotation matrix which allows orienting a spherical environment light.

It wouldn’t be too surprising that shadows do not work when shooting in the wrong direction.
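To illustrate the difference, here is a sketch with placeholder values (the float3 math helpers normalize/dot/make_float3 are assumed from a vec_math-style header; objectNormal and sunDirWorld are stand-ins for your own per-hit normal and world-space light direction):

#include <optix.h>
// plus a float3 math header providing normalize/dot/make_float3, e.g. the SDK's sutil/vec_math.h

extern "C" __global__ void __closesthit__space_sketch()
{
    // Placeholders: the object-space normal would come from your hit data, and
    // the sun direction from the launch parameters, already defined in world space.
    const float3 objectNormal = make_float3(0.0f, 1.0f, 0.0f);
    const float3 sunDirWorld  = make_float3(0.0f, 1.0f, 0.0f);

    // A normal goes from object to world space with the normal transform
    // (inverse transpose); a direction vector would use the vector transform:
    const float3 worldNormal = normalize(optixTransformNormalFromObjectToWorldSpace(objectNormal));
    // const float3 worldDir = optixTransformVectorFromObjectToWorldSpace(objectSpaceDirection);

    // A light direction already defined in world space needs no transform at all:
    const float ndotl = fmaxf(0.0f, dot(worldNormal, sunDirWorld));
    // ... use ndotl for the solar illumination test ...
}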

The shadow ray anyhit program could be made faster. Since you only have a binary decision inside the shadow anyhit program, there is no need for the if(pr->attenuation == 1.0f) check at all.
With your current implementation, any hit inside the shadow ray’s interval means full shadow, and the traversal is terminated anyway (and the shadow ray closest hit would be called if it existed).
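For the opaque-only case, the shadow anyhit program can then shrink to something like this (reusing the RayDataShade / mergeShade names from your snippet):

extern "C" __global__ void __anyhit__shd()
{
    // Any accepted hit along the shadow ray means full shadow, so there is no
    // need to test the current attenuation first.
    RayDataShade* shd = mergeShade(optixGetPayload_0(), optixGetPayload_1());
    shd->attenuation = 0.0f;
    optixTerminateRay();
}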

Thank you @droettger for the reply.

I will look into the transform directions.

The anyhit program has the if(pr->attenuation == 1.0f) condition because without it there were a lot of places with just missing geometry in the image - e.g. the ray was terminated in many different places. Not sure if that helps, but that is the reason it is there.

Thanks again.

The anyhit program has the if(pr->attenuation == 1.0f) condition because without it there were a lot of places with just missing geometry in the image - e.g. the ray was terminated in many different places. Not sure if that helps, but that is the reason it is there.

If the shadow ray’s anyhit program leads to missing geometry, then there might be something wrong with the anyhit program assignments in the hit records of your shader binding table. That’s why I asked for that host code as well before.

If you look into my anyhit program implementations, you’ll see that radiance rays for opaque objects do not have an anyhit program at all.
The shadow ray anyhit program for opaque materials only sets a flag to indicate a shadow, i.e. a visibility test failure, and terminates the ray.
(That specific shadow ray anyhit program only exists because a scene could contain cutout opacity materials. If a scene contains only opaque materials, the shadow/visibility ray can be implemented faster with just a miss program, as explained in the link I posted above on July 22nd.)

Hit records for materials with cutout opacity need anyhit programs for both the radiance and the shadow ray. The one on the radiance ray only needs to check whether the ray hit a hole and, if so, ignore that intersection. The anyhit program on the shadow ray needs to determine whether it hit a hole and, if yes, also ignore that intersection; if it hit an opaque area, it indicates the shadow again and terminates the ray traversal.

Can all be seen in this file: https://github.com/NVIDIA/OptiX_Apps/blob/master/apps/nvlink_shared/shaders/anyhit.cu#L84
and in this SBT setup: https://github.com/NVIDIA/OptiX_Apps/blob/master/apps/nvlink_shared/src/Device.cpp#L761

Cool. I will look at the SBT setup as this may be the bigger issue.

Thanks

Thanks for the SBT link @droettger, that was exactly what was wrong with the missing geometries.

A million thanks :)

Thank you again @droettger for all your patient assistance during my OptiX journey, the help is greatly appreciated.

One last question, if that is okay. I have searched the forum and can find no information regarding this - apologies if it is simple and/or has been answered before. In the closest_hit.cu shader, after calculating the potential shade ray, I try creating reflections. These reflections, for some reason, look speckled rather than solid, as they should be since the geometry is smooth. They look as if they are either rendered using pointillism or have interference.

I am trying to understand if this is because of something in the closest hit shader or something to do with the building of the SBT/Program Module/etc. I am leaning toward the latter.

Have you, or anyone else, run into this kind of issue before?

Snippets of relevant code follow - I realize it is missing a lot of data but I tried to keep it as succinct as possible.

per_ray_data_reflect.h

#ifndef PER_RAY_REFLECT_H
#define PER_RAY_REFLECT_H
#include "common.h"

struct PerRayDataReflect {
  float4 result;
  int depth;
};

typedef union {
  PerRayDataReflect *ptr;
  uint2 dat;
} ReflectPayload;

__forceinline__ __device__ uint2 SplitPointerReflect(PerRayDataReflect *ptr) {
  ReflectPayload payload;
  payload.ptr = ptr;
  return payload.dat;
}

__forceinline__ __device__ PerRayDataReflect* MergePointerReflect(unsigned int p0, unsigned int p1) {
  ReflectPayload payload;
  payload.dat.x = p0;
  payload.dat.y = p1;
  return payload.ptr;
}
#endif

closest_hit.cu

extern "C" __global__ void __closesthit__reflect() { 
    // ... 

    // Payload
    uint2 payload;

    // Get RAY data payload from previous OptiX RAY Trace call (rayReflect program)
    PerRayDataReflect *prd = MergePointerReflect(optixGetPayload_0(), optixGetPayload_1());

    (prd->depth)++;

    // Calculate solar illumination and call shade ray as before
    // ...

   if(prd->depth < 2) {
       PerRayDataReflect *refPrd;
    
       // Set some values then call reflect ray

      // RAY Trace
      payload = SplitPointerReflect(&refl_prd);

      optixTrace(topObject, hit_point, refdir, fmaxf(sceneEpsilon,0.0f), RT_DEFAULT_MAX,
                          0.0f, OptixVisibilityMask(255), OPTIX_RAY_FLAG_NONE,
                           RAY_TYPE_PRIMARY, RAY_NUM_TYPE, RAY_TYPE_PRIMARY, 
                           payload.x,  payload.y);

        if(refl_prd.result.x > 0.0f) {
          (prd->result.x) += a_scalar * refl_prd.result.x;
        }
      }
// ...

Thank you again for any help, sorry if this question should be placed as another topic but I wanted to try and keep my dumb questions to a minimum number of topics.

An image would have explained this better.

The sceneEpsilon is a scene-size-dependent hack to avoid self-intersection when starting rays from a surface.
Make sure that your epsilon is not too small or you get pixel acne, often in the form of concentric circles due to the radial distance between the surface hit points and the observer.
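As a rough sketch of that idea (an assumption, not a fixed rule): derive the epsilon from the scene extent, e.g. something on the order of 1e-3 of the scene diameter, and/or push the secondary-ray origin off the surface along the shading normal. The float3 operators are assumed from a vec_math-style header.

static __forceinline__ __device__ float3 offsetRayOrigin(const float3& hitPoint,
                                                         const float3& worldNormal,
                                                         const float   sceneEpsilon)
{
    // Push the origin of a secondary ray off the surface so it cannot
    // immediately re-hit the same triangle (the cause of the acne speckles).
    return hitPoint + worldNormal * sceneEpsilon;
}

Alternatively, keep the original hit point and pass sceneEpsilon as t_min, as in your snippets, just with a large enough value.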

Did you solve the light direction calculations in the previous posts by making all light calculations happen in world space?

Then the most important detail would be to see how the refdir is calculated.

How does that fit together?

PerRayDataReflect *refPrd; // Unused?
...
payload = SplitPointerReflect(&refl_prd);

Where is that structure, and how did you initialize it?

If that is a specular reflection, then all you need is the current ray direction, the surface normal at the hit point, and the reflect() function, like in this line (where wo is the direction to the observer, -ray.direction, and wi is the reflected ray):
https://github.com/NVIDIA/OptiX_Apps/blob/master/apps/nvlink_shared/shaders/bxdf_specular.cu#L73
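As a small sketch of that (reflect() is assumed to come from a vec_math-style header, with reflect(i, n) = i - 2*dot(n, i)*n; the normal here is just a placeholder for your normalized world-space shading normal):

#include <optix.h>
// plus a float3 math header providing reflect/make_float3 and float*float3, e.g. the SDK's sutil/vec_math.h

extern "C" __global__ void __closesthit__reflect_sketch()
{
    const float3 worldNormal = make_float3(0.0f, 1.0f, 0.0f);  // placeholder shading normal

    const float3 rayDir   = optixGetWorldRayDirection();       // points toward the surface
    const float3 hitPoint = optixGetWorldRayOrigin() + optixGetRayTmax() * rayDir;
    const float3 wi       = reflect(rayDir, worldNormal);      // reflected ray direction

    // Trace the next ray from hitPoint (offset by sceneEpsilon) along wi; no
    // object/world transforms are needed when everything stays in world space.
}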

You’re shooting a recursive “primary” ray and you have a __closesthit__reflect. To what is that assigned?

Normally you trace a recursive reflection ray from a closest hit program of the radiance ray, and that can call into its own or another closest hit program again to calculate its surface color, which then becomes the reflected color in the caller.
(That recursion could go on forever, that’s why it’s important to limit the depth of the recursive calls to the maximum depth you’ve told the OptixPipeline up front.)

Not sure why you’re checking only refl_prd.result.x. That would mean anything without a red color value would not be reflected.

You don’t need to write a specific merge and split pointer function version for each different payload structure you use. I did that because I only use one payload structure. If I had multiple, I would implement only one version which aliases a void pointer with a uint2 and casts to the respective payload structure in the caller.
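A single generic pair along those lines could look like this (splitPointer/mergePointer are just illustrative names, using the same union style as your PerRayDataReflect helpers):

typedef union
{
    void* ptr;
    uint2 dat;
} PayloadPointer;

static __forceinline__ __device__ uint2 splitPointer(void* ptr)
{
    PayloadPointer p;
    p.ptr = ptr;
    return p.dat;
}

static __forceinline__ __device__ void* mergePointer(unsigned int p0, unsigned int p1)
{
    PayloadPointer p;
    p.dat.x = p0;
    p.dat.y = p1;
    return p.ptr;
}

// In the caller, cast to whichever per-ray struct that ray uses, e.g.:
// PerRayDataReflect* prd = static_cast<PerRayDataReflect*>(
//     mergePointer(optixGetPayload_0(), optixGetPayload_1()));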

There is also no need to put brackets around the prd->depth++ or prd->result.x assignments.
The C++ operator precedence will handle that as you intended because member access and post-increment have the same precedence and are evaluated left-to-right. https://en.cppreference.com/w/cpp/language/operator_precedence

Thanks for the reply @droettger

Sorry for the poorly constructed question. I will try to be more clear.

I did solve the light direction calculation from previous posts by making all light calculations happen in world space. Thank you again for the assist.

The refdir is calculated by the following (closest_hit.cu):

extern "C" __global__ void __closesthit__reflect() {

    float3 rayDirection = optixGetWorldRayDirection();
    float3 rayOrigin = optixGetWorldRayOrigin();
    float tHit = optixGetRayTmax();

    HitGroupData* hg_data = reinterpret_cast<HitGroupData*>(optixGetSbtDataPointer());
  
    unsigned int triIdx  = optixGetPrimitiveIndex();

    // From HitGroupData SBT
    float3 *norms = hg_data->d_norms;
    float3 normalDir = norms[3*triIdx];

    float3 rayD = normalize(optixTransformVectorFromWorldToObjectSpace(-rayDirection));
    float cthv = dot(normalDir, rayD);

    float3 reflectD = (2.0f*cthv) * normalDir - rayDirection;

    float3 hit_pt = rayOrigin + tHit * rayDirection;
 
    // Increment depth counter
    ++(prd->depth);

    
    if(prd->depth < 2) {
        PerRayDataReflect refPrd;

        // Set alpha to some scalar
        float alpha = ...
  
        float3 T = cross(normalDir, reflectD);
        float3 B = cross(T, reflectD);
        float dang = 2.0f * alpha;
        if(dang > 1.4f) dang = 1.4f;
        if(dang < 0.1801f) {
            float3 P = reflectD;
            float Pnorm = length(P);
            P /= Pnorm;
            float cthi = dot(normalDir, P);
    
           float3 refdir = normalize(optixTransformVectorFromObjectToWorldSpace(P));

            refPrd.depth = prd->depth;
            refPrd.result = make_float4(0.0f, 0.0f, 0.0f, 0.0f);

            uint2 payload = SplitPointerReflect(&refPrd);
            optixTrace(topObject, hit_pt, refdir, fmaxf(sceneEpsilon,0.0f), RT_DEFAULT_MAX,
                       0.0f, OptixVisibilityMask(255), OPTIX_RAY_FLAG_NONE,
                       RAY_TYPE_PRIMARY, RAY_NUM_TYPE, RAY_TYPE_PRIMARY,
                       payload.x, payload.y);

This is a typo, I meant to write __closesthit__ch()

You are correct. I put the brackets around prd->depth++ and prd->result.x simply for my own readability while trying to debug the issue.

I would like to think that the speckled reflections are due to some simple flag or the like that I am not setting properly. I hope anyway.

Thanks again for the assist.

Also attached is a PNG showing some of the reflective speckling I am talking about: Screen Shot 2021-08-09 at 11.06.34 AM

It looks to me almost like noise.

Doh, the problem was so obvious I feel stupid now. As @droettger had pointed out, the sceneEpsilon is a scene-size-dependent hack. I just needed to make sure it wasn’t too small, and then it cleared right up.

Thank you again for walking me through my OptiX troubles @droettger you and @dhart are the best.
