I’m reviewing some of my MDL-related OptiX .cu kernel code in my OptiX 7.4-based path tracer (based on Detlef’s NVIDIA/OptiX_Apps: Advanced Samples for the NVIDIA OptiX 7 Ray Tracing SDK on GitHub);

In the MDL-related .cu file:

but on the BSDF_EVENT_DIFFUSE_TRANSMISSION event it’s handled as “thinwalled”; is there
more to do? However, it seems that it’s somehow working already:

Thank you.

on the reflection/transmission events of BSDF_EVENT_GLOSSY it’s handled directly with Next Event Estimation.

The “transmission” part of that statement cannot function because transmissive materials are not directly lit in these simple examples.
Only the diffuse and GGX BRDFs are directly lit in those.
If you look at the GGX BRDF and BSDF implementations you’ll notice that there is only one eval function __direct_callable__eval_brdf_ggx_smith for the BRDF, not for the BSDF.

It’s not possible to light specular surfaces directly, and direct lighting of glossy transmissive surfaces would require handling incoming light from a spherical distribution for transparent surfaces, which is not implemented. Direct lighting only happens over the hemisphere of the surface side the ray has hit in these examples.

Means specular or glossy transmissions are never directly lit in these examples but handled like specular events and just follow the ray path and gather light when implicitly hitting an area light.
To look correct, that in turn requires that the emission on the area lights returns the proper radiance, which doesn’t seem to be the case in your images because the lights are all black on their emitting surface.
This should just work when the lights are emitting correctly.

If there are evaluation functions generated by MDL which handle direct lighting through transmissive diffuse or glossy surfaces, then I don’t see why they shouldn’t work in the next event estimation handling when the FLAG_DIFFUSE is properly set accordingly to make all lighting code take the proper code paths.

The examples have a compile-time switch to disable direct lighting / next event estimation which converts them to a brute force path tracer with just implicit light hits. Make that work correctly first.
Both these light transport algorithms need to converge to the same result which allows verifying direct lighting implementations.
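For reference, the multiple importance sampling weight that ties the two techniques together is the standard power heuristic; a minimal standalone sketch (this just mirrors what a helper like the powerHeuristic call used later in this thread computes; the exponent 2 is the usual choice, not something specific to these examples):

```cpp
#include <cassert>
#include <cmath>

// Power heuristic with beta = 2: weights one sampling technique's pdf
// against a competing technique's pdf for the same direction.
// The caller must guarantee pdfA > 0 (the technique that produced the sample).
inline float powerHeuristic(const float pdfA, const float pdfB)
{
  const float a = pdfA * pdfA;
  const float b = pdfB * pdfB;
  return a / (a + b); // in (0, 1]
}
```

Light sampling would weight its contribution with powerHeuristic(lightPdf, bsdfPdf); the implicit light hit uses the mirrored call, so the two contributions sum to an unbiased estimate.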


Thank you very much for the clarification.

I updated the BSDF_EVENT_GLOSSY_TRANSMISSION event according to your suggestions.
so glossy transmission simply is glossy refraction, right?

NOTE: The lights in the previous post are not all black on their emitting surface;
it’s the back side of the light. The pink light arrow (perpendicular to the area light) is rendered on top, without respecting the depth of the ray-object intersection, so it looked confusing; sorry for that. In the image above the camera is now below the light source.

As always, very appreciated. Thank you Detlef.

I updated the BSDF_EVENT_GLOSSY_TRANSMISSION event according to your suggestions.
so glossy transmission simply is glossy refraction, right?

Basically, yes. It’s literally a transmission event through a glossy BSDF irrespective of an effective refraction.

What you’re doing in the thin-walled “glossy” transmission image of the plant is incorrect.
That the ray goes straight through a material only happens for thin-walled specular transmissions (or when the IOR is 1.0).
If you looked at the GGX implementation of the example I linked above, you’d see that the continuation ray direction needs to be sampled with the glossy distribution. Means thin-walled diffuse or glossy transmissions should look like frosted glass. What you showed is a specular transmission.
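To illustrate what “sampled with the glossy distribution” means, here is a generic isotropic GGX NDF half-vector sampling sketch (not the exact code from the linked example; the non-visible-normal variant, tangent space with the shading normal at +z, and xi1 must be < 1):

```cpp
#include <cassert>
#include <cmath>

struct float3s { float x, y, z; };

// Sample a microfacet half-vector from the isotropic GGX NDF.
// alpha is the GGX roughness parameter; alpha -> 0 collapses the
// distribution onto the shading normal, i.e. a specular event.
inline float3s sampleGgxHalfVector(const float alpha, const float xi0, const float xi1)
{
  const float phi = 2.0f * 3.14159265358979f * xi0;
  // theta_h = atan(alpha * sqrt(xi1 / (1 - xi1))) for the GGX NDF
  const float tanTheta2 = alpha * alpha * xi1 / (1.0f - xi1);
  const float cosTheta  = 1.0f / std::sqrt(1.0f + tanTheta2);
  const float sinTheta  = std::sqrt(1.0f - cosTheta * cosTheta);
  return { sinTheta * std::cos(phi), sinTheta * std::sin(phi), cosTheta };
}
```

The continuation direction is then the reflection (or refraction) of wo about this sampled half-vector, which is why a plain prd->wi = -prd->wo pass-through can never be right for glossy transmission.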

Means if you replaced the material system with the MDL-generated BSDF sampling and evaluation functions, that prd->wi = -prd->wo; code should appear nowhere in your code. You would always need to use the sampled direction from the MDL function which looks to be the data.sample.k2 in the right-most image.

In these examples wo is the direction to the observer (the negative ray direction) and wi is the incident light resp. continuation ray direction sampled by light or BSDF sampling functions.

I’m unable to tell if there are actually glossy transmissions happening in that right-most plant image.
If the plant leaves are not modeled as meshes enclosing a volume, that won’t look right when not using thin-walled, because refractions wouldn’t look as expected.
I would recommend using basic MDL BSDFs without layers during debugging to see if things are working correctly first.

That FLAG_TRANSMISSION is used to maintain a small nested material stack which tracks the IOR and absorption coefficients across transmission events inside the ray generation program. (It would also need to track the volume scattering coefficients if they were implemented.)
But if you replaced the material system with MDL BSDF sampling and evaluation functions, that material stack would need to be able to handle MDL materials instead somehow.
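A minimal sketch of what such a nested-volume stack could look like for plain IOR/absorption tracking (names and layout are hypothetical, not Detlef’s actual implementation): push on entering a volume, pop on leaving, read the top entry for the current medium.

```cpp
#include <cassert>

// Hypothetical fixed-size nested-volume stack kept in the per-ray state.
// Entry 0 is the surrounding vacuum; the top entry describes the volume
// the ray is currently travelling through.
struct VolumeEntry { float ior; float absorption; };

struct VolumeStack
{
  VolumeEntry entries[4] = { { 1.0f, 0.0f } }; // start in vacuum
  int top = 0;

  void push(const float ior, const float absorption)
  {
    if (top + 1 < 4) entries[++top] = { ior, absorption };
  }
  void pop() { if (top > 0) --top; }

  float currentIor() const     { return entries[top].ior; }
  float surroundingIor() const { return (top > 0) ? entries[top - 1].ior : 1.0f; }
};
```

On a transmission event entering a volume, push the material’s IOR and absorption; on a transmission event leaving it, pop; currentIor()/surroundingIor() are then the two values an MDL material stack would hand to the sampling inputs.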

Again, once the brute force path tracing without next event estimation is working by using only the MDL-generated sampling function, the next step would be to implement the evaluation functions as well, but only if the MDL evaluation functions are able to calculate light contributions through diffuse and glossy transmissions.

Note that the MDL-SDK example code is also only lighting on the hemisphere of the surface side the ray hit, so I can’t really tell if this is going to work or not.

If yes, the direct lighting inside my examples is using multiple importance sampling, so all BSDFs which should be directly lit would need to set the FLAG_DIFFUSE bit to make sure the direct lighting happens inside the closest-hit program and the next implicit light hit will adjust the emission to handle the other part of the multiple-importance sampling.

Maybe use that MDL-SDK example as basis for your experiments first. (The NO_DIRECT_CALL code path should result in a faster runtime.)

Yeah, that obviously is what I did not understand about how MDL handles it. Thanks for the link; I was not even aware of that new sample code.
I changed the thin_walled/IOR handling like shown in line 662 there: MDL-SDK/ at master · NVIDIA/MDL-SDK · GitHub.
So FLAG_VOLUME in your renderer code is what gives me the value for “is_inside”.
And thin_walled is what in your code is FLAG_THINWALLED. On the BSDF_EVENT_DIFFUSE_TRANSMISSION event I now simply use:

prd->wi = data.sample.k2;
prd->pdf = data.sample.pdf;
prd->f_over_pdf = data.sample.bsdf_over_pdf;  // for the tint

according to line 690 in that same sample. That also works for diffuse materials, so it seems it’s the same for glossy transmission.

However, the sample DC you call in line 245: OptiX_Apps/ at master · NVIDIA/OptiX_Apps · GitHub is simply the equivalent of the MDL sample DC here in line 675:
MDL-SDK/ at master · NVIDIA/MDL-SDK · GitHub

That means I do not need to implement the refraction anywhere; it is already in the MDL sample DC:
microfacet_ggx_smith_bsdf_sample() libbsdf.cpp#L2254
void microfacet_sample() libbsdf.cpp#L598

Here some volume tests with basic MDL BSDFs without layers:

images above are all denoised

scatter_reflect_transmit and scatter_reflect give me a very similar result

In microfacet_evaluate() in line 756 “scatter_transmit” is checked; so for which cases would I need to run the eval DC also on glossy transmission?
Maybe I need to implement some refraction anyway there?

here the (in my understanding) expected output for transmission roughness == 0.0f:

(generated in Blender 3.1.2)

Is there even any transmission happening in your images? It’s hard to tell in still images with that mostly diffuse environment light.
To see if transmissive materials work correctly, there should be some lighting happening behind that object to see if that makes it through.
Also, could it be that your ray path length is too small to actually go through the object?

The first scatter_transmit image looks completely incorrect. There shouldn’t be any front surface reflections on a purely transmissive material.
(Same for the previously attached images with the plant. scatter_transmit cannot match the result of scatter_reflect_transmit.)
Mind that glossy roughness of 0.0 means the material is specular. That simply cannot be directly lit, unless the MDL materials clamp the glossy roughness to a minimum value greater than 0.0 somehow.

Please look at the images inside the MDL Handbook chapters 3.6.2 Specular transmission and 3.7.2 Glossy transmission and compare that to the scatter_reflect_transmit images in there to see the expected difference.

Have you tested this without direct lighting, that is, with the USE_NEXT_EVENT_ESTIMATION compile-time switch set to 0?
If that looks the same or incorrect in a different way, then the BSDF sampling is incorrect and nothing will work.
That simply must work correctly before adding direct lighting.

That means I do not need to implement the refraction anywhere

If you replaced the material handling with the MDL-generated functions inside the closest hit function like shown inside the MDL-SDK OptiX 7 example, none of my examples’ original direct callables should be present anymore. You only need the integrator, which is implemented inside the ray generation and closest hit program(s).

In microfacet_evaluate() in line 756 “scatter_transmit” is checked; so for which cases would I need to run the eval DC also on glossy transmission?

That code calls the BSDF eval() function depending on the scatter_mode and backside_eval conditions. If you set the FLAG_DIFFUSE and called that material’s generated evaluation function, I don’t see why that shouldn’t work.

Again, maybe try that glossy scatter_transmit material inside the MDL-SDK OptiX 7 example code.
The difference between the evaluation code inside that closest hit example and my implementation is that I flip both the geometry and the shading normal to the hemisphere the ray is hitting; the MDL-SDK example only flips the geometry normal, not the shading normal N.
Means the MDL-SDK example would calculate a positive cos_theta if the light sample direction is behind the object and the ray hits from the inside on a backside of the object where their surface normal is in the same hemisphere as the light sample direction.

If the MDL shader implementations always expect the unchanged surface normal in all cases, that would be the culprit.
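The consequence of the two conventions can be shown with plain dot products (a toy sketch; tangent space with the front-face normal at +z, vectors and scenario chosen for illustration only):

```cpp
#include <cassert>

struct V3 { float x, y, z; };

// Plain dot product; the sign of dot(N, wiL) decides which lighting
// code path is taken in a cos_theta check.
inline float dot3(const V3& a, const V3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
```

With the ray hitting the back face and a light sample direction behind the front face, the unflipped shading normal yields a negative cos_theta (light rejected), while a normal flipped to the ray side yields a positive one (light accepted), so the two conventions disagree exactly in the transmissive back-lighting case discussed here.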

No, that is the point, there seems to be none.

No, it’s always optixGetRayTmax() and that is the same as in the diffuse transmission tests in earlier posts in this thread. And there it seems to work correctly.

Great! So now I know exactly how it has to look.

No, not yet.

I will try that next.

Here the critical code I use to run the MDL DCs; maybe I should set FLAG_VOLUME?

// parts of this demo code is taken from OptiX Advanced Samples 2018
// and from the MDL SDK samples; 
// its used in my app with MDL SDK 2020.1.2  ABI 
// (pre-compiled binaries  build 334300.5582, 12 Nov 2020)
// My System:  OptiX 7.4.0 SDK   CUDA 11.4.3        VS2019 v16.11.13     
// Win10PRO 64bit (version 21H1; build 19043.1237)  Windows SDK 10.0.19041.0    8GB RAM    
// device driver: 512.15   GTX 1050 2GB      

// in a closesthit program this function is called:

RT_FUNCTION void do_mdl_handling(MaterialParameter& parameters,
                                 State& state,
                                 PerRayDataUNI* prd,
                                 HitGroupData* sbt_data,
                                 const float3& tangent_u,
                                 const float3& tangent_v,
                                 const float3& text_coord)
{
  unsigned int prd_flags = prd->flags;

  // put the BSDF data structs into a union to reduce number of memory writes
  union
  {
    mi::neuraylib::Bsdf_sample_data                                sample;
    mi::neuraylib::Bsdf_evaluate_data<mi::neuraylib::DF_HSM_NONE>  evaluate; // required since MDL SDK 2019.2
    mi::neuraylib::Bsdf_pdf_data                                   pdf;
    mi::neuraylib::Bsdf_auxiliary_data<mi::neuraylib::DF_HSM_NONE> aux_data;
  } data;

  mi::neuraylib::Resource_data res_data =
  {
    NULL,                      // void const* shared_data: currently unused and should always be NULL.
    sbt_data->texture_handler  // rt_data->texture_handler
  };

  const char* arg_block = sbt_data->arg_block;
  const unsigned int base_MDL_DC_id = sbt_data->mdl_callable_base_index;
  //   "mdl_material_info.h" clones mi::neuraylib::ITarget_argument_block const* arg_block to "m_arg_block";
  //   char* get_argument_block_data() returns a CPU pointer of the arg_block;
  //   get_argument_block_size() returns the size in bytes;
  //   the block of this size is copied to the GPU; arg_block is offset to that data block.

  optixDirectCall<const int, State*, mi::neuraylib::Resource_data* const, void const*, const char*>(base_MDL_DC_id + MDL_BSDF_INIT, &state, &res_data, /*exception_state=*/ nullptr, arg_block);

  //   thePrd.wo  is the same as in MDL: data.sample.k1 = dir_out;  // outgoing direction
  //   thePrd.ior = ior.xy are the current volume's IOR and the surrounding volume's IOR. (default: make_float2(1.0f))
  prd->absorption_ior = make_float4(parameters.absorption, parameters.ior);

  const float xi0 = rnd(prd->seed);
  const float xi1 = rnd(prd->seed);
  const float xi2 = rnd(prd->seed);
  const float xi3 = rnd(prd->seed);

  const float3 dir_out = normalize(-optixGetWorldRayDirection());

#if defined(USE_ALBEDO)  // NOTE: denoiser uses geometry normal (on start of closesthit)
  if (frame == 0)  // only on first accumulation frame
  {
    if (prd->RayDepth == 0)
    {
      data.aux_data = {};
      data.aux_data.ior1 = make_float3(parameters.ior);
      data.aux_data.ior2 = data.aux_data.ior1; // .x = MI_NEURAYLIB_BSDF_USE_MATERIAL_IOR;
      data.aux_data.k1   = dir_out;            // -WorldRayDirection(); outgoing direction

      // mdl_bsdf_auxiliary(&data.aux_data, &state, &res_data, NULL, arg_block);
      optixDirectCall<const int, mi::neuraylib::Bsdf_auxiliary_data<mi::neuraylib::DF_HSM_NONE>*, const State*, mi::neuraylib::Resource_data* const, void const*, const char*>(base_MDL_DC_id + MDL_BSDF_AUXILIARY, &data.aux_data, &state, &res_data, /*exception_state=*/ nullptr, arg_block);

      // The raygeneration program uses this to write the denoiser's albedo buffer.
      prd->albedo = data.aux_data.albedo;
      prd->flags |= FLAG_ALBEDO_FORCE; // my own flag
    } // if (prd->RayDepth == 0)
  } // if (frame == 0)
#endif // defined(USE_ALBEDO)

  const float3 wo = dir_out;
  const float3 positive_wo = -wo;

  const int is_inside   = (prd->flags & FLAG_VOLUME);
  const int thin_walled = (prd->flags & FLAG_THINWALLED);

  if (is_inside && !thin_walled)
  {
    data.sample.ior1.x = MI_NEURAYLIB_BSDF_USE_MATERIAL_IOR;
    data.sample.ior2   = make_float3(1.0f, 1.0f, 1.0f);
  }
  else
  {
    data.sample.ior1   = make_float3(parameters.ior);
    data.sample.ior2.x = MI_NEURAYLIB_BSDF_USE_MATERIAL_IOR;
  }
  data.sample.k1 = dir_out; // outgoing direction
  data.sample.xi = make_float4(xi0, xi1, xi2, xi3);

  // call MDL sample DC:
  optixDirectCall<const int, mi::neuraylib::Bsdf_sample_data*, const State*, mi::neuraylib::Resource_data* const, void const*, const char*>(base_MDL_DC_id + MDL_BSDF_SAMPLE, &data.sample, &state, &res_data, /*exception_state=*/ nullptr, arg_block);

  //   thePrd.f_over_pdf is the same as in MDL: data.sample.bsdf_over_pdf (= bsdf * dot(normal, k2) / pdf)
  //   thePrd.pdf        is the same as in MDL: data.sample.pdf (or additionally use "mdl_bsdf_pdf" => data.pdf.pdf)
  //   thePrd.wi         is the same as in MDL: data.sample.k2  // incoming direction
  //   thePrd.flags      is similar to MDL:     data.sample.event_type

  // REMOVED:
  // int update_transmission = (int)(thin_walled ? false
  //                                             : ((data.sample.event_type & mi::neuraylib::BSDF_EVENT_TRANSMISSION) != 0));

  if ((data.sample.event_type & mi::neuraylib::BSDF_EVENT_SPECULAR) != 0)
  {
    // REMOVED:
    // if (dot(prd->wi, state.geom_normal) <= 0.0f) // Do not sample opaque materials below the geometric surface.
    //   prd->flags |= FLAG_TERMINATE;
    // prd->f_over_pdf = data.sample.bsdf_over_pdf;
    // prd->pdf = 1.0f; // data.sample.pdf; // Not 0.0f to make sure the path is not terminated. Otherwise unused for specular events.
  }

  if (data.sample.pdf <= 0.0f)
    prd->flags |= FLAG_TERMINATE;

  if ((data.sample.event_type & mi::neuraylib::BSDF_EVENT_GLOSSY) != 0)
  {
    if ((data.sample.event_type & mi::neuraylib::BSDF_EVENT_GLOSSY_TRANSMISSION) == mi::neuraylib::BSDF_EVENT_GLOSSY_TRANSMISSION)
    {
      // REMOVED: this caused some of the black parts!
      // if (dot(prd->wi, state.geom_normal) <= 0.0f) // Do not sample opaque materials below the geometric surface.
      //   prd->flags |= FLAG_TERMINATE;

      // ADDED, was missing before!
      prd->flags |= FLAG_TRANSMISSION;
    }
    prd->flags |= FLAG_DIFFUSE;
  }
  else if ((data.sample.event_type & mi::neuraylib::BSDF_EVENT_DIFFUSE) != 0)
  {
    if ((data.sample.event_type & mi::neuraylib::BSDF_EVENT_DIFFUSE_TRANSMISSION) != 0)
    {
      // REMOVED:
      // prd->flags |= FLAG_THINWALLED;

      // ADDED:
      if (!thin_walled) prd->flags |= FLAG_TRANSMISSION;
    }
    prd->flags |= FLAG_DIFFUSE;
  }

  // ===========================================================================
  prd->f_over_pdf = data.sample.bsdf_over_pdf;
  prd->pdf        = data.sample.pdf;
  prd->wi         = data.sample.k2; // incoming direction

  // REMOVED:
  // if (update_transmission) prd->flags |= FLAG_TRANSMISSION;

  if (data.sample.event_type != mi::neuraylib::BSDF_EVENT_ABSORB)
  {
    const unsigned int prd_flags = prd->flags;
    if ((prd_flags & FLAG_DIFFUSE) && 0 < sysNumLights)
    {
      LightSample lightSample; // Sample one of many lights.

      const float2 sample = rng2(prd->seed); // Use higher dimension samples for the position. (Irrelevant for the LCG.)
      const float3 pos = prd->pos;
      lightSample.index = clamp(static_cast<int>(floorf(rng(prd->seed) * sysNumLights)), 0, sysNumLights - 1);

      const LightType lightType = (sysLightDefinitions[lightSample.index].type & 0x0000000F);
      optixDirectCall<const int, float3 const&, const float2, LightSample&>(BASE_LIGHT_ID + lightType, pos, sample, lightSample);

      if (0.0f < lightSample.pdf) // Useful light sample?
      {
        // lightSample.direction = wiL is the same as in MDL: data.evaluate.k2 = dir; // incoming direction
        // Evaluate the BSDF in the light sample direction. Normally cheaper than shooting rays.
        // Returns BSDF f in .xyz and the BSDF pdf in .w
        float4 bsdf_pdf;
        data.evaluate.k2 = lightSample.direction; // incoming direction

        // call MDL eval DC:
        optixDirectCall<const int, mi::neuraylib::Bsdf_evaluate_data<mi::neuraylib::DF_HSM_NONE>*, State* const, mi::neuraylib::Resource_data* const, void const*, const char*>(base_MDL_DC_id + MDL_BSDF_EVAL, &data.evaluate, &state, &res_data, /*exception_state=*/ nullptr, arg_block);

        // f   = make_float3(bsdf_pdf) is the same as in MDL: data.evaluate.bsdf
        // pdf = bsdf_pdf.w            is the same as in MDL: data.evaluate.pdf
        bsdf_pdf = make_float4(data.evaluate.bsdf_diffuse + data.evaluate.bsdf_glossy, data.evaluate.pdf); // since MDL SDK 2019.2

        if (0.0f < bsdf_pdf.w && isNotNull(make_float3(bsdf_pdf)))
        {
          OptixTraversableHandle handle = params.handle;
          OptixRayFlags rayflags = ShadowRayFlags;
          const float Ldist = lightSample.distance - sysSceneEpsilon;
          mTraceShadow(handle, prd->pos, lightSample.direction, Ldist, prd, rayflags); // defines int "visible"

          if (visible)
          {
            if (prd_flags & FLAG_VOLUME) // Supporting nested materials includes having lights inside a volume.
            {
              // Calculate the transmittance along the light sample's distance in case it's inside a volume.
              // The light must be in the same volume or it would have been shadowed!
              lightSample.emission *= expf(-lightSample.distance * prd->extinction);
            }
            const float misWeight = powerHeuristic(lightSample.pdf, bsdf_pdf.w);
            prd->radiance += make_float3(bsdf_pdf) * lightSample.emission * (misWeight * dot(lightSample.direction, state.normal) / lightSample.pdf);
          } // if (visible)
        } // if (0.0f < bsdf_pdf.w && isNotNull(make_float3(bsdf_pdf)))
      } // if (0.0f < lightSample.pdf)
    } // if ((prd_flags & FLAG_DIFFUSE) && 0 < sysNumLights)
  } // if (data.sample.event_type != mi::neuraylib::BSDF_EVENT_ABSORB)
}

I said ray path length. That’s the number of rays shot along a path inside the ray generation program, tracked with the local depth variable in my code. That should be at least 5 or higher to capture transparency effects.

Problems in your code: (Disclaimer, I have not used the MDL-generated functions myself.)

1.) I think you’re special casing too much in your code.
If you’re calling the MDL-generated sampling function, you should always set the results inside the per-ray payload unconditionally exactly once. You set it at three different locations.

      prd->wi = data.sample.k2; 
      prd->f_over_pdf = data.sample.bsdf_over_pdf;
      prd->pdf = data.sample.pdf;

I would also expect that the pdf is set to 0.0f for any case where no continuation ray could be sampled, and that is an end condition for the path as well.
I mean there is probably no need to check if the continuation ray direction goes under the surface of an opaque material. If at all there should only be one code line setting FLAG_TERMINATE after the sampling function was called.
Again everything I do in my sample and eval functions is effectively replaced by MDL-generated functions and any logic around that needs to be replaced by their event results.

2.) The FLAG_DIFFUSE indicates that there was a BSDF sampled which can be directly lit.
That is the only thing which would be related to the sampling event. Don’t set it for specular events and that’s it.

3.) The thin_walled property is a material parameter. It doesn’t make any sense to set the FLAG_THINWALLED after a diffuse transmission event.
Outside my BSDF sampling and evaluation functions the FLAG_THINWALLED is only used to track if a volume has been entered or left inside the ray generation program.

4.) Again, you should first check if MDL expects the State normal to be unchanged.
That’s not part of the provided code but happens inside the rest of the closest hit program.
Check if not negating the state.normal when looking at a backface is changing the result.
My implementation required that, and if MDL doesn’t, then transmission events from within a volume won’t be correct.
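Point 1.) above can be sketched with stubbed-down types (the event and flag constants here are placeholders, not the real MDL values; the specular pdf handling follows the comments in the code above, which note the pdf field is not meaningful for Dirac events):

```cpp
#include <cassert>

// Stub types so the control flow can be shown in isolation.
struct SampleResult { float pdf; int event_type; };
struct Payload      { float pdf; unsigned int flags; };

// Placeholder constants, NOT the real mi::neuraylib values.
enum : int { BSDF_EVENT_ABSORB = 0, BSDF_EVENT_SPECULAR = 1, BSDF_EVENT_DIFFUSE = 2, BSDF_EVENT_GLOSSY = 4 };
enum : unsigned int { FLAG_TERMINATE = 1u, FLAG_DIFFUSE = 2u };

// Write the sampling results exactly once, then derive flags from the event.
inline void applySample(const SampleResult& s, Payload& prd)
{
  prd.pdf = s.pdf; // wi and f_over_pdf would be copied here too, unconditionally

  // Single end-of-path check: absorb events terminate, and so does a zero pdf
  // (specular events are Dirac, so their pdf field is exempt from that check).
  const bool specular = (s.event_type & BSDF_EVENT_SPECULAR) != 0;
  if (s.event_type == BSDF_EVENT_ABSORB || (!specular && s.pdf <= 0.0f))
    prd.flags |= FLAG_TERMINATE;

  // Point 2.): only non-specular events can be directly lit.
  if (s.event_type & (BSDF_EVENT_DIFFUSE | BSDF_EVENT_GLOSSY))
    prd.flags |= FLAG_DIFFUSE;
}
```

All other per-event special casing (surface-side checks, re-setting wi per branch) disappears; only the transmission/volume flag handling remains event-dependent.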


In integrator() I still use it like in line 103 of your new sample:
So the flag simply is sufficient.

Ok removed it there.
I obviously misunderstood you when, back in the days, on my question about translucency you answered this way: “In MDL terms those are scatter_transmit or scatter_reflect_transmit” […] “diffuse_transmission_bsdf is a special case of that and should only be used with thin walled materials.” And so I set them to thin-walled. OK, it’s clear now, thanks!

Yes, and I completely deactivated the part where the normal is flipped (and only apply it flipped on backsides for the denoiser).
Deactivating the normal flip obviously made the main change; the view from “within” the volume is no longer dark, so there is finally transmission going on.

Also, removing that check made some black parts disappear. Progress!
And actually setting prd->flags |= FLAG_TRANSMISSION; for glossy transmission did the last step; so now seamless interaction between materials using your BSDF sampling+evaluation and MDL materials should be possible.
In the code in my previous post that was not applied because of the “return” in between. I also updated the code in that previous post to my new settings. But FLAG_VOLUME was respected for the IOR setting already.

I misunderstood; I did not change any of it. It’s (Path Length min=2, max=10). But it works with your newly recommended settings now.

I also noticed that in my app I was still linking against CUDA 11.1 libs, but using cudart64_110.dll from the CUDA 11.4 bin folder. However, after updating that, nothing in the visual output changed.

But now I think I got the “frosty glass” look you told me about.

Great, thanks Detlef! I think it all works now!!!

Have a look:

The new OptiX 7 MDL sample obviously was made for OptiX < 7.4; it still uses OPTIX_COMPILE_DEBUG_LEVEL_LINEINFO instead of OPTIX_COMPILE_DEBUG_LEVEL_MINIMAL.
Unfortunately, after setting it up completely with the MDL SDK 2021.1.2 (April 2022) ABI (mdl-sdk-349500.8766a), linking to CUDA 11.4, I get CUDA error 700 “an illegal memory access was encountered”.
That is also still the reason why I have not yet updated my app to a newer MDL SDK than MDL SDK 2020.1.2; it did not work for me after that version anymore.
There were also so many changes that it’s not easy to adapt it quickly to the new MDL sample.

I think I will try to add your new bsdf ggx smith DC also into my renderer.

Ok, that looks much more reasonable. :-)

Interesting. I need to reconsider my shading normal behavior. I flipped it to the ray side to be able to handle lighting on back faces of opaque surfaces easily, otherwise they would have been black and that is not how the MDL specs define the surface behavior. That in turn complicated back-lighting transmissive glossy materials which I didn’t implement but always wanted to be able to support singular lights.

I think I will try to add your new bsdf ggx smith DC also into my renderer.

Mind that this will only work when you call these from a separate closest hit program since they rely on a different State behavior for the shading normal.
I wouldn’t keep my BSDF sampling and evaluation functions if the renderer supports MDL-generated functions anyway. That would be redundant. My examples are just showing a very small sub-set of the basic BSDFs with no layers, mixers, modifiers, etc.


I successfully added your MDL GGX Smith DC to the renderer and it works great. (For cases where no layers are required it should be faster than a similar MDL material, because it does not need the additional MDL code DC calls to “init” and “auxiliary” (“auxiliary” supplies the denoiser albedo).)

In the view “from within the object” in the image of my previous post the IOR was 1.45 - that obviously caused the black parts in there.
When the IOR is 1.0 the object is totally invisible on specular transmission and on glossy transmission (when the roughness is also 0.0).
In the code above, in the case of BSDF_EVENT_SPECULAR_TRANSMISSION an additional setting is required:

prd->flags |= FLAG_TRANSMISSION;

to interact with the volume handling in the integrator. Otherwise it remains black.
(I cannot edit that post anymore.)
Here the output for IOR 1.1:

Note that the view from inside refractive objects will not produce the correct refraction because the primary ray doesn’t “know” that it started inside a volume. Primary rays assume they start in a vacuum, as commented inside my ray generation programs.

For that to produce the correct refraction the nested material stack would need to be initialized with the IOR and volume absorption (and scattering) coefficients to be able to calculate the correct effective IOR value (the eta inside my BXDF functions).


Yes, but instead of “eta” two sets of IORs can be passed, so I do not need to calculate “eta”:

if (inside_and_not_thinwalled)
{
  data.sample.ior1 = make_float3(parameters.ior);  // this volume's IOR  (= current)
  data.sample.ior2 = make_float3(prd->ior.y);      // surrounding volume's IOR  (= the other side)
}
else
{
  data.sample.ior1 = make_float3(prd->ior.y);      // surrounding volume's IOR  (= current)
  data.sample.ior2 = make_float3(parameters.ior);  // this volume's IOR  (= the other side)
}

from MDL SDK:

struct BSDF_sample_data
{
    float3 ior1;  // mutual input: IOR current medium
    float3 ior2;  // mutual input: IOR other side
    // ...
};

So I think passing the IORs is the way to go there.
Of course I also updated the IOR values for the Bsdf_auxiliary_data accordingly.

Section 3.6.2 “Specular transmission” in the MDL Handbook says that at IOR > 1.0 […] “the direction of light is modified at the surface boundary, or refracted.” […] That seems to work in my app already.

In 3.7.2 “Glossy transmission” there: […] “However, rendering the objects with glossy_transmission results in very dark areas that do not correspond with our intuition of a transparent object, glossy or not. […] Though they were less visible in the specular case, the dark areas represent areas in which total internal reflection reduced the light intensity to zero. Such an object is never seen in nature because of the unusual definition of the surface - only transmission, without any reflection at all. This unnatural condition was also true for the material that only defined specular transmission, but the error was not so easily seen.”

=> So I think I encountered this situation; and so, implementation-wise, the IOR seems to be OK in my app.
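That matches Snell’s law: with a relative IOR of 2.418 the critical angle is only about 24 degrees, so most inside-to-outside directions undergo total internal reflection. A self-contained sketch of the refraction (my own sign conventions, not the MDL implementation):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Refract direction w (unit length, pointing away from the surface) at a
// boundary with normal n (on the same side as w) and relative IOR
// eta = n_incident / n_transmitted. Returns false on total internal reflection.
inline bool refractDir(const Vec3& w, const Vec3& n, const float eta, Vec3& wt)
{
  const float cosI  = w.x*n.x + w.y*n.y + w.z*n.z;
  const float sin2T = eta * eta * (1.0f - cosI * cosI); // Snell: sin^2(theta_t)
  if (sin2T > 1.0f) return false; // total internal reflection
  const float cosT = std::sqrt(1.0f - sin2T);
  wt = { eta * -w.x + (eta * cosI - cosT) * n.x,
         eta * -w.y + (eta * cosI - cosT) * n.y,
         eta * -w.z + (eta * cosI - cosT) * n.z };
  return true;
}
```

For eta == 1.0 this degenerates to the ray continuing straight through (the IOR 1.0 invisibility observed above), while inside a dense medium like IOR 2.418 it fails for grazing directions, producing exactly the dark TIR areas the Handbook describes.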

On IOR 2.418 that works great (even inside) for:

  • a specular_bsdf (scatter_transmit)
  • a simple_glossy_bsdf (roughness 0.1, scatter_transmit)
  • a microfacet_ggx_smith_bsdf (roughness 0.1, scatter_transmit)

On glossy_bsdf / microfacet_ggx_smith_bsdf with roughness 0.01 at IOR 2.418
there are still black artefacts; not if the IOR is 1.1,

but it’s OK if the roughness is higher or the IOR is lower.

Starting the ray from within works in all of these cases now without additional problems.
NOTE: none of the MDL materials themselves contain any IOR setting, so that is color(1.0) there. The IOR is only applied in the OptiX kernel code when calling the MDL code DCs.

So hopefully these results match the expectations!?

When using your GGX Smith DC (from your samples) with roughness 0.01, the result also looks very similar to the MDL output.

I did a lot of additional tests with and without the normal flip (in the backside case), and now the output looks very similar for all combinations of normal flip on/off and NEE on/off.
So obviously the IOR settings (and some other implementation issues), it seems to me, were the main reason for the problems in the beginning. Not so much the normals.

Thank you very much!
