How to improve shadows?

Thank you for that explanation.

Now I found a video that uses a ray tracer exactly for shadows only and creates AO with that ray tracer (my original topic in this thread). Shadows are rendered as “Ray-Traced” at 16:02 (AO at 16:42): https://youtu.be/e4hdDcYumec?t=16m2s. At 17:12 there is an AO visualization of that scene.

It does exactly what I was originally thinking about; I just didn’t know about it at the time. The path tracer is higher quality (and I will continue with that as well), but this AO method seems to be faster, so I tested the prime baking sample: https://github.com/nvpro-samples/optix_prime_baking
Only those mesh subsets need to be re-baked which are transformed with bones or blend shapes/shape keys, plus those around them; in the worst case, the entire scene. All unaffected parts can simply be baked once and reused as they are. This saves a lot of render time and adds soft shadows to a pure diffuse Whitted-style ray-traced scene by compositing in a rasterizer afterwards.

I noticed on https://developer.nvidia.com/optix-prime-baking-sample under “Map AO to Vertices” how the mapping is done. It says: “[…] The least squares filtering step is not currently GPU accelerated, […] using OpenMP […]”. This step takes the longest time. Are there any new software-based speed improvements for this since June 2016 which are not described on that page?

Speed in a Release build (GPU usage is as expected, but the CPU is at about 40% on a quad core although OpenMP is active):
[i]Loaded scene: lucy_v134.bk3d
Compute AO … 791.82 ms
Map AO to vertices … 1842.70 ms

Loaded scene: sled_v134.bk3d.gz
Compute AO … 1721.86 ms
Map AO to vertices … 2531.73 ms[/i]

I tried using simple spheres as input, and I’m a bit unsure what is better for performance when creating good quality (for example, a smooth gradient on the sphere) on an arbitrary mesh: more “rays per sample” or more “samples per face”? The timing results did not really tell me much about it. The best quality and speed seems to be 121 rays per sample and 12 samples per face for pure spheres (see attachment), but this does not seem to be generally true. Is there an analysis chart available which shows the optimal ray and sample settings for a given mesh complexity? I think there is an optimum somewhere which could be found with a neural network or some other statistical analysis method. I can do that on my own, but if this information already exists, it would be great to know.

Is it possible to combine an OptiX Prime context with non-Prime OptiX parts at the same time, sharing the same OptiX context?


At first I wrote about build problems with the baking sample in this post (all now removed); these are solved: instead of OptiX 3.9.0 I use OptiX 5.0.0 (it also succeeded on OptiX 4.0.2). With that, this prime baking sample also works on my Pascal GPU with CUDA 9.0 in a VS2017 project (using the v140 toolset), thanks to this post: https://github.com/nvpro-samples/optix_prime_baking/issues/2

If you want ambient occlusion only and you have an accumulating path tracer already, the only thing you need to do is to make all your materials white Lambert and put them into a constant white environment.
Path length should be 2 to have the primary ray hit something and the continuation ray pick up the environment light or not.
Even faster: shoot only the primary ray and then shadow rays from the hit point (indoors, with a limited ray length so everything does not go black).
I’ve posted complete code showing how to do that on the forum before. It’s really that simple: see the forum thread “Exceptions when using occlusion rays” (OptiX - NVIDIA Developer Forums).
You can do that with a separate ray generation program entry point to switch between full global illumination and ambient occlusion instantly.
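To make the estimator concrete, here is a minimal host-side sketch (plain C++, not OptiX device code; all names are illustrative). With cosine-weighted hemisphere sampling the cosine factor cancels against the pdf, so the AO estimate reduces to the fraction of unoccluded shadow rays:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Map a uniform 2D sample in [0,1)^2 to a cosine-weighted direction
// on the local hemisphere around +Z.
Vec3 cosineSampleHemisphere(float u1, float u2)
{
    const float r   = std::sqrt(u1);
    const float phi = 2.0f * 3.14159265358979f * u2;
    Vec3 d;
    d.x = r * std::cos(phi);
    d.y = r * std::sin(phi);
    d.z = std::sqrt(std::fmax(0.0f, 1.0f - u1)); // cos(theta)
    return d;
}

// AO estimator: count unoccluded occlusion rays. 'occluded' is a
// hypothetical visibility callback standing in for the any-hit shadow test.
template <typename OccludedFn>
float ambientOcclusion(int numRays, OccludedFn occluded)
{
    int unoccluded = 0;
    for (int i = 0; i < numRays; ++i)
    {
        // In a real renderer these would be well-distributed samples.
        const float u1 = (i + 0.5f) / numRays;
        const float u2 = std::fmod(i * 0.618034f, 1.0f); // golden-ratio offsets
        const Vec3 d = cosineSampleHemisphere(u1, u2);
        if (!occluded(d)) ++unoccluded;
    }
    return static_cast<float>(unoccluded) / numRays; // 1.0 = fully open
}
```

A fully open hemisphere yields 1.0 (white), a fully blocked one 0.0, matching the convention that an AO image is white where there is no occlusion.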

If you want to bake, read the forum posts where I explained that recently.

Thank you for the code.
I tried it out and it works fine. It’s really an improvement for shadows in the diffuse ray tracer without requiring a path tracer. (Direct illumination and reflections/refractions could still be done with the pure Whitted one.) A great option for designing a final scene without always running the full-quality path tracer during the design process.

It’s really higher quality, especially when using larger triangles, and the denoiser works great on it.
The prime baking sample may be faster, but it’s obviously much harder to optimize for speed.

I use Hammersley hemisphere coordinates because, especially on spheres, they give much better quality than other distributions.
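For reference, a common way to generate the 2D Hammersley set is the base-2 radical inverse (bit reversal); a small sketch in plain C++ (the actual sample code may differ):

```cpp
#include <cassert>
#include <cstdint>

// Van der Corput radical inverse in base 2 (bit reversal of the index).
float radicalInverseBase2(uint32_t bits)
{
    bits = (bits << 16u) | (bits >> 16u);
    bits = ((bits & 0x55555555u) << 1u) | ((bits & 0xAAAAAAAAu) >> 1u);
    bits = ((bits & 0x33333333u) << 2u) | ((bits & 0xCCCCCCCCu) >> 2u);
    bits = ((bits & 0x0F0F0F0Fu) << 4u) | ((bits & 0xF0F0F0F0u) >> 4u);
    bits = ((bits & 0x00FF00FFu) << 8u) | ((bits & 0xFF00FF00u) >> 8u);
    return static_cast<float>(bits) * 2.3283064365386963e-10f; // divide by 2^32
}

// i-th point of an n-point Hammersley set in [0,1)^2; these 2D points
// are then mapped onto the hemisphere (e.g. cosine-weighted).
void hammersley(uint32_t i, uint32_t n, float& u, float& v)
{
    u = static_cast<float>(i) / static_cast<float>(n);
    v = radicalInverseBase2(i);
}
```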

I finally applied the AO result to the diffuse ray tracer. It’s really an improvement: the shadows from direct illumination become a bit softer and the AO increases the image quality.
However, it’s clear that there is no way to get results like a full path tracer with AO.
So this implementation will be for designing a scene only.

I will now move on with the MDL materials.

Detlef, thank you very much!

An ambient occlusion image is meant to be white where there is no occlusion. Not sure how you generated the AO images themselves, but if you modulate a Whitted ray tracer with that, your image will be too dark.

For simplicity I used the geometric normal in that small AO example code, which results in that faceted look. You should try the shading normal as well to get smoother results, but you would need to make sure that AO rays stay above the geometry surface.
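A small host-side sketch of the two helpers involved (hypothetical names, not the actual device code): flipping the shading normal into the hemisphere of the geometric normal, and building an orthonormal basis around it so AO sample directions can be rotated into world space:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 operator-(const Vec3& a) { return { -a.x, -a.y, -a.z }; }
float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 cross(const Vec3& a, const Vec3& b)
{
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}

// Flip the shading normal so it lies in the same hemisphere as the
// geometric normal (the effect of CUDA's faceforward() used this way).
Vec3 faceforwardNormal(const Vec3& nShading, const Vec3& nGeometric)
{
    return (dot(nShading, nGeometric) < 0.0f) ? -nShading : nShading;
}

// Build an orthonormal basis around a unit normal; hemisphere samples
// around +Z are then expressed as x*tangent + y*bitangent + z*n.
void orthonormalBasis(const Vec3& n, Vec3& tangent, Vec3& bitangent)
{
    if (std::fabs(n.x) > std::fabs(n.z))
        tangent = { -n.y, n.x, 0.0f };
    else
        tangent = { 0.0f, -n.z, n.y };
    const float invLen = 1.0f / std::sqrt(dot(tangent, tangent));
    tangent   = { tangent.x * invLen, tangent.y * invLen, tangent.z * invLen };
    bitangent = cross(n, tangent);
}
```

Offsetting the ray origin slightly along the geometric normal additionally helps keep AO rays from self-intersecting the surface.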

The AO image is (optionally) built in its own kernel launches using the same material programs as the radiance. During the AO pass loop (loop steps: rays per pixel), the material simply operates in AO mode: it only calls a “closesthit_ao” function and returns. That function is derived from the code at the link you posted above. It uses a Hammersley hemisphere distribution and now also a shading normal with faceforward to ensure it’s pointing in the right direction. Thank you very much for this info! And of course the “anyhit” program then operates as “anyhit_shadow”. All of that renders into a separate accumulation buffer. For visualization this buffer is applied to the output buffer; otherwise it isn’t. (Any misses are represented as “uninitialized depth”, shown as green in the visualization.)

The final image (in the previous posts) was wrongly built by multiplying the AO from the accumulation buffer into the radiance result of each final pixel. That is indeed dark. Lighting is always a tough task. So I tried again:
Now, for each material, I simply multiply the AO result from the AO kernel passes into the “ambient contribution” instead of into the full radiance result. I noticed that the shadow quality is much better when, in addition to that AO, another AO value from an AO texture map is applied. I also fixed a bad error in the TBN normal creation and added specular light from a SPEC map.
And again: roughness… OK, I know it’s the opposite of glossiness, and that glossiness is not specular light.
In the Phong-based materials I use the inverse of the roughness as the Phong exponent, as I’ve seen in the OptiX 5.0.0 tutorial sample where a “metalroughness” value is used.
So now I think that’s finally it for the diffuse ray tracer (using Phong-based materials)…

Hi Detlef,
I found this new video, “NVIDIA RTX and GameWorks Ray Tracing Technology Demonstration” on YouTube (at 01:32 to 01:47), with soft shadows called “RayTraced Shadows - Directional Light”.
And so I wonder how they’re made. Are these soft shadows done with area lights? Apparently not just AO, because only a directional light is used. So please tell me: how can I do these soft shadows with a pure diffuse ray tracer? Blurring?

I think these are directional lights with an adjustable cone spread angle, where 0 degrees gives the hard shadow at 1:45 and larger angles produce the appearance of a disc light, but without a geometric position.
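A sketch of how such a cone spread could be sampled (my own minimal version, not code from the demo): shadow ray directions are drawn uniformly from a cone of the given half-angle around the light direction, with a half-angle of zero reproducing the hard directional shadow:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Uniformly sample a direction inside a cone of half-angle 'theta'
// (radians) around the +Z axis; rotate into the light's frame afterwards.
// theta == 0 always returns the axis itself (hard shadows).
Vec3 sampleCone(float u1, float u2, float theta)
{
    const float cosThetaMax = std::cos(theta);
    const float cosT = 1.0f - u1 * (1.0f - cosThetaMax); // uniform in solid angle
    const float sinT = std::sqrt(std::fmax(0.0f, 1.0f - cosT * cosT));
    const float phi  = 2.0f * 3.14159265358979f * u2;
    return { sinT * std::cos(phi), sinT * std::sin(phi), cosT };
}
```

Averaging the visibility over many such cone samples per pixel produces the soft penumbra; the width grows with the cone angle, just like a distant disc light would behave.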

Thank you very much for your answer.
Now I think I understand it.

I am trying to implement a bidirectional path tracer (BDPT) with multiple importance sampling based on a paper. I use OptiX 5.0.1; the light path and eye path creation already works fine.

But again, with the glass material (derived from the advanced OptiX glass sample), the reflections of the light emitter, the caustics, and the glass transmittance output of my test have problems. Refraction seems to work, and reflection of the environment also seems to work.
In the attachment the exact same scene is rendered once with the path tracer (all OK) and in several BDPT tests of the same scene
with different scale values for transmittance (all 8 spp; BDPT: max. 16 light vertices, 16 eye path vertices).

As described in the thesis, for reflection and refraction I use probability = 1.0f, and I apply that to the light path vertex contribution:

I do the same for the eye vertex. I run the eye path at the same time the weight combining is done, because the current eye path only needs data from its own vertex and from the previous one (accumulated).

System: OptiX 5.0.1 SDK, CUDA 9.0, GTX 1050, Win10 Pro 64-bit (version 1607), device driver 388.59, TDR re-enabled (defaults + longer delay times), VS2017 (v140 toolset of VS2015)

That the roof of the Cornell box is not the same brightness as the path tracer is already an indication that something is not fully matching even without the specular glass material. You might want to solve that case first before adding specular materials.

Thank you. I changed some code and finally the BDPT gives the same roof brightness as all other walls and the floor. The light source is also correctly reflected now and the caustics are valid. Refraction is generally OK, too. But transmittance is still incorrect (too dark).

The roof is darker when the radiance result is also multiplied into the contribution in the eye path, so I removed that. Obviously BDPT does not need the shadow test in the material!? Instead it obviously uses the light path for that!? I’m not sure whether I understood that correctly. So attenuation is applied without shadow tests in a material, and the light path handles this instead?

The main difference I had to make for the roof and the light reflections was the light emission in the light struct. In the “Denoiser” sample of OptiX 5 it is (340, 190, 100). For my tests with BDPT this was way too high. In the attachment I added some renders with much lower emission:
BDPT without MIS: 0.136, 0.076, 0.04
BDPT with MIS: 3.4, 1.9, 1.0 (to get about the same brightness as without MIS)
However, the emission color (which is used in “diffuseEmitter()”) must remain as high as it is in the PT; otherwise there are no caustics.
UPDATE:
obviously there’s still a bug in my implementation.

When using the path tracer there is much more luminance in total, so the glass material reacts better to it. In BDPT the final color result is also divided by eyepath_length * lightpath_length (see the code in my previous post). Changing that would break the light transport equation, I think. But how can I change this for transmittance (glass) materials only?

Somewhere I read about different handling for transparent objects concerning the shadow rays (between the light vertex and the eye vertex), and I tried to skip that shadow test for glass materials, but that makes things even worse.

And some people even add only diffuse light vertices to the light path, skipping all others. I see no advantage in that. What is correct?

If your implementation doesn’t produce the same results with or without MIS then something is seriously broken. Even a brute force path tracer without next event estimation must reach the same result eventually.
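One useful sanity check for that: the MIS weights over all strategies must sum to 1, so MIS only redistributes contributions between strategies and cannot change the converged result. A minimal sketch of the two standard heuristics for the two-strategy case (generic helper functions, not code from any particular sample):

```cpp
#include <cassert>
#include <cmath>

// Balance heuristic for two sampling strategies with pdfs pdfA and pdfB,
// evaluated for the sample drawn from strategy A.
float balanceHeuristic(float pdfA, float pdfB)
{
    return pdfA / (pdfA + pdfB);
}

// Power heuristic (beta = 2), which usually reduces variance further.
float powerHeuristic(float pdfA, float pdfB)
{
    const float a = pdfA * pdfA;
    const float b = pdfB * pdfB;
    return a / (a + b);
}
```

If toggling these weights on and off changes the converged brightness, the error is in the pdfs or throughput terms, not in the heuristic itself.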

Are you mixing radiant exitance and flux in your implementations?

In a standard light transport algorithm, the visibility test (the shadow ray any-hit program) will fail with whatever geometry lies between the surface point to be lit and the light source sample, even for transparent materials.
There is no attenuation from transparent surfaces during direct lighting. That’s a shadow condition, the visibility is false.
The light transport algorithm will take care of the surface colors and volume absorption by eventually hitting the light through transparent objects which produces caustics in the end. Different light transport algorithms handle that differently well.

I’m not sure you’ve understood the probability density function for specular materials.
That is a Dirac function, meaning it’s 0.0 everywhere except in the single valid reflection/transmission direction, where it’s infinite, so that the integral over it remains 1.0.

That needs to be handled as special case. The pdf is normally set to 1.0 for specular materials to not interfere with the path throughput calculation which modulates by bsdf * fabsf(cosTheta) / pdf. The cosTheta is 1.0 as well then.

You cannot connect paths on specular materials because the probability to hit that single valid reflection direction in that connection is 0.0.
That is the same probability as hitting a point light or a directional light direction randomly: zero chance.

If you use that hacked pdf == 1.0 from specular materials in your path connections, that would be wrong. It should be 0.0 for any direction which is not the actual path continuation.

As such it’s not necessary to store surface hits along a path when hitting a specular surface because you cannot connect with that.
Same for photon mapping, you wouldn’t store photons on specular surfaces.
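A minimal host-side sketch of that special-case handling (plain C++ with hypothetical names, not the actual device code): the throughput update bsdf * |cosTheta| / pdf with the specular shortcut, and a connection pdf that is forced to zero at specular vertices so they are never used for path connections:

```cpp
#include <cassert>
#include <cmath>

struct Color { float r, g, b; };

// Path throughput update: throughput *= bsdf * |cosTheta| / pdf.
// For a specular (Dirac) event the sampled direction is the only valid
// one; setting pdf = 1 and cosTheta = 1 reduces the update to a plain
// multiplication by the (already weighted) bsdf value.
Color updateThroughput(const Color& throughput, const Color& bsdf,
                       float cosTheta, float pdf, bool isSpecular)
{
    if (isSpecular)
    {
        cosTheta = 1.0f;
        pdf      = 1.0f;
    }
    const float w = std::fabs(cosTheta) / pdf;
    return { throughput.r * bsdf.r * w,
             throughput.g * bsdf.g * w,
             throughput.b * bsdf.b * w };
}

// For BDPT connections, the pdf of reaching any direction other than the
// single valid one through a specular vertex is zero, so such vertices
// must not be connected (and need not be stored for connections).
float connectionPdf(bool isSpecular, float pdf)
{
    return isSpecular ? 0.0f : pdf;
}
```

Using the hacked pdf == 1.0 in a connection instead of 0.0 is exactly the error described above: it credits a connection that has zero probability of occurring.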

Indeed, I obviously mixed some things up. I’ll retry.

previous message content REMOVED