How to make a ray pass through in a closest-hit program?

I want to know whether there is built-in alpha blending in OptiX.
In this thread https://devtalk.nvidia.com/default/topic/833397/optix-alpha-merge-with-opengl/ it says:
"You could even generate an opacity value along rays which only pass through transparent objects before hitting the miss shader, which would allow blending them over the background."
So I wonder: if I write the result color (1, 1, 1, 0) back into prd_radiance, will the ray keep tracing and blend the current result with the next hit's result?

The motivation comes from volume rendering’s ordered compositing problem.
Can anybody give me some hints on how to do efficient volume rendering, or on a solution for alpha blending?

I know rtIgnoreIntersection() can be used in an any-hit program but not in closest hit, and any-hit invocations are essentially unordered.

Hi liuwj,

The alpha blending question is easy: after you hit a transparent surface, trace a new ray explicitly yourself from your closest-hit shader. That is also where you would put your alpha blending code.

Take a look at the optixWhitted sample, specifically in the glass.cu material shader. The function closest_hit_radiance() traces one ray for refraction and one ray for reflection, and blends the two result colors into the final return value.
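As a sketch of the blending math itself (the struct and function names below are my own illustration, not code from glass.cu): compositing a transparent surface over the color returned by the continuation ray is the standard "over" operator:

```cpp
#include <cassert>
#include <cmath>

struct Color { float r, g, b; };

// Standard "over" alpha blending: composite a surface of the given opacity
// over whatever color the continuation ray returned.
// (Hypothetical helper, not from the OptiX SDK samples.)
Color blendOver(const Color& surface, float alpha, const Color& behind)
{
    return { alpha * surface.r + (1.0f - alpha) * behind.r,
             alpha * surface.g + (1.0f - alpha) * behind.g,
             alpha * surface.b + (1.0f - alpha) * behind.b };
}
```

In a closest-hit program, `surface` would be the locally shaded color and `behind` the color returned by tracing the continuation ray behind the transparent surface.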

Your question about efficient volume rendering is a bit more open-ended, and the answer mostly depends on what you need and what your data looks like. Do you need to mix surface geometry with volume data? What kind of rendering effects do you need, for example shadows or transparency or global illumination? If you only have volume grid data, it may be more efficient to use your own CUDA kernel to traverse your volumes.


David.

Thanks for your advice, dhart.
OK, I will check the samples.
The data I have is roughly 3,000,000 particles per frame, and I want to render them together with meshes.
There can be as many as 50,000 particles at one pixel; that's quite deep, right?
An explicit approach exhausts memory and causes a stack overflow even when 64 GiB of stack is given.
That is why I am looking for a built-in implicit blend mode.
At first I implemented global illumination, which was costly; then I reduced it to a single light source with no shadows, and it was still costly.

If you have any great ideas, please tell me.
Thanks

Yes, you are correct: 50k particles is far too deep to trace continuation rays recursively or to put your blending in your closest-hit shader; you will run out of stack space.

There are a couple of other options for starting points that I can suggest.

First, you could take a look at the OptiX advanced samples. (Info here: https://devtalk.nvidia.com/default/topic/998546/optix/optix-advanced-samples-on-github/) There is a sample in there called optixParticleVolumes. I believe it is using the any hit shader to collect a fixed number of the most important particle intersections, and then sorting and compositing them properly in the ray gen program. An article about this is soon to be published in the upcoming Ray Tracing Gems book, here: http://www.realtimerendering.com/raytracinggems/
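That collect-then-sort idea can be sketched on the CPU as follows; in OptiX the insertion would happen in the any-hit program and the compositing in the ray gen program, and all names below are hypothetical rather than taken from optixParticleVolumes:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

struct ParticleHit { float t; float color[3]; float alpha; };

// Fixed per-ray budget: only the K nearest particle hits are kept.
const int K = 8;

// What the any-hit program would do with a fixed-size per-ray buffer:
// insert the new hit, keep the buffer sorted by t, drop the farthest hit.
void insertHit(std::vector<ParticleHit>& buf, const ParticleHit& h)
{
    buf.push_back(h);
    std::sort(buf.begin(), buf.end(),
              [](const ParticleHit& a, const ParticleHit& b) { return a.t < b.t; });
    if ((int)buf.size() > K) buf.pop_back();  // discard the farthest hit
}

// What the ray gen program would do afterwards: front-to-back compositing
// of the sorted hits, tracking the remaining transmittance along the ray.
void composite(const std::vector<ParticleHit>& buf,
               float out[3], float& transmittance)
{
    out[0] = out[1] = out[2] = 0.0f;
    transmittance = 1.0f;
    for (const ParticleHit& h : buf) {
        for (int c = 0; c < 3; ++c)
            out[c] += transmittance * h.alpha * h.color[c];
        transmittance *= (1.0f - h.alpha);
    }
}
```

Because the buffer has a fixed size, the per-ray memory is bounded no matter how many particles overlap a pixel, which is exactly what avoids the stack blowup of the recursive approach.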

For mixing volume particles with surfaces, perhaps an obvious suggestion would be to use a loop in your ray gen program to trace through both surfaces and particles in 2 separate queues, and combine the results as you accumulate color along the ray by merging at whichever intersection has the minimum future t value. If you used the technique in the optixParticleVolumes advanced sample, you’d need to take some care to ignore particle intersections that happen beyond the next surface intersection.
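A minimal sketch of that merge loop, written host-side with hypothetical types (in OptiX the two queues would be filled by tracing against separate surface and particle geometry, and the loop would live in the ray gen program):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Hit { float t; float color[3]; float alpha; };

// Merge two t-sorted hit queues (surfaces and particles) along one ray by
// always consuming the hit with the smaller t, compositing front to back.
// An opaque surface terminates the ray, so particle hits beyond it are
// ignored. Hypothetical sketch, not the OptiX API.
void mergeAndComposite(const std::vector<Hit>& surfaces,
                       const std::vector<Hit>& particles,
                       float out[3])
{
    out[0] = out[1] = out[2] = 0.0f;
    float transmittance = 1.0f;
    size_t si = 0, pi = 0;
    while (si < surfaces.size() || pi < particles.size()) {
        bool takeSurface =
            pi >= particles.size() ||
            (si < surfaces.size() && surfaces[si].t <= particles[pi].t);
        const Hit& h = takeSurface ? surfaces[si++] : particles[pi++];
        for (int c = 0; c < 3; ++c)
            out[c] += transmittance * h.alpha * h.color[c];
        transmittance *= (1.0f - h.alpha);
        if (takeSurface && h.alpha >= 1.0f) break;  // opaque surface: stop
    }
}
```

The early break on an opaque surface is what implements "ignore particle intersections that happen beyond the next surface intersection" for the fully opaque case; for semi-transparent surfaces the transmittance factor attenuates the hits behind them instead.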


David.

Thanks, dhart. Your reply was very detailed and constructive.
I will try.
If I make any progress, I will post it here.