I’m trying to apply the volume renderer smoke.nvdb example to my OpenGL rasterized scene.
I’m currently setting the miss color to blue, rendering the volume texture as a full-screen quad in NDC [-1, 1], and keying the blue color to alpha. This works, and I see the smoke placed where I want it. However, this approach looks wrong, so it probably is.
I don’t have any depth information to test against the rasterized scene, so the smoke always shows up, even when it is behind opaque objects.
Can anyone point me in the right direction on this? Are there examples that show how these offscreen renderings are applied to a rasterized scene?
I don’t think that will work. You wouldn’t be able to composite a 2D image of a transparent(!) volume into a 3D scene correctly if the volume hasn’t been rendered against the scene geometry itself already.
Otherwise, compositing that would require the 3D volumetric data in order to compare it against the depth of the rasterized geometry. That would work even less correctly with transparent geometry.
That means the volume itself would need to be aware of the other geometric primitives inside the scene and be rendered against them, blocking the ray tracing of the volume correctly, so that a simple alpha-masked addition of the 2D rendered volume on top of the rasterized scene looks right.
Not sure how accurate you need this to be, but if it should be fully accurate, all lights and materials of the geometry would need to be part of the volumetric renderer as well, to implement global illumination correctly, like volumetric scattering and capturing reflections of light from geometric objects back into the volume.
If it doesn’t need to be perfectly accurate, you could also render the volume directly inside the rasterizer with a ray-marching GLSL shader instead. That will not allow all possible global illumination features.
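To illustrate that simpler option, here is a minimal sketch of the accumulation loop such a ray-marching shader would run, written as CUDA-style C++ for consistency with the other snippets in this thread; a GLSL fragment shader version is structurally identical. sampleDensity(), shadeSample(), and the single-scattering shading are hypothetical placeholders, not code from any SDK sample.

```cpp
#include <cuda_runtime.h>
#include <sutil/vec_math.h>  // float3 operators from the OptiX SDK samples

// Hypothetical helpers: volume density lookup and local (single-scattering) lighting.
__device__ float  sampleDensity(const float3& pos);
__device__ float3 shadeSample(const float3& pos);

// March a ray through the volume between tEnter and tExit and return
// accumulated radiance (rgb) and opacity (a). No global illumination.
__device__ float4 marchVolume(const float3& origin, const float3& direction,
                              float tEnter, float tExit, int numSteps)
{
    float3 radiance      = make_float3(0.0f, 0.0f, 0.0f);
    float  transmittance = 1.0f;
    const float stepSize = (tExit - tEnter) / static_cast<float>(numSteps);

    for (int i = 0; i < numSteps; ++i)
    {
        const float  t   = tEnter + (static_cast<float>(i) + 0.5f) * stepSize;
        const float3 pos = origin + t * direction;

        const float density = sampleDensity(pos);
        const float alpha   = 1.0f - expf(-density * stepSize);

        radiance      += transmittance * alpha * shadeSample(pos);
        transmittance *= 1.0f - alpha;

        if (transmittance < 0.01f)  // early out once the volume is nearly opaque
            break;
    }
    return make_float4(radiance.x, radiance.y, radiance.z, 1.0f - transmittance);
}
```

In GLSL you would typically run this per fragment over a proxy box around the volume, computing tEnter/tExit from a ray-box intersection against the volume bounds and blending the result over the framebuffer.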
Just a quick note to dovetail with what Detlef said - be aware that the OptiX SDK sample optixVolumeViewer is already almost a perfect example of what you want to do (minus the OpenGL handling). This sample essentially traces two rays per sample - one ray through the geometry and one ray through the volume - and it combines them by terminating the volume ray at the closest surface hit. You can do the same thing with OpenGL by replacing the ray through the surface-geometry BVH with the depth value you get out of the rasterizer. So rasterize your scene and output a depth buffer and a color buffer, then ray trace the volume and set your tmax to the depth value calculated from the depth buffer, and voilà, you can now composite the resulting volume render onto your surface render. Study the function __closesthit__radiance_volume() in volume.cu and note the use of the end variable to terminate the ray at the surface hit.
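To make that concrete for the OpenGL variant, here is a hedged raygen sketch. The Params fields, computeCameraRay(), and the payload packing are assumptions for illustration and not the SDK sample's code; the essential point is that the volume ray's tmax is set to the distance of the rasterized surface, and the remaining transmittance is then used to composite over the rasterized color.

```cpp
#include <optix.h>
#include <cuda_runtime.h>
#include <sutil/vec_math.h>  // float3 operators from the OptiX SDK samples

// Assumed launch parameters; names are illustrative.
struct Params
{
    unsigned int           width;
    OptixTraversableHandle volumeHandle;
    const float*           surfaceDistance;  // radial distance per pixel, derived from the GL depth buffer
    const float3*          rasterColor;      // rasterized color buffer
    float3*                output;
};
extern "C" __constant__ Params params;

// Hypothetical pinhole camera helper.
__device__ void computeCameraRay(const uint3& idx, float3& origin, float3& direction);

extern "C" __global__ void __raygen__compositeVolume()
{
    const uint3        idx   = optixGetLaunchIndex();
    const unsigned int pixel = idx.y * params.width + idx.x;

    float3 origin, direction;
    computeCameraRay(idx, origin, direction);

    // Radial distance to the rasterized surface along this ray,
    // reconstructed from the OpenGL depth buffer (see the depth conversion below).
    const float surfaceT = params.surfaceDistance[pixel];

    // Volume radiance and transmittance come back through the payload; the
    // volume hit/miss programs are assumed to write these four registers.
    unsigned int p0 = 0, p1 = 0, p2 = 0, p3 = __float_as_uint(1.0f);
    optixTrace(params.volumeHandle, origin, direction,
               0.0f,       // tmin
               surfaceT,   // tmax: terminate the volume ray at the surface hit
               0.0f, OptixVisibilityMask(255), OPTIX_RAY_FLAG_NONE,
               0, 1, 0,    // SBT offset, SBT stride, miss SBT index
               p0, p1, p2, p3);

    const float3 volumeRadiance = make_float3(__uint_as_float(p0),
                                              __uint_as_float(p1),
                                              __uint_as_float(p2));
    const float  transmittance  = __uint_as_float(p3);

    // Composite: volume in front, rasterized surface attenuated by what remains.
    params.output[pixel] = volumeRadiance + transmittance * params.rasterColor[pixel];
}
```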
Careful with those depth value calculations though.
The OpenGL rasterizer depth is a parallel distance from the camera plane (defined by the frustum) while the primary ray intersection distance from a pinhole camera is a radial distance from the camera position.
These depth and distance values are not going to composite directly.
You must transform the ray-traced world-space hit coordinates to OpenGL window-space depth values, or vice versa for what David described, using the OpenGL model-view-projection matrix setup, to be able to compare ray-traced intersection distances against rasterizer depth values.
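As a concrete example of that conversion, here is a small sketch that turns a value read from the OpenGL depth buffer into a radial distance usable as tmax for the volume ray. It assumes a standard perspective projection, the default glDepthRange(0, 1), and no reversed-Z setup; the cosine term accounts for the difference between the parallel and radial distances described above.

```cpp
#include <cuda_runtime.h>

// Convert an OpenGL window-space depth value into a radial distance along a
// pinhole camera ray (sketch; assumes a standard perspective projection).
__host__ __device__ inline float depthToRayDistance(
    float depthBufferValue,  // value from the rasterizer depth buffer, in [0, 1]
    float nearPlane,
    float farPlane,
    float cosRayToViewDir)   // dot(normalize(rayDir), normalize(cameraForward))
{
    // Window-space depth [0, 1] -> NDC depth [-1, 1].
    const float zNdc = 2.0f * depthBufferValue - 1.0f;

    // NDC depth -> linear eye-space distance (parallel distance from the camera plane).
    const float zEye = (2.0f * nearPlane * farPlane) /
                       (farPlane + nearPlane - zNdc * (farPlane - nearPlane));

    // Parallel distance -> radial distance along this pixel's camera ray.
    return zEye / cosRayToViewDir;
}
```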
Again, all of that is only going to work with opaque materials. If you have transparency inside your scene, things get complicated.
Also, the optixVolumeViewer is far from rendering volumes that look nice, like real clouds. That needs a completely different lighting and integrator implementation than what that simple example uses.