Saving optixVolumeViewer depth

Hello,

I’m trying to combine the volume renderer’s smoke.nvdb example with my OpenGL rasterized scene.

I’m currently setting the miss color to blue, rendering the volume output texture as a full-screen quad in NDC [-1, 1], and keying the blue color to alpha. This works, and the smoke is placed where I want it. However, this approach looks wrong, so it probably is.

I don’t have any depth information to test against the rasterized scene, so the smoke always shows even when it is behind opaque objects.

Can anyone point me in the right direction on this? Are there examples that show how these offscreen renderings are applied to a rasterized scene?

Thanks in advance,
DC

I don’t think that will work. You cannot correctly composite a 2D image of a transparent(!) volume into a 3D scene unless the volume has already been rendered against the scene geometry itself.
Otherwise the compositing step would need the 3D volumetric data itself to compare against the depth of the rasterized geometry, and that would work even less correctly with transparent geometry.

That means the volume renderer itself needs to be aware of the other geometric primitives in the scene and render against them, so that surfaces block the ray tracing of the volume correctly and a simple alpha-masked addition of the 2D rendered volume on top of the rasterized scene looks right.

I’m not sure how accurate you need this to be, but for a fully accurate result all lights and materials of the geometry would need to be part of the volumetric renderer as well, so that global illumination effects like volumetric scattering and light reflected from geometric objects back into the volume can be handled correctly.

If it doesn’t need to be perfectly accurate, you could instead render the volume directly in the rasterizer with a ray-marching GLSL shader. That won’t allow all possible global illumination features.

Just a quick note to dovetail with what Detlef said: be aware that the OptiX SDK sample optixVolumeViewer is already almost a perfect example of what you want to do (minus the OpenGL handling). The sample essentially traces two rays per sample, one through the geometry and one through the volume, and it combines them by terminating the volume ray at the closest surface hit.

You can do the same thing with OpenGL by replacing the ray through the surface geometry BVH with the depth value you get out of the rasterizer. So rasterize your scene and output a depth buffer and a color buffer, then ray trace the volume and set your tmax to the distance derived from the depth buffer value, and voilà, you can now composite the resulting volume render onto your surface render. Study the function __closesthit__radiance_volume() in volume.cu and note the use of the end variable to terminate the ray at the surface hit.
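
For illustration, a minimal CUDA sketch of that depth-to-tmax reconstruction could look like the following, assuming the launch parameters provide the rasterizer depth sample for the pixel and the inverse of the OpenGL projection * view matrix (called invViewProj here, stored column-major), and that the primary ray direction is normalized; all names are placeholders rather than anything from the SDK sample:

// Hypothetical helper: reconstruct the world-space surface position behind a pixel from the
// OpenGL depth buffer and return its radial distance from the camera as the volume ray's tmax.
// Assumes the default glDepthRange(0, 1) and that the launch index maps to the same window row
// as the depth buffer sample (otherwise flip y).
__device__ float4 mulMat4Vec4(const float* m, float4 v) // m is column-major, like OpenGL
{
    return make_float4(m[0] * v.x + m[4] * v.y + m[8]  * v.z + m[12] * v.w,
                       m[1] * v.x + m[5] * v.y + m[9]  * v.z + m[13] * v.w,
                       m[2] * v.x + m[6] * v.y + m[10] * v.z + m[14] * v.w,
                       m[3] * v.x + m[7] * v.y + m[11] * v.z + m[15] * v.w);
}

__device__ float depthBufferToTmax(float depth,              // rasterizer depth buffer value in [0, 1]
                                   float2 pixelCenter,       // pixel center in window coordinates
                                   float2 resolution,        // window / launch resolution
                                   const float* invViewProj, // inverse(projection * view), column-major
                                   float3 rayOrigin)         // pinhole camera position in world space
{
    // Window space -> normalized device coordinates in [-1, 1] on all three axes.
    const float3 ndc = make_float3(2.0f * pixelCenter.x / resolution.x - 1.0f,
                                   2.0f * pixelCenter.y / resolution.y - 1.0f,
                                   2.0f * depth - 1.0f);
    // NDC -> world space via the inverse view-projection matrix and the homogeneous divide.
    const float4 h = mulMat4Vec4(invViewProj, make_float4(ndc.x, ndc.y, ndc.z, 1.0f));
    const float3 surfacePos = make_float3(h.x / h.w, h.y / h.w, h.z / h.w);
    // The radial distance from the camera position is the tmax for a normalized ray direction.
    const float3 d = make_float3(surfacePos.x - rayOrigin.x,
                                 surfacePos.y - rayOrigin.y,
                                 surfacePos.z - rayOrigin.z);
    return sqrtf(d.x * d.x + d.y * d.y + d.z * d.z);
}

That returned distance then plays the role of the end value in __closesthit__radiance_volume(), terminating the volume ray march at the rasterized surface so the result can be blended over the color buffer.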


David.

Thanks!

I’ll try that option and see how far I get. That makes much more sense than what I was trying to do.

Careful with those depth value calculations though.
The OpenGL rasterizer depth is a parallel distance from the camera plane (defined by the frustum) while the primary ray intersection distance from a pinhole camera is a radial distance from the camera position.
These depth and distance values are not going to composite directly.
To composite ray-traced intersection distances against rasterizer depth values, you must transform the ray-traced world-space hit coordinates to OpenGL window depth values (or vice versa for what David described) using the OpenGL model-view-projection matrix setup.
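
To make the relation concrete, here is an illustrative one-liner (not from any sample; rayDir and cameraForward are assumed to be normalized world-space directions):

// Illustrative sketch: relation between the radial ray-traced distance t of a pinhole camera
// and the parallel (eye-space) depth the rasterizer is based on.
__device__ float radialToParallelDepth(float t, float3 rayDir, float3 cameraForward)
{
    // Project the hit offset onto the viewing axis: eye-space depth = t * cos(theta),
    // where theta is the angle between the primary ray and the camera's forward direction.
    const float cosTheta = rayDir.x * cameraForward.x + rayDir.y * cameraForward.y + rayDir.z * cameraForward.z;
    return t * cosTheta; // this linear eye-space depth still needs the (nonlinear) projection to window depth
}

The two values only agree where cosTheta equals 1, i.e. in the exact center of the view.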

Again, all of that only works with opaque materials. If you have transparency inside your scene, things get complicated.

Also, the optixVolumeViewer is far from rendering volumes that look nice, like real clouds. That needs a completely different lighting and integrator implementation than that simple example uses.

Detlef,

My first stab at this was to ray-trace the volume bounding box into the scene. However, the image of the bounding volume looks somewhat concave, and I have a feeling that the parallel vs. radial depth issue you described above is what’s happening. These are my steps for ray tracing the volume:

  1. Convert the ray origin and ray direction to index space.
  2. Using the slab method, test if the ray hits the box (see the sketch after this list).
  3. If it misses, return the rasterized scene color as the miss color.
  4. If it hits, compute hitPos = rayOrigin + t * rayDir (still in index space).
  5. Convert hitPos from index space to world space.
  6. Compute the depth of hitPos as follows:
float getDepth(vec3 hitPos)
{
    float n = gl_DepthRange.near;
    float f = gl_DepthRange.far;

    vec4 ndc = projectionMatrix * modelViewMatrix * vec4(hitPos, 1.0);

    float d = ndc.z / ndc.w;

    return (gl_DepthRange.diff * d + n + f) * 0.5;
}
  7. Compare this depth with the scene depth.
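
For reference, step 2 boils down to something like this minimal index-space slab test (a sketch with placeholder names, not the actual code):

// Sketch of the slab method: intersect a ray with an axis-aligned box in index space.
// Relies on IEEE rules so that a zero direction component produces +/-inf and still works
// with the min/max reductions below.
__device__ bool intersectAabb(float3 orig, float3 dir, float3 boxMin, float3 boxMax,
                              float& t0, float& t1)
{
    const float3 invDir = make_float3(1.0f / dir.x, 1.0f / dir.y, 1.0f / dir.z);
    const float tx0 = (boxMin.x - orig.x) * invDir.x, tx1 = (boxMax.x - orig.x) * invDir.x;
    const float ty0 = (boxMin.y - orig.y) * invDir.y, ty1 = (boxMax.y - orig.y) * invDir.y;
    const float tz0 = (boxMin.z - orig.z) * invDir.z, tz1 = (boxMax.z - orig.z) * invDir.z;
    t0 = fmaxf(fmaxf(fminf(tx0, tx1), fminf(ty0, ty1)), fminf(tz0, tz1)); // entry distance
    t1 = fminf(fminf(fmaxf(tx0, tx1), fmaxf(ty0, ty1)), fmaxf(tz0, tz1)); // exit distance
    return t1 >= fmaxf(t0, 0.0f); // hit if the slabs overlap in front of the ray origin
}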

Is this the correct way to convert radial depth to parallel depth?

You’re transforming a hit position in world space with the model-view matrix, which transforms from model (object) space into view space?
That can’t work when the model matrix is not the identity.
OpenGL doesn’t have an explicit “world” space transformation.
You would need to eliminate the model matrix from that transformation if it is not the identity.

The float d should be in range [-1, 1] after the division and the final window depth needs to be in range [0.0, 1.0].
The glDepthRange (khronos.org) manual explains the mapping.

The visible range of d in OpenGL spans the full [-1, 1] of normalized device coordinates, from the near to the far plane; anything outside that range is clipped.
For the rest, isn’t this simply the same mapping as described for glWindowPos3f in the OpenGL compatibility spec on page 579?

Or the inverse, I forgot. The last time I programmed this was October 2014 and I don’t have that code anymore, but it was doing exactly this inside the ray generation program, writing OpenGL depth values into a buffer.

Thanks Detlef.

I took out the modelView matrix, but there must be something fundamentally wrong with what I am doing.

The ray tracing is done in index space. Afterwards, I compute the hit position, which is still in index space, and then call grid->indexToWorldF(hitPos) to put it in world space. That is what I am passing to getDepth, but the values are still foobar.

Wait, you cannot leave the whole modelview matrix out.
You need to take only the model matrix out of the modelview matrix.

The model matrix transforms from object space to world space.
The view matrix transforms from world space to eye space. That defines the camera position and orientation.
The projection matrix transforms from eye space into clip space (homogeneous 4D coordinates). This flips from a right-handed to a left-handed coordinate system in OpenGL.
Dividing the clip-space positions by their homogeneous w-component transforms them into normalized device coordinate space, the unit cube [-1, 1]. In OpenGL the full [-1, 1] depth range is visible (in contrast to D3D, which maps visible depths to [0, 1]).
The viewport transform then maps the normalized device coordinates into window space coordinates.

You only need the z-component of that whole transformation from world to window space if you input world space coordinates.
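
A sketch of that z-only chain in CUDA could look like this (viewProj here is projection * view without the model matrix, because the input is already in world space; the matrix is assumed to be uploaded column-major as OpenGL stores it, and all names are placeholders):

// Sketch: world-space hit position -> OpenGL window depth, matching the chain above.
// depthNear/depthFar are the glDepthRange values (0 and 1 by default).
__device__ float worldPosToWindowDepth(float3 hitPos, const float* viewProj,
                                       float depthNear, float depthFar)
{
    // Only the z- and w-rows of the clip-space result are needed for depth.
    const float clipZ = viewProj[2] * hitPos.x + viewProj[6] * hitPos.y + viewProj[10] * hitPos.z + viewProj[14];
    const float clipW = viewProj[3] * hitPos.x + viewProj[7] * hitPos.y + viewProj[11] * hitPos.z + viewProj[15];
    const float ndcZ  = clipZ / clipW; // NDC depth in [-1, 1] between the near and far planes
    return 0.5f * ((depthFar - depthNear) * ndcZ + depthNear + depthFar); // glDepthRange mapping to [0, 1]
}

The returned value is directly comparable with the rasterizer’s depth buffer sample for the same pixel.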

Add printf() calls for all your intermediate coordinates to see if their values are reasonable.
(Don’t use R530 drivers for that; printf wasn’t working in OptiX device code by default there. Use an R535 driver instead.)
I usually do that by adding the following debug code, which prints values only for one launch index, normally the center of the image. There are better ways, like adding a 2D launch-index location to the launch parameters that you can drive with the mouse coordinates (which are y-inverted).

{
  // DEBUG
  uint3 theLaunchIndex = optixGetLaunchIndex();
  if (theLaunchIndex.x == 256 && theLaunchIndex.y == 256) // e.g. in the center of a 512x512 image.
  {
    printf("value = %f\n", value);
  }
}

I got it working by following the code snippet at the bottom of this link:

Combining ray tracing and polygons.

Thanks for your patience and help with this.