Let’s take a step back. You want to visualize volumetric data, but you are talking about cases where rays do not hit any geometry.
In your image, are the black triangles the geometry which contains the volumetric data (like a material with volume absorption and scattering coefficients, volumetric emission, etc.), or are the orange bounding boxes of your kD-tree enclosing volumetric data outside and inside that triangle mesh, like a participating medium?
Is there additional geometry inside your scene which is not the volumetric data?
If the triangle mesh is the boundary between volumes, meaning the outside is a different medium (e.g. vacuum) than the inside of the closed mesh (e.g. fog, smoke, milk), then there wouldn’t be a need to build other bounding boxes around it, because the ray tracer would directly hit the mesh itself, march along the ray after entering that volume, and stop once it leaves it again.
The idea with the front and back face culling should work for that, unless there are intersecting volume boundaries, but that would be an incorrect scene setup for such an algorithm.
If the volumetric data is instead represented with some bounding boxes which define the regions containing volume data, where empty space can be skipped quickly because there are no bounding boxes for volume regions without any scattering medium information, you could simply build an acceleration structure where these bounding boxes are built of triangles (12 per box), and the same algorithm as for the mesh acting as the boundary between two media would apply.
Care would need to be taken not to miss adjacent boxes, like in your image where the middle ray goes from the left-most orange box into the right-most orange box. Make sure you do not miss the entry of the right-most box.
Let’s assume all triangle geometry is properly wound, the world coordinate space is right-handed, and the front face uses counter-clockwise triangle winding. Then you can either use the built-in triangle face culling or the face normal (float3 unnormalized_face_normal = cross(v1 - v0, v2 - v0);) of a hit to determine on which side of the triangle your ray hit.
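For example, a minimal sketch of that face-side test inside a closest hit program could look like this, assuming v0, v1 and v2 are the triangle vertices fetched from your vertex buffer and there is no flipping instance transform (the vertex fetch is not shown):

// Geometric face normal of the hit triangle (object space, counter-clockwise winding).
const float3 unnormalized_face_normal = cross(v1 - v0, v2 - v0);
// The ray hit the front face when its direction points against the face normal.
const bool hitFrontFace = dot(optixGetObjectRayDirection(), unnormalized_face_normal) < 0.0f;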
You get that information inside the closest hit program of your primary ray, store that entry point resp. entry intersection distance on your payload, and return to the ray generation program.
Then you could either determine the exit point resp. exit intersection distance by shooting the same primary ray with front face culling enabled and with the interval starting at the entry point as ray.origin resp. at the entry distance as ray.tmin, or you could simply start marching with small increasing ray intervals along the ray from the volume entry point until you hit the back-face of the entered volume geometry.
I would recommend the first option because that limits the ray [tmin, tmax] interval to the entered volume range, which should be faster than using a huge ray.tmax during ray marching, if that is required at all (see below).
If there is no other geometry inside the scene than the volume data itself, there wouldn’t actually be any need to trace additional rays to calculate the transmittance. Instead you would just calculate the volume sample points along the ray between the found entry and exit distances and accumulate the volume data.
If the volume is homogeneous, there wouldn’t be a need to step through the volume either. The transmittance could be calculated directly from the extinction coefficient and the distance between the entry and exit points.
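As a sketch, with sigma_t standing for that extinction coefficient (the name is just a placeholder), this is the usual Beer-Lambert term:

// Transmittance of a homogeneous medium over the found ray segment [distanceMin, distanceMax].
const float transmittance = expf(-sigma_t * (distanceMax - distanceMin));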
If the volume is heterogeneous, then you would march along the ray until you reach the back-face.
If the volume is heterogeneous, there are also methods to not step with a fixed value (your 0.05f) but to step according to the density of the volume (with larger steps where the volume is thinner and smaller steps where it is denser), which could reduce the number of steps and potentially reduce some aliasing effects seen with fixed step sizes (a simple sketch of such adaptive stepping follows right below). Then there are advanced methods like residual delta tracking which can improve transmittance calculations.
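A very simple sketch of such density-adaptive stepping could look like this, where STEP_MIN, STEP_MAX and some_density_lookup() are placeholder names and the scaling is just one possible heuristic (this is not the residual delta tracking mentioned above):

float distance = distanceMin;
while (distance < distanceMax)
{
  // Placeholder: read the local density of the medium at the current sample position.
  const float density = some_density_lookup(origin + direction * distance);
  // Larger steps where the medium is thin, smaller steps where it is dense.
  const float step = fminf(fmaxf(STEP_MAX / fmaxf(density, 1.0f), STEP_MIN), STEP_MAX);
  // ... accumulate the volume contribution over [distance, distance + step] here ...
  distance += step;
}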
Let’s assume there is only volume data in the scene, then the pseudo algorithm could look like this:
extern "C" __global__ void __raygen__volume()
{
// Calculate primary ray orgin and direction here
...
float3 origin = launchParams.cam_eye;
float3 direction = normalize(launchParams.camU * d.x +
launchParams.camV * d.y +
launchParams.camW);
// This code block needs to be inside a loop which advances the the ray orgin or tmin value to the next AABB entry point until both rays miss.
float distanceMin = -1.0f; // Front face intersection distance, stays negative when missed.
float distanceMax = -1.0f; // Back face intersection distance, stays negative when missed.
// Shoot primary ray with back face culling enabled.
unsigned int payload = __float_as_uint(distanceMin);
optixTrace(launchParams.topObject,
origin, direction, // origin, direction
0.0f, RT_DEFAULT_MAX, 0.0f, // tmin, tmax, time
OptixVisibilityMask(0xFF), OPTIX_RAY_FLAG_CULL_BACK_FACING_TRIANGLES,
TYPE_RAY_PROBE_VOLUME, NUM_RAY_TYPES, TYPE_RAY_PROBE_VOLUME,
payload);
distanceMin = __uint_as_float(payload);
// Shoot primary ray with front face culling enabled.
payload = __float_as_uint(distanceMax);
optixTrace(launchParams.topObject,
origin, direction, // origin, direction
0.0f, RT_DEFUALT_MAX, 0.0f, // tmin, tmax, time
OptixVisibilityMask(0xFF), OPTIX_RAY_FLAG_CULL_FRONT_FACING_TRIANGLES,
TYPE_RAY_PROBE_VOLUME, NUM_RAY_TYPES, TYPE_RAY_PROBE_VOLUME,
payload);
distanceMax = __uint_as_float(payload);
// There are four possible case here.
if (0.0f < distanceMin && 0.0f < distanceMax)
{
if (distanceMin < distanceMax)
{
// The standard case: The ray started outside a volume and hit a front face and a back face farther away.
// No special handling required. Use the two distance values as begin and end of the ray marching.
}
else // if distanceMin >= distanceMax
{
// This means a backface was hit before a father away front face.
// In that case the ray origin must be inside a volume.
distanceMin = 0.0f;
}
}
else if (distanceMin < 0.0f && distanceMax < 0.0f)
{
// Both rays missed, nothing to do here, fill per output buffer with default default data and return.
}
else if (distanceMin < 0.0f && 0.0f < distanceMax)
{
// Primary ray origin is inside some volume.
distanceMin = 0.0f; // Start ray march from origin.
}
else // if (0.0f < distanceMin && distanceMax < 0.0f)
{
// Illegal case. Front face hit but back face missed
// This would mean there is no end to the volume and the ray marching would be until world end.
// Maybe tint the result in some debug color to see if this happens.
}
float3 result = make_float3(0.0f);
for (float distance = distanceMin; distance < distanceMax; distance += STEP_DISTANCE)
{
// Calculate world position of the volume sample point.
const float3 position = origin + direction * distance;
// Somehow calculate the 3D volume data element (e.g. a 3D texture coordinate in normalized range [0.0f, 1.0f]^3
const float3 sample = getVolumeSampleCoordinate(position);
// Read some volume data from the sample position.
const float3 data = some_volume_data_lookup(sample);
// Accumulate to the result you need.
result += some_operation(data);
}
// Write the per launch index result to your output buffer.
...
}
There is no need for a miss program when the distance values are initialized with the value indicating a miss (a negative intersection distance).
The closest hit program would need to return the intersection distance (optixGetRayTmax()) inside the single payload register.
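For the probe rays above, that closest hit program can be as small as this, matching the single payload register used in the ray generation sketch:

extern "C" __global__ void __closesthit__probe_volume()
{
  // Return the intersection distance of this hit in the single payload register.
  optixSetPayload_0(__float_as_uint(optixGetRayTmax()));
}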
If you actually need to shoot marching rays to be able to hit other geometry, the above for-loop would need to do something like this:
for (float distance = distanceMin; distance < distanceMax; distance += STEP_DISTANCE)
{
  payload = __float_as_uint(-1.0f);

  optixTrace(launchParams.topObject,
             origin, direction,                        // origin, direction
             distance, distance + STEP_DISTANCE, 0.0f, // tmin, tmax, time
             OptixVisibilityMask(0xFF), OPTIX_RAY_FLAG_NONE,
             TYPE_RAY_MARCH_VOLUME, NUM_RAY_TYPES, TYPE_RAY_MARCH_VOLUME,
             payload);

  const float distanceHit = __uint_as_float(payload);

  if (0.0f < distanceHit)
  {
    // Hit something other than a volume boundary between the volume entry and exit points.
    ...
  }
  else
  {
    // Nothing in the way, accumulate along the volume step as usual.
    ...
  }
}
Note that I used a different ray type, which is probably not required, because that could be handled in one closest-hit program by adding another payload value which switches the behavior when needed, or more payload values when additional data needs to be returned.
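As a sketch of that single closest hit program approach, a second payload register could carry a flag which the ray generation program sets per trace (FLAG_MARCH is a placeholder name):

extern "C" __global__ void __closesthit__volume()
{
  // Always return the intersection distance in the first payload register.
  optixSetPayload_0(__float_as_uint(optixGetRayTmax()));

  if (optixGetPayload_1() == FLAG_MARCH)
  {
    // Return whatever additional data the volume marching needs in further payload registers here.
  }
}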
There could also be different traversable handles used for the volume boundaries and the other triangle meshes in the scene. How exactly the scene and the programs can be implemented really depends on what you need.
If more complex algorithms are required (volume scattering with a random walk does not run along a single direction), there is a simple random walk volume algorithm implemented in the MDL_renderer ray generation program. This is not really beginner stuff. If you’re not familiar with OptiX, look at all OptiX SDK examples first and then the “intro” examples in the above repository.