Radiation Physics problems


I need to identify material boundaries, i.e. intersections (I believe you call them hits), to calculate path lengths through materials to simulate x-ray attenuation. So I will need to be able to determine all of the points along a given ray (parametrized line position or x, y, z) that intersect with a model (not just the first), and of course I will want to do it for lots of rays, so it needs to happen on a GPU. I downloaded and successfully built my own OptiX 3.9 example on Ubuntu 14.04.4 (NVIDIA driver 352.63, so I can’t use 4.0).

I saw another post mentioning that OptiX Prime might suffice, because what I am trying to do once the intersection is found is simple. I do need some list of all intersections for each ray, though. Please advise.

Also, is there a simpler/standalone CMakeLists.txt file one can use to build a simple project like the one I am proposing? Unless I am mistaken, your build strategy does not play well with IDEs like CLion and involves multiple CMakeLists.txt files… Regards


To get all hits along a ray in order the most straightforward approach would be to continue the ray after each closest_hit until you don’t hit anything anymore. In OptiX that means reaching the miss program, in OptiX Prime that means not getting a hit result.

Using the OptiX any_hit program alone to gather all hit points along a ray won’t work with bounding volume hierarchies that use splitting, like SBVH or TRBVH, where a primitive can appear multiple times in smaller BVH nodes. That would result in duplicate hits.

In OptiX Prime, any_hit results alone would not work either, because you won’t be able to do the proper traversal continuation that you can do in OptiX by calling rtIgnoreIntersection() inside the any_hit program.

Anyway, per ray in your launch grid:

  • set up the primary ray origin and direction,
  • trace it to get the closest_hit result,
  • set the next ray origin to that hit point coordinate,
  • keep the ray direction to gather all intersections along a straight line,
  • offset ray.t_min by a small epsilon to prevent self-intersections,
  • repeat until you don’t get a closest hit anymore.

This works with either OptiX or OptiX Prime and any BVH acceleration structure.
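As a sketch of the control flow only, here is the iteration reduced to one dimension in plain host-side C++. The function `closestHit` is a hypothetical stand-in for the actual rtTrace/Prime query, not OptiX API:

```cpp
#include <cmath>
#include <vector>

// Hypothetical stand-in for a closest-hit query along a fixed ray direction:
// returns the smallest boundary distance greater than tMin from the given
// origin, or infinity when nothing is hit (the "miss" case).
float closestHit(const std::vector<float>& boundaries, float origin, float tMin)
{
    float best = INFINITY;
    for (float b : boundaries)
    {
        const float t = b - origin;
        if (t > tMin && t < best) best = t;
    }
    return best;
}

// Gather all hits along the ray by restarting from each hit point,
// offset by a small epsilon to prevent self-intersection.
std::vector<float> allHits(const std::vector<float>& boundaries, float origin)
{
    const float epsilon = 1.0e-4f; // scene epsilon (t_min of the continued ray)
    std::vector<float> hits;
    for (;;)
    {
        const float t = closestHit(boundaries, origin, epsilon);
        if (std::isinf(t)) break;  // reached the miss program: done
        origin += t;               // next ray origin = hit point, same direction
        hits.push_back(origin);    // record distance from the primary origin
    }
    return hits;
}
```

In the real renderer this loop would live in the ray generation program (OptiX) or the host-side query loop (OptiX Prime), with 3D origins and directions.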

A naive implementation will quickly run out of work for the rays which terminated with no more hits. For maximum performance you should compact the active rays, or use a smaller launch grid and fetch new rays from a pending work pool, etc.

If the number of expected intersections is known a priori, you can also do that in OptiX within a single launch. The limit here is being able to store all intersections along each ray in one sweep: depending on the number of intersections that need to be stored, the limit would be the amount of GPU RAM to hold all these results at once (an array of hit points per ray plus an end tag, e.g. a count or a negative intersection distance).
If you don’t know the maximum number of intersections beforehand, or the data is too big to be stored, forget about this method and do the iterative approach above.
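For the single-launch variant, the fixed-size result layout with an end tag could look like this host-side C++ sketch. `MAX_HITS`, `storeHits`, and `countHits` are illustrative names, not OptiX API:

```cpp
#include <vector>

// Fixed per-ray hit storage with a negative distance as end tag.
// MAX_HITS and the flat buffer layout are illustrative choices.
constexpr int MAX_HITS = 50;

// Write all boundary distances for one ray into its slot of a flat buffer.
void storeHits(std::vector<float>& buffer, int rayIndex,
               const std::vector<float>& hitDistances)
{
    float* slot = &buffer[rayIndex * MAX_HITS];
    int n = 0;
    for (float t : hitDistances)
        if (n < MAX_HITS) slot[n++] = t;
    if (n < MAX_HITS) slot[n] = -1.0f; // end tag: negative intersection distance
}

// Count the valid hits of one ray by scanning up to the end tag.
int countHits(const std::vector<float>& buffer, int rayIndex)
{
    const float* slot = &buffer[rayIndex * MAX_HITS];
    int n = 0;
    while (n < MAX_HITS && slot[n] >= 0.0f) ++n;
    return n;
}
```

On the device the same layout would simply be an output buffer of `launchWidth * launchHeight * MAX_HITS` floats, written by the ray generation program.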

Thank you. Thinking about your comments… Is the approach you propose to handle several material boundaries the same one you would use with a material like glass, which transmits some light? In x-ray imaging there is virtually no refraction and no reflection per se. We have something called scatter, which is generated throughout the material. We have techniques to reduce the contribution of scatter to our images, so I don’t need to model that.

Furthermore, in my application, a ray would intersect a small number of material boundaries, less than 50.

“Is the approach you propose to handle several material boundaries the same one you would use with a material like glass which transmits some light?”

Yes, running along a single ray direction and gathering all hits along a straight path in ascending distance order is what would happen with just transparent materials of index of refraction 1.0.

“Furthermore, in my application, a ray would intersect a small number of material boundaries, less than 50.”

That means you would need to reserve storage for at least 50 floats per ray if you encode the hit positions as distances along the primary ray direction. That’s about 395 MB for a full-HD-sized result buffer (1920 × 1080 pixels × 50 floats × 4 bytes ≈ 414.7 million bytes, i.e. 395.5 MiB), besides your other data and the acceleration structure.
That shouldn’t really be a problem with modern workstation boards. Use individual launches in case the amount of data would get too big for your board.

If your data is actually volumetric and you need to capture scattering, there are completely different mechanisms possible to visualize such content.
There was a very nice presentation on path tracing volumetric medical data by Klaus Engel from Siemens Healthcare at the GPU Technology Conference last week.
Search for “S6535 - Real-Time Monte-Carlo Path Tracing of Medical Volume Data” on [url]https://mygtc.gputechconf.com/form/session-listing[/url]

I meant to thank you for your detailed responses some time ago. We are building and modifying primeSimplePP to better understand the technology.

Although it has been over a year, I would like to revive this discussion, since I am interested in a similar usage of OptiX.

I would like to follow the procedure outlined above:

- set up the primary ray origin and direction,
- trace it to get the closest_hit result,
- set the next ray origin to that hit point coordinate,
- keep the ray direction to gather all intersections along a straight line,
- offset ray.t_min by a small epsilon to prevent self-intersections,
- repeat until you don't get a closest hit anymore.

However, after setting the next ray origin to the hit point coordinate, I need to know whether the ray is entering a material from the scene, or exiting the material it has been traveling through and, in the latter case, whether it will re-enter the scene or enter another material that shares a face with the previous one. In that case I also need to know the properties of the new material.

In other words, for every ray origin I need to know the corresponding material. In case of a transition from one material to another I need to know the properties of the material the ray is about to enter.

I did some testing, and it seems that the closest_hit result is not uniquely defined in case a ray's origin is inside a material and leaving through a face shared with an adjacent material. In some cases the closest hit will be with the same material again; in other cases it will be a hit with the adjacent material.

Do you have an idea how to resolve this issue? Apart from that, I am generally interested in scattering the ray within the material, with the scattering properties dependent on the material currently surrounding the ray.

I tried to open the link (https://mygtc.gputechconf.com/form/session-listing) you mentioned in that regard, but it leads to a 404.

Best regards.

Yes, tricky problems.

If you have coplanar faces shared by two geometries, the closest hit cannot be properly determined by the intersection and can change depending on the BVH traversal order.
Using a scene epsilon to continue the ray after each intersection will also skip one of the faces.
Thinking about possible solutions results in scary and complex ideas. To be tried another time.

I have a solution for nested materials and volume scattering in my MDL capable path tracer, but I do not handle the case of coplanar surfaces or partially intersecting objects.
It’s already a little involved to implement all tiny details of nested materials and maybe I’ll put out an OptiX advanced sample showing that in the future.

What you need is a small “material stack” which stores the volumetric parameters: absorption coefficients, IOR, volume scattering coefficients, phase function value, and the material index of the material the ray is currently in.
Being able to actually index materials and their programs is a feature which requires a whole specific rendering architecture I developed over time. The material is needed during volume in-scattering lighting calculation to determine if a hit is the same material the ray is currently in, and because the surface of the volume tints the incoming light if there is a transmission of the light ray at all.

Then you would need the IOR of the volume the ray is currently in, the IOR of the surrounding volume, and the IOR of the material surface you’re currently hitting.
Then a transparent material would need to determine the effective IOR from the current volume IOR and the surface IOR when entering the volume (hitting a frontface), or from the surface IOR and the surrounding IOR when leaving a volume (when hitting a backface, the surface IOR and the current IOR should be identical).

Absorption calculations need to be done with the last ray segment’s distance in the current volume material. That also handles the case of different nested materials, for example entering the water in a pool and then hitting the floor, if the water surface is not modeled as a closed volume.

Now if there was a transmission and you entered a volume you need to put the new volumetric parameters onto the material stack.
If you left the current volume you need to pop the current volume parameters from the material stack.
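The push/pop bookkeeping described above could be sketched like this in plain C++. All names here (`VolumeParams`, `MaterialStack`, `effectiveEta`) are hypothetical illustrations, not part of any actual renderer:

```cpp
#include <vector>

// Hypothetical volumetric parameters kept per nesting level.
struct VolumeParams
{
    float ior;        // index of refraction of the volume
    float absorption; // absorption coefficient, etc.
    int   materialId; // index of the material the ray is currently in
};

// Small material stack tracking which volume the ray is currently inside.
// The bottom entry represents the surrounding medium (e.g. air/vacuum).
struct MaterialStack
{
    std::vector<VolumeParams> stack{ {1.0f, 0.0f, -1} };

    const VolumeParams& current() const { return stack.back(); }

    void enter(const VolumeParams& v) { stack.push_back(v); } // frontface transmission
    void leave() { if (stack.size() > 1) stack.pop_back(); }  // backface transmission
};

// Effective eta (relative IOR) across the boundary being crossed:
// entering goes from the current volume into the surface's volume,
// leaving goes from the current volume into the one below it on the stack.
float effectiveEta(const MaterialStack& s, float surfaceIor, bool entering)
{
    const float outside = entering ? surfaceIor
                                   : s.stack[s.stack.size() - 2].ior;
    return s.current().ior / outside;
}
```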

Volume scattering is another level of complexity.
If you have a path tracer, that can be done with a brute-force random walk through the volume, with some limit on the number of steps. Each vertex of that path which missed all geometry, i.e. which took a random step inside the volume, would need to calculate the in-scattered light.
That needs even more attention to detail if you want to handle the case of lights and other materials inside the volume (fog with lights and volumetric shadows).

Topics to research that are the Henyey-Greenstein phase function, in-scattering, source term, Fresnel, and then there are other methods to approximate volume scattering, like dipoles, some of which require pre-processing of incoming light on the volume’s surface.
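For reference, sampling the scattering direction cosine from the Henyey-Greenstein phase function uses the standard inversion-method formula; a minimal sketch:

```cpp
#include <cmath>

// Sample the cosine of the scattering angle from the Henyey-Greenstein
// phase function for anisotropy parameter g in (-1, 1), given a uniform
// random number xi in [0, 1]. Standard inversion of the HG CDF.
float sampleHenyeyGreenstein(float g, float xi)
{
    if (std::fabs(g) < 1.0e-3f)
        return 1.0f - 2.0f * xi; // isotropic limit
    const float s = (1.0f - g * g) / (1.0f - g + 2.0f * g * xi);
    return (1.0f + g * g - s * s) / (2.0f * g);
}
```

With xi = 0 this returns −1 (full backscatter) and with xi = 1 it returns +1, as expected from the inversion.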

To find the linked GTC presentation use this portal:
In the search dialog select Year 2016 and search the results for the given topic.
Searching for it directly somehow broke.

I thought to handle the volume scattering as a Markov random process, i.e. based on the material properties (mean free path of a particle in the material, probability distribution of scattering angles), random path segments are created. For each path length and direction I could check whether the current path segment is shorter than the closest hit (which assumes infinite path length).

Then, in the closest hit program I would create a new ray either at the closest hit or the end of the path segment (depending on which is the closest).
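For a homogeneous material, the random path segment length mentioned above would typically be drawn from the exponential free-flight distribution. A minimal sketch of that sampling and the comparison against the closest hit (all names are illustrative):

```cpp
#include <cmath>

// Sample a free-flight distance in a homogeneous medium with extinction
// coefficient sigmaT (the inverse of the mean free path), via inversion of
// the exponential distribution; xi is uniform in [0, 1).
float sampleFreeFlight(float sigmaT, float xi)
{
    return -std::log(1.0f - xi) / sigmaT;
}

// Decide whether the next event is a scattering event inside the volume
// or the boundary hit at distance tHit (infinite if nothing was hit).
bool scattersBeforeBoundary(float sampledDistance, float tHit)
{
    return sampledDistance < tHit;
}
```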

I then thought to save the above mentioned material properties within the payload of the ray.

What if I create an epsilon gap between all adjacent materials? As long as the gap is large enough, I expect to receive both closest hits. That way, I could always update the material properties when entering a material (determined by the scalar product between the ray direction and the surface normal).
On the other hand, if the gap is small enough, and each ray does not cross too many interfaces, the bias introduced by the artificial vacuum in the gap will be negligible over the entire simulation.
One problem I see with this approach is the unlikely case that a ray will hit the gap almost parallel to the two coplanar surfaces and then just fly through in-between the two geometries.

Another idea would be to call rtTrace twice for each ray: once in an environment where only the geometry surrounding the ray is present (to check whether the current random step would remain inside the material), and once in an environment where all the geometry except the one surrounding the ray is present (to check which would be the next material hit after leaving the current one).

What do you think about these approaches?

Yes, that is one possibility, but it would be faster if you don’t shoot infinitely long rays in that case.
If you shoot the ray with t_max as sampled distance depending on the volume density distribution, then a miss would mean you’re still inside the volume and a hit would mean you reached a border between volumes.
The miss would advance the volume sample and the ray generation program would create the next sampled volume ray.
When hitting something, that would be handled as usual.
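The miss-means-still-inside logic can be sketched in one dimension like this; `traceWithTMax` and `walkToBoundary` are hypothetical mock-ups of the rtTrace call and the ray generation loop, not OptiX API:

```cpp
#include <cmath>
#include <vector>

// 1D mock of tracing with a finite t_max: returns the first boundary
// within (0, tMax], or -1 to encode a miss (still inside the volume).
float traceWithTMax(const std::vector<float>& boundaries, float origin, float tMax)
{
    float best = INFINITY;
    for (float b : boundaries)
    {
        const float t = b - origin;
        if (t > 0.0f && t < best) best = t;
    }
    return (best <= tMax) ? best : -1.0f;
}

// Random walk until a boundary is reached; 'sampled' stands in for the
// per-step distance drawn from the free-flight distribution.
// Returns the number of volume scattering events before the boundary.
int walkToBoundary(const std::vector<float>& boundaries, float sampled)
{
    float position = 0.0f;
    int scatterEvents = 0;
    for (int step = 0; step < 1000; ++step)
    {
        const float tHit = traceWithTMax(boundaries, position, sampled);
        if (tHit < 0.0f)
        {
            position += sampled; // miss: advance the volume sample point
            ++scatterEvents;     // ...and a new direction would be sampled here
        }
        else
        {
            position += tHit;    // hit: reached a border between volumes
            break;
        }
    }
    return scatterEvents;
}
```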

You need some of this information in the per-ray payload anyway, to be able to calculate the effective eta between volumes and to return the new volume material properties in case of a transmission event.

In my implementation the random walk itself is mostly implemented inside the ray generation and miss shaders. The closest hit program has a special case for the in-scattering direct lighting calculation.

Two scene setup cases are possible:
If you support nested materials, then the hit can be another object and material fully inside the current volume. If instead all volumes are known to be completely closed, you would only need to handle the backface hits of the current volume.
The latter would also allow handling the coplanar faces case, by simply only looking for backface hits when inside a volume. The anyhit program implementation would need to take care of that. It would pick the proper face of two coplanar surfaces, because the frontface hit of the outside material would be ignored. Again, that would be wrong for nested materials, because you would ignore frontface hits incorrectly.

That’s why I said a proper solution to handle both would become complex and scary (read: potentially a lot slower).
You could, for example, gather all hits along the ray direction first and find the two shortest distances. If there are multiple hits within an epsilon environment around the nearest distance, take the backface hit first to leave the volume, then continue from there with a more robust self-intersection avoidance test using primitive IDs instead of the epsilon, to be able to handle the next face entering the adjacent volume at the same position.
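A host-side sketch of that selection rule, with hypothetical `Hit` records carrying a backface flag and primitive ID:

```cpp
#include <algorithm>
#include <vector>

struct Hit
{
    float t;           // distance along the ray
    bool  backface;    // true if the ray hit the inside of a surface
    int   primitiveId; // usable for robust self-intersection avoidance
};

// Among all gathered hits, pick the next one to process: sort by distance
// and, within an epsilon environment around the nearest distance, prefer
// the backface hit so the current volume is left before the next is entered.
Hit nextHit(std::vector<Hit> hits, float epsilon)
{
    std::sort(hits.begin(), hits.end(),
              [](const Hit& a, const Hit& b) { return a.t < b.t; });
    Hit best = hits.front();
    for (const Hit& h : hits)
    {
        if (h.t - best.t > epsilon) break;           // outside the epsilon environment
        if (h.backface) { best = h; break; }         // coplanar pair: leave the volume first
    }
    return best;
}
```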

Note that the above description would only need ray generation and anyhit programs, because all the continuation and shading would need to happen inside the ray generation program.
That’s going to be slower than if you would be able to offset the surfaces to separate the individual intersections.
Also, being able to do all these shading calculations inside the ray generation program instead of the closest hit program would require special material handling. In my renderer implementation those are all bindless callable programs.

A completely different and much more robust solution would be if you actually managed to model the scene to only have individual surfaces as borders between two volumes, with no gap at all.
You would need to implement a material model where you store the volumetric properties of both sides, so not always assuming vacuum on the outside. Then there wouldn’t be any issues with coplanar faces at all, because there wouldn’t be any left.

I just tried out the second approach (calling rtTrace twice) and I think that it works.

So far I only have implemented Box geometries. For each geometry, I implemented two intersection programs:

  1. default intersection program (as in the SDK tutorial)

  2. a copy of the above intersection program, but with an if-clause that prevents the call to rtPotentialIntersection() in case the ray direction does not point into the box

Then I create a geometry group for all of my boxes, create a copy of it, and exchange the default intersection program with 2).

Also, I create two top-level objects: “top_object” and “top_object_onlyEntering”. In the ray generation program, I then follow the strategy discussed before (1.–6. from my first post), but at every step I call rtTrace with both top_object and top_object_onlyEntering.

The top_object closest hit tells me the step length, and the top_object_onlyEntering closest hit tells me which other material I hit (since it will not yield hits with the surrounding material).

Of course, in the case of finite step lengths (as in my Monte Carlo scattering setting), I do not call the second rtTrace if the randomly sampled step length is shorter than t_hit from the first rtTrace with top_object.
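A one-dimensional mock of the two queries might look like this, for a ray traveling in the +x direction against interval "boxes"; all names here are hypothetical stand-ins for the two rtTrace calls:

```cpp
#include <cmath>
#include <vector>

struct Box { float min, max; int material; }; // 1D stand-in for a box

// Closest hit against all box faces (the "top_object" query).
float closestHitAll(const std::vector<Box>& boxes, float origin)
{
    float best = INFINITY;
    for (const Box& b : boxes)
    {
        const float tMin = b.min - origin;
        const float tMax = b.max - origin;
        if (tMin > 1.0e-4f && tMin < best) best = tMin;
        if (tMax > 1.0e-4f && tMax < best) best = tMax;
    }
    return best;
}

// Closest hit counting only faces where the +x ray enters a box
// (the "top_object_onlyEntering" query); returns the entered material,
// or -1 if no box is entered, and the distance via tOut.
int closestEnteringMaterial(const std::vector<Box>& boxes, float origin, float* tOut)
{
    float best = INFINITY;
    int material = -1;
    for (const Box& b : boxes)
    {
        const float t = b.min - origin; // only the entering face of each box
        if (t > 1.0e-4f && t < best) { best = t; material = b.material; }
    }
    *tOut = best;
    return material;
}
```

With two boxes sharing a face, the first query yields the step length to the shared face, while the second yields the material on the other side, which is exactly the information described above.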

Thank you for the tip with tracing the ray with a finite t_max. I implemented that now.

In general I would like to thank you for your extensive and quick advice on this topic.

Interesting approach! I would need to check that out when I find time.

When doing the copy of the sub-tree to have the second intersection program assigned to the Geometry nodes, make sure to share the buffers at the Geometry nodes and share the Acceleration nodes at the GeometryInstances above to not duplicate the size of the scene, just in case you’re running into memory problems.

To iterate on this a little more in case others are interested:

Another more elegant solution to handle coplanar faces without the need to change the original scene would be to use two different ray types. One ray type would only handle front faces, the other ray type only back faces.
That needs to be detected inside the anyhit programs to be able to ignore the respective other case, and the closest hit program would just return the hit distance, material properties, and maybe some attributes inside the per-ray payload.
Then shooting two rays with these two ray types inside the ray generation program would be enough to detect and handle coplanar faces and nested materials.

Four resulting cases to be handled inside the ray generation program:
1.) If the two nearest hits are within an epsilon environment around the nearest intersection distance, then that is the coplanar case and can be handled as a single boundary between volumes. In case of a transmission to the adjacent volume, the material information on the top of the material stack would need to be exchanged for the new material.
Note that this also means that the simple scene epsilon method to prevent self-intersections would be applicable.
2.) If the nearest hit is on a front face, handle that as usual and if there is a transmission, push the entered volume’s material information onto the material stack.
3.) If the nearest hit is on a back face, that must be from the current volume. In case of a transmission event, pop the current material from the material stack.
4.) Nothing got hit. If that was while shooting a volume scattering ray with limited distance, progress to the next volume sample point and calculate the next volume scattering direction and distance. The material stack remains unchanged.
If that was while outside the volume, no object was hit. End of path.

Depending on additional information known about the scene setup (e.g. no nested materials), the above can also be simplified a little.
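The four cases above could be dispatched roughly like this hedged C++ sketch; `classify` and `Event` are made-up names, and misses are encoded as infinite hit distances:

```cpp
#include <cmath>

enum class Event { CoplanarBoundary, EnterVolume, LeaveVolume, MissInVolume, EndOfPath };

// Classify the result of shooting the frontface-only and backface-only rays.
// tFront/tBack are the two closest-hit distances (infinity on a miss);
// insideVolume and limitedDistance describe the current random-walk state.
Event classify(float tFront, float tBack, float epsilon,
               bool insideVolume, bool limitedDistance)
{
    const bool frontHit = std::isfinite(tFront);
    const bool backHit  = std::isfinite(tBack);

    if (!frontHit && !backHit)                         // case 4: nothing got hit
        return insideVolume && limitedDistance ? Event::MissInVolume
                                               : Event::EndOfPath;
    if (frontHit && backHit && std::fabs(tFront - tBack) <= epsilon)
        return Event::CoplanarBoundary;                // case 1: exchange stack top
    if (tFront < tBack)
        return Event::EnterVolume;                     // case 2: push material
    return Event::LeaveVolume;                         // case 3: pop material
}
```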