How to record hits while traversing a mesh?

I’d like to use OptiX to traverse meshes, generate and trace rays, and record the hit meshes. Are there any relevant examples to refer to?

Trace depth: 1

I think you’ll need something like this:

  1. A ray should be fired from the center of gravity of each mesh.
  2. You need to run several optixLaunch calls.
  • Are the dimensions of optixLaunch the right number of rays?
  3. If a ray fired from one mesh hits another mesh, new intersection and anyhit programs must be created to record the hit.
  • Can I use an inner triangle intersection?

Hi @dlatjq3,

Sounds like an interesting project! Your description makes me think of texture baking - is that an accurate description of what you want to do? Conceptually this blog post might help you (note the code is from an outdated version of OptiX, so use it to help answer your questions, but don’t expect working reference code): Baking With OptiX

Here are a few thoughts, but note I obviously don’t know exactly what you need or what you’re doing, so apologies in advance if any of my suggestions don’t make sense for you.

For question 1 - firing a ray from inside your source mesh may run into complications if the mesh has any folds or overlaps from the perspective of the center of gravity. It might be worth thinking about whether you can or should sample the surface as an alternative to starting rays inside the mesh.

For question 2 - it’s up to you do design the mapping between your buffers and your launches and your meshes. It might indeed be easiest to use multiple launches, say one per mesh, but if you’re processing many source meshes this way and your resulting buffer for each one is relatively small, then you might end up being performance limited. If that’s the case you can think about how to pack and index multiple meshes into a single launch, or batching them into a small number of launches.

For question 3 - I don’t understand the question, but there are some options. If your meshes are closed and oriented consistently and you have backface culling enabled, then you might be able to fire rays from inside a mesh, use only a closest hit program, and get hits only on the outside of other meshes. From a performance perspective, it’s ideal if you can avoid needing an anyhit program. If you want to intersect the source mesh (with backface culling disabled) and record information about the source mesh in addition to the subsequent hits on other meshes, then you will need to use an anyhit program. It is possible to do this, but there may be some slightly tricky bookkeeping. The anyhit program does not necessarily process hits in t order, so you will need to identify the source mesh and track the closest external mesh, and you will need to be prepared to filter out internal hits from external meshes, and to update the ray’s tmax on hits so the ray can terminate before traversing the rest of your scene.
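To illustrate the bookkeeping, here is the per-ray logic modeled as plain C++ (in a real anyhit program this state would live in the ray payload; the struct name, field names, and layout are assumptions for illustration):

```cpp
#include <cfloat>

// Per-ray state that would normally be carried in the OptiX ray payload
// (names and layout are assumptions for illustration).
struct RayState {
    unsigned int sourceMeshId;                // mesh the ray started inside
    unsigned int closestExternalMesh = ~0u;   // ~0u means "no hit yet"
    float        closestT = FLT_MAX;          // t of closest external hit so far
};

// Returns true if the hit should be accepted. In OptiX, accepting an anyhit
// clamps the ray's tmax to t, pruning farther traversal; ignoring it
// (optixIgnoreIntersection) lets traversal continue past the source mesh.
bool processAnyHit(RayState& state, unsigned int hitMeshId, float t)
{
    if (hitMeshId == state.sourceMeshId)
        return false;  // internal hit: ignore so the ray keeps going

    // Anyhit does not see hits in t order, so track the minimum manually.
    if (t < state.closestT) {
        state.closestT = t;
        state.closestExternalMesh = hitMeshId;
    }
    return true;  // external hit: accept, shortening tmax to t
}
```

For example, if hits arrive out of order (mesh 7 at t=2.0, the source mesh at t=0.5, mesh 3 at t=1.2), the state still ends up recording mesh 3 as the closest external hit.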

I hope that helps!



Thank you for your answer.

I’ve found that what I want is similar to the baking program and I’m checking out previous posts from the forums.

I couldn’t download the “optix_prime_baking”(GitHub - nvpro-samples/optix_prime_baking: Shows how to bake ambient occlusion at mesh vertices using OptiX Prime) sample code. Can I get that sample?

The picture above is part of what I’m going to do.
I need to fire a ray from that mesh and would like advice on how to determine the direction of the ray.
I think

  1. Calculate the center of gravity of each mesh and fire rays based on it.
  2. Set a constant angle (fov) and emit rays across that fov.
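If "mesh" here means a single triangle, its center of gravity in step 1 is simply the average of its three vertices. A minimal sketch (Vec3 is a placeholder type):

```cpp
struct Vec3 { float x, y, z; };

// Centroid (center of gravity) of a single triangle: the average of its
// three vertices - equivalently the point with barycentric weights
// (1/3, 1/3, 1/3).
Vec3 triangleCentroid(const Vec3& a, const Vec3& b, const Vec3& c)
{
    return { (a.x + b.x + c.x) / 3.0f,
             (a.y + b.y + c.y) / 3.0f,
             (a.z + b.z + c.z) / 3.0f };
}
```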

Is there any other good way? Is there a way to use the BVH or the OptiX API for this?

The number of meshes is estimated to be around 10k, and we are aware that this is a time consuming task.

My goal is to accurately record all mesh indices hit for a ray fired from a mesh and save time.

Thank you

Hey so did you have trouble with the download itself, or did the sample appear empty once you got it? The code was intentionally deprecated and deleted in a git commit, but it’s all still there in the git history, so all you need to do is go back 1 commit.

I was able to see the code by doing this:

git clone https://github.com/nvpro-samples/optix_prime_baking.git
cd optix_prime_baking
git checkout HEAD^

Just continue to keep in mind that it might be difficult or impossible to compile or run this code. Consider it nothing more than notes on how you might structure a baking app.

I might be able to offer some suggestions, but I don’t quite understand the task or your goals yet.

The bunny is a good example of a mesh where firing rays from the center of mass will intersect the source mesh multiple times. For example, some rays will exit the body and then re-enter at the ears. Is this acceptable for your use case?

Ultimately, do you want to output some kind of texture of distance or neighbor information? Or do you want to store information for each face or each vertex of a mesh?

Your picture makes me wonder if maybe I’m making the wrong assumptions about your terminology. In the image, there are multiple rays emanating from the center of a single triangle. Does ‘mesh’ mean the entire bunny, or does it mean a single triangle? Is your scene closer to a single bunny with 10,000 triangles, or to a space containing 10,000 bunnies? Do you want to fire rays from the centers of each triangle, or really from the center of the whole bunny mesh? Are you trying to capture a physical quantity, like the closest distance to the nearest other bunny (say, for some physics calculations), or do you want the closest distance to anything, including the current mesh (perhaps for ambient occlusion or something similar)? How do you want to use the resulting data? When you say you want to record all mesh indices hit for a given ray, do you mean you want all meshes along the ray, not just the nearest other mesh, even if the ray passes through dozens or hundreds of meshes?

I understand completely if you aren’t able to discuss details for any reason, but if you can elaborate more then we might be able to find better examples - there’s lots of existing literature and software for this kind of thing that might be able to help you.


Thank you for your answer

My explanation was very lacking.

I was able to download the optix_prime_baking sample.

A mesh means one triangle, and the rabbit is a single object made up of 1,000 triangles.

Rays are fired from the center of each triangle and directed at each of the other 999 triangles.

So it looks like I need to generate 999 rays from each triangle.

The rays do not reenter the ears or pass through the rabbit’s body.

Ultimately I would like to get the mesh index information hit by the ray and get a 1,000x1,000 matrix that records it.

Thank you

Aha, my assumptions were wrong, thanks. Your added description makes me think of “radiosity” algorithms - an approach people used a few decades ago to solve for global illumination. Part of the calculation involved computing the “form factor” or “view factor” between every possible pair of triangles/quads in the scene, and part of that was accounting for the amount of occlusion, if any.

Do you want only one ray between each pair of triangles? You can trace multiple rays and compute the fractional visibility, if that makes sense. (Is your question partially about how to sample the surface and/or calculate your ray origins?) Do you care which side of each triangle is facing inward or outward? If there is no possibility of exiting and re-entering, then I assume the scene shape is convex?

I’m trying to think of ways to answer the unanswered questions you’ve asked, so I can at least help a little. ;) Given your most recent description, filling a 1k x 1k matrix of hit indices in a single OptiX launch will be relatively simple. If your scene is 1k triangles, then you will have 499,500 unique pairs to check in this example. This is a relatively small launch size, so you can probably easily store the result in a single global memory buffer. All you need to do is use your launch index to determine the IDs of the two triangles you want to check and the destination in your output buffer - you can use some kind of implicit arithmetic scheme, or you can record an explicit list of pairs to check in a buffer that has as many entries as your output buffer. If your scene grows to 10k triangles or more, the matrix becomes 10k x 10k and will consume significantly more memory, and you might need to break your job into multiple launches. The ideal is to minimize the number of launches and maximize the size of each launch, so you can take advantage of the GPU’s highly parallel nature.
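For instance, one possible implicit arithmetic scheme is to enumerate the unordered pairs row by row and invert that mapping in raygen (a sketch, not the only option; an explicit pair list in a buffer works just as well):

```cpp
#include <utility>

// Map a flat launch index k in [0, n*(n-1)/2) to the k-th unordered pair
// (i, j) with i < j, enumerating (0,1), (0,2), ..., (0,n-1), (1,2), ...
// The caller must ensure k < n*(n-1)/2.
std::pair<unsigned, unsigned> indexToPair(unsigned k, unsigned n)
{
    unsigned i = 0;
    unsigned rowLen = n - 1;   // number of pairs in row i: (i, i+1..n-1)
    while (k >= rowLen) {      // walk past complete rows
        k -= rowLen;
        ++i;
        --rowLen;
    }
    return { i, i + 1 + k };
}
```

For n = 1000 triangles this enumerates all 1000 * 999 / 2 = 499,500 pairs in a single one-dimensional launch.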

I’m rambling a little, so let me know which questions you’d still like help with, or if anything I talked about needs clarification. I’m not certain yet if it’s more helpful to discuss algorithmic concepts or just OptiX-specific techniques.

One sample that might be helpful for you in the current OptiX SDK is optixRayCasting - this sample demonstrates how to pass a buffer of rays to OptiX, then do ray tracing, and finally save all the results to a buffer. It sounds like the structure of your app might be similar conceptually. Just be aware that generating the rays dynamically on the GPU is typically much faster than storing rays in a buffer in memory.



[ figure 1 ]

My intention is to achieve what is shown in figure 1.
I want to find the indices of other triangles that are visible from within a triangle. (single object)

Initially, I thought that only one ray per pair of triangles would be sufficient, but it seems there might be situations where visible triangles cannot be accurately identified.

I need advice on how to handle the following scenario:


[ figure 2 ]

condition: TraceDepth is 1 and using only closest hit.

  1. Emit a ray from the centroid of (a) to (b).
    1-1) (a) → (b): hit

  2. Emit a ray from the centroid of (a) to (c).
    2-1) (b) is hit first.
    2-2) Since TraceDepth is 1, tracing terminates.
    2-3) (a) → (c): miss

  3. Emit a ray from the centroid of (a) to (d).
    3-1) (a) → (d): hit

  4. Emit a ray from the centroid of (a) to (e).
    4-1) (d) is hit first.
    4-2) Since TraceDepth is 1, tracing terminates.
    4-3) (a) → (e): miss

In the scenario above, I want 2-3) to be recorded as a miss ( (c) is obscured by (b) )
and 4-3) to be recorded as a hit ( (e) is not obscured by (d) ).
To achieve this, it seems that using only one ray per pair of triangles won’t be sufficient.
I would like to ask for advice on this.

It seems that I’ll need the centroids and positions of each triangle to set the ray origin and direction in the raygen function.
Can I pass this information as “launch parameters”? If so, I’m considering passing an array of triangle centroids and indices.

The orientation of the triangles is not of interest.
I’m assuming that the scene is convex in shape.

The number of triangles in the scene could increase, potentially reaching 100k or more.
Therefore, I might need to execute optixLaunch multiple times, as you suggested.
Can I use a loop statement to run optixLaunch multiple times?

Your responses have been immensely helpful to me.
Thank you.

I have additional questions.

I am testing the optixMeshViewer sample code.

In the closesthit function, I passed the optixGetTriangleBarycentrics() value instead of the color to be displayed on the screen.

I expected the pixels inside one triangle to show the same color, but that wasn’t the case.

Isn’t the optixGetTriangleBarycentrics() value returning the center of gravity of the triangle hit by the ray?

I will attach the code and results.

Thank you

Hey, okay, so a few things to answer here.

Regarding a couple of comments you made about “Since TraceDepth is 1, tracing terminates.”, we might want to clarify what “trace depth” means. That’s referring to how many times you can call optixTrace() recursively, and it does not affect traversal of a single ray. If you request a closest hit, then your rays will terminate on the closest hit, so the behavior you described might be correct, as long as you have disabled your anyhit shaders. If you enable anyhit shaders, the ray will report all intersections along the ray during traversal, regardless of the trace depth setting, unless you explicitly terminate the ray in your anyhit program.

it seems that using only one ray per pair of triangles won’t be sufficient.

Right, yeah I was trying to hint at this earlier. We need to better define what the question really is. Scenario #4 is showing a situation where triangle (d) is partially occluding the visibility of triangle (e) from triangle (a). It happens to completely block the ray between (a) and (e)'s centroids, which illustrates why you can’t rely on a single ray between the centroids to give you the correct answer.

If you want to know whether there is any visibility between triangles (a) and (e), then you will want to use multiple rays. Tracing a ray between the centroids gives you a very rough and biased approximation of the visibility between the triangles. This approximation might be sufficient if you have many small triangles, but otherwise if you need a better approximation, you can reduce the bias of your visibility query by randomizing the rays, and you can improve the accuracy of your approximation by sending more rays. For example you could send a whole batch of rays, each starting from a uniform random location in triangle (a), and aimed at another uniform random location on triangle (e). By sampling the volume between the two triangles with multiple rays, you will be much more likely to find some unoccluded rays, and therefore know that these triangles have partial visibility. Similarly, you will also know with greater certainty that triangle (d) is partially occluding the space between (a) and (e).

I might need to execute optixLaunch multiple times, as you suggested.
Can I use a loop statement to run optixLaunch multiple times?

Yes, that’s easy and straightforward. The SDK sample called optixPathTracer does “progressive” rendering where it uses each subsequent launch to improve the image by blending new results with old results. This is a good simple example of looping the launch.

Isn’t the optixGetTriangleBarycentrics() value returning the center of gravity of the triangle hit by the ray?

No, the result of the call to optixGetTriangleBarycentrics() is giving you the coordinates of the hit point of your ray, using barycentric coordinates. These coordinates tell you how to find the hit point as a weighted sum of the vertices of the triangle; the barycentric coordinates are the weights. So they will always vary across the face of any triangle with non-zero area. You will find that a hit point near one of the three vertices returns barycentric coordinates with one coordinate close to 1.0 and the other two close to 0.0. The centroid of your triangle is defined by the barycentric coordinates (1/3, 1/3, 1/3). Barycentric coordinates always sum to 1.0, so you only need two of them to reconstruct a point.

It seems that I’ll need the centroids and positions of each triangle to set the ray origin and direction in the raygen function. Can I pass this information as “launch parameters”? If so, I’m considering passing an array of triangle centroids and indices.

Launch parameters are rather limited in size. Those live in constant memory, and the launch params buffer has a maximum size of 64 kilobytes, I believe. And OptiX might use a little bit of that IIRC. So for a large buffer of information needed for raygen to run, you should use a normal global memory buffer, i.e., something you allocate with cudaMalloc or the like.
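For example, the launch params themselves can stay tiny by holding only scalars and device pointers to the big buffers. A sketch with illustrative names (the field names and layout are assumptions, not a fixed OptiX structure):

```cpp
struct Vec3 { float x, y, z; };

// Hypothetical launch-params layout: only scalars and device pointers go
// into constant memory; the large per-triangle arrays live in global
// memory allocated with cudaMalloc, and raygen dereferences the pointers.
struct Params {
    const Vec3*   centroids;     // one centroid per triangle (global memory)
    const Vec3*   vertices;      // triangle vertex buffer (global memory)
    unsigned int* visibility;    // output matrix (global memory)
    unsigned int  numTriangles;
};

// The whole struct must fit in the ~64 KB constant-memory budget;
// storing pointers instead of arrays keeps it far under the limit.
static_assert(sizeof(Params) <= 64 * 1024, "launch params too large");
```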

Also if you want to use multiple rays to sample the visibility between each pair of triangles, I would recommend generating your random samples in the raygen program, rather than storing your sampling information in a buffer. It will be much faster to generate samples on the GPU during raygen.

You could do something like this, perhaps: (just an example based on what I think I understand, customize to suit, or ignore this if I’m misunderstanding what you need.)

  • Store your triangles (vertices & indices) in global memory buffers.
  • Pass the pointers to these buffers to OptiX via launch params.
  • In raygen, have each thread be responsible for processing one pair of triangles
  • In raygen, loop over N rays
    – For each ray, generate two pairs of uniform random numbers, i.e. 4 floats in [0,1] (examples of GPU random number generation are in the OptiX SDK)
    – Use one pair as the barycentric coordinates of your ray origin inside the “start” triangle (remember you only need any 2 out of the 3 bary coords)
    – Use the other pair for the bary of your ray destination inside the “destination” triangle
    – Compute the start point in object/world space using the barycentric formula (note you need to condition your random numbers when they sum to > 1: to turn a random sample in the unit square into valid barycentric coordinates, fold it across the diagonal, i.e. (pseudocode) u,v = rand(), rand(); if (u+v > 1) then u,v = 1-v, 1-u)
    – Compute the destination point similarly
    – Subtract the start point from the destination point to compute a ray direction (no need to normalize unless you want to).
    – Call optixTrace() and test whether the ray hits the destination triangle
  • When your loop is complete, record the result into your visibility matrix. You could record M if all rays were blocked, and H if any of the rays made it to the destination triangle. (Or you could record the count of ray hits on the destination triangle, or a floating point value of the percentage of hits, etc… there are multiple options. Using 1 bit for H & M could make it very compact and memory efficient, but you might find ways to use the hit count or percent.)
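Taken together, the sampling steps in this list might look something like the following plain C++ sketch (illustrative names; in a real raygen program this would run per thread with a GPU RNG, and the resulting origin and direction would feed optixTrace()):

```cpp
struct Vec3 { float x, y, z; };

// Fold a uniform point in the unit square onto the triangle's barycentric
// domain (u + v <= 1), as in the pseudocode above.
void foldToTriangle(float& u, float& v)
{
    if (u + v > 1.0f) { float t = u; u = 1.0f - v; v = 1.0f - t; }
}

// Point with barycentric weights (1-u-v, u, v) on triangle (a, b, c).
Vec3 baryPoint(const Vec3& a, const Vec3& b, const Vec3& c, float u, float v)
{
    const float w = 1.0f - u - v;
    return { w*a.x + u*b.x + v*c.x,
             w*a.y + u*b.y + v*c.y,
             w*a.z + u*b.z + v*c.z };
}

// One sample ray between a random point on the start triangle s and a
// random point on the destination triangle d; u0..v1 are uniform random
// numbers in [0,1] supplied by the caller.
void sampleRay(const Vec3 s[3], const Vec3 d[3],
               float u0, float v0, float u1, float v1,
               Vec3& origin, Vec3& direction)
{
    foldToTriangle(u0, v0);
    foldToTriangle(u1, v1);
    origin = baryPoint(s[0], s[1], s[2], u0, v0);
    const Vec3 dest = baryPoint(d[0], d[1], d[2], u1, v1);
    // Direction = destination - start (unnormalized is fine for a trace).
    direction = { dest.x - origin.x, dest.y - origin.y, dest.z - origin.z };
}
```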


It’s a beautiful response.

Most of the questions I had are now resolved.

I think I can achieve good results through the methods you’ve provided.


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.