Multiple surfaces, multiple buffers

Hi all,

In the project I’m working on, I need to store a variable number of triangulated meshes and to intersect them independently.
What I usually do when dealing with triangulated meshes is to store the vertex, triangle, and normal data in buffers:

rtBuffer<float3, 1> vertices_buffer;
rtBuffer<int3, 1> indices_buffer;
rtBuffer<float3, 1> normals_buffer;

My intersection program looks like this:

RT_PROGRAM void mesh_intersection(int primitive_index)
{
	float t, u, v;
	const int3 vertice_IDs = indices_buffer[primitive_index];
	const float3 v0 = vertices_buffer[vertice_IDs.x];
	const float3 v1 = vertices_buffer[vertice_IDs.y];
	const float3 v2 = vertices_buffer[vertice_IDs.z];

	if (intersect_triangle_branchless(ray, v0, v1, v2, ray_data.normal, t, u, v))
	{
		if (rtPotentialIntersection(t))
			rtReportIntersection(0); // material index 0
	}
}

I read that it is possible to have buffers of buffers, which could be useful when dealing with several meshes:

rtBuffer<rtBufferId<float3, 1>, 1> vertices_buffer;
rtBuffer<rtBufferId<  int3, 1>, 1> indices_buffer;
rtBuffer<rtBufferId<float3, 1>, 1> normals_buffer;

However, what happens to my intersection program? How can I tell it to look into the right vertices_buffer[i]?


Could you describe your ray tracing algorithm and the expected result data more precisely?
What exactly do you mean by intersecting the meshes “independently”?

It’s unclear if you need to intersect each mesh as if it’s the only mesh inside the scene.
Or if you need to spawn more rtTrace() calls at the hit points to generate the results.
Or how often you need to launch a rendering to get the final result.

For example, if each mesh should be handled as if it were the only one inside the scene, you could use a Selector node and a visit program to select each mesh individually for intersection with rtIntersectChild(index). But that needs either multiple launches or a huge result vector (depending on the launch size and the number of meshes).
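For illustration, a minimal visit program along those lines could look like the following sketch. The variable name mesh_index is an assumption here (you would set it from the host, e.g. per launch), and the snippet needs the OptiX SDK headers to compile:

```cpp
#include <optix_world.h>

// Assumption: set from the host before each launch to pick which child to intersect.
rtDeclareVariable(int, mesh_index, , );

// Visit program attached to the Selector node: only the selected child
// is traversed for intersection.
RT_PROGRAM void visit()
{
	rtIntersectChild(mesh_index);
}
```

With one mesh per Selector child, updating mesh_index between launches lets each launch see exactly one mesh.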

Thanks for your quick answer! I’ll try to explain myself better.

I’m working on ultrasonic wave propagation simulation.
Here are the two ray-tracing steps in one cycle of my algorithm:

  1. First, I need to intersect a mesh that models a physical piece (let's say a cylinder).
  2. Then, once the first ray has intersected the cylinder, I need to know in which direction the new ray has to go, which is not always simple in ultrasonic physics. To do so, I either do some expensive calculations (which I'm trying to avoid) or work more geometrically by intersecting what are called slowness surfaces. There are three of them and, depending on the situation, I may need to intersect one, two, or all three. The intersection data from these slowness surfaces gives me what I need to know about the new rays.

So what I need is, depending on the case, to be able to intersect any of the three meshes at each intersection with the cylinder. In addition, I have three slowness surfaces for each material of my scene, which is why the total number of meshes I have to deal with is not known in advance.

I looked at Selector nodes. That could indeed help, but I still don’t understand if it is possible to have several geometries sharing the same intersection program while not having the same primitives.

For example, let’s say I have four vertex/index buffers declared like this:

rtBuffer<float3, 1> vertices_buffer_mesh1;
rtBuffer<float3, 1> vertices_buffer_mesh2;
rtBuffer<int3, 1> indices_buffer_mesh1;
rtBuffer<int3, 1> indices_buffer_mesh2;

How do I get my intersection program to read the vertex and triangle data from the right buffers, depending on whether I want to intersect mesh1 or mesh2?

EDIT: I think I got it. A solution would be to declare a geometryID variable on each Geometry node; the intersection program would then read the right value for whichever geometry is being intersected.
I had forgotten about the scope search order for variables!
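For anyone finding this later, here is roughly what I have in mind (a sketch only: it assumes the buffer-of-buffers declarations from above, renamed to vertices_buffers/indices_buffers, and that the host declares geometryID on each Geometry node with a different value per mesh; it won't compile without the OptiX SDK):

```cpp
#include <optix_world.h>

rtBuffer<rtBufferId<float3, 1>, 1> vertices_buffers; // one entry per mesh
rtBuffer<rtBufferId<  int3, 1>, 1> indices_buffers;

// Declared on each Geometry node from the host side; the variable
// scope search order makes each geometry see its own value here.
rtDeclareVariable(int, geometryID, , );

RT_PROGRAM void mesh_intersection(int primitive_index)
{
	const int3 vertice_IDs = indices_buffers[geometryID][primitive_index];
	const float3 v0 = vertices_buffers[geometryID][vertice_IDs.x];
	const float3 v1 = vertices_buffers[geometryID][vertice_IDs.y];
	const float3 v2 = vertices_buffers[geometryID][vertice_IDs.z];
	// ... same triangle test and rtReportIntersection() as before ...
}
```

That way all meshes share one intersection program, and each geometry's geometryID selects the right inner buffers.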