Intersection point

I know we can use optixGetPrimitiveIndex() to get the index of the intersected primitive (a triangle here), but is there a way to get the exact point of intersection within the primitive?

Yes, of course. The geometric primitive intersection routine is responsible for providing the necessary attributes to be able to calculate that.

For the built-in triangle intersection routines you get the barycentric coordinates, which can be used to interpolate any vertex attributes of the hit triangle, like the position, shading normals, texture coordinates, etc., over its area.

For custom primitive intersection routines it would be your responsibility to calculate the required information. For example, a sphere could calculate the polar coordinates phi and theta as attributes, a parallelogram could calculate normalized coordinates (u, v) over its two edge vectors as attributes, etc.

You need to be able to access the per-triangle vertex attribute data, which you can locate via the primitive index, the optional triangle index array, and the original vertex attribute arrays.

Please work through the OptiX docs and samples and search the code for “barycentrics”.
Code showing that can also be found in Ingo Wald’s OptiX 7 course example which adds texturing.

I am using a single closest-hit program and trying to query the vertex data of the intersected triangle.


What is wrong with this code?

Yes, and what is the exact problem you are facing?

Please do not ask questions of the type “I did something and it’s not working” on a developer forum.
For programming questions you need to be a lot more precise about what you did and what your problem is to get any attention from professional developers.

For a start,

OS: Ubuntu 18.04, GPU: Quadro GP100, Driver: 435, OptiX: 7.0, CUDA: 10.1
There is no error message, it always just produces 0.00,0.00,0.00 as the result.
It works when I read from the original vertices like in Ingo Wald’s example.
My best guess is that it could be a problem with the sbtGASIndex, as the rest are queried in the program.

extern "C" __global__ void __closesthit__photon()
{
    HitGroupData* rt_data = (HitGroupData*)optixGetSbtDataPointer();
    const int prim_idx = optixGetPrimitiveIndex();
    PhotonPRD* prd = getPRD();
    OptixTraversableHandle handle = optixGetGASTraversableHandle();
    float3 ver[3];
    optixGetTriangleVertexData(handle, prim_idx, /* sbtGASIndex = */ 0, /* time = */ 0.0f, ver);
    printf("%f, %f, %f\n", ver[1].x, ver[1].y, ver[1].z);
}

“Driver: 435”
We would need the digits after the decimal point as well to be sure to test the right things.

You changed the sbtGASIndex argument from 0 to 1 between the two posts.
What’s your OptixBuildInput.triangleArray.numSbtRecords for that GAS?

Could you provide a minimal complete standalone reproducer? (Best in an OptiX SDK example.)

(using: Optix 7, VS19 on Win10, 441.87 driver)

I have a question regarding the sbtGASIndex, as used here:

I couldn’t find in the documentation or the forum what sbtGASIndex stands for, or how I would retrieve it in my rg() program. I need the vertex data to sample random points on the primitive to shoot rays from.
Does it have anything to do with the function optixGetInstanceIdFromHandle(), where I would just use the correct GAS handle for my current primitive?

Thanks for your help.

optixGetTriangleVertexData() needs the handle of the Geometry Acceleration Structure (GAS) which contains the triangle vertex data.

It only works if the GAS has been built with the flag OPTIX_BUILD_FLAG_ALLOW_RANDOM_VERTEX_ACCESS.

You can query the GAS handle with the device function optixGetGASTraversableHandle, but that is not available inside the ray generation program (RGP), as you can see in this table:

This means that if you want to call optixGetTriangleVertexData() inside the RGP, your closest-hit program would need to store the required arguments in the payload, so that you have them available inside the RGP once the optixTrace call has returned.

Another way would be to store the respective GAS handle and number of primitives inside the launch parameters block to make it available to the RGP so that you can use optixGetTriangleVertexData() on it. (Don’t! See below.)

The sbtGASIndex argument in that function is only non-zero if your GAS contains multiple build inputs.
Find more explanations here:
“SBT GAS index”:

With all that said, you don’t need any of that if you want to generate rays by starting from the triangle primitives (in world coordinates).

For that, the simplest way would be to keep the vertex data you used to build the acceleration structure in its device pointers and assign them to fields inside your launch parameter block, along with the number of primitives and the indices if needed. That launch parameter block can be accessed anywhere inside the device programs.
I would recommend this method if there aren’t any memory constraints. No need for optixGetTriangleVertexData() in that case.

This is similar to storing the vertex attributes and indices in shader binding table hit record data.
Track on host:
Assign to SBT record:
Access via SBT record and interpolate attributes:

In your case, move the assignment to the launch parameters block and move the attribute access device code to the ray generation program and calculate your primary rays as you like.
If there are multiple GASes, use an array of those tracking structures. If there are instances and transforms involved, you’d also need to store the transform matrices and the indices to the geometry data, to be able to calculate rays in world space.

Thanks for your answer!

What exactly is the computational downside of accessing the primitive coordinates from the GAS inside the rg()?
Is it only computationally more expensive, or are there some other issues I am unaware of?
This is currently the way I implemented it.

Starting from the optixMeshviewer example in OptiX 7:
Are the normals of the primitives also stored in GAS and how would I access them?
To sample points on a hemisphere of the triangle I need the outwards facing normal of said primitive.
Right now I am using the vertex data from the GAS to calculate the normals, but it somehow seems that the order of the vertices is arbitrary, and I thus get randomly inwards- or outwards-facing normals.

I figured I could just start a ray one normal length away, with the negative normal as direction, and use optixGetHitKind() to adjust the normal if needed. But this seems very inefficient.

Could you guide me in the right direction?

Thanks for your help!

That’s described here:
“The potential decompression step of triangle data may come with significant runtime overhead. Furthermore, enabling random access may cause the GAS to use slightly more memory.”

No, the GAS only contains the vertex positions.

That would be an issue of your model data.
OptiX returns the vertices as-is and the built-in triangles default to right-handed coordinate space and front face being counter-clockwise winding order.
That can be switched with OPTIX_INSTANCE_FLAG_FLIP_TRIANGLE_FACING per instance.

That wouldn’t even work if there is something between that origin and the face you actually want to hit.

Again, I would store the vertex attributes used to build the GAS inside the launch parameters directly.
That way you have everything available inside the ray generation program to sample points on the triangles, calculate face normals or whatever else is needed and available in the vertex attributes, and generate directions into the upper hemisphere above them.
This isn’t really difficult; it’s just that the optixMeshViewer might not be the best example to start with, because it doesn’t work the way you need it to.

Using the OptiX 7 meshviewer example:

Can I copy the vertex buffer which is passed to OptiX in the Scene.cpp for this purpose?

        triangle_input.triangleArray.numVertices                 = mesh->positions[i].count;
        triangle_input.triangleArray.vertexBuffers               = &(mesh->positions[i].data);
        triangle_input.triangleArray.indexFormat                 =
            mesh->indices[i].elmt_byte_size == 2 ?
                OPTIX_INDICES_FORMAT_UNSIGNED_SHORT3 :
                OPTIX_INDICES_FORMAT_UNSIGNED_INT3;

That’s not enough.
First, mind that triangleArray.vertexBuffers gets a pointer to an array of CUdeviceptr, because that can handle motion blur.
If mesh->positions[i].data is a single CUdeviceptr then there is no motion blur. (Well, it’s an OBJ file, there isn’t.)

Explaining the general case:
If you want to access all mesh vertex attributes (position, normal etc) for all meshes in a scene inside the ray generation program, you’d need to store the following information inside your OptiX launch parameters:

  • a buffer with device pointers to each mesh’s vertex attributes,
  • if the meshes are indexed, the vertex attribute arrays are just attribute pools, and you also need a buffer of pointers to the per-mesh index buffers, which define the actual primitive topology,
  • the number of primitives per mesh,
  • the number of meshes,
  • if the scene uses instanced geometry, meshes are used multiple times and you’d also need to store the reference mesh index, the effective transformation, and the number of instances, that is, the number of unique paths from the top-level root to the geometry. (This can get complicated or huge with multi-level instancing hierarchies.)

Inside the ray generation program you can then map each launch index to a specific thing in your scene data, like sampling an instance, a mesh, a triangle for each launch, or all at once, whatever works.

I think I don’t fully understand what you said.

In the scene.cpp

triangle_input.triangleArray.vertexBuffers = &(mesh->positions[i].data);

I use this pointer to copy the raw vertices from the device back to the host and calculate my normals.
I assumed that they would be stored sequentially as float3s (in object space):
prim1.A, prim1.B, prim1.C, prim2.A, prim2.B, …

Is this indeed correct? Some vertices of the same primitive are identical (which they shouldn’t be).

Also, could you explain what the difference between the index buffer and the vertex buffer is?

Thank you for your help!

I think I figured it out.

Each primitive is defined by a set of indices which denote the locations of its vertices in the vertex buffer.