Built in Triangles - Normal with OptiX 7.2

Good afternoon,

I have created a simple “box” with 24 vertices and 12 triangles and would now like to compute a normal for each triangle primitive. Does OptiX 7.2 have a built-in call for computing the normal of a triangle, or does this have to be computed separately?

Thank you for any help.

Hey @picard1969,

OptiX does not have a function to return normals per se, but it does have support for querying the vertices of these triangles while inside your closest hit shader. https://raytracing-docs.nvidia.com/optix7/guide/index.html#device_side_functions#vertex-random-access

I’m assuming that what you want here is to compute the geometric facet normal of the triangle (as opposed to vertex normals, or using normal mapping, or some other type of shading normal).

Specifically, you can call optixGetTriangleVertexData() to retrieve the 3 vertices of the triangle that was hit. In order to use this function, you need to build your GAS BVH using the build flag OPTIX_BUILD_FLAG_ALLOW_RANDOM_VERTEX_ACCESS.

The idea would be to query the 3 vertices, let’s call them P0, P1 and P2, and then compute cross(P2-P1, P1-P0) in your hit shader. (I think that’s the right calculation for a counter-clockwise triangle in a right-handed coordinate system…)

For completeness, you can also look up your vertices in your own vertex buffer if you like. If you look at the SDK sample called optixPathTracer, it reads from the original vertex buffer. We provide the alternative accessor function optixGetTriangleVertexData() not just for convenience, but also so that you are free to delete your mesh vertex buffer after building your BVH and before rendering, in order to save memory. You can find examples of using optixGetTriangleVertexData() in the SDK samples called optixDynamicGeometry, optixMotionGeometry, and optixVolumeViewer – all three compute the triangle normal roughly like this:

float3 vertices[3] = {};
// For static geometry the time argument is ignored; optixGetRayTime() keeps this correct for motion blur as well.
optixGetTriangleVertexData( optixGetGASTraversableHandle(), optixGetPrimitiveIndex(), optixGetSbtGASIndex(), optixGetRayTime(), vertices );
// compute the object-space face normal from the vertices
float3 N = normalize( cross( vertices[1] - vertices[0], vertices[2] - vertices[0] ) );
  • edit: oops I forgot that optixMotionGeometry and optixVolumeViewer are both newer than OptiX 7.2. You can see what I’m talking about in the 7.2 version of optixDynamicGeometry. To see the others, you can download the latest version of the OptiX SDK.


You can also pre-calculate the normals and pass them alongside your geometry in separate buffers instead, and as David explained, you can provide the vertex positions yourself as well.
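As a sketch of that pre-calculation (host-side C++ with stand-in float3 helpers; computeFaceNormals is a hypothetical name, not an OptiX API), one geometric normal per triangle can be derived from the vertex and index buffers before upload:

```cpp
#include <cmath>
#include <vector>

// Minimal stand-ins for the CUDA float3 type and vector helpers.
struct float3 { float x, y, z; };
static float3 sub(const float3& a, const float3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float3 cross3(const float3& a, const float3& b)
{
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}
static float3 normalize3(const float3& v)
{
    const float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// One geometric normal per triangle, in index-buffer order. The result would be
// uploaded next to the geometry and indexed with optixGetPrimitiveIndex() in the
// hit program.
std::vector<float3> computeFaceNormals(const std::vector<float3>& vertices,
                                       const std::vector<unsigned int>& indices)
{
    std::vector<float3> normals;
    normals.reserve(indices.size() / 3);
    for (size_t i = 0; i + 2 < indices.size(); i += 3)
    {
        const float3& v0 = vertices[indices[i]];
        const float3& v1 = vertices[indices[i + 1]];
        const float3& v2 = vertices[indices[i + 2]];
        // Counter-clockwise winding, right-handed coordinate system.
        normals.push_back(normalize3(cross3(sub(v1, v0), sub(v2, v0))));
    }
    return normals;
}
```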

The OPTIX_BUILD_FLAG_ALLOW_RANDOM_VERTEX_ACCESS is useful to reduce the overall amount of data which needs to be held on the device if you’re running low on memory, but it comes with a memory and performance impact itself. Read this chapter for other caveats with this method:

Example code generating the object-space vertex attributes for some generic shapes can be found in my OptiX 7 examples. Look at the *.cpp files for plane, box, sphere, and torus. Note that these all use per-vertex normals, but for the box these match the face normals.

For counter-clockwise triangle vertex winding in a right-handed coordinate system the front face normal is
float3 face_normal = normalize(cross(v1 - v0, v2 - v0));
The second cross() call above (in the code snippet) has it correct; the earlier cross(P2 - P1, P1 - P0) points the opposite way.

Mind that the above code calculates the object-space geometry normal. Inside the closest hit program you’d normally need to transform it afterwards into the world space the ray is in; for normals that means using the inverse transpose of the object-to-world matrix (OptiX provides optixTransformNormalFromObjectToWorldSpace() for this). If you need it inside the any hit program, that operates in object space.
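Transforming a normal is not the same as transforming a position: normals must be multiplied by the inverse transpose of the object-to-world matrix, which matters as soon as the transform contains non-uniform scale. A plain C++ sketch of that math (transformNormal is a hypothetical stand-in for what optixTransformNormalFromObjectToWorldSpace() does on the device):

```cpp
#include <cmath>

struct float3 { float x, y, z; };

// Row-major 3x3 matrix (upper-left block of an object-to-world transform).
struct mat3 { float m[3][3]; };

// inverse-transpose(M) equals the cofactor matrix of M divided by det(M).
mat3 inverseTranspose(const mat3& a)
{
    const float (*m)[3] = a.m;
    mat3 c;
    c.m[0][0] =   m[1][1] * m[2][2] - m[1][2] * m[2][1];
    c.m[0][1] = -(m[1][0] * m[2][2] - m[1][2] * m[2][0]);
    c.m[0][2] =   m[1][0] * m[2][1] - m[1][1] * m[2][0];
    c.m[1][0] = -(m[0][1] * m[2][2] - m[0][2] * m[2][1]);
    c.m[1][1] =   m[0][0] * m[2][2] - m[0][2] * m[2][0];
    c.m[1][2] = -(m[0][0] * m[2][1] - m[0][1] * m[2][0]);
    c.m[2][0] =   m[0][1] * m[1][2] - m[0][2] * m[1][1];
    c.m[2][1] = -(m[0][0] * m[1][2] - m[0][2] * m[1][0]);
    c.m[2][2] =   m[0][0] * m[1][1] - m[0][1] * m[1][0];
    const float det = m[0][0] * c.m[0][0] + m[0][1] * c.m[0][1] + m[0][2] * c.m[0][2];
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            c.m[i][j] /= det;
    return c;
}

// Apply the inverse transpose to an object-space normal and renormalize.
float3 transformNormal(const mat3& objectToWorld, const float3& n)
{
    const mat3 it = inverseTranspose(objectToWorld);
    float3 r = { it.m[0][0] * n.x + it.m[0][1] * n.y + it.m[0][2] * n.z,
                 it.m[1][0] * n.x + it.m[1][1] * n.y + it.m[1][2] * n.z,
                 it.m[2][0] * n.x + it.m[2][1] * n.y + it.m[2][2] * n.z };
    const float len = std::sqrt(r.x * r.x + r.y * r.y + r.z * r.z);
    return { r.x / len, r.y / len, r.z / len };
}
```

For a pure rotation the inverse transpose equals the matrix itself, which is why the distinction only becomes visible with scaling or shearing.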

Thank you @dhart and @droettger, your suggestions worked like a charm.

I am concerned about the potential performance impact of using OPTIX_BUILD_FLAG_ALLOW_RANDOM_VERTEX_ACCESS, though. Both of you mentioned passing normals as a separate buffer to OptiX; how would this most efficiently be accomplished? Using the SBT, as part of the launch parameters, or something totally different?

I want to say as part of the SBT, so that the closest hit program/shader can access it, but I am unsure exactly how an array of vertices/indices can safely be passed into an SBT.

Thank you both again for the assist.

edit: I am trying to build a “box” from triangles, so I probably need to compute the normal for a given face rather than for each individual triangle.

There are different methods for making vertex attribute data accessible to the shader programs.
Which methods are available depends on the render graph hierarchy you’re using.
In the end it boils down to the question:
“How can I get a pointer to device data per GAS so that I can index into it with the current geometric primitive index?”

If you’re just using a single GAS, then all vertex attributes can be held in linear buffers, either one with interleaved data or one per attribute. The CUdeviceptr(s) would then simply be put into your constant launch parameter block and can be accessed from any shader program.
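A minimal sketch of that single-GAS layout (plain C++ stand-ins; the names Params, faceNormals, and lookupFaceNormal are hypothetical, and in real code the pointers would be CUdeviceptrs uploaded with the constant launch parameter block):

```cpp
struct float3 { float x, y, z; };

// Hypothetical launch parameter block. On the host you'd cudaMalloc/cudaMemcpy
// the attribute buffers, store their device pointers here, and upload the block
// as the params used by optixLaunch.
struct Params
{
    const float3*       vertices;     // one entry per vertex
    const unsigned int* indices;      // three entries per triangle
    const float3*       faceNormals;  // one entry per triangle primitive
};

// What the closest hit program would do with it: index the per-primitive
// buffer with the primitive index (optixGetPrimitiveIndex() on the device).
float3 lookupFaceNormal(const Params& params, unsigned int primitiveIndex)
{
    return params.faceNormals[primitiveIndex];
}
```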

If you’re using the usual two-level architecture with an IAS->GAS structure, there are more options, and it depends on the way you want to structure your SBT. Mind that there are the user-defined instance ID and SBT offset fields per instance, which can be used to index into user-defined data.
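For the SBT route, a sketch following the SDK's SbtRecord convention (HitGroupData and its members are hypothetical names; the 32-byte header corresponds to OPTIX_SBT_RECORD_HEADER_SIZE in the OptiX 7 headers, and the device would reach the data through optixGetSbtDataPointer()):

```cpp
#include <cstddef>

struct float3 { float x, y, z; };

// Hypothetical per-GAS data stored in each hit group SBT record. Each record
// carries the attribute pointers for the geometry it belongs to, so a hit
// program only ever sees the buffers of its own GAS.
struct HitGroupData
{
    const float3*       vertices;
    const unsigned int* indices;
    const float3*       faceNormals;
};

// SDK-style SBT record layout: an opaque header filled on the host by
// optixSbtRecordPackHeader(), followed by the user data. 32 matches
// OPTIX_SBT_RECORD_HEADER_SIZE, 16 matches OPTIX_SBT_RECORD_ALIGNMENT.
struct alignas(16) HitGroupSbtRecord
{
    char         header[32];
    HitGroupData data;
};
```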

I explained the different options multiple times before.
Check this post and read all four forum threads linked in there and concentrate on my answers.

I am trying to build a “box” from triangles, so I probably need to compute the normal for a given face rather than for each individual triangle.

These normals should be the same anyway. Just look into my code. I don’t even calculate them. In object-space they are the six unit vectors, two on each coordinate axis.
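To illustrate, a sketch of that constant-table approach, assuming two consecutive triangles per box face (an assumption about the index-buffer layout; reorder the table to match yours):

```cpp
struct float3 { float x, y, z; };

// Object-space face normals of an axis-aligned box: the six signed unit axis
// vectors. Assumes primitives 0-1 form the +x face, 2-3 the -x face, and so on.
static const float3 kBoxFaceNormals[6] = {
    {  1.0f,  0.0f,  0.0f },
    { -1.0f,  0.0f,  0.0f },
    {  0.0f,  1.0f,  0.0f },
    {  0.0f, -1.0f,  0.0f },
    {  0.0f,  0.0f,  1.0f },
    {  0.0f,  0.0f, -1.0f },
};

// With two triangles per face, the face index is primitiveIndex / 2
// (primitiveIndex would come from optixGetPrimitiveIndex() on the device).
float3 boxFaceNormal(unsigned int primitiveIndex)
{
    return kBoxFaceNormals[primitiveIndex / 2u];
}
```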


If for some reason you can’t use Detlef’s suggestion to formulate your normals directly as constants without calculating any cross products (which sounds like it could be the fastest method as long as it’s branchless), then one thing to keep in mind for performance is that memory access may be more expensive than any math you do to compute normals.

Besides reducing memory, another benefit of using the OptiX random vertex access method is that it accesses the same memory that was just used for intersection before your hit program is called, so your vertices are more likely to be in your cache when you ask for them. Keeping either your normals or your vertices in a separate buffer will result in one or more additional separate memory accesses that have a higher likelihood of missing the cache, and this could easily end up being more expensive than asking for the vertices and doing some subtractions and a cross product in your hit shader.

It’s pretty easy to try all of these different methods, compare the performance, and analyze the cost of math compute vs. memory access using Nsight Compute. I’d recommend trying it multiple ways and profiling and examining the results carefully; it’s a worthwhile exercise that will give you more confidence in your decisions and improve fluency with the profiling tools.



Thanks again @droettger and @dhart. Great information as usual.

Good thing to keep in mind - thanks.

I think at this stage I will keep using the OPTIX_BUILD_FLAG_ALLOW_RANDOM_VERTEX_ACCESS and take advantage of the likelihood of cache hits when accessing vertex data. This may change as the model(s) grow in size later, but for now this looks like a great option.

Thanks again

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.