GeometryTriangles: setVertices and setBuffer(vertex_buffer)

I am working on a scene where geometries are updated during runtime - see

The optixGeometryTriangles sample inside the OptiX SDK first uses

geom_tri->setVertices( num_vertices, vertex_buffer, RT_FORMAT_FLOAT3 );

to update the vertices for the internal intersection program and then

geom_tri["vertex_buffer"]->setBuffer( vertex_buffer );

for the attribute program - at least that is my understanding so far.

What is the purpose of setting vertex position data using the two different commands above? What is the difference between the two?

My setup: Windows 10 Pro 64-bit, display driver 426, and CUDA 10.1.168 on an RTX 4000. Since changing drivers would involve a major testing effort in my company and OptiX 6.5 needs at least the 435 driver, I have to stick with OptiX 6.0 for the time being.

I searched the documentation and the forum for further information, but could not find anything beyond the related post below:

The C API documentation inside the OptiX API Reference can be found by combining the names of the C++ wrapper objects and functions:
GeometryTriangles::setVertices maps to rtGeometryTrianglesSetVertices, which is this:
(If you exchange 6_5 for 6_0 in these links, you get the older docs.)

The C++ wrapper defines some overloads for the standard cases which calculate the remaining arguments not shown in the setVertices() call above.
That call makes the buffer and the vertex data layout inside it known to the GeometryTriangles node. (It's similar to a vertex array setup in OpenGL, if that helps.)
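As a sketch of what the overloads resolve to (untested; the buffer creation and variable names are assumptions, not taken from the sample):

```cpp
// Sketch: the short setVertices() overload derives the remaining C API
// arguments (byte offset 0, stride sizeof(float3)) from RT_FORMAT_FLOAT3.
optix::Buffer vertex_buffer =
    context->createBuffer( RT_BUFFER_INPUT, RT_FORMAT_FLOAT3, num_vertices );

geom_tri->setVertices( num_vertices, vertex_buffer, RT_FORMAT_FLOAT3 );

// Roughly equivalent explicit overload, mapping 1:1 to the C API function
// rtGeometryTrianglesSetVertices():
// geom_tri->setVertices( num_vertices, vertex_buffer,
//                        0,               // byte offset into the buffer
//                        sizeof(float3),  // byte stride between vertices
//                        RT_FORMAT_FLOAT3 );
```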

The same applies to all other C++ wrapper calls that map directly to a C API function.
(I seriously recommend not using any of the C++ wrapper functions that were added purely for convenience, like removeChild() with an object argument, because there is no C API for that. Instead, track the child index as well and use the overload taking the child index argument; it's much faster!)

The geom_tri["vertex_buffer"]->setBuffer( vertex_buffer ); call is a convenience functionality of the C++ wrapper, which implements operator[] with a string argument to handle RTvariable creation and lookup.
It first queries whether an RTvariable of that name has been declared at this scope, and if it doesn't exist, creates it at that object's scope (here at the GeometryTriangles node). The member function setBuffer() then assigns a buffer to that variable.
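Roughly, that one line expands to something like this (a simplified sketch, not the actual wrapper implementation):

```cpp
// Sketch of what geom_tri["vertex_buffer"]->setBuffer( vertex_buffer )
// does under the hood (simplified):
optix::Variable v = geom_tri->queryVariable( "vertex_buffer" );
if ( !v )
  v = geom_tri->declareVariable( "vertex_buffer" );  // create at this scope
v->setBuffer( vertex_buffer );                       // assign the buffer
```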

The buffer passed as the argument can then be accessed by the attribute program assigned to the GeometryTriangles node under the name "vertex_buffer". To access it in device code, you need a matching rtBuffer declaration with the same name and format, or validation fails.
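A minimal device-side sketch of such a matching declaration (untested; the attribute name and the assumption of a non-indexed triangle soup are made up for illustration):

```cpp
// The rtBuffer name must match the host-side variable "vertex_buffer".
rtBuffer<float3> vertex_buffer;

rtDeclareVariable( optix::float3, geometric_normal, attribute geometric_normal, );

RT_PROGRAM void triangle_attributes()
{
  const unsigned int primIdx = rtGetPrimitiveIndex();
  // Assumes three consecutive vertices per primitive (non-indexed).
  const float3 v0 = vertex_buffer[primIdx * 3    ];
  const float3 v1 = vertex_buffer[primIdx * 3 + 1];
  const float3 v2 = vertex_buffer[primIdx * 3 + 2];
  geometric_normal = optix::normalize( optix::cross( v1 - v0, v2 - v0 ) );
}
```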

All of the above can be seen inside the OptiX SDK 6.0.0\include\optixu\optixpp_namespace.h header, which implements this and calls the underlying C API. You can single-step through it in a debugger with the OptiX API Reference open to look up what the called C functions do.

Sidenote on the OptiX C++ wrapper: note that it does not ref-count the underlying OptiX objects! You must keep track of the C++ objects and call destroy() explicitly to avoid memory leaks. (One of my pet peeves with all OptiX SDKs < 7.)

Now when you want to update the vertex data inside that buffer for an animation, you just update the vertex buffer data (map, write, unmap the buffer). If the topology doesn't change (the number of vertices and, if indexed, the connectivity information stay the same => morphing), neither of the two calls above is required again. Only the acceleration structures on the path to that leaf GeometryTriangles node need to be marked dirty, so that the affected acceleration structures are rebuilt or refitted during the next launch.
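For example, a per-frame morphing update could look like this (a sketch; the group handles, animated_positions source, and launch dimensions are assumed names):

```cpp
// Update the vertex positions in place; topology stays the same.
float3* dst = static_cast<float3*>(
    vertex_buffer->map( 0, RT_BUFFER_MAP_WRITE_DISCARD ) );
memcpy( dst, animated_positions, sizeof( float3 ) * num_vertices );
vertex_buffer->unmap();

// Neither setVertices() nor setBuffer() is needed again, but the
// acceleration structures on the path to this node must be marked dirty:
geometry_group->getAcceleration()->markDirty();
top_group->getAcceleration()->markDirty();

// The rebuild/refit and the input buffer upload happen during the next launch.
context->launch( 0, width, height );
```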

Thank you for explaining setVertices via the C++ wrapper and for the further information on how setBuffer first creates/looks up an RTvariable and then assigns the buffer to it!

Would this mapping and unmapping trigger copying data between CPU and GPU? I have all vertices of my animation inside OptiX buffers at the beginning of the animation, and the vertices do not change.

In the end, my original question comes back to this: is it possible to access the vertices (or triangles) set through setVertices (respectively setTriangleIndices) in the attribute program? Then I would not need the setBuffer function, and the readability of the code would improve without using map and unmap.

Yes. What is copied in which direction and when depends on the buffer type (input/output/input_output) and the mapping arguments (e.g. write_discard).
David explained that here before:

Basically, for geometry data your buffers should be input only, and if you change the whole input buffer content, you map() with write_discard to avoid needless copies to the host.
The unmap() operation marks the buffer contents dirty; depending on additional buffer flags, either the unmap() or (normally) the next launch() will copy the input buffer data to the GPU device(s).

Here's some code doing that for custom geometry nodes (OptiX 5 based code had no GeometryTriangles), but it would be the same for OptiX 6 buffers.

This chapter on buffers in the OptiX Programming Guide explains some more details:

But you said "where geometries are updated during runtime".
If the data is all static, even better; then the whole procedure above only needs to happen once. Either way, that is the usual method.

Really, the presence of the setBuffer() function in the code should be the least of your problems.
The setVertices() function only makes the buffer object and its layout description known to the GeometryTriangles object for the following acceleration structure build. That call does not involve any buffer data upload to the device. The map(), write, unmap() sequence is normally required to fill input buffers. (The exception would be CUDA interop, which is even more involved.)

If you want to access the contents of a buffer, you either need to have it assigned to an rtBuffer variable declared in the device code, and that assignment is matched by the variable's name at that scope.

Explaining with the code excerpts of the above link:

// Assigning the buffer to a variable on host side: 

And the matching code on the device side accessing the buffer contents, here inside the intersection program. With GeometryTriangles, that would happen inside the attribute program.

Or you would need to use bindless buffer IDs, which are integers that allow buffers to be accessed inside structures or arrays.
One example is here, for data needed by the importance sampling of a spherical environment light:
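A hedged sketch of the bindless mechanism (the variable and program names are made up; this is not the actual environment light code):

```cpp
// Host side: store the integer ID of the buffer instead of assigning
// the buffer object to a named variable.
//   context["cdf_buffer_id"]->setInt( cdfBuffer->getId() );

// Device side: wrap the integer ID in an rtBufferId object, which,
// unlike rtBuffer, can live inside structs, arrays, or other buffers.
rtDeclareVariable( int, cdf_buffer_id, , );

RT_PROGRAM void sample_environment()
{
  rtBufferId<float> cdf( cdf_buffer_id );  // reconstruct access from the ID
  const float first = cdf[0];
  // ...
}
```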

Now, with all that said, I understand that you've been implementing a flip-book animation in OptiX 5 using Selector nodes and visit programs, which have been removed in OptiX 6 and beyond because they interfered with hardware BVH traversal. (I was in a conference call with your development department before.)

If your Selector node was the top-level object and you switched through the individual animated geometries by selecting one of many children to traverse in the Selector's visit program, the exact same thing could be achieved in OptiX 7 with an array of top-level OptixTraversableHandles. These could all be uploaded at once and then animated through the same way, just by using a different top-level handle in the primary ray optixTrace() call for each animation step.
That is not as easy in OptiX 6, because it doesn't have arrays of rtObjects; it would require individual variables and a really ugly switch-case, which you described in one of your previous posts.
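In OptiX 7 terms, the flip-book selection could be sketched like this (untested; the struct layout and names are assumptions):

```cpp
// Launch parameter block shared between host and device.
struct LaunchParams
{
  OptixTraversableHandle topObject;  // the current frame's top-level handle
  // ... other parameters
};

// Host side, per animation step: pick one of the prebuilt top-level handles.
//   params.topObject = frameHandles[frameIndex % numFrames];
//   (upload params to the device, then call optixLaunch(...))

// Device side, raygen program: trace against whatever handle was selected.
//   optixTrace( params.topObject, origin, direction, tmin, tmax, ... );
```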

If the animation is not the top object, things get complicated.

OptiX 7 also allows accessing the triangle vertex position (only) data inside the acceleration structure, so the vertex positions don't need to be kept on the host after the acceleration structure has been built:

As a company, you'd need to prepare for faster qualification of newer display drivers, because that is where the complete graphics core implementation resides. I would really recommend biting the bullet and updating to OptiX 7 as soon as possible.