OptixWhitted: how to insert scene from OPENNURBS ON_Mesh?

I am trying to import the scene data from an openNURBS ON_Mesh object into the optixWhitted sample. In the sample the geometry data is hard-coded:

// Metal sphere, glass sphere, floor
const GeometryData::Sphere g_sphere = {
    { 2.0f, 1.5f, -2.5f }, // center
    1.0f                   // radius
};
const GeometryData::SphereShell g_sphere_shell = {
    { 4.0f, 2.3f, -4.0f }, // center
    0.96f,                 // radius1
    1.0f                   // radius2
};
const GeometryData::Parallelogram g_floor(
    make_float3(32.0f, 0.0f, 0.0f),    // v1
    make_float3(0.0f, 0.0f, 16.0f),    // v2
    make_float3(-16.0f, 0.01f, -8.0f)  // anchor
);

and the geometry is inserted via OptixAabb:

// Load AABB into device memory
OptixAabb   aabb[OBJ_COUNT] = { sphere_bound(g_sphere.center, g_sphere.radius),
                                sphere_bound(g_sphere_shell.center, g_sphere_shell.radius2),
                                parallelogram_bound(g_floor.v1, g_floor.v2, g_floor.anchor) };
CUdeviceptr d_aabb;

CUDA_CHECK(cudaMalloc(reinterpret_cast<void**>(&d_aabb), OBJ_COUNT * sizeof(OptixAabb)));
CUDA_CHECK(cudaMemcpy(reinterpret_cast<void*>(d_aabb), &aabb,
                      OBJ_COUNT * sizeof(OptixAabb),
                      cudaMemcpyHostToDevice));

// Setup AABB build input
uint32_t aabb_input_flags[] = {
    /* flags for metal sphere */
    OPTIX_GEOMETRY_FLAG_DISABLE_ANYHIT,
    /* flag for glass sphere */
    OPTIX_GEOMETRY_FLAG_REQUIRE_SINGLE_ANYHIT_CALL,
    /* flag for floor */
    OPTIX_GEOMETRY_FLAG_DISABLE_ANYHIT,
};

In my case I have:

ON_3fPointArray             ON_Mesh::m_V  // vertices
ON_3fVectorArray            ON_Mesh::m_N  // vertex normals
ON_2fPointArray             ON_Mesh::m_T  // texture coordinates
ON_SimpleArray<ON_MeshFace> ON_Mesh::m_F  // faces

How could I load the openNURBS ON_Mesh object into the WhittedState?

Make sure to read the entire sample, play with it, and understand everything about how the geometry is loaded. Once it’s clearer how the sample works, it will become much easier to understand how to modify it. It might help to first introduce one additional geometry type into this sample without removing anything else, just so you don’t have to change everything at once.

The snippet you provided is loading the geometry AABB bounds into an array and uploading that to the GPU, but that part does not have the geometry data. A pointer to the geometry data is passed separately via the SBT. The AABB bounds are used to build the BVH. The geometry data is used in the intersection program.

The hard-coded geometry arrays are just arrays, there’s nothing special about them. You can use dynamic arrays just as easily, and still pass the same pointers to the data. That might be another good exercise in this sample before trying to tackle the openNURBS mesh - try replacing the hardcoded geometry with std::vector or something, and read several spheres from a file.
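As a minimal sketch of that exercise (plain host-side C++; `Float3`, `Aabb`, and `Sphere` are stand-ins for the sample's `float3`, `OptixAabb`, and `GeometryData::Sphere` so the snippet is self-contained, and the sphere list could just as well come from a file parser):

```cpp
#include <vector>

// Stand-ins for the sample's types so this sketch compiles on its own;
// in optixWhitted these come from vector_types.h / optix.h.
struct Float3 { float x, y, z; };
struct Aabb   { float minX, minY, minZ, maxX, maxY, maxZ; };

struct Sphere
{
    Float3 center;
    float  radius;
};

// Same math as the sample's sphere_bound(): the axis-aligned box around a sphere.
Aabb sphere_bound( const Float3& c, float r )
{
    return Aabb{ c.x - r, c.y - r, c.z - r, c.x + r, c.y + r, c.z + r };
}

// One AABB per sphere. aabbs.data() and aabbs.size() * sizeof(Aabb) would
// replace the fixed aabb[OBJ_COUNT] array and OBJ_COUNT * sizeof(OptixAabb)
// in the cudaMalloc/cudaMemcpy calls; spheres.data() is the geometry data
// that gets passed to the intersection program via the SBT.
std::vector<Aabb> buildAabbs( const std::vector<Sphere>& spheres )
{
    std::vector<Aabb> aabbs;
    for( const Sphere& s : spheres )
        aabbs.push_back( sphere_bound( s.center, s.radius ) );
    return aabbs;
}
```

The point is only that the sizes become runtime values instead of the compile-time OBJ_COUNT; everything downstream (upload, build input, SBT) keeps working with the same pointers.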

You can also inspect all the code for the optixMeshViewer sample, which does read a mesh from a file. This sample is more complicated than optixWhitted, but already does something that is closer to what you’re asking.


Thanks for your quick response,
I had focused earlier on the optixMeshViewer sample, which is indeed much closer to what I want to do. Unfortunately that sample loads a file in order to build the geometry and the scene in general. What really confused me was the usage of the sutil::Scene object, and more specifically the BufferView:

    struct MeshGroup
    {
        std::string                       name;

        std::vector<GenericBufferView>    indices;
        std::vector<BufferView<float3> >  positions;
        std::vector<BufferView<float3> >  normals;
        std::vector<BufferView<Vec2f> >   texcoords[GeometryData::num_textcoords];
        std::vector<BufferView<Vec4f> >   colors;

        std::vector<int32_t>              material_idx;

        OptixTraversableHandle            gas_handle   = 0;
        CUdeviceptr                       d_gas_output = 0;

        Aabb                              object_aabb;
    };

Since what I have from the openNURBS ON_Mesh are ON_3fPointArrays, in order to create my sutil::Scene::MeshGroup I should insert these arrays into the vectors of MeshGroup. However, I am not sure how to insert, for instance, ON_Mesh::m_N into the std::vector<BufferView<float3> > normals.
A code snippet example would be really helpful.

Ah, BufferView is just a class that provides a “view” of an array: an interface to a slice and/or sub-portion of an array. It’s a little like a std::span but with some extra features like stride. Look at the BufferView.h header file and that might clarify its purpose. BufferView is being used mainly because of the way GLTF files are stored and organized; GLTF provides single buffers of float data that can be used for multiple meshes or sub-meshes. The array of BufferViews gives you access to each separate mesh piece, even when there’s a different number of source buffers than mesh pieces. This is useful for consolidating lots of similar data into single large buffers, and can be beneficial for performance by reducing the number of allocations needed. You can see code snippets of how BufferView gets used by reading through sutil::bufferViewFromGLTF() in Scene.cpp.


You don’t have to use BufferViews at all if you don’t want to. I don’t know what openNURBS data looks like, but if each mesh comes in separate buffers, and there’s no data interleaving within buffers, then you can replace the BufferViews with bare pointers. Alternatively, a BufferView will be completely transparent if you set byte_stride = elmt_byte_size (or just set byte_stride = 0), set data to point to the start of your buffer, and set count to be the number of elements in the buffer. That means the BufferView would not do anything particularly helpful, but maybe it’s a way to avoid having to change all the code in the optixMeshViewer sample while you add the openNURBS file handling.
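A hedged sketch of that “transparent” setup: the field names mirror sutil’s BufferView.h, but the struct and the array here are mocked with plain host types so the snippet stands alone; in the real sample, data would be a CUdeviceptr obtained via cudaMalloc/cudaMemcpy.

```cpp
#include <cstdint>

struct Float3 { float x, y, z; };  // stand-in for CUDA's float3

// Minimal stand-in for sutil::BufferView<float3>; the real data member is a
// CUdeviceptr pointing at device memory, not a host pointer.
struct BufferViewFloat3
{
    const Float3* data           = nullptr;
    uint32_t      count          = 0;
    uint16_t      byte_stride    = 0;                 // 0 == tightly packed
    uint16_t      elmt_byte_size = sizeof( Float3 );
};

// ON_3fPointArray stores x,y,z floats contiguously, so its underlying buffer
// can be reinterpreted as float3 and wrapped directly. In optixMeshViewer you
// would first upload the array to the device and store that pointer instead.
BufferViewFloat3 makeTransparentView( const Float3* buffer, uint32_t elementCount )
{
    BufferViewFloat3 view;
    view.data        = buffer;
    view.count       = elementCount;
    view.byte_stride = 0;  // no interleaving, so the view adds nothing extra
    return view;
}
```

With something like this, the general shape for the normals would be meshGroup.normals.push_back( makeTransparentView( reinterpret_cast<const Float3*>( mesh.m_N.Array() ), mesh.m_N.Count() ) ), where Array() and Count() are the openNURBS accessors for the raw pointer and element count.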


I wouldn’t recommend using the optixMeshViewer example as basis in that case. It’s tailored to the GLTF file format requirements and is not actually handling the full GLTF spec.
One of its shortcomings is that the BufferViews inside the MeshGroup class assume specific template types for the GLTF attributes; that won’t always work.
The MeshGroup class also specifically handles GLTF’s way of defining a Mesh as multiple Primitives (each matching a draw call in OpenGL), where each primitive can have a different Material (which is also the reason for that aabb_input_flags array). That’s why the geometry acceleration structure build uses multiple build inputs and one SBT entry for each of them.
That would be much too complicated for the things you asked for.

Are you planning to intersect NURBS primitives directly or is that ON_Mesh already tessellated to triangles?

In the latter case, you should look at examples which use built-in triangles in OptiX, generate geometry acceleration structures (GAS) from them, and build scenes by putting these under instances inside an instance acceleration structure (IAS). The traversable handle of the IAS is then your scene’s root, which you use inside the optixTrace calls.
The optixWhitted example is also not doing that. It’s only showing some hardcoded custom primitives.
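For the tessellated case, the only non-obvious host-side step is turning ON_Mesh faces into a flat triangle index buffer: an ON_MeshFace holds four vertex indices and represents a triangle by repeating the last index (vi[2] == vi[3]), while a quad must be split into two triangles. A sketch, with ON_MeshFace mocked as a plain struct so it stands alone:

```cpp
#include <vector>

struct MeshFace { int vi[4]; };  // stand-in for openNURBS ON_MeshFace

// Flatten faces into unsigned-int triangle indices, the layout expected by an
// OptixBuildInputTriangleArray with indexFormat OPTIX_INDICES_FORMAT_UNSIGNED_INT3.
std::vector<unsigned int> triangulateFaces( const std::vector<MeshFace>& faces )
{
    std::vector<unsigned int> indices;
    for( const MeshFace& f : faces )
    {
        indices.push_back( f.vi[0] );  // first (or only) triangle
        indices.push_back( f.vi[1] );
        indices.push_back( f.vi[2] );
        if( f.vi[2] != f.vi[3] )       // quad: add the second triangle
        {
            indices.push_back( f.vi[0] );
            indices.push_back( f.vi[2] );
            indices.push_back( f.vi[3] );
        }
    }
    return indices;
}
```

The resulting index buffer plus ON_Mesh::m_V, both uploaded to the device, would then fill triangleArray.indexBuffer and triangleArray.vertexBuffers (with vertexFormat OPTIX_VERTEX_FORMAT_FLOAT3) in the GAS build input.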

The introductory examples in my OptiX examples described here
build a scene by creating some runtime-generated shapes (plane, box, sphere, torus) using indexed triangles and an (interleaved) vertex attribute array on the host.
The vertex positions of that are then used to build GAS with built-in triangle geometry.
(Note that the GAS do not use compaction yet, to keep the introductory examples simple. Compaction is shown inside more advanced examples of that repository and is recommended.)

These GAS are put under OptixInstances to place them into the scene:

Then all these instances are built into an instance acceleration structure:

That example is using interleaved vertex attributes.
If your vertex attributes are separate arrays instead, you would need to change the GeometryData structure from using one CUdeviceptr attributes pointer to the four arrays you have.
That would obviously also require changing all accesses to these inside the device code.
Also, if you don’t use indices, that would need to be changed everywhere the geometry primitive index is used during AS build and vertex attribute access.

I would recommend starting with a new project from scratch for your case and only copying and adjusting the example code handling the geometry first.
Then implement some ray generation, closest-hit and miss programs which can hit/miss your geometry with the simplest possible shading, e.g. only showing object-space normals as colors and a constant background color in the miss shader.
Only when that is working, implement more elaborate lighting and shading routines.
The most advanced example in this repository goes as far as using a full blown Material Definition Language (MDL) based material system.

The examples also show different ways to design the shader binding table (SBT). While the initial programs use one hit record per instance with SBT pointer data, later examples like rtigo10 and the MDL_renderer use an SBT with one hit record per material shader and use the user-defined OptixInstance instanceId field to reference any additional data per instance.

Now if the plan is instead to intersect NURBS geometry directly with a custom intersection program (which you would need to implement first), then all the built-in triangle geometry acceleration structures would need to be replaced with AABBs around the NURBS primitives for the optixAccelBuild inputs, and the shader binding table would need hit records using that custom intersection program as well.
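As a starting point for that custom-primitive route, one valid (if loose) per-patch AABB comes for free from the convex-hull property of NURBS: the surface lies inside the convex hull of its control points, so the min/max over the control net encloses the surface. A sketch, with Float3 mocked and the Aabb fields laid out like OptixAabb so it stands alone:

```cpp
#include <algorithm>
#include <vector>

struct Float3 { float x, y, z; };

// Same field layout as OptixAabb: minX..minZ, maxX..maxZ.
struct Aabb { float minX, minY, minZ, maxX, maxY, maxZ; };

// A NURBS surface lies inside the convex hull of its control points, so the
// min/max over the control net is a conservative AABB for a custom-primitive
// optixAccelBuild input; the intersection program then performs the exact test.
Aabb controlNetBound( const std::vector<Float3>& controlPoints )
{
    Aabb b = { controlPoints[0].x, controlPoints[0].y, controlPoints[0].z,
               controlPoints[0].x, controlPoints[0].y, controlPoints[0].z };
    for( const Float3& p : controlPoints )
    {
        b.minX = std::min( b.minX, p.x );  b.maxX = std::max( b.maxX, p.x );
        b.minY = std::min( b.minY, p.y );  b.maxY = std::max( b.maxY, p.y );
        b.minZ = std::min( b.minZ, p.z );  b.maxZ = std::max( b.maxZ, p.z );
    }
    return b;
}
```

Tighter bounds are possible (e.g. after knot refinement), but the control-net bound is correct and cheap, which is all the BVH build needs.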