Load Scene with OptiX


I’m new to OptiX and I’m just getting used to it. My question is about loading objects into the scene.

Right now I’m working on a ray-tracing program to return the intersection point and color from a .gltf 3D object. After discovering the existence of the Python wrapper, I’m trying to do it with Python, but it’s not clear to me how to load the scene in OptiX. I’ve seen C++ examples using sutil::loadScene, but I don’t see that I can do the same with the Python wrapper, and the OptiX documentation does not provide much information on loading objects.

Can anyone help? Thanks!

The good news is, I released a GLTF_renderer example in my OptiX Advanced Examples last week:
(Links to the github repository further up in that thread, inside the fourth post.)

That was originally based on the OptiX SDK optixMeshViewer example, but I replaced almost everything because that example was too limited.

Please read the GLTF_renderer specific README.md about what that example can do and what it cannot, yet.

The example source code itself is pretty well documented.

Note that glTF has a very specific way of defining meshes. A glTF Mesh consists of Primitives, and that is meant in the sense of OpenGL draw commands, so each Primitive can hold arbitrarily many geometric primitives (here triangles), and materials are assigned to these Primitives.
That specification results in a very specific way to build geometry acceleration structures (GAS). A Primitive (draw command) is one build input, and there is one shader binding table (SBT) hit record entry per such Primitive in that example at this time.
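To make that Mesh → Primitive mapping concrete, here is a minimal Python sketch (the helper name `enumerate_primitives` is hypothetical, not part of the GLTF_renderer) showing how each glTF Primitive becomes one build input and one SBT hit-record slot. The dictionary mirrors the relevant part of the glTF 2.0 JSON structure:

```python
# Sketch: enumerate glTF Primitives the way the text above describes --
# one GAS build input and one SBT hit record per Primitive.
# Field names ("meshes", "primitives", "material") follow the glTF 2.0 spec;
# the function itself is a hypothetical illustration.

def enumerate_primitives(gltf):
    """Yield (mesh_index, primitive_index, material_index, sbt_record_index)."""
    records = []
    sbt_index = 0
    for m, mesh in enumerate(gltf.get("meshes", [])):
        for p, prim in enumerate(mesh.get("primitives", [])):
            material = prim.get("material")  # may be absent (default material)
            records.append((m, p, material, sbt_index))
            sbt_index += 1
    return records

gltf = {
    "meshes": [
        {"primitives": [{"material": 0}, {"material": 1}]},  # two draw calls
        {"primitives": [{"material": 0}]},
    ]
}
print(enumerate_primitives(gltf))
```

Two meshes with three Primitives total yield three build inputs and three consecutive SBT record indices, with materials assigned per Primitive, not per mesh.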

The bad news is that it’s quite involved for an OptiX beginner, but there are also simpler examples (the intro ones) showing how to generate geometry at runtime, and then different ways to load files into a host-side scene graph which can then be flattened down to an efficient OptiX render graph.

I have not used the Python wrappers, but once you’ve understood the OptiX API, it shouldn’t be a problem to migrate to that. Though if you don’t have a Python library which already loads a glTF file for you, things will get involved. I wouldn’t recommend doing that before you’re familiar with the OptiX API itself.

Thanks for your reply!

I’ll read the example and further study the API. You said working with .gltf is complex for a beginner. Is there any recommended 3D file format for a beginner-friendly loading implementation? I’m creating the file myself, so I could try to save it in a friendlier format, as long as it allows me to recover intersection points and colors.

Thanks again!

For more information about the glTF file format, please have a look at the Khronos glTF 2.0 specification here:
The cheat sheet there might be helpful to understand which index inside the *.gltf means what.

The *.gltf files themselves are actually *.json files which normally reference raw binary files (*.bin) with the buffer data.

Just look at some of the simple ones inside the Khronos glTF-Sample-Assets repository.

The glTF file format is rather OpenGL-centric. There are quite a few things in there which do not directly map to OptiX, or more precisely CUDA, like different memory alignment restrictions, which require copying data around, or specific input data formats required by OptiX, etc. You’ll see that in my example.

There shouldn’t be a problem generating glTF files once you have understood the format.
It’s basically a JSON file and one or more raw binary files which contain the buffer data used for the different elements. These are interpreted via Accessors, which in turn use BufferViews onto these Buffers.
Everything is connected with indices into arrays of these things.
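The Accessor → BufferView → Buffer index chain can be sketched in a few lines of Python. This is a hypothetical minimal illustration: the JSON is handwritten, and the binary data is packed in memory instead of living in an external *.bin file as it usually would:

```python
import json
import struct

# Sketch: follow the glTF index chain accessor -> bufferView -> buffer
# to pull vertex positions out of the raw binary data.
# componentType 5126 = GL_FLOAT, type "VEC3" = 3 floats = 12 bytes per element.
GLTF = json.loads("""{
  "accessors": [
    {"bufferView": 0, "byteOffset": 0, "componentType": 5126,
     "count": 3, "type": "VEC3"}
  ],
  "bufferViews": [
    {"buffer": 0, "byteOffset": 0, "byteLength": 36}
  ],
  "buffers": [{"byteLength": 36}]
}""")

# One triangle: 3 x VEC3 of 32-bit floats, standing in for the *.bin contents.
BIN = struct.pack("9f", 0, 0, 0, 1, 0, 0, 0, 1, 0)

def read_vec3_accessor(gltf, binary, accessor_index):
    """Resolve one VEC3 float accessor into a list of (x, y, z) tuples."""
    acc = gltf["accessors"][accessor_index]
    view = gltf["bufferViews"][acc["bufferView"]]
    start = view.get("byteOffset", 0) + acc.get("byteOffset", 0)
    return [struct.unpack_from("3f", binary, start + i * 12)
            for i in range(acc["count"])]

print(read_vec3_accessor(GLTF, BIN, 0))
```

This prints the three vertex positions of the triangle; a real loader would additionally honor `byteStride` on the BufferView and the other componentType/type combinations from the specification.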

A much simpler format would be OBJ and its associated MTL material description, but that is ASCII and will load comparatively slowly, and its indices are one-based. (I regularly convert huge OBJ files to glTF to save repeated loading time.) If you need per-vertex colors, that is also pretty uncommon and non-standard behavior in OBJ; most parsers do not handle it.

I’m creating the file myself, so I could try to save it to a more friendly format while it allows me to recover intersection points and colors.

If you do not need to rely on a standard file format, you would be free to implement whatever data format you’d like as well. Though it would be better to be able to compare results in other renderers. glTF supports per-vertex colors if you need that.

Just read the cheat sheet, read the specs, and look at some of the simpler *.gltf files, and you’ll see how it works.

As an OptiX beginner, you need to understand how to build acceleration structures.
My examples have runtime generation functions for shapes (plane, box, sphere, torus) which generate the vertex attributes and then build the geometry acceleration structure (GAS) from the position attributes of these.
Then these are placed under a top-level instance acceleration structure (IAS) which places them into the world.
This post explains some options: https://forums.developer.nvidia.com/t/passing-per-vertex-attribute-data-into-a-shader-program/279321/2

The other crucial thing you need to understand in OptiX is the shader binding table (SBT), which is rather flexible and can have different layouts depending on what your application needs. As said, glTF is a little special in that area.
Please read the OptiX Programming Guide and these forum threads:

Hello again.

I’ll follow your suggestions. Regarding the file format, I understand using the .obj format would only slow down the loading process. In my application, loading the scene more slowly would be no problem as long as the ray tracing is fast enough. If it would be easier for a beginner… Is there any example loading from a .obj?

Thanks again!

Well, the OptiX SDK 7 and 8 versions still include the tinyobjloader library inside the optixMotionGeometry example, which was used to load OBJ files in the past, but it looks like it’s not actually used anymore. The loadScene() calls are using glTF via the tinygltf loader instead.

So maybe look at the tinyobjloader examples themselves first.
You need to get the data into host memory first, which is as simple as calling tinyobj::LoadObj, and then you’d need to pass the desired data over to OptiX.

When writing OBJ files you would just need to write text data according to the OBJ and MTL specifications.

This short article describes enough details to be able to write simple OBJ and MTL files: https://en.wikipedia.org/wiki/Wavefront_.obj_file

These are more detailed descriptions of the OBJ and MTL file formats.
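As a rough illustration of how simple the format is, here is a hypothetical Python sketch that writes a one-triangle OBJ file and parses it back, including the one-based face indices mentioned earlier (no MTL handling, no texture coordinates or normals):

```python
# Sketch: write a minimal OBJ file and read it back.
# OBJ is plain text; face indices are one-based, so we add 1 when writing
# and subtract 1 when reading to get back to zero-based GPU-side indices.

def write_obj(path, vertices, faces):
    with open(path, "w") as f:
        for v in vertices:
            f.write("v {} {} {}\n".format(*v))
        for face in faces:
            # +1: OBJ face indices start at 1, not 0.
            f.write("f {}\n".format(" ".join(str(i + 1) for i in face)))

def read_obj(path):
    vertices, faces = [], []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if not parts:
                continue
            if parts[0] == "v":
                vertices.append(tuple(float(x) for x in parts[1:4]))
            elif parts[0] == "f":
                # "f 1/2/3 ..." may carry texcoord/normal indices after slashes;
                # keep only the vertex index and convert back to zero-based.
                faces.append(tuple(int(p.split("/")[0]) - 1 for p in parts[1:]))
    return vertices, faces

write_obj("tri.obj", [(0, 0, 0), (1, 0, 0), (0, 1, 0)], [(0, 1, 2)])
print(read_obj("tri.obj"))
```

Note how the round trip turns the integer input positions into floats: everything passes through text, which is exactly the precision concern raised below.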

Loading an OBJ scene is not only slower; depending on the precision of the floating-point output as text, the values might also be less precise.
I would really recommend using a binary format when your scenes tend to be big and need precision.

My own examples are using the assimp library which can also load OBJ files among over 40 other file formats. The GLTF_renderer example is using fastgltf instead.

Hello again,

I’ve been working on understanding how to load a model from glTF into OptiX. I think the glTF part is under control: I’ve been able to extract triangles from glTF and create a GAS with them. My main problem right now is loading the albedo color information for each triangle. If I’ve understood right, that should be done using the SBT. Right now I use a single triangle build input to load all triangles, under the assumption that I can assign a different value to each triangle using the SBT, to be later extracted in a closest-hit program, but I’m not 100% sure that’s right. Section 7.4 of the Programming Guide (OptiX 7.7) shows an example of accessing a record on the device in a closest-hit program, and it says the meshIdx obtained is for the build input (I currently have only one). Will I be able to access a different color per triangle, or will I need a triangle build input for each triangle?

Thanks again!

Will I be able to access different triangle colors, or will I need a Triangle Build Input for each triangle?

My main problem right now is to load albedo color information for each triangle. If I’ve understood right, that should be done using SBT.

That is one option.
It depends on how you want to address your data.
You can put device pointers to any attributes you want into the additional data per SBT record, or you could use the additional fields available inside an OptixInstance to index into some array of device pointers.

If you want to have a different color per triangle, that is basically just an array of data (e.g. RGBA as uchar4 or float4) which has the same number of elements as triangles. You allocate that on the device with cudaMalloc and copy the color data there.

Then you need to be able to access that device pointer inside the closest hit program and fetch the data via the same index you use for the triangle vertex attribute lookup, which is the optixGetPrimitiveIndex result.
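The idea can be sketched on the host side in plain Python. This is only an illustration of the indexing, not OptiX code: the list stands in for the cudaMalloc’d device array, and the hypothetical `closest_hit_color` function mimics what the closest-hit program would do with the result of optixGetPrimitiveIndex:

```python
# Sketch: one RGBA entry per triangle, fetched with the same primitive index
# the device-side optixGetPrimitiveIndex() would return.
# Plain Python lists stand in for device memory; illustration only.

triangles = [
    ((0, 0, 0), (1, 0, 0), (0, 1, 0)),
    ((0, 0, 0), (0, 1, 0), (0, 0, 1)),
]

# Per-triangle colors: same element count as triangles, uchar4-style RGBA.
colors = [(255, 0, 0, 255), (0, 255, 0, 255)]
assert len(colors) == len(triangles)

def closest_hit_color(primitive_index):
    # On the device, the closest-hit program would read the color array's
    # device pointer from its SBT record (or the launch parameters) and
    # index it with optixGetPrimitiveIndex().
    return colors[primitive_index]

print(closest_hit_color(1))
```

The only contract to maintain is that the color array is built in the same triangle order as the vertex/index buffers passed to the GAS build input, so the primitive index addresses both consistently.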

Now the remaining problem is how to provide that device pointer with the color data in a way that you can access it in your closest hit program, and two different ways are described inside the post I already linked above.

While that post uses vertex attributes, meaning the color attributes would be per vertex and not per triangle, the mechanism is the same, just with different array sizes.

If you only have one GAS in your scene, it’s even simpler, because then you’d need only one array with color values and could put its device pointer into the launch parameters, which means it’s globally accessible in any OptiX device program domain.

I do not recommend using a render graph structure with only one GAS, even though many OptiX SDK examples do that for simplicity. It’s not the fastest solution; using a top-level instance acceleration structure is faster on RTX boards and will offer more flexibility for addressing data and different SBT layouts.