Graph Nodes in Optix

This seems to be a major discussion point over here, with debates about local vs. world space, acceleration structures, and the reasons behind possible design decisions. For example, anything in world space skips the need for a transform. The better I understand the underlying design decisions, the better I can work with the API.

Currently, we have a world or scene root that is an rtGroup. It is an object container with an acceleration structure for spatial sorting; it references individual objects composed of (in parent -> child order):

rtTransform (spatial orientation)
rtGeometryGroup (allows connection to an acceleration structure)
rtGeometryInstance (links geometry to material)
rtGeometry (actual object to render)

What I've seen most commonly elsewhere is a scene root, if any, then a set of instances (each holding a material and a transform) which point to geometry (all in local space, including a local-space acceleration structure). Although an instance can have a different material, an instance without a transform would be of little use, I imagine.

I just wanted to gather some thoughts on the way the graph nodes are set up, and what some of the advantages may be.

That solely depends on your requirements. There are important performance differences to consider, though.

OptiX renders fastest with a flat hierarchy.
If you need to manipulate whole groups, adding Transform nodes avoids most of the acceleration structure rebuilds that updating the geometry directly would require. For any affine animations, Transforms plus a Bvh acceleration structure at the root with “refit” enabled should be the fastest approach.
If your application's scene graph can contain very deep transform hierarchies, a naive one-to-one translation to OptiX nodes will not be the fastest. Keep in mind that rendering performance can differ by factors between a complicated hierarchy with many small geometries and a fully optimized scene.

You might want to read some of our Developer Technology Professional Visualization team's presentations on techniques for fast scene graph traversal, and on the validation and traversal of a scene tree built from that.
Links here (more on GTC 2015 next week):

The other level of the local vs. world coordinate space decision is in the closest_hit program implementation. It's also your choice whether you transform world-space hits into local space and evaluate BSDFs there, or do the calculations in world space. Some things are simpler in local space; it's just a matter of how many dot products you require.

“more on GTC 2015 next week” I’ll be there! And thanks for the link.

I prefer to do everything in local space. Although, I'd assume it would be possible to start with a world-space accel structure and use a single matrix to back-transform to local space and forward-transform to a new location at once (like what is common in bone skinning).

Agreed on “very deep transform hierarchies”; we save that for the CPU. If you want a hierarchical scene graph of objects local to other objects, keep it, do the transforms on the CPU (walk the tree and calculate the final matrices), link each tree node to a single-layer-deep object (a flat list) in OptiX, then pass that final matrix into it.

So, in summary, we would like to have

Geometry (local space) with an accel structure (local space)

and instances thereof, which allow placing geometry in multiple locations, each with its own material.

With that in mind, does this make sense? Or is there a simpler setup?

rtGroup <- world or scene root that is a container for everything else.

Instances… consisting of: (this is the part where it gets a bit more complicated)
rtTransform (spatial orientation)
rtGeometryGroup (allows connection to the acceleration structure, which it gets from our geo object)
rtGeometryInstance (links geometry to material)

Geometry… which can be instanced (our geo object)
rtGeometry (actual object to render)

Right, that’s the usual structure for a scene with instances.
See the OptiX Programming Guide, chapter 3.5.6 “Shared Acceleration Structures”, Figure 4, for an example of how that looks for a scene with two transformed instances of the same GeometryGroup.

Note that Groups and GeometryGroups hold or share the acceleration structure, not the Geometry or GeometryInstance nodes.

“Own materials” won't work with model instancing via Transform nodes like that.
The Material is assigned to the GeometryInstance, which is part of the sub-tree under the GeometryGroup under the Transform. That means the Material is the same for every Transform that has this sub-tree as a child.

You would need to create individual sub-trees with different Material nodes to be able to use different materials on instanced model geometry. It doesn’t matter if the Material is actually using a different closest_hit implementation or just different material parameters.
That will become costly for bigger scenes. Build times scale linearly with the number of nodes in the scene.
What you definitely should do is share the acceleration structure among all GeometryGroups holding identical geometry, to save a lot of memory.

The Material is assigned to the GeometryInstance. Yup, that is what I meant. We actually just use a single material (for our purposes), with each instance having a set of params used by that material, such as specularity, transparency, and diffuse color.