I’ve been reading the documentation for OptiX 7.6, specifically the Programming Guide, for about a week now to try to get a feel for things. I think I understand the basic structure of the RTX API, but I can’t seem to nail down what native format the geometry primitives are in while OptiX routines are running. I think I ran into a couple of opaque types and kind of moved on. I’m confused because I’ve seen a few different examples that seem to do things very differently.
I have the SDK. Could someone point me to documentation or a code sample where I can see explicitly how I need to manipulate, say, mesh data to get it into device memory? Maybe the scene type and acceleration structure are just obscuring things for me.
Context: I don’t care about rendering. I want to take a fixed set of rays and intersect them with my scene. I know how to do that, but the scene loading just isn’t making sense yet.
Triangle mesh build inputs are described as an array of floats, three per vertex (OPTIX_VERTEX_FORMAT_FLOAT3). Optionally you can provide an index buffer, an array of unsigned integers, three per triangle (OPTIX_INDEX_FORMAT_TRIANGLE_UINT3). Curves and spheres have similarly simple in-memory formats.
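Here is a minimal sketch of the whole path from host mesh data to a geometry acceleration structure (GAS). This is illustrative, not code from the SDK: it assumes an already created OptixDeviceContext, omits all error checking, and uses made-up names like buildGAS. The point is that the "native format" is nothing more than plain device buffers plus the OptixBuildInput description.

#include <optix.h>
#include <optix_stubs.h>      // assumes optixInit() has already been called
#include <cuda_runtime.h>
#include <vector>

OptixTraversableHandle buildGAS(OptixDeviceContext context,
                                const std::vector<float3>& vertices,
                                const std::vector<uint3>&  indices)
{
    // Upload the raw vertex positions (three floats each) and the index
    // triplets (three unsigned ints per triangle) to device memory.
    CUdeviceptr d_vertices = 0, d_indices = 0;
    cudaMalloc(reinterpret_cast<void**>(&d_vertices), vertices.size() * sizeof(float3));
    cudaMemcpy(reinterpret_cast<void*>(d_vertices), vertices.data(),
               vertices.size() * sizeof(float3), cudaMemcpyHostToDevice);
    cudaMalloc(reinterpret_cast<void**>(&d_indices), indices.size() * sizeof(uint3));
    cudaMemcpy(reinterpret_cast<void*>(d_indices), indices.data(),
               indices.size() * sizeof(uint3), cudaMemcpyHostToDevice);

    // Describe the mesh to OptiX. No opaque containers: just these buffers
    // plus the formats and counts below.
    OptixBuildInput buildInput = {};
    buildInput.type = OPTIX_BUILD_INPUT_TYPE_TRIANGLES;
    buildInput.triangleArray.vertexFormat        = OPTIX_VERTEX_FORMAT_FLOAT3;
    buildInput.triangleArray.vertexStrideInBytes = sizeof(float3); // tightly packed
    buildInput.triangleArray.numVertices         = static_cast<unsigned int>(vertices.size());
    buildInput.triangleArray.vertexBuffers       = &d_vertices;
    buildInput.triangleArray.indexFormat         = OPTIX_INDEX_FORMAT_TRIANGLE_UINT3;
    buildInput.triangleArray.numIndexTriplets    = static_cast<unsigned int>(indices.size());
    buildInput.triangleArray.indexBuffer         = d_indices;
    const unsigned int flags[1] = { OPTIX_GEOMETRY_FLAG_NONE };
    buildInput.triangleArray.flags         = flags;
    buildInput.triangleArray.numSbtRecords = 1;

    OptixAccelBuildOptions accelOptions = {};
    accelOptions.buildFlags = OPTIX_BUILD_FLAG_NONE;
    accelOptions.operation  = OPTIX_BUILD_OPERATION_BUILD;

    // Ask OptiX how much memory the build needs, allocate it, and build.
    OptixAccelBufferSizes sizes = {};
    optixAccelComputeMemoryUsage(context, &accelOptions, &buildInput, 1, &sizes);

    CUdeviceptr d_temp = 0, d_gas = 0;
    cudaMalloc(reinterpret_cast<void**>(&d_temp), sizes.tempSizeInBytes);
    cudaMalloc(reinterpret_cast<void**>(&d_gas),  sizes.outputSizeInBytes);

    OptixTraversableHandle handle = 0;
    optixAccelBuild(context, /*stream*/ 0, &accelOptions, &buildInput, 1,
                    d_temp, sizes.tempSizeInBytes,
                    d_gas,  sizes.outputSizeInBytes,
                    &handle, /*emittedProperties*/ nullptr, 0);
    cudaFree(reinterpret_cast<void*>(d_temp)); // temp buffer is only needed during the build
    return handle;
}

The returned OptixTraversableHandle is what you pass to optixTrace from your raygen program, so for your use case the raygen program would simply read your fixed rays from a device buffer and trace each one against this handle.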
That layout is used by all of the runtime-generated shapes in the OptiX_Apps examples, for example the box here, which consists of only 12 triangles: https://github.com/NVIDIA/OptiX_Apps/blob/master/apps/intro_runtime/src/Box.cpp
Note that the acceleration structure build only uses the vertex position out of the four vertex attributes (position, tangent, normal, texture coordinate) defined there.
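That works because the build input takes a stride. A rough sketch, assuming an interleaved vertex struct mirroring the four attributes mentioned above (the struct layout and the d_attributes name here are illustrative, not copied from that repository): you point vertexBuffers at the full attribute buffer and set vertexStrideInBytes to the struct size, and the build reads only the position at the start of each element.

struct VertexAttributes
{
    float3 vertex;   // position: the only field the acceleration structure build reads
    float3 tangent;  // the rest are skipped over via the stride and used only for shading
    float3 normal;
    float3 texcoord;
};

// d_attributes is a CUdeviceptr to a device array of VertexAttributes;
// the position sits at offset 0 of each element.
buildInput.triangleArray.vertexFormat        = OPTIX_VERTEX_FORMAT_FLOAT3;
buildInput.triangleArray.vertexStrideInBytes = sizeof(VertexAttributes);
buildInput.triangleArray.vertexBuffers       = &d_attributes;

This way you keep a single device buffer for both the acceleration structure build and your hit programs, instead of maintaining a separate positions-only copy.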