Access transformed vertices in OptiX

Hello,

Is there a way to access the values of the current vertices of a transformed geometry group within an RT_PROGRAM?
It seems that the buffer of those vertices does not get updated, so the values stay the same. Is this information somehow accessible from RT programs?

(Due to an old GPU, I'm on OptiX 4.1.1.)

Regards,
Matthias

You don’t have vertices on a GeometryGroup. What you can get are the vertex attributes from one of the GeometryInstance’s Geometry underneath that. If that is a 1:1 relationship to your GeometryGroup, then yes.

Depending on what you need there are multiple ways to do that.

For example, if you only require the vertex positions of the primitive you hit, you could write them as additional attributes inside your intersection program. With GeometryTriangles in OptiX 6.0.0 that would go into the attribute program.
Mind that these are in object space. You’d need to transform them into world space via the rtTransform*() functions (recommended) or via rtGetTransform(object_to_world) and your own matrix multiplication. This seems to be the main point of your question.

It looks like this for normals; the same applies with rtTransformPoint() for positions and rtTransformVector() for tangents or bitangents.
[url]https://github.com/nvpro-samples/optix_advanced_samples/blob/master/src/optixIntroduction/optixIntro_07/shaders/closesthit.cu#L72[/url]
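Condensed from that sample, a minimal closest_hit sketch (the attribute variable names are assumptions, not fixed API names) of transforming object-space attributes into world space:

```cuda
#include <optix.h>
#include <optixu/optixu_math_namespace.h>

// Attributes written by the intersection (or attribute) program, in object space.
rtDeclareVariable(optix::float3, varNormal,   attribute NORMAL, );
rtDeclareVariable(optix::float3, varPosition, attribute POSITION, ); // hypothetical extra attribute

RT_PROGRAM void closesthit()
{
  // Normals transform with the inverse transpose; rtTransformNormal() handles that.
  const optix::float3 normalWorld   = optix::normalize(rtTransformNormal(RT_OBJECT_TO_WORLD, varNormal));
  // Points additionally pick up the translation.
  const optix::float3 positionWorld = rtTransformPoint(RT_OBJECT_TO_WORLD, varPosition);
  // Tangents and bitangents are plain directions: rtTransformVector(RT_OBJECT_TO_WORLD, v).
}
```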

More flexible, and most likely generally faster than the above method, is to store bindless buffer IDs of the vertex attribute buffer and (optionally) the index buffer at your GeometryInstance. (At the GeometryInstance because any_hit and closest_hit programs do not have access to the Geometry scope directly. See program variable scoping here: [url]http://raytracing-docs.nvidia.com/optix_6.0/guide/index.html#programs#program-variable-scoping[/url] )
Then you can access both inside the any_hit and closest_hit programs, fetch whatever primitive index you want and retrieve all vertex attributes you need, and again transform them to world space.
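A sketch of that approach, assuming the bindless buffer IDs were declared at the GeometryInstance scope under these (made-up) variable names, and using OptiX 6's rtGetPrimitiveIndex() to find the hit triangle:

```cuda
#include <optix.h>
#include <optixu/optixu_math_namespace.h>

// Bindless buffer IDs set on the GeometryInstance from the host side.
rtDeclareVariable(rtBufferId<optix::float3, 1>, attributesBufferId, , ); // vertex positions
rtDeclareVariable(rtBufferId<optix::uint3, 1>,  indicesBufferId, , );   // triangle indices

RT_PROGRAM void closesthit()
{
  // Fetch the indices of the triangle that was hit.
  const optix::uint3 tri = indicesBufferId[rtGetPrimitiveIndex()];

  // Fetch the object-space vertices and transform them to world space.
  const optix::float3 v0 = rtTransformPoint(RT_OBJECT_TO_WORLD, attributesBufferId[tri.x]);
  const optix::float3 v1 = rtTransformPoint(RT_OBJECT_TO_WORLD, attributesBufferId[tri.y]);
  const optix::float3 v2 = rtTransformPoint(RT_OBJECT_TO_WORLD, attributesBufferId[tri.z]);
  // ... use the world-space triangle (v0, v1, v2) here.
}
```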

Be careful: if you need these attributes for calculations against rtCurrentRay, mind that it lives in different coordinate spaces in these program domains. Inside the closest_hit program it’s in world space; in the any_hit program it’s in object space.
[url]http://raytracing-docs.nvidia.com/optix_6.0/guide/index.html#programs#program-variable-transformation[/url]
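To illustrate the pitfall, a tiny any_hit sketch: either bring the object-space ray into world space before comparing it against world-space data, or keep everything in object space, but don't mix the two.

```cuda
#include <optix.h>
#include <optixu/optixu_math_namespace.h>

rtDeclareVariable(optix::Ray, theRay, rtCurrentRay, ); // object space inside any_hit!

RT_PROGRAM void anyhit()
{
  // Transform the ray direction into world space before any dot products
  // against world-space normals or positions.
  const optix::float3 dirWorld = optix::normalize(rtTransformVector(RT_OBJECT_TO_WORLD, theRay.direction));
  // (Inside closest_hit, theRay is already in world space and no transform is needed.)
}
```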

Even more generally, you can store the bindless buffer IDs of the vertex attribute and index buffers of your Geometry in any context-global data structure and access them from any program domain at will.
You just need a way to identify the geometry, e.g. with an index variable defined at the GeometryInstance scope. But note that the current transformation is only available inside the any_hit and closest_hit programs.

For example, I’m using that last method to implement arbitrary mesh lights. A global buffer contains these bindless buffer IDs (and some other data, including the current transformation) inside a light definition structure per light in the scene, which means I can pick any light in the scene and have its geometry data available for sampling.

(Note that OptiX 5 supports a superset of the GPUs supported by OptiX 4. Both support Kepler GPUs.)

Detlef, thanks for your detailed answer.

I want to launch rays from the triangles of the transformed geometry, and it seems that rtTransformPoint() is not allowed in my ray generation program.

So I am trying to implement the bindless buffer ID approach with a global buffer, which should give access in a ray generation program as well, correct?

Right, the ray generation program scope doesn’t have the transformation hierarchy available; no BVH traversal has happened at that point.

What you’re doing is explicit sampling of the mesh triangles, and that requires storing the object-to-world transformation along with the bindless buffer IDs, so that you can do the necessary transformations inside the ray generation program to calculate the ray origin and direction.
If you base that on shading normals, you will also need the inverse transpose matrix when non-uniform scaling is applied.
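Putting those pieces together, here is a sketch (the struct layout and all names are assumptions for illustration) of a ray generation program that reads a stored object-to-world matrix instead of calling rtTransformPoint():

```cuda
#include <optix.h>
#include <optixu/optixu_math_namespace.h>
#include <optixu/optixu_matrix_namespace.h>

struct LightDefinition
{
  optix::Matrix4x4 objectToWorld;          // stored at scene setup time
  optix::Matrix4x4 worldToObject;          // inverse, for shading-normal transforms
  rtBufferId<optix::float3, 1> positions;  // bindless vertex position buffer ID
  rtBufferId<optix::uint3, 1>  indices;    // bindless triangle index buffer ID
};

rtBuffer<LightDefinition> sysLightDefinitions; // RT_FORMAT_USER buffer, one element per light

RT_PROGRAM void raygeneration()
{
  const LightDefinition light = sysLightDefinitions[0]; // light selection omitted
  const optix::uint3 tri = light.indices[0];            // triangle sampling omitted

  // Transform the object-space vertices with the stored matrix (w = 1 for points).
  const optix::float3 v0 = optix::make_float3(light.objectToWorld * optix::make_float4(light.positions[tri.x], 1.0f));
  const optix::float3 v1 = optix::make_float3(light.objectToWorld * optix::make_float4(light.positions[tri.y], 1.0f));
  const optix::float3 v2 = optix::make_float3(light.objectToWorld * optix::make_float4(light.positions[tri.z], 1.0f));

  // The geometric normal comes from the already world-space triangle; shading
  // normals would instead need transpose(worldToObject) with w = 0.
  const optix::float3 ng = optix::normalize(optix::cross(v1 - v0, v2 - v0));
  // ... build the ray origin and direction from the sampled point and ng.
}
```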

Here is an example of such buffer IDs inside a structure,
[url]https://github.com/nvpro-samples/optix_advanced_samples/blob/master/src/optixIntroduction/optixIntro_07/shaders/light_definition.h#L60[/url]
which you then put into a buffer with RT_FORMAT_USER and the element size set to sizeof(your_struct). Search for the LightDefinition initialization in that OptiX Introduction code.
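The host-side setup for such a buffer could look roughly like this with the OptiX C++ wrapper API (the struct and variable names are placeholders):

```cpp
// Create an RT_FORMAT_USER buffer holding one LightDefinition per light.
optix::Buffer lightBuffer = context->createBuffer(RT_BUFFER_INPUT, RT_FORMAT_USER);
lightBuffer->setElementSize(sizeof(LightDefinition));
lightBuffer->setSize(lights.size());

// Copy the prepared light definitions in; each element holds the bindless
// buffer IDs obtained earlier via buffer->getId().
LightDefinition* dst = static_cast<LightDefinition*>(lightBuffer->map());
for (size_t i = 0; i < lights.size(); ++i)
{
  dst[i] = lights[i];
}
lightBuffer->unmap();

context["sysLightDefinitions"]->setBuffer(lightBuffer);
```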

I haven’t provided a code example for the mesh lights yet, but this description explains all the steps:
[url]https://devtalk.nvidia.com/default/topic/1036173/optix/what-is-the-best-way-to-explicitly-descend-scene-graph-implement-sampling-of-arbitrary-mesh-lights-/post/5264265/[/url]