VTK-OptiX triangle mesh write/read operations

Hello,

I built a VTK triangle mesh, saved it in glTF format, and was trying to open it with the optixRaycasting example. I think I should try to send the triangles and the fields directly instead of saving to a file, as problems like this can occur because the two pieces of software have different specifications.

Thank you,

Rafael Scatena

	Processing glTF buffer ''
		byte size: 52868508
		uri: data:application/octet-stream;base64

	Processing glTF material: ''
		Base color: (1, 1, 1)
		Roughness:  1
		Metallic:  0
		Using default base color factor
	Processing glTF mesh: 'mesh0'
		Num mesh primitive groups: 1
			Num triangles: 8819782
			Has vertex normals: false
			Has texcoords_0: false
			Has texcoords_1: false
			Has color_0: true
	Caught exception: gltf accessor component type not supported

The glTF loading routines inside the OptiX SDK don’t support all data formats the glTF specification allows,
for example uchar vertex indices, and in your case most likely color attributes which aren’t float3.
There have been some discussions about these limitations before:
https://forums.developer.nvidia.com/t/optixwhitted-how-to-insert-scene-from-opennurbs-on-mesh/278134/5
https://forums.developer.nvidia.com/t/misaligned-address-exception-when-rendering-some-gltf2-models/107169/2

You should be able to determine which accessor caused it by setting a breakpoint on the code line throwing that "gltf accessor component type not supported" error message inside the template function bufferViewFromGLTF.
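
For background, the glTF 2.0 specification allows COLOR_0 to be stored as normalized unsigned byte or unsigned short instead of float, so a spec-conformant loader has to convert such components on import. A minimal sketch of that conversion (the helper names are mine, not SDK code):

```cpp
// Normalized glTF color components map to float as c / 255.0 (UNSIGNED_BYTE)
// and c / 65535.0 (UNSIGNED_SHORT) per the glTF 2.0 specification.
static inline float colorComponentToFloat(unsigned char  c) { return static_cast<float>(c) / 255.0f; }
static inline float colorComponentToFloat(unsigned short c) { return static_cast<float>(c) / 65535.0f; }
```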

In case you want to look at glTF 2.0 files with OptiX, I wrote this GLTF_renderer example which handles these things and a lot more:
https://forums.developer.nvidia.com/t/optix-advanced-samples-on-github/48410/16
https://forums.developer.nvidia.com/t/load-scene-with-optix/291957/2
It also doesn’t support all glTF 2.0 features yet, such as primitive types other than triangles, animation, skinning, morphing, and Unicode filenames.
I’m currently adding animation support.

Also, VTK should be able to support OptiX ray tracing via an ANARI plugin.
Look for VTK and VisRTX here: https://www.khronos.org/anari/

Hello,

I created the mesh with VTK, which is more of a high-level programming tool. I don’t have access to how colors, textures, and the index coordinates are stored most of the time. Do you think that if I transferred all the information from VTK to Anari, where I could have more control, and then saved it to CFTL, I could open the CFTL file with the optiXRaycaster example?

Thank you very much,

Rafael Scatena

What exactly is your goal? There are so many options for what could be done that I’m not going to explain all of them without knowing where this is going.

"I could have more control, and then saved it to CFTL, I could open the CFTL file with the optiXRaycaster example"

I have no experience with VTK, ANARI, or VisRTX and don’t know what you mean by CFTL. Do you mean glTF?

I don’t know if the ANARI representation of the VTK data helps in any way to produce a glTF file which the very limited OptiX SDK example scene loader can handle.

I mentioned ANARI and VisRTX because those already implement a renderer backend for VTK which uses OptiX to visualize the VTK data. So if that visualization with GPU ray tracing was your goal, there already exists a solution. And since VisRTX is open-source, you can look at it or even change it if you wanted.

Even if you can produce a glTF file the OptiX SDK scene loader can handle, the optixRaycasting example is even more limited in what it does.
That example only loads the triangle mesh data and renders a single image where the colors just visualize the shading normals.
Also, that example is special in that it uses OptiX only for the ray-triangle intersections and does everything else (ray generation and shading calculations) inside native CUDA kernels, which is a so-called wavefront renderer implementation. (Search the forum for that term.)

Again, if you can produce a glTF file, my GLTF_renderer example is most likely able to load it because its implementation is more conformant to the glTF spec than the loader code inside the OptiX SDK examples.
Its rendering algorithm is a global illumination path tracer which supports quite a number of glTF material extensions, which makes the rendering quality far superior to the intentionally simple OptiX SDK examples.

Since all of that is also open-source, you could easily exchange the renderer code for something else as well, including different light transport algorithms or a wavefront renderer approach.
Extending the optixRaycasting example to do the same is a lot more work than cutting down the GLTF_renderer to something simpler.
Maybe have a look at the forceUnlit code path I added yesterday, which is effectively a “render only the base colors” override which is also super fast.
https://github.com/NVIDIA/OptiX_Apps/blob/master/apps/GLTF_renderer/cuda/hit.cu#L1170

Hello again,

I am building a medical physics application. But now I realized I can't use glTF because it does not support double precision vertex positions. I will need to switch to .ply.

Regarding the basic examples, my medical physics application is an X-ray simulation. For that I need ray-triangle intersection tests, an acceleration structure for the mesh, and the trace functionality. Rendering is only secondary in the application.

Taking that into account, should I work with the basic examples, or should I work directly with the advanced examples (OptiX Apps)?

Thank you

If you really require double precision data, things will get a lot more involved than you might expect because OptiX itself uses floating point for all built-in functionality.

Let me copy some comments about double precision in OptiX from a separate discussion.

Please read these first:
https://forums.developer.nvidia.com/t/about-getting-optix-closer-to-double-precision/273522
https://forums.developer.nvidia.com/t/how-does-optix-code-compilation-work/218678/14

Those two threads basically contain all the caveats and options you have in OptiX when using double values.

The caveats are:
The built-in geometric primitives use float.
The ray origin, direction, tmin and tmax are all floats.
All transformation matrices are float.
The AABB for any geometric primitive (built-in or custom) is six floats.
This means all acceleration structures use floats, and the ray traversal will always use floating point data throughout all OptiX code.

The options are:
When you want to have double precision for the geometry, you can use double types inside your buffers and shader code, but you must make sure that all OptiX input data is float.

That means you can build custom AABBs around your double precision positions so that each floating point AABB encloses them, i.e. you make the AABB bigger by rounding its extents outward to the next representable floating point values.
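
As a rough sketch of that outward rounding (assuming your double precision extents fit into float range; OptixAabb is the SDK struct, the helper name is mine):

```cpp
#include <cfloat>
#include <cmath>
#include <cuda_runtime.h> // double3
#include <optix.h>        // OptixAabb

// Round a double precision bounding box outward to an enclosing float AABB.
// Casting double to float rounds to nearest, which can shrink the box,
// so step one float ULP outward on every side.
static OptixAabb makeEnclosingAabb(const double3& mn, const double3& mx)
{
  OptixAabb aabb;
  aabb.minX = nextafterf(static_cast<float>(mn.x), -FLT_MAX);
  aabb.minY = nextafterf(static_cast<float>(mn.y), -FLT_MAX);
  aabb.minZ = nextafterf(static_cast<float>(mn.z), -FLT_MAX);
  aabb.maxX = nextafterf(static_cast<float>(mx.x),  FLT_MAX);
  aabb.maxY = nextafterf(static_cast<float>(mx.y),  FLT_MAX);
  aabb.maxZ = nextafterf(static_cast<float>(mx.z),  FLT_MAX);
  return aabb;
}
```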

Then you’d need to implement a custom ray-primitive (triangle) intersection program using your double precision position data and a custom double precision ray structure you track inside your per-ray payload, as described inside the threads linked above.
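
A minimal sketch of tracking such a double precision ray through the payload, using the common two-register pointer packing pattern from the SDK examples (the struct and helper names are mine):

```cpp
#include <optix.h>
#include <cuda_runtime.h>

struct DoubleRay // not an OptiX type, just a per-ray payload structure
{
  double3 origin;
  double3 direction;
  double  tmax; // current closest hit distance in double precision
};

// Split a 64-bit pointer into the two 32-bit payload registers for optixTrace().
static __forceinline__ __device__ void packPointer(void* ptr, unsigned int& p0, unsigned int& p1)
{
  const unsigned long long uptr = reinterpret_cast<unsigned long long>(ptr);
  p0 = static_cast<unsigned int>(uptr >> 32);
  p1 = static_cast<unsigned int>(uptr & 0xFFFFFFFFull);
}

// Rebuild the pointer inside the intersection and hit programs.
static __forceinline__ __device__ DoubleRay* getDoubleRay()
{
  const unsigned long long uptr =
    (static_cast<unsigned long long>(optixGetPayload_0()) << 32) | optixGetPayload_1();
  return reinterpret_cast<DoubleRay*>(uptr);
}
```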

Mind that consumer GPUs and even most workstation GPUs have rather slow double precision implementations, depending on the product as low as 1/32 or even 1/64 of the floating point throughput.
Only the older Volta GPUs and the high-end data center products have fast double precision, but none of these have ray tracing cores. This means using double precision is potentially really slow compared to floating point, depending on the GPUs you use.

Usually using floating point precision is good enough.
When building the scene geometry carefully it’s also possible to regain some precision.
Things like self-intersection avoidance become important when dealing with intersection precision issues.
That can be done in a totally robust but slower way by explicitly ignoring the primitive from which a ray started (which needs anyhit programs), or it can be implemented with carefully crafted offsets of the ray as shown inside the OptiX Toolkit: https://github.com/NVIDIA/optix-toolkit. Look for ShaderUtil/SelfIntersectionAvoidance.h.
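
A minimal sketch of the robust variant (the program name and the choice of payload slot 2 for the originating primitive index are mine):

```cpp
#include <optix.h>

// The ray generation program stores the index of the primitive the ray starts
// from in payload register 2; this anyhit program rejects hits on that same
// primitive. For instanced scenes you would compare optixGetInstanceId() too.
extern "C" __global__ void __anyhit__ignore_origin_primitive()
{
  if (optixGetPrimitiveIndex() == optixGetPayload_2())
  {
    optixIgnoreIntersection(); // skip the self hit, traversal continues
  }
}
```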

Since you’re then not using any of the OptiX built-in geometric primitives but provide only the AABBs for custom geometric primitives (in your case, double precision triangles), you must implement the ray-primitive intersection program yourself.

That intersection program must set the floating point ray tmax value for accepted hits accordingly to result in a proper BVH traversal, as well as return the double precision intersection attributes you require.

Mind that there are only eight 32 bit unsigned int intersection attribute registers, so you probably want to return the barycentric coordinates in double precision, which requires four of those attribute registers.

You should also return the double precision intersection distance as an attribute as well, to be able to calculate the world space hit point along your own double precision ray inside the closest hit program.
If you only provide the double precision barycentrics, you can calculate the object space hit point on the triangle, but unless you also have the object-to-world space matrices accessible in double precision somewhere, the transformation from object to world space could only use the floating point matrices on the current transform list. (None of those transforms apply when you’re not using instance acceleration structures but have the whole scene data in a single GAS in world space. That is just a remark, not a recommendation.)

That’s the same as for the built-in triangle intersection routine, which also sets the ray tmax and returns the barycentric beta and gamma values, just as floats. (optixGetTriangleBarycentrics() is just a convenience function which reads the first two intersection attribute registers into a float2. You need to mimic that, but in double precision.)
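
A sketch of that packing, using CUDA's __double2hiint/__double2loint and __hiloint2double intrinsics to split each double across two 32-bit attribute registers (the helper and program names are mine):

```cpp
#include <optix.h>

// Report a double precision hit from the intersection program: the float t
// drives the BVH traversal, while beta, gamma and t travel in double precision
// through six of the eight 32-bit attribute registers.
static __forceinline__ __device__ void reportDoubleHit(double t, double beta, double gamma)
{
  optixReportIntersection(
    static_cast<float>(t), 0u, // float tmax update, user-defined hit kind
    static_cast<unsigned int>(__double2hiint(beta)),
    static_cast<unsigned int>(__double2loint(beta)),
    static_cast<unsigned int>(__double2hiint(gamma)),
    static_cast<unsigned int>(__double2loint(gamma)),
    static_cast<unsigned int>(__double2hiint(t)),
    static_cast<unsigned int>(__double2loint(t)));
}

extern "C" __global__ void __closesthit__double_triangle()
{
  // Rebuild the doubles from the attribute registers, mimicking
  // optixGetTriangleBarycentrics() but in double precision.
  const double beta  = __hiloint2double(static_cast<int>(optixGetAttribute_0()),
                                        static_cast<int>(optixGetAttribute_1()));
  const double gamma = __hiloint2double(static_cast<int>(optixGetAttribute_2()),
                                        static_cast<int>(optixGetAttribute_3()));
  const double t     = __hiloint2double(static_cast<int>(optixGetAttribute_4()),
                                        static_cast<int>(optixGetAttribute_5()));
  // ... interpolate with (1.0 - beta - gamma, beta, gamma) and use t with your
  // own double precision ray to compute the world space hit point ...
}
```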

That intersection program belongs into the Shader Binding Table hit records for all ray types which should be able to intersect your custom triangle primitive.

There are floating point triangle intersection routines inside the legacy OptiX SDK 5.x versions from before RTX boards existed. Just download the OptiX SDK 5.1.1 version from the older versions’ download page, install it, and search the source code for intersect_triangle.

Now with regard to which OptiX application framework you should look at: I would definitely look at my OptiX Advanced Examples build environment if you plan to build a standalone OptiX application which runs from any location on disk.

Note that the OptiX SDK examples hardcode some local build paths into the executable and by default only run where they have been built, unless you set two environment variables. (Search the OptiX SDK code for getSampleDir and sampleFilePath.)

Please read these threads.
https://forums.developer.nvidia.com/t/why-am-i-getting-optix-dir-notfound/279085/4
https://forums.developer.nvidia.com/t/code-organization-and-cmake/290506
https://forums.developer.nvidia.com/t/question-about-add-a-cu-in-my-project/284761

Your main problem will be getting the ray tracing to work in double precision.
The first hurdle is to generate the render graph, then to implement the double precision programs.
So it would actually make sense to take only the CMake build environment of one of my OptiX examples and then start from scratch, copying only the minimal things you need from the example code.
That’s also a better learning experience than trying to change all the existing code.

  • Start with building custom geometric primitive GAS from your double triangle data.
  • Then put an instance acceleration structure (IAS) on top.
  • Build an array of device pointers to your vertex attribute data which can be indexed with the OptixInstance instanceId (no need to store that data inside the SBT records). That way you can keep the SBT to a single hit record.
  • Build a launch parameter structure with fields for that IAS traversable handle, a pointer to the array with the per-instance vertex attribute pointers, and a pointer to some output buffer which should receive the results (sized to the number of launch dimension elements). See the sketch after this list.
  • Implement a custom intersection program which calculates something from the vertex attribute data of the hit primitive. You need to set the optixReportIntersection arguments whenever you find a hit closer than the current ray tmax.
  • Write a closest hit program which sets some color on the per ray payload (registers).
  • Write a miss program which sets a different color on the per ray payload.
  • Write a simple ray generation program which shoots primary rays into the scene.
    That needs to write the color on the per ray payload to the output buffer (one element per launch index) at the end.
  • Build the OptixProgramGroups, and create an OptixPipeline.
  • Calculate and set the OptiX stack size of that pipeline.
  • Build the Shader Binding Table (SBT).
  • Launch your ray tracing algorithm with an optixLaunch and see if things work.
  • Change things until it works as you need.
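
A hypothetical sketch of the data layout for the third and fourth bullets (all names are mine):

```cpp
#include <optix.h>
#include <cuda_runtime.h>

struct GeometryData        // one entry per OptixInstance, indexed by instanceId
{
  const double3* positions; // device pointer to double precision vertex positions
  const uint3*   indices;   // device pointer to triangle indices
};

struct LaunchParameters
{
  OptixTraversableHandle topObject;    // the IAS traversable handle
  const GeometryData*    geometries;   // device array, indexed via optixGetInstanceId()
  float4*                outputBuffer; // one element per launch index
  uint2                  resolution;   // launch dimension
};
```

The custom intersection program can then reach the double precision vertex data of the hit primitive via something like params.geometries[optixGetInstanceId()].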

Done. Not so easy though with the double precision stuff.

While debugging the host code, enable the OptiX validation mode and set a logger callback to see if OptiX complains about any data you send to it.
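
A minimal host-side sketch of that (the function names are mine; the option fields and OPTIX_DEVICE_CONTEXT_VALIDATION_MODE_ALL are from the OptiX 7+ API):

```cpp
#include <cuda.h>
#include <optix.h>
#include <optix_stubs.h>
#include <cstdio>

// Forward every OptiX message to stderr.
static void contextLogCallback(unsigned int level, const char* tag, const char* message, void* /*cbdata*/)
{
  fprintf(stderr, "[%u][%12s]: %s\n", level, tag, message);
}

// Call after optixInit() and CUDA context creation.
static OptixDeviceContext createContext(CUcontext cuCtx)
{
  OptixDeviceContextOptions options = {};
  options.logCallbackFunction = &contextLogCallback;
  options.logCallbackLevel    = 4; // 0 = disable, 4 = also print status messages
  options.validationMode      = OPTIX_DEVICE_CONTEXT_VALIDATION_MODE_ALL;

  OptixDeviceContext context = nullptr;
  optixDeviceContextCreate(cuCtx, &options, &context); // check the OptixResult in real code
  return context;
}
```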

Models using PLY files can also be built as a hierarchy of individual *.ply models which are usually held in a respective sub-folder structure on disk (e.g. the famous Power Plant model).

My OptiX examples implement a very simple scene graph on the host which can mimic arbitrarily deep scene graph structures and then flatten that to the most efficient OptiX render graph using a single instance acceleration structure (IAS) as root and individual geometry acceleration structure (GAS) beneath that.
That happens here: https://github.com/NVIDIA/OptiX_Apps/blob/master/apps/rtigo12/src/Raytracer.cpp#L526

That’s all in floating point precision though and you would need to change all that to double!

All of my OptiX advanced examples implement more or less complex unidirectional path tracers.
While it’s simple for me to exchange the ray tracing algorithm in any of the examples, some use a simpler shader binding table (SBT) structure where fewer changes are required.

So I would recommend looking at the rtigo12 example, which doesn’t implement cutout opacity support and has one of the smallest possible SBTs. You probably need only one hit record anyway if everything behaves the same.
Please read this thread discussing SBT layouts: https://forums.developer.nvidia.com/t/sbt-theoretical-quesions/179309

Except for the intro_* and the GLTF_renderer example, all the other examples support multi-GPU rendering, so in case you’ll need that in the future, rtigo12 would be a good foundation.

It should actually be able to load PLY models already through the ASSIMP library importer I use, but I have never tested what happens with property double data. I definitely don’t support that in any of my examples, so I don’t expect it to work. You’d need to change all that code then.
