Display GVDB Voxels/Volumes in Omniverse in Real Time

Hello,

I have a real-time voxel simulation running in GVDB that I would like to display in Omniverse in real time. For this simulation, we produce a solid surface that changes approximately every frame. From some discussions on Discord, it seems like the only option is to run the whole simulation and write the volume to a file, which is slower than I’d like.

Are there any options for updating voxels in real time at the moment, such as with CUDA? If not, are there any options for custom rendering, such as writing my own Hydra render delegate and using it only for certain objects in the scene, or somehow communicating with the RTX renderer? Or is there any other way to do dynamic geometry, since the volume I would like to render doesn’t need to be transmissive?

Thanks,
Patrick

Hello @VIII! I reached out to the development team to help you with your questions!

Hi @VIII, we don’t currently have a path from GVDB to the RTX renderer.

The only path would be through NanoVDB, but this is currently used only in a ‘static’ way where volumes are stored on disk and loaded through the UsdVolume schema.
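For reference, the static path looks roughly like the sketch below. This is a minimal example using the stock UsdVol schema; the stage name, prim paths, file name, and grid name are all placeholders, not a confirmed Omniverse workflow:

```python
# Minimal sketch of the 'static' UsdVolume path: a Volume prim whose
# density field points at a VDB file on disk. All paths and names here
# are placeholders.
from pxr import Usd, UsdVol

stage = Usd.Stage.CreateNew("volume_example.usda")
volume = UsdVol.Volume.Define(stage, "/World/Volume")

# A field asset referencing one grid inside a .vdb file on disk.
field = UsdVol.OpenVDBAsset.Define(stage, "/World/Volume/density")
field.CreateFilePathAttr().Set("./sim_frame_0001.vdb")
field.CreateFieldNameAttr().Set("density")

# Bind the field to the volume under the name "density".
volume.CreateFieldRelationship("density", field.GetPath())
stage.GetRootLayer().Save()
```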

Since you only need opaque rendering, you could run marching cubes on the volume and output the geometry directly to a USD mesh primitive. One way to do this is through a custom OmniGraph node, where you wrap your GVDB simulation and essentially get it to pass triangle vertices/indices out of the node and into a mesh primitive (see the sketch below).
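On the mesh side, something like the following could push each frame’s triangles into a Mesh prim. The stage handle, prim path, and the (verts, tris) arrays coming out of your marching-cubes pass are assumptions:

```python
# Sketch: push one frame of marching-cubes output into a UsdGeom.Mesh.
# "stage", the prim path, and the (verts, tris) layout are assumptions.
from pxr import UsdGeom, Vt, Gf

def update_mesh(stage, verts, tris, path="/World/SimSurface"):
    """verts: N x 3 floats, tris: M x 3 vertex indices."""
    mesh = UsdGeom.Mesh.Define(stage, path)
    mesh.GetPointsAttr().Set(
        Vt.Vec3fArray([Gf.Vec3f(*map(float, v)) for v in verts]))
    # Marching cubes emits triangles, so every face has three vertices;
    # rewriting these attributes each frame is what changes the topology.
    mesh.GetFaceVertexCountsAttr().Set(Vt.IntArray([3] * len(tris)))
    mesh.GetFaceVertexIndicesAttr().Set(
        Vt.IntArray([int(i) for tri in tris for i in tri]))
```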

Since users can currently only write OmniGraph (OG) nodes in Python, you could take a look at Warp for an example of how we do this: warp/OgnDeform.py at main · NVIDIA/warp · GitHub
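A Python OG node in that style boils down to a compute function that fills the node’s output attributes. Here is a rough sketch; the attribute names (inputs.time, outputs.points, and so on) are assumptions that would be declared in a matching .ogn file, and run_gvdb_step / marching_cubes are hypothetical stand-ins for your simulation and surfacing code:

```python
# Sketch of a Python OmniGraph node in the style of Warp's OgnDeform.py.
# Attribute names are assumptions, to be declared in a matching .ogn file.
import numpy as np

def run_gvdb_step(time):
    """Hypothetical: advance the GVDB simulation (e.g. via a CUDA binding)."""
    raise NotImplementedError

def marching_cubes(volume):
    """Hypothetical: extract an iso-surface, returning (verts, tris)."""
    raise NotImplementedError

class OgnGvdbSurface:
    @staticmethod
    def compute(db) -> bool:
        verts, tris = marching_cubes(run_gvdb_step(db.inputs.time))

        # Flat arrays matching the usual mesh-attribute layout; a downstream
        # node can copy these into a Mesh primitive as shown earlier.
        db.outputs.points = verts.astype(np.float32)
        db.outputs.faceVertexIndices = tris.reshape(-1).astype(np.int32)
        db.outputs.faceVertexCounts = np.full(len(tris), 3, dtype=np.int32)
        return True
```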

You can change topology on the fly, so marching-cubes output should also work.

Hope that helps.

Miles
