VXGI and GVDB

I got to GVDB via OpenVDB, and to OpenVDB from Voxel Cone Tracing, so in my mind at least there’s a very close relationship between what’s become NVIDIA VXGI and the newer NVIDIA GVDB. I notice that Cyril Crassin is at NVIDIA now too, so I’m hoping you’re all sitting around chatting about the scope for the two projects to work together. If you are, I’d appreciate any insight into where you’re at, as I’m not keen to reinvent any wheels; and if you’re not, please find a mutually convenient bar! I’m very much looking forward to the fruits of this collaboration.

GVDB Voxels and VXGI serve two quite different needs.

VXGI is intended for real-time, interactive lighting in games via voxel cone tracing. It is embedded in the graphics pipeline (DX/GL) and achieves very fast polygon-to-voxel conversion, since objects in the game scene may be moving. Rendering is designed to integrate with game engines in both performance and quality.

GVDB Voxels is a framework for voxel storage, compute, and rendering. A typical use case for GVDB is fluid simulation for motion pictures (not so for VXGI). GVDB was designed to be generic and flexible. For example, unlike VXGI, polygon-to-voxel conversion in GVDB was designed to be exact and out-of-core, and it uses CUDA rather than the graphics pipeline. While one could implement cone tracing in GVDB, it would be slower in a gaming context than VXGI, but may have other benefits. These APIs have different uses.
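To make the polygon-to-voxel step concrete, here is a toy Python sketch of sample-based triangle voxelization. It just scatters barycentric sample points over a triangle and records which voxels they land in; GVDB’s actual path is exact and out-of-core (and CUDA-based), so treat the function and its parameters purely as illustration, not as anything from the GVDB API.

```python
import math

def voxelize_triangle(tri, voxel_size, samples=64):
    """Naive sample-based voxelization: scatter barycentric sample
    points over the triangle and record the voxel each lands in.
    (Illustrative only -- not GVDB's exact, out-of-core method.)"""
    (ax, ay, az), (bx, by, bz), (cx, cy, cz) = tri
    voxels = set()
    for i in range(samples):
        for j in range(samples - i):
            u = i / samples
            v = j / samples
            w = 1.0 - u - v  # barycentric weights sum to 1
            px = w * ax + u * bx + v * cx
            py = w * ay + u * by + v * cy
            pz = w * az + u * bz + v * cz
            voxels.add((int(math.floor(px / voxel_size)),
                        int(math.floor(py / voxel_size)),
                        int(math.floor(pz / voxel_size))))
    return voxels

# A triangle lying in the z = 0 plane, voxelized at 0.25 resolution.
tri = ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
touched = voxelize_triangle(tri, voxel_size=0.25)
```

A real voxelizer would use an exact triangle–box overlap test per voxel rather than point sampling, which can miss thin slivers at coarse sample counts.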

I don’t suppose anyone has an open-source implementation of cone tracing over GVDB? I’ve been trying to get a simulation-quality SVOCT (sparse voxel octree cone tracing) project working for quite some time now with little luck.

Hi David - you’ve probably already stumbled on Cyril Crassin’s GigaVoxels (it predates both GVDB and OpenVDB), but just in case you haven’t seen it, it’s at http://gigavoxels.inrialpes.fr/ and it’s well worth a look.

The brilliant Mr Crassin is squirrelled away inside NVIDIA somewhere, so I kinda hope that’s what he’s doing in his spare time, but as (the equally brilliant!) Rama points out above, there’s a mismatch in the goals of the different projects.

Still, we live in hope…!

I started from his PhD dissertation, actually, and following his work is what led me to GVDB.

My problem revolves around non-visible light simulation (IR, UV, RF, etc.) where aliasing is pretty much the death knell of fidelity. The other problem is that I’m dealing with scenes that can run into hundreds of billions of polygons and visible distances measured in kilometers to tens of kilometers.

Thus I’m forced into out-of-core no matter what, and the complication of trying to voxelize and cone-trace has so far stumped me.

With any luck I can get GVDB to voxelize and handle the memory, but I was hoping I could adapt VXGI to handle the actual cone tracing.

Hi David and Mike,

The idea of storage with GVDB and cone tracing with VXGI is a nice one, but unlikely to happen soon. First, VXGI achieves its global-illumination magic using a proprietary, highly optimized internal data structure, over which it is very natural and efficient to do cone tracing. Because of this it is unlikely that VXGI would read directly from another data structure in memory, such as GVDB. GVDB, on the other hand, was designed to be a generic voxel storage mechanism.

One option would be to store data in GVDB and then transfer it to VXGI just for rendering. However, this won’t happen soon, since the transfer is not defined, and it would also require you to store the volume data twice (once in GVDB, again in VXGI).

A better option is to use GVDB for storage and perform the cone-tracing algorithm in GVDB itself. The main issue there is that cone tracing depends on access to level-of-detail (LOD) data, that is, voxel data downsampled at multiple resolutions. This LOD storage is not supported in GVDB 1.0.
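For a sense of why cone tracing needs that LOD pyramid, here is a small hypothetical Python sketch (the grid, mip builder, and all parameters are invented for illustration, not from GVDB or VXGI): it downsamples a density grid into a mip pyramid, then marches a cone front to back, sampling the mip level whose voxel footprint matches the cone’s diameter.

```python
import math

def build_mips(grid, size):
    """Build a pyramid of downsampled grids by 2x2x2 averaging.
    `grid` is a dict {(i, j, k): density} at the finest level;
    missing keys are implicitly zero."""
    levels = [grid]
    s = size
    while s > 1:
        s //= 2
        coarse = {}
        for (i, j, k), d in levels[-1].items():
            key = (i // 2, j // 2, k // 2)
            coarse[key] = coarse.get(key, 0.0) + d / 8.0
        levels.append(coarse)
    return levels

def cone_trace(levels, origin, direction, half_angle, t_max):
    """March a cone front-to-back; at each step sample the mip level
    whose voxel size matches the cone diameter, accumulating occlusion."""
    occlusion = 0.0
    t = 1.0  # start one voxel out from the apex
    while t < t_max and occlusion < 0.99:
        diameter = max(1.0, 2.0 * t * math.tan(half_angle))
        level = min(len(levels) - 1, int(math.log2(diameter)))
        cell = 2 ** level  # voxel size at this mip level
        p = tuple(o + t * d for o, d in zip(origin, direction))
        key = tuple(int(math.floor(c / cell)) for c in p)
        density = levels[level].get(key, 0.0)
        alpha = min(1.0, density * diameter)  # crude opacity per step
        occlusion += (1.0 - occlusion) * alpha
        t += diameter * 0.5  # step scales with the cone footprint
    return occlusion

# Toy scene: an 8^3 grid with a solid slab at x in [4, 6).
grid = {(i, j, k): 1.0 for i in (4, 5) for j in range(8) for k in range(8)}
levels = build_mips(grid, 8)
occ_hit = cone_trace(levels, (0.5, 4.0, 4.0), (1.0, 0.0, 0.0), 0.05, 10.0)
occ_miss = cone_trace(levels, (0.5, 0.5, 0.5), (0.0, 0.0, 1.0), 0.05, 7.0)
```

The key point is the `level` selection: a wide cone far from its apex reads one coarse, pre-averaged voxel instead of thousands of fine ones, which is exactly the data GVDB 1.0 does not yet store.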

Performing cone-tracing and light baking is a goal of GVDB. At present it’s a future goal. (Of course, the GVDB code is online, so anything is possible. :)

If the goal is just to cone-trace very large polygonal scenes, you may have better luck using an out-of-core acceleration structure to store your polygons in a “scene map” – a uniform division of 3D space, where each cube region contains a list of polygons. Then you use VXGI to dynamically regenerate the cone-trace volume as the camera passes through the scene map and activates new space. VXGI is fast enough at generation that it should handle dynamic loading, and you can help it by using smaller partitions.
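A minimal sketch of such a scene map, assuming a simple spatial hash keyed by integer cell coordinates (the class, cell size, and polygon ids are all illustrative, not part of GVDB or VXGI):

```python
import math
from collections import defaultdict

CELL = 100.0  # meters per scene-map cell (illustrative choice)

def cells_for_aabb(lo, hi):
    """All scene-map cells overlapped by an axis-aligned bounding box."""
    lo_c = [int(math.floor(v / CELL)) for v in lo]
    hi_c = [int(math.floor(v / CELL)) for v in hi]
    return [(i, j, k)
            for i in range(lo_c[0], hi_c[0] + 1)
            for j in range(lo_c[1], hi_c[1] + 1)
            for k in range(lo_c[2], hi_c[2] + 1)]

class SceneMap:
    """Uniform division of space: each cell holds the ids of the
    polygons whose bounding boxes touch it."""
    def __init__(self):
        self.cells = defaultdict(list)

    def insert(self, poly_id, lo, hi):
        for c in cells_for_aabb(lo, hi):
            self.cells[c].append(poly_id)

    def active_polys(self, cam_pos, radius):
        """Polygon ids in every cell within `radius` of the camera --
        the working set you would hand to VXGI for revoxelization."""
        lo = [p - radius for p in cam_pos]
        hi = [p + radius for p in cam_pos]
        seen = set()
        for c in cells_for_aabb(lo, hi):
            seen.update(self.cells.get(c, ()))
        return seen

sm = SceneMap()
sm.insert("near_tri", (10.0, 10.0, 10.0), (20.0, 20.0, 20.0))
sm.insert("far_tri", (5000.0, 0.0, 0.0), (5010.0, 10.0, 10.0))
active = sm.active_polys((0.0, 0.0, 0.0), 150.0)
```

As the camera moves, only newly activated cells need their polygons streamed in and revoxelized, which keeps the per-frame work bounded by the local partition size rather than the full scene.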

Regards,
Rama