16-bit source texture workflow

Does NVTT maintain 16-bit source data into the BC5 compressor?

Hi Jeremy! Could you elaborate a bit on what part of the pipeline you’re wondering about?

Inside the NVIDIA Texture Tools Exporter, we use 32-bit-per-channel floating-point colors all the way up to the compressor (we convert 16-bit-per-channel signed data to 32-bit-per-channel floating point when reading images from Photoshop). So, for instance, we use 32-bit floating point when generating normal maps. However, BC5 compresses to an 8-bit-per-channel color space. For potentially less quantization (but no alpha channel), BC6H decodes to 16-bit floating-point HDR data. If you have the storage space, we also support exporting uncompressed R16G16 textures.

The NVTT library works similarly: all nvtt::Surface objects and image processing (except for nvtt::Surface::quantize()) use 32-bit-per-channel floating-point RGBA colors. However, compressing usually introduces some quantization.
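To see why keeping image processing in 32-bit float until the compressor matters, here is a minimal sketch (not NVTT code; the function names are illustrative) of what happens when a low-frequency gradient is quantized to 8-bit UNORM early in the pipeline:

```python
# Illustrative sketch, not NVTT's implementation: quantizing a shallow
# 32-bit float gradient to 8-bit UNORM, the step where banding can appear.

def quantize_unorm8(x: float) -> int:
    """Round a [0, 1] float to the nearest 8-bit UNORM code."""
    return round(max(0.0, min(1.0, x)) * 255)

def dequantize_unorm8(code: int) -> float:
    """Map an 8-bit UNORM code back to [0, 1]."""
    return code / 255.0

# A slow, low-frequency gradient: 1024 samples spanning only 1/16 of [0, 1].
gradient = [i / 1023 * (1 / 16) for i in range(1024)]
codes = [quantize_unorm8(x) for x in gradient]
print(len(set(codes)))  # only 17 distinct 8-bit levels -> visible banding
```

1024 source samples collapse to 17 output levels, which is exactly the kind of stair-stepping you see on shallow gradients when quantization happens before, rather than inside, the compressor.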

Hope this helps!


That answers my question, thanks. I wasn’t sure whether source data was clamped on input or not. This presentation by Anton Kaplanyan from several years ago shows the benefits of avoiding the 8-bit-per-channel quantization step until the compressor. I was seeing some very similar banding artifacts on low-frequency gradients and wanted to request 16-bit normals from artists, but only if NVTT could maintain that precision.

Hmm, let me double-check that, in fact; there might be something here! It looks like the compressor might be converting to 8-bit UNORM at the very last stage, while the BC4 and BC5 D3D functional specs require at least UNORM16 filtering, which is what Kaplanyan is relying on. So although the endpoints are 8-bit, the interpolants might not be.
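To illustrate the endpoint-versus-interpolant distinction, here is a hedged sketch of BC4/BC5-style UNORM decoding in its 7-interpolant mode (following the D3D functional spec's formulas; the function name is mine). The two endpoints are 8-bit codes, but the six interpolated values generally fall between 8-bit steps, which is why decoding at UNORM16 or higher preserves them:

```python
# Sketch of BC4/BC5 single-channel UNORM decoding, 8-value mode (e0 > e1),
# per the D3D functional spec: endpoints are 8-bit, interpolants are
# weighted averages with denominator 7 and need >8-bit decode precision.

def bc4_unorm_palette(e0: int, e1: int) -> list:
    """The 8 palette values for 8-bit endpoint codes e0 > e1, as [0, 1] floats."""
    a, b = e0 / 255.0, e1 / 255.0
    # value_0 = e0, value_1 = e1, then (6a+b)/7, (5a+2b)/7, ..., (a+6b)/7
    return [a, b] + [((6 - i) * a + (i + 1) * b) / 7.0 for i in range(6)]

palette = bc4_unorm_palette(130, 128)
# Count palette entries that are NOT exact multiples of 1/255, i.e. values
# an 8-bit decode grid cannot represent:
off_8bit_grid = [v for v in palette if abs(v * 255 - round(v * 255)) > 1e-6]
print(len(off_8bit_grid))  # 6: every interpolant lies between 8-bit codes
```

With endpoints only two codes apart (130 and 128), all six interpolants land strictly between 8-bit levels, so an 8-bit final stage in the compressor's error metric would be measuring against a coarser grid than the hardware actually decodes to.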

I’ll check internally and get back to you on this - thanks for spotting this!


Were you able to get any further information on this?