Block Compressed Textures with OptiX: No bilinear filtering?

I’m attempting to use mipmapped block compressed textures in OptiX 7.2, specifically the BC7 and BC5 formats.

The issue is that I’m unable to enable linear filtering when using block compressed textures, due to an error from the cudaCreateTextureObject function: “linear filtering not supported for non-float type.”

The issue seems to be the cudaChannelFormatDesc that the underlying cudaArray_t / cudaMipmappedArray_t is created with. Specifically, the cudaResourceViewDesc documentation indicates that for block compressed formats the cudaChannelFormatDesc should be configured as 32 bits per channel, of unsigned format (so uint4, for BC5 and BC7). The 32-bit uint type seems to be incompatible with linear filtering, hence the error message.

Is there any workaround for this issue besides implementing bilinear / trilinear filtering myself in software? It makes block compressed textures essentially unusable by default in an OptiX-based renderer.

Another issue I’ve encountered is that the mip tail for block compressed textures doesn’t seem to work correctly - specifically, mip levels smaller than 4x4 seem to be impossible to create, because the cudaMipmappedArray_t has to be created with its dimensions divided by 4.

Neither of these issues should be a hardware limitation, as both work properly in D3D12 / Vulkan.

Hi @bps, what texture read mode are you using? My understanding is that for linear filtering to work, you’d need to use cudaReadModeNormalizedFloat, and not cudaReadModeElementType.

I’m not a CUDA textures expert by any means, but if that’s not the solution, I’ll dig around to find a better answer - though you may want to check on the CUDA forum separately (CUDA Programming and Performance - NVIDIA Developer Forums). OptiX is agnostic to texture format, as long as the format is supported by CUDA.


Hi @dhart, thanks for the response. Yep, I’m using cudaReadModeNormalizedFloat; unfortunately it doesn’t make a difference, which is in line with this comment in the CUDA documentation:

“cudaTextureDesc::readMode specifies whether integer data should be converted to floating point or not … Note that this applies only to 8-bit and 16-bit integer formats. 32-bit integer format would not be promoted…”

I also posted in the general CUDA forum you linked, although I have a pretty bad track record actually getting responses there :)

Okay, we’ll proceed with looking for a canonical answer. Detlef usually knows more about the texture formats than I do; he may be available to reply tomorrow. He confirmed your observation about the small mip tails below 4x4 in a thread here: How to create cudaTextureObject with texture raw data in Block Compressed (BC) format? - #4 by droettger


For what it’s worth, I’ve been told the CUDA team is actively adding first-class support for BCn textures into the CUDA API, and is currently in the process of addressing the issues you’ve raised. The OptiX team is unable to comment on when the changes will arrive in the API & driver, but I hope that’s some consolation. I’m not aware of any workaround for the linear filtering; I wish I had a better answer to offer. Maybe temporarily using uncompressed textures or implementing software filtering is a decent stop-gap measure until the API support is public?


Thanks for the information, it’s good to know that the CUDA team has this on their roadmap. I have implemented software filtering, but I don’t think the results are 100% as expected - I suspect CUDA may be reading the same value across the entire compressed 4x4 block, but I’m not sure (it could also be a bug in my bilinear implementation). Regardless, software filtering gets pretty unruly as soon as anisotropic filtering is involved, so I may just have to reduce texture resolution and go without compression.

Oh yeah, anisotropic filtering could be a deep rabbit hole, I guess. Ugh, sorry to hear the software filtering effort isn’t working out either, that’s frustrating. We’ll bookmark this and update you when the BC format updates in CUDA are released.