I am creating mipmapped textures in OptiX 3.9.1 and would like to ask a few questions about those.
I noticed that, as of OptiX 3.9, mipmaps are defined on the buffer rather than on the sampler. (And if I’ve understood the posts on this forum correctly, mipmaps were not really supported before OptiX 3.9.)
The docs state that in OptiX 3.9 rtTextureSamplerSetMipLevelCount was deprecated and rtBufferSetMipLevelCount was introduced.
However, I am still able to call TextureSamplerObj::setMipLevelCount, which uses rtTextureSamplerSetMipLevelCount. According to the docs this method is only deprecated as of OptiX 4.0.
Calling it results in an “OptiX Error: Unknown error”. Could this be a bug?
In the sample rayDifferentials, which uses mipmaps, there are no references to setMipLevelCount at all. Instead, the desired number of mip levels is passed as a parameter to ContextObj::create2DLayeredBuffer. Is there a reason why ContextObj::createBuffer does not support this same approach?
Also I would be very happy if someone would be so kind as to describe the use of the following three TextureSamplerObj methods in a little more detail than the docs do: setMaxAnisotropy, setMipLevelClamp, and setMipLevelBias.
Regarding TextureSamplerObj::setMipLevelCount, the “Unknown error” is because you’re passing in a value other than 1 for the number of mip levels. I would treat this as a bug in the docs you linked, which should probably say that both the C and C++ versions of this call are deprecated in 3.9, not 4.0. However, I don’t know if I’m going to fix the docs for 3.9 at this point since we probably will not do another release of that branch.
For your question about createBuffer not taking a mip levels argument: that is covered by the existing createMipmappedBuffer. Because of its name, it doesn’t appear next to createBuffer in the Doxygen-generated HTML, so I recommend consulting the header (optixu/optixpp_namespace.h) directly.
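To make that concrete, a hedged sketch of what the call looks like in the C++ wrapper. I haven’t verified the exact parameter order against the 3.9 header, so do check optixu/optixpp_namespace.h; the overload shown here is an assumption:

```cpp
// Sketch: creating a 2D mipmapped buffer in the OptiX 3.9 C++ wrapper.
// Parameter order is an assumption -- verify against optixpp_namespace.h.
unsigned int width = 512, height = 512;
unsigned int levels = 10;  // log2(512) + 1 for a full pyramid

optix::Buffer tex_data = context->createMipmappedBuffer(
    RT_BUFFER_INPUT, RT_FORMAT_UNSIGNED_BYTE4, width, height, levels);

optix::TextureSampler sampler = context->createTextureSampler();
sampler->setBuffer(tex_data);  // the mip level count now comes from the buffer
```

The point being that, post-3.9, the level count travels with the buffer, so the sampler never needs a setMipLevelCount call.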
I’ll post info about the 3 TextureSamplerObj methods separately.
The texture sampling params have direct correspondences to CUDA:
- setMaxAnisotropy → cudaTextureDesc::maxAnisotropy
- setMipLevelClamp → cudaTextureDesc::minMipmapLevelClamp, maxMipmapLevelClamp
- setMipLevelBias → cudaTextureDesc::mipmapLevelBias
These are also very similar to the texture filtering parameters in OpenGL/Direct3D; searching for “max anisotropy” will turn up plenty of references.
I’ll sketch out the other two, but please find more official references for these :)
Mipmap level bias moves each texture lookup “up” the mip pyramid, toward coarser levels, by the given amount. Increase it to blur the result. The default is 0.0.
Mipmap level clamp forces lookups to stay between two levels of the pyramid. For example, min = 0, max = 0 forces all lookups to use the base level 0 (the highest-resolution level), which would alias on a ground plane. The default is (0, 1000) or some other large number for the max.
A good way to debug these is to load mipmaps with a unique color at each level, e.g., on the ground plane of the ray differentials sample.
Thanks for your explanations.
It’s good to know what values should be used for parameters to obtain a desired effect.
Also, I had indeed overlooked createMipmappedBuffer in the documentation. Using that I’ve got it working.
(I wrote a thank you reply before, but I see I failed to correctly post it.)