I’m working on an OpenGL engine that provides interop with OptiX. It was working with OptiX 3.8 and CUDA 7.0.
I’ve upgraded to OptiX 3.9 and CUDA 7.5, and some of my examples now throw the exception shown in the title.
I’m assuming it is texture-related, and the textures do come from an OpenGL context.
I’ve read in the OptiX docs that OptiX picks up all the sampling info from the GL texture, but my engine uses separate GL sampler objects, so the texture object itself is left with its default sampling settings.
Could this be the issue? What’s new in 3.9 that is causing this exception?
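For reference, the interop path looks roughly like this (a minimal sketch, not my actual engine code; the texture target, format, and loader are placeholders):

```cpp
#include <GL/glew.h>            // any GL 3.3+ loader; assumed initialized
#include <optix.h>
#include <optix_gl_interop.h>

// Sketch of the setup described above: the GL texture keeps its default
// sampling state because filtering is configured on a GL sampler object.
void createInteropSampler(RTcontext context, int width, int height,
                          const void* pixels)
{
    GLuint glTex = 0;
    glGenTextures(1, &glTex);
    glBindTexture(GL_TEXTURE_2D, glTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    // Filtering lives on a separate GL sampler object, not on the texture.
    GLuint glSampler = 0;
    glGenSamplers(1, &glSampler);
    glSamplerParameteri(glSampler, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glSamplerParameteri(glSampler, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    // Hand the GL texture to OptiX.
    RTtexturesampler sampler = 0;
    rtTextureSamplerCreateFromGLImage(context, glTex,
                                      RT_TARGET_GL_TEXTURE_2D, &sampler);
}
```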
Related to texturing, OptiX 3.9.0 added support for mipmaps, cubemaps, and layered textures.
The OptiX API always allowed specifying them, but before 3.9.0 only LOD 0 was actually used.
The “2” in the exception message is the value of the RTfiltermode enumerator RT_FILTER_NONE.
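For reference, this is how RTfiltermode is declared in the OptiX 3.x headers, which is where the 2 comes from:

```cpp
// RTfiltermode from the OptiX 3.x headers; RT_FILTER_NONE is the third
// enumerator, which is the "2" reported in the exception message.
typedef enum
{
  RT_FILTER_NEAREST,  /* 0 */
  RT_FILTER_LINEAR,   /* 1 */
  RT_FILTER_NONE      /* 2 */
} RTfiltermode;
```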
OptiX doesn’t read all texture object state from OpenGL. The programming guide says “OptiX automatically detects the size, texture format, and number of mip map levels of a texture”. Note that this does not mention the texture filtering mode.
For OptiX texture samplers with mipmaps, please call rtTextureSamplerSetFilteringModes() with the last argument, mipmapping, set to RT_FILTER_NEAREST or RT_FILTER_LINEAR instead of the default RT_FILTER_NONE.
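A minimal sketch of that call (the minification and magnification modes here are just example choices; match them to your GL sampler state):

```cpp
// For a sampler whose texture carries mipmaps, the mipmapping filter must
// not stay at the default RT_FILTER_NONE.
RTresult res = rtTextureSamplerSetFilteringModes(
    sampler,
    RT_FILTER_LINEAR,   // minification (example choice)
    RT_FILTER_LINEAR,   // magnification (example choice)
    RT_FILTER_LINEAR);  // mipmapping: RT_FILTER_NEAREST or RT_FILTER_LINEAR
```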