createTextureSamplerFromGLImage fails with RT_ERROR_INVALID_VALUE

I am trying to create a texture sampler from GL textures, and for some reason this always fails with RT_ERROR_INVALID_VALUE.

The exception thrown says:
“Invalid value (Details: Function “RTresult _rtTextureSamplerCreateFromGLImage(RTcontext, unsigned int, RTgltarget, RTtexturesampler_api**)” caught exception: Unsupported texture format. 261)”
but there is no further information about what exactly is wrong with the format.

The GL texture format is GL_RGBA, which should be supported. Any idea what might cause this or what I can check?

I am calling the function like this:

TextureSampler sampler = context->createTextureSamplerFromGLImage(textureId, RT_TARGET_GL_TEXTURE_2D);

I am also checking GL errors before calling the interop function (as recommended in this forum).
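
For reference, the check is roughly this (a minimal sketch; checkGLErrors is just my helper name):

// Drain and report all pending GL errors before the interop call.
static void checkGLErrors(const char* where)
{
    for (GLenum err = glGetError(); err != GL_NO_ERROR; err = glGetError())
        fprintf(stderr, "GL error at %s: 0x%04X\n", where, err);
}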

I am using OptiX 4.1.0 and CUDA 8.0.61_375.26 on openSUSE 13.2.

Edit: OK, it seems I mixed things up: GL_RGBA is NOT supported, right? In my case it works with GL_RGBA8…

Correct, GL_RGBA is not an internal format but a user format, which together with the data type defines the input layout of the texture data.
The internal format argument in a glTexImage call is the relevant type for interop.
If GL_RGBA happened to work as the internal format argument in OpenGL, it was interpreted in the sense of “number of components” without a precision qualifier, as in legacy OpenGL 1.0 times. That form is better avoided for any texture internal format argument nowadays; instead prefer the formats with an explicit precision qualifier in the name.
The supported ones are listed in the OptiX Programming Guide, Appendix A, “Supported Interop Texture Formats”.
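
For example, in a glTexImage2D call it is the internal format argument that interop cares about (a minimal sketch; width, height and pixels are placeholders):

glTexImage2D(GL_TEXTURE_2D,
             0,                 // mipmap level
             GL_RGBA8,          // internal format: this is what interop checks, use a sized one
             width, height,
             0,                 // border, must be 0
             GL_RGBA,           // user format describing the layout of the source data
             GL_UNSIGNED_BYTE,  // data type of the source data
             pixels);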

Thanks, makes sense.

I was able to create the sampler for the GL texture; however, there still seems to be a problem with sampling higher mipmap levels through the resulting sampler.

I tried to reproduce the problem with a minimal setup where I generate an OpenGL texture with mipmaps manually. In the CUDA program I do the following:

float4 texColor = optix::rtTex2DLod<float4>(textureUnit0, texcoord0.x, texcoord0.y, 4.0f); // explicitly sample mipmap level 4

I attached a screenshot of the results.

In the left image I am creating the sampler directly with createTextureSamplerFromGLImage() as above. For some reason I get these weird results where, half of the time, it looks like a lower mipmap level (or even the first level) is sampled. Sampling the first (original-sized) level works without any artifacts.

In the right image I copy the texture and all its mipmap levels to a buffer, which I then set on the sampler via sampler->setBuffer(). There everything seems to be OK and the correct mipmap level is sampled every time.
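
The copy path looks roughly like this (a simplified sketch, assuming the OptiX 4 mipmapped buffer API with setMipLevelCount()/map(level); tex, width, height and levels come from my setup):

// Create a mipmapped RGBA8 input buffer matching the GL texture.
optix::Buffer buffer = context->createBuffer(RT_BUFFER_INPUT, RT_FORMAT_UNSIGNED_BYTE4, width, height);
buffer->setMipLevelCount(levels);

glBindTexture(GL_TEXTURE_2D, tex);
for (unsigned int level = 0; level < levels; ++level)
{
    // Read each GL mipmap level back into the corresponding buffer level.
    void* dst = buffer->map(level);
    glGetTexImage(GL_TEXTURE_2D, level, GL_RGBA, GL_UNSIGNED_BYTE, dst);
    buffer->unmap(level);
}

sampler->setBuffer(buffer); // no GL interop involved in this path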

Do you know why this can happen when I use the createTextureSamplerFromGLImage() function?
snapshot1.png

OK, that would need a reproducer to investigate.
The only OptiX SDK example I know of that uses mipmaps is the OptiX 3.9.1 rayDifferentials example, but that one does not use OpenGL interop to create the OptiX texture samplers.

You could try to generate an OptiX API Capture (OAC) of the failing case, and if that successfully reproduces the failure in-house here, there wouldn't be a need for a standalone reproducer project.
Please have a look at this thread's last post for instructions on how to enable that:
https://devtalk.nvidia.com/default/topic/803116/?comment=4436953

Or if you have a minimal reproducer project in source form anyway, that would be helpful to look at.

Thanks for your answer. I recreated the problem using the optixTextureSampler sample, so it should be easy for you to reproduce.

The sample generates a mipmap texture in OpenGL. All the mipmap levels have distinctive colors, and only the first one is pink. The program samples all mipmap levels from left to right.
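
The generation is roughly this (a simplified sketch; the base size and exact colors are placeholders, the real values are in the attached sample):

const GLubyte colors[4][4] = { {255,   0, 255, 255},   // level 0: pink
                               {  0, 255,   0, 255},   // green
                               {  0,   0, 255, 255},   // blue
                               {255,   0,   0, 255} }; // red
for (unsigned int level = 0, size = 256; size > 0; ++level, size /= 2)
{
    // Level 0 is pink, the remaining levels cycle through green, blue, red.
    const GLubyte* c = colors[level == 0 ? 0 : 1 + (level - 1) % 3];
    std::vector<GLubyte> pixels(size * size * 4);
    for (size_t i = 0; i < pixels.size(); i += 4)
    {
        pixels[i + 0] = c[0]; pixels[i + 1] = c[1];
        pixels[i + 2] = c[2]; pixels[i + 3] = c[3];
    }
    // One upload per mipmap level, with the sized internal format GL_RGBA8.
    glTexImage2D(GL_TEXTURE_2D, level, GL_RGBA8, size, size, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
}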

The upper part of the resulting image uses a sampler created with GL interop. The lower part uses a sampler initialized by copying the whole texture to a buffer.

When you start the sample, you can see that using the sampler created with interop results in sampling the first mipmap level half of the time.

I also attached a screenshot of how the result looks here on my PC.
optixTextureSampler.zip (8.35 KB)

Seems to work for me. The resulting image shows the exact same pattern of nine vertical stripes (pink, g, b, r, g, b, r, g, b) in both the top and bottom halves of the image, which matches the colors of mipmap levels 0 to 8 from left to right; I verified this with a more colorful initialization.

That means this becomes a question of system configuration.
I’ve run it on Windows 10, Quadro P6000, 378.66 display drivers.

In addition to your given configuration information (OptiX 4.1.0, CUDA 8.0.61_375.26, openSUSE 13.2), what are your installed GPU(s)?
Could you also please attach an image of the incorrect result you’re seeing?
Have you tried updating the display driver?

(Note that you used four times too much data for the OpenGL texture initialization. The address and offset calculations there are not on GLubytes as in the host buffer case, but on GLuints already.)
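
In other words (a hypothetical fill loop; data, width and height stand in for the names in your sample):

GLuint* data = new GLuint[width * height];  // one packed RGBA8 texel per GLuint, not per byte
for (unsigned int i = 0; i < width * height; ++i)
    data[i] = 0xFF00FFFFu;                  // byte order within the GLuint depends on the upload type and endianness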

Thanks for your fast answer. Good to hear that this might only be a local problem.

I have a GTX 970 running with 367.57 display drivers.

In the screenshot you can see my incorrect result. I will try to update the drivers as soon as possible and will report back with the results.

(Thanks also for the hint about the data waste; it was a quick example anyway, but good to know.)
snapshot2.png

Unfortunately, updating to the latest graphics driver 384.59 did not solve the problem. I will try to run it on a coworker's PC tomorrow.

OK, I tried it on four additional machines today; the result is the following:

GTX 970 - not working
GTX 1080 - not working
Quadro M2000M - working
Quadro K4100M - working

So it seems like it only works properly on Quadro GPUs. All others show the artifacts I posted earlier. Is there a chance this might get fixed in future releases?

Thanks for testing.
Given that it works on specific configurations, this sounds more like a driver than an OptiX issue.
I filed a bug report against OptiX 4.1.0 for initial triage.

Thanks for your help!