GL Interop with ARB Bindless Texture

The following code gives me the error cudaErrorInvalidResourceHandle:

GLuint texture = 0;
glCreateTextures(GL_TEXTURE_3D, 1, &texture);
glTextureStorage3D(texture, 1, GL_R32UI, 16, 16, 16);

// Creating a bindless handle locks the texture's state.
GLuint64 handle = glGetTextureHandleARB(texture);

cudaGraphicsResource_t resTex = nullptr;
// Fails with cudaErrorInvalidResourceHandle after glGetTextureHandleARB().
cudaError_t err = cudaGraphicsGLRegisterImage(&resTex, texture, GL_TEXTURE_3D, cudaGraphicsRegisterFlagsReadOnly);

Without the glGetTextureHandleARB() call, it works perfectly fine.

Is there any way to still map the texture directly?

I'm not sure whether it will work correctly, so maybe someone can shed light on this, but I could register the texture first and then get the handle.

Thread Necromancer uses Thread Resurrect. It’s super effective.

As it turns out, this is still not fixed three years later, which is inconvenient, as I need it right now. In a large visualization application with hundreds of textures (all used bindless), some may or may not be needed for CUDA processing, depending on user actions. I don't want to register all textures upon creation, since most of them will never be shared with CUDA. Yet is this the only way to go? How large is the overhead of registering a texture? Is it just a negligible struct/pointer somewhere, or is actual data copied/pinned?
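
For reference, the lazy scheme I have in mind would look roughly like this (just a sketch; registerOnDemand and the cache are illustrative names, and of course this exact approach breaks once glGetTextureHandleARB has already been called on the texture):

#include <unordered_map>
#include <cuda_gl_interop.h>  // assumes a GL loader header is included beforehand

// Illustrative cache: register a GL texture with CUDA only when it is
// first needed, so textures never touched by CUDA cost nothing.
static std::unordered_map<GLuint, cudaGraphicsResource_t> g_cudaResources;

cudaGraphicsResource_t registerOnDemand(GLuint tex, GLenum target)
{
    auto it = g_cudaResources.find(tex);
    if (it != g_cudaResources.end())
        return it->second;  // already registered, no further cost

    cudaGraphicsResource_t res = nullptr;
    // Fails with cudaErrorInvalidResourceHandle if a bindless
    // handle was already created for tex.
    if (cudaGraphicsGLRegisterImage(&res, tex, target,
            cudaGraphicsRegisterFlagsReadOnly) != cudaSuccess)
        return nullptr;

    g_cudaResources.emplace(tex, res);
    return res;
}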

Nice of you to bring this topic up.

I have a similar issue, but I want to point out that if you first call cudaGraphicsGLRegisterImage and only afterwards glGetTextureHandleARB, it doesn't fail. You might know this already, but it would be nice to know for somebody else reading this.
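
For anyone else reading this, the order that works looks roughly like this (sketch only, error checking omitted):

GLuint texture = 0;
glCreateTextures(GL_TEXTURE_3D, 1, &texture);
glTextureStorage3D(texture, 1, GL_R32UI, 16, 16, 16);

// Register with CUDA first, while the texture is still mutable.
cudaGraphicsResource_t resTex = nullptr;
cudaGraphicsGLRegisterImage(&resTex, texture, GL_TEXTURE_3D,
                            cudaGraphicsRegisterFlagsReadOnly);

// Only afterwards fetch the bindless handle; registration no longer fails.
GLuint64 handle = glGetTextureHandleARB(texture);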

The issue with this in our case is that cudaGraphicsGLRegisterImage takes about half a millisecond, while we sometimes need to create several thousand textures within one frame. For our application we simply can't afford these kinds of delays.
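
In case it helps, this is roughly how we measured it (a crude timing loop, just a sketch; the batch size is illustrative and the exact numbers will depend on driver and hardware):

#include <chrono>
#include <vector>

// Measure the average cost of registering one GL texture with CUDA.
double measureRegisterCostMs(int n = 1000)
{
    std::vector<GLuint> textures(n);
    std::vector<cudaGraphicsResource_t> resources(n, nullptr);

    glCreateTextures(GL_TEXTURE_3D, n, textures.data());
    for (GLuint tex : textures)
        glTextureStorage3D(tex, 1, GL_R32UI, 16, 16, 16);

    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < n; ++i)
        cudaGraphicsGLRegisterImage(&resources[i], textures[i], GL_TEXTURE_3D,
                                    cudaGraphicsRegisterFlagsReadOnly);
    auto t1 = std::chrono::steady_clock::now();

    return std::chrono::duration<double, std::milli>(t1 - t0).count() / n;
}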