NV_DX_interop working with Intel but not NVIDIA

Hello,

I’m trying to convert a DirectX 9 surface into an OpenGL texture using NV_DX_interop; it works perfectly on Intel GPUs but not on NVIDIA, so there must be something I’m doing wrong, but I can’t figure out what. I created a simple application with an OpenGL window in which I share a DirectX 9 surface with an OpenGL texture. Each frame, I update the DirectX surface by changing the blue component (so the surface goes from black to blue over 255 frames, and so on), and then I bind the OpenGL texture on a quad that I render.

Here is part of my code (I can send the entire code if necessary) to illustrate what I’m doing:
// Once my OpenGL window is created using SDL, I create the directx surface like this:
D3DPRESENT_PARAMETERS d3dpp;
ZeroMemory(&d3dpp, sizeof(d3dpp));
d3dpp.Windowed = TRUE;
d3dpp.SwapEffect = D3DSWAPEFFECT_DISCARD;
d3dpp.hDeviceWindow = GetActiveWindow();
d3dpp.BackBufferFormat = D3DFMT_X8R8G8B8;
d3dpp.MultiSampleType = D3DMULTISAMPLE_NONE;

Direct3DCreate9Ex(D3D_SDK_VERSION, &d3d9);
d3d9->CreateDeviceEx(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, d3dpp.hDeviceWindow,
                     D3DCREATE_HARDWARE_VERTEXPROCESSING | D3DCREATE_PUREDEVICE | D3DCREATE_MULTITHREADED,
                     &d3dpp, NULL, &d3d9Device);
d3d9Device->CreateOffscreenPlainSurface(width, height, D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT,
                                        &sharedSurface, &sharedSurfaceHandle);

// Then I initialize my interop object like this:
interopHandle = wglDXOpenDeviceNV(d3d9Device);
glGenTextures(1, &texture);
wglDXSetResourceShareHandleNV(sharedSurface, sharedSurfaceHandle);
glSharedTextureHandle = wglDXRegisterObjectNV(interopHandle, sharedSurface, texture, GL_TEXTURE_2D, WGL_ACCESS_READ_WRITE_NV);

// Then in my rendering loop, I update the surface by locking it, writing the pixels, and unlocking it.
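// (A minimal sketch of that update step -- the frameCount variable and the
// pitch handling here are illustrative, not my exact code:)
D3DLOCKED_RECT lr;
if (SUCCEEDED(sharedSurface->LockRect(&lr, NULL, 0)))
{
    BYTE* row = static_cast<BYTE*>(lr.pBits);
    for (UINT y = 0; y < height; ++y, row += lr.Pitch)
    {
        DWORD* pixels = reinterpret_cast<DWORD*>(row);
        for (UINT x = 0; x < width; ++x)
            pixels[x] = D3DCOLOR_ARGB(255, 0, 0, frameCount % 256); // blue ramps 0..255 over 255 frames
    }
    sharedSurface->UnlockRect();
}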
// And finally I draw my quad like this
wglDXLockObjectsNV(interopHandle, 1, &glSharedTextureHandle);

glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
glBindTexture(GL_TEXTURE_2D, texture);
glUseProgram(program);
glUniform1i(texUniform, 0);

glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLES, 0, 6);
glBindVertexArray(0);

glUseProgram(0);
glBindTexture(GL_TEXTURE_2D, 0);

SDL_GL_SwapWindow(window);
wglDXUnlockObjectsNV(interopHandle, 1, &glSharedTextureHandle);

Everything seems to be initialized properly: I do not get any DirectX or OpenGL errors, and the interop object appears valid. Again, on Intel GPUs this code works and I can see my texture going from black to blue over and over. On NVIDIA I get a black texture, a bunch of threads seem to get created and killed, and after a few seconds I get an “NVIDIA OpenGL Driver” popup with the message “A large number of errors have been detected. […]”. There is obviously something I’m not doing correctly, but all the samples I’ve looked at over the past three days look similar to what I’m doing. If anyone could help, that would be great!

Thanks,
Cyril

So I have tried to do the same thing with a DirectX 11 texture instead, using NV_DX_interop2, and this works fine on NVIDIA. The interop and OpenGL code is the same, except that I no longer call wglDXSetResourceShareHandleNV, which is not necessary there.
So why can I not convert a DirectX 9 surface into an OpenGL texture using NV_DX_interop and the code posted above?
If someone from NVIDIA, or someone who has done this in the past, could help me understand what is going on, that would be great.
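For reference, the DirectX 11 path that works looks roughly like this (a minimal sketch with error checking omitted; the variable names and the BGRA format here are illustrative):
// Create the D3D11 device and a shareable texture.
D3D11CreateDevice(NULL, D3D_DRIVER_TYPE_HARDWARE, NULL, 0, NULL, 0,
                  D3D11_SDK_VERSION, &d3d11Device, NULL, &d3d11Context);

D3D11_TEXTURE2D_DESC desc = {};
desc.Width = width;
desc.Height = height;
desc.MipLevels = 1;
desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.Usage = D3D11_USAGE_DEFAULT;
desc.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_RENDER_TARGET;
d3d11Device->CreateTexture2D(&desc, NULL, &sharedTexture);

// Same interop calls as before, but no wglDXSetResourceShareHandleNV.
interopHandle = wglDXOpenDeviceNV(d3d11Device);
glGenTextures(1, &texture);
glSharedTextureHandle = wglDXRegisterObjectNV(interopHandle, sharedTexture, texture,
                                              GL_TEXTURE_2D, WGL_ACCESS_READ_WRITE_NV);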

I can also confirm this problem with NV_DX_interop on a 330M and a 750 Ti: black screen, and then the driver crashes about 10 frames in, throwing up an “exception” dialog.

Amazingly, the same code runs fine on AMD and Intel hardware.

I’ve isolated this problem to the CreateOffscreenPlainSurface(…) call.

If you change it to a non-lockable CreateRenderTarget(…), it then works on NVIDIA.
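To spell that out, the change is something like this (a sketch; since the surface can then no longer be locked, it has to be filled indirectly, e.g. by locking a separate offscreen surface and copying it over with StretchRect(), which is one possible approach):
// Instead of CreateOffscreenPlainSurface(...), create a non-lockable render target:
d3d9Device->CreateRenderTarget(width, height, D3DFMT_A8R8G8B8,
                               D3DMULTISAMPLE_NONE, 0,
                               FALSE, // Lockable = FALSE
                               &sharedSurface, &sharedSurfaceHandle);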

hi~
I’m also learning this material, and I’d like to know how the frame data is copied onto the texture each frame. Could you give me some help?