Parallel Encoding with D3D11 Texture as Input


I am trying to encode in parallel on the two NVENC chips of a GTX 1080, using a D3D11 texture as the input. I previously made this work with an OpenGL to CUDA to NVENC pipeline, but something in the encoder seems to get corrupted when I try the same with a D3D11 texture, possibly because the D3D11 device context is not thread safe.

I tried creating two separate devices, so that two separate device contexts would be used inside the NVENC code I cannot see. This works to some extent, but encoding takes longer and longer as more frames are processed; the slowdown is random and occurs outside my code, inside NVENC. Figuring that two devices might confuse the scheduling, I then tried running with just one device, but unless I put a lock around the encoding calls, the process crashes or hangs in some of the NVENC API calls. Either way, I cannot get any gain from using the two NVENC chips.

My question is whether it is possible to use NVENC in parallel with D3D11 textures as input at all.