Optix 6.0 rtContextSetD3D11Device fails

I’m trying to set up D3D11 interop with OptiX 6.0 in the context of a progressive lightmapper, but I’m failing at the very first step: rtContextSetD3D11Device returns RT_ERROR_INVALID_VALUE.

Is D3D11 interop supported in OptiX 6.0?

Here’s the OptiX code:

SD3d11Context d3d11 = create_d3d11_device();

optix::TContextPtr pContext = optix::createContext(optix::ERtxMode::Yes);

RTcontext ctx = pContext->getUnderlying();
RTresult rtr = rtContextSetD3D11Device(ctx, d3d11.mpDevice);
// rtr == RT_ERROR_INVALID_VALUE

I’ve tried creating the D3D11 device both with and without a swap chain, in case that matters:

struct SD3d11Context
{
    ID3D11Device *        mpDevice    { nullptr };
    ID3D11DeviceContext * mpContext   { nullptr };
    HWND                  mWindow     {};
    IDXGISwapChain *      mpSwapChain { nullptr };
};

//#define CREATE_D3D11_SWAPCHAIN

SD3d11Context create_d3d11_device()
{
    SD3d11Context result;

    D3D_FEATURE_LEVEL featureLevels[] = {D3D_FEATURE_LEVEL_11_1};
    D3D_FEATURE_LEVEL resultFeatureLevel;

#ifdef CREATE_D3D11_SWAPCHAIN
    HINSTANCE mod = GetModuleHandle(NULL);
    result.mWindow = CreateWindowEx(
        WS_EX_APPWINDOW,
        "Static",
        "Test D3D Window",
        WS_DISABLED | WS_POPUP,
        0, 0, 1, 1, //a 1x1 window at (0,0)
        NULL, NULL, //no parent and no menu
        mod,
        NULL
    );
    if (result.mWindow == NULL)
    {
        uint32_t lastError = GetLastError();
        std::string errMsg = std::system_category().message(lastError);
        LOG_ERROR("Failed to create window: ", errMsg);
        return result;
    }

    DXGI_SWAP_CHAIN_DESC swapChainDesc;
    ZeroMemory(&swapChainDesc, sizeof(DXGI_SWAP_CHAIN_DESC));
    swapChainDesc.BufferCount        = 1;
    swapChainDesc.BufferDesc.Format  = DXGI_FORMAT_R8G8B8A8_UNORM;
    swapChainDesc.BufferDesc.Width   = 1;
    swapChainDesc.BufferDesc.Height  = 1;
    swapChainDesc.BufferUsage        = DXGI_USAGE_RENDER_TARGET_OUTPUT;
    swapChainDesc.OutputWindow       = result.mWindow;
    swapChainDesc.SampleDesc.Count   = 1;
    swapChainDesc.SampleDesc.Quality = 0;
    swapChainDesc.Windowed           = TRUE;

    HRESULT hr = D3D11CreateDeviceAndSwapChain(
        nullptr,                    // IDXGIAdapter
        D3D_DRIVER_TYPE_HARDWARE,   // Driver Type
        nullptr,                    // NULL unless driver type is software
        0,                          // Device creation flags
        featureLevels,              // Pick from the default feature levels
        1,                          // Number of feature levels
        D3D11_SDK_VERSION,          // SDK Version
        &swapChainDesc,             // Description of the swap chain
        &result.mpSwapChain,        // out swap chain pointer
        &result.mpDevice,           // out device pointer
        &resultFeatureLevel,        // Feature level
        &result.mpContext           // Device context
    );
#else
    HRESULT hr = D3D11CreateDevice(
        nullptr,                  // IDXGIAdapter
        D3D_DRIVER_TYPE_HARDWARE, // Driver Type
        nullptr,                  // NULL unless driver type is software
        0,                        // Device creation flags
        featureLevels,            // Pick from the default feature levels
        1,                        // Number of feature levels
        D3D11_SDK_VERSION,        // SDK Version
        &result.mpDevice,         // out device pointer
        &resultFeatureLevel,      // Feature level
        &result.mpContext         // Device context
    );
#endif
    if (FAILED(hr))
    {
        LOG_ERROR("Failed to create D3D11 device: ", std::system_category().message(hr));
        result.mpDevice  = nullptr;
        result.mpContext = nullptr;
    }

    return result;
}

Here’s the system information:

[1][SYS INFO    ]
OptiX Version:[6.0.0] Branch:[r419_50] Build Number:[26129760] CUDA Version:[cuda100] 64-bit
Display driver: 425.31
Devices available:
CUDA device: 0
    0000:9E:00.0
    GeForce RTX 2080
    SM count: 46
    SM arch: 75
    SM clock: 1815 KHz
    GPU memory: 8192 MB
    TCC driver: 0
    Compatible devices: 0

No, the D3D interop functionality was removed in OptiX 4.0.0 (not a typo).

You can work around it by going through the CUDA interop.

The gist of it is that you create your DX11 resource, register it with CUDA, map the CUDA resource and pass the resulting pointer to an OptiX buffer. There are a couple of nitty-gritty details in how you create a resource that is compatible with DX11, CUDA and OptiX that I can’t remember anymore, but you can find my adaptor code here: Bifrost3D/Adaptor.cpp at master · papaboo/Bifrost3D · GitHub. Hopefully that clears some of it up. Readability tip: in the code, OFoo is just an owned Foo* resource.
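Roughly, the flow looks like this. Note this is a minimal sketch rather than the Bifrost3D code: the helper name is made up, it reuses the ctx and d3d11 objects from the original post, and the buffer flags and format are plausible assumptions; error handling and unregister/cleanup are omitted.

#include <d3d11.h>
#include <cuda_runtime.h>
#include <cuda_d3d11_interop.h>
#include <optix.h>

// Hypothetical helper: creates a D3D11 buffer and wires it up as an OptiX output buffer.
RTbuffer create_shared_lightmap_buffer(RTcontext ctx, ID3D11Device* pDevice,
                                       unsigned width, unsigned height)
{
    // 1) Create a D3D11 buffer that CUDA can register
    //    (flags are an assumption, not necessarily what Bifrost3D uses).
    D3D11_BUFFER_DESC desc = {};
    desc.ByteWidth = width * height * 4 * sizeof(float); // RGBA32F texels
    desc.Usage     = D3D11_USAGE_DEFAULT;
    desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
    ID3D11Buffer* pD3DBuffer = nullptr;
    pDevice->CreateBuffer(&desc, nullptr, &pD3DBuffer);

    // 2) Register the D3D11 resource with CUDA.
    cudaGraphicsResource* cudaResource = nullptr;
    cudaGraphicsD3D11RegisterResource(&cudaResource, pD3DBuffer,
                                      cudaGraphicsRegisterFlagsNone);

    // 3) Map the resource and fetch its CUDA device pointer.
    cudaGraphicsMapResources(1, &cudaResource, 0);
    void*  devPtr  = nullptr;
    size_t devSize = 0;
    cudaGraphicsResourceGetMappedPointer(&devPtr, &devSize, cudaResource);

    // 4) Hand the device pointer to an OptiX buffer; launches then write
    //    straight into memory owned by the D3D11 resource.
    RTbuffer buffer = nullptr;
    rtBufferCreate(ctx, RT_BUFFER_OUTPUT, &buffer);
    rtBufferSetFormat(buffer, RT_FORMAT_FLOAT4);
    rtBufferSetSize2D(buffer, width, height);
    rtBufferSetDevicePointer(buffer, 0 /* CUDA device ordinal */, devPtr);

    // Remember to cudaGraphicsUnmapResources(1, &cudaResource, 0) after the
    // launch, before D3D11 touches the resource again.
    return buffer;
}

The key call is rtBufferSetDevicePointer, which makes OptiX launches write directly into the mapped D3D11 memory instead of an OptiX-owned allocation.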

Thank you for the confirmation, Detlef. It might be worth removing the D3D interop headers from the SDK release.

Thank you papaboo for the workaround, and in particular for the reference to a working implementation. I will go through it in detail. If I’m reading it right, though, it seems you can write directly from OptiX into the D3D buffer by mapping it through the CUDA interop, so you’re not writing to a dedicated OptiX buffer and then doing a GPU-to-GPU CUDA copy into the CUDA-mapped D3D buffer, as suggested here: https://devtalk.nvidia.com/default/topic/1028119/optix/-solved-optix-5-interop-directx-11-example-/
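For reference, my (possibly wrong) reading of the copy-based variant from that thread is something like the hypothetical helper below, where mappedD3DPtr is the pointer obtained from cudaGraphicsResourceGetMappedPointer as in the sketch above:

#include <cuda_runtime.h>
#include <optix.h>

// Copy-based variant: OptiX renders into its own output buffer first, then a
// GPU-to-GPU copy moves the pixels into the CUDA-mapped D3D11 resource.
void copy_optix_to_d3d(RTbuffer optixBuffer, void* mappedD3DPtr, size_t byteCount)
{
    void* optixPtr = nullptr;
    rtBufferGetDevicePointer(optixBuffer, 0 /* CUDA device ordinal */, &optixPtr);
    cudaMemcpy(mappedD3DPtr, optixPtr, byteCount, cudaMemcpyDeviceToDevice);
}

Writing directly through rtBufferSetDevicePointer would skip that extra device-to-device copy entirely.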

I wrote that code a while ago, but that is how I interpret it as well. :)
It was a huge step up performance-wise compared to reading the pixels back to the CPU and re-uploading them to the GPU.
I just realised that you have an RTX card, though. When I run this code on my RTX machine the image looks distorted. It works on every other card (Quadro and GeForce) that I’ve tested it on, so it might just be because I hadn’t gotten around to upgrading to OptiX 6.0 and CUDA 10 until two days ago. I’ll re-test it when I’m back at work.