NVENC: Realtime encoding using ID3D11Texture2D as input?

Hi everyone,

I’d like to encode an RGB D3D11 texture directly with the NVENC encoder (essentially a real-time screen-to-encoder pipeline producing H.264). Has anyone already done this, and if so, where can I find an example?
After a lot of trial and error I’ve gotten the NVENC encoder to initialize, so it’s now ready for frames, but in a format I currently can’t prove will work. It would be great if someone could confirm that it will.

Also, since it seems the NVENC chip only takes YUV, what’s the best way to convert my RGB frame to YUV? Are there any NVIDIA libraries that do this? (I’m assuming there are.)
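For reference, the RGB→YUV math being asked about is the standard BT.601 limited-range conversion; the Video Codec SDK samples ship CUDA kernels for this, but the arithmetic itself can be sketched on the CPU like so (a reference sketch only, using the common integer approximation of the coefficients; in practice you would do this on the GPU):

```cpp
#include <cstdint>

// Reference (CPU) RGBA -> NV12 conversion using the common BT.601
// limited-range integer approximation. Chroma is subsampled 2x2 by taking
// the top-left pixel of each block. Assumes even width and height.
void RgbaToNv12(const uint8_t* rgba, int width, int height,
                uint8_t* yPlane, uint8_t* uvPlane)
{
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            const uint8_t* p = rgba + (y * width + x) * 4;
            int r = p[0], g = p[1], b = p[2];
            // Luma for every pixel.
            yPlane[y * width + x] =
                (uint8_t)(((66 * r + 129 * g + 25 * b + 128) >> 8) + 16);
            // Interleaved U,V once per 2x2 block.
            if ((x % 2 == 0) && (y % 2 == 0)) {
                uint8_t* uv = uvPlane + (y / 2) * width + x;
                uv[0] = (uint8_t)(((-38 * r - 74 * g + 112 * b + 128) >> 8) + 128);
                uv[1] = (uint8_t)(((112 * r - 94 * g - 18 * b + 128) >> 8) + 128);
            }
        }
    }
}
```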

Hi, I have similar questions. Were you able to find a solution?

I also have the same concern. The provided samples are YUV-only. Does anyone know how to do it for RGB/RGBA?

Of course you can use ARGB or ABGR as the input format; check NV_ENC_BUFFER_FORMAT, which defines several formats. Simply copy your RGB data into the encode buffer after NvEncLockInputBuffer(), and call NvEncUnlockInputBuffer() once the copy is done.
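A minimal sketch of that lock/copy/unlock path, based on the CNvHWEncoder helper class used in the SDK samples (exact helper signatures vary between SDK versions, so treat this as sketch-level code, not a drop-in implementation):

```cpp
#include <cstdint>
#include <cstring>

// Sketch: upload one ARGB frame through an NVENC system-memory input buffer.
// CNvHWEncoder and its NvEncLockInputBuffer/NvEncUnlockInputBuffer wrappers
// are from the Video Codec SDK samples; names may differ across SDK versions.
NVENCSTATUS UploadArgbFrame(CNvHWEncoder* pEnc, NV_ENC_INPUT_PTR inputBuffer,
                            const uint8_t* argb, uint32_t width, uint32_t height)
{
    unsigned char* pLocked = nullptr;
    uint32_t pitch = 0;
    // Locking yields a CPU-visible pointer plus the driver-chosen row pitch.
    NVENCSTATUS st = pEnc->NvEncLockInputBuffer(inputBuffer,
                                                (void**)&pLocked, &pitch);
    if (st != NV_ENC_SUCCESS)
        return st;
    // Copy row by row: the pitch may be larger than width * 4.
    for (uint32_t y = 0; y < height; ++y)
        memcpy(pLocked + y * pitch, argb + y * width * 4, width * 4);
    return pEnc->NvEncUnlockInputBuffer(inputBuffer);
}
```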

Has anyone tried it?
I got an error when I used the ARGB input format.

Hi! There is absolutely no problem using an ID3D11Texture2D as input. Check and debug your parameters. Don’t forget that the input buffer you use should itself be an ID3D11Texture2D; that way you can use CopyResource to load the surface into the encoder.

I’ve tried to copy data from an ID3D11Texture2D to an NV_ENC_INPUT_PTR using ID3D11DeviceContext::CopyResource, and I get an access violation.

  1. Should I use NvEncRegisterResource with NV_ENC_INPUT_RESOURCE_TYPE_DIRECTX and an ID3D11Texture2D?
  2. How can I pass the ID3D11Texture2D directly to the encoder without copying it through host (CPU) memory?

Here is my code:

NV_ENC_INPUT_PTR    mInputBuffer = nullptr;
ID3D11Texture2D*    mSurface = nullptr;

NVENCSTATUS Init(uint32_t width, uint32_t height)
{
	D3D11_TEXTURE2D_DESC desc = {};
	desc.Width = width;
	desc.Height = height;
	desc.Usage = D3D11_USAGE_DEFAULT;
	desc.BindFlags = D3D11_BIND_RENDER_TARGET;
	desc.CPUAccessFlags = 0;
	desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
	desc.ArraySize = 1;
	desc.MipLevels = 1;
	desc.SampleDesc.Count = 1;
	desc.SampleDesc.Quality = 0;

	auto hr = mD3D11Device->CreateTexture2D(&desc, nullptr, &mSurface);
	if (FAILED(hr))
		PRINTERR("mD3D11Device->CreateTexture2D mSurface failed hres: 0x%X\n", hr);
	NV_ENC_CREATE_INPUT_BUFFER createInputBuffer = {};
	createInputBuffer.version = NV_ENC_CREATE_INPUT_BUFFER_VER;
	createInputBuffer.width = width;
	createInputBuffer.height = height;
	createInputBuffer.memoryHeap = NV_ENC_MEMORY_HEAP_VID;
	createInputBuffer.bufferFmt = NV_ENC_BUFFER_FORMAT_ARGB;

	auto nvStatus = m_pNvHWEncoder->NvEncCreateInputBuffer(createInputBuffer);
	if (nvStatus != NV_ENC_SUCCESS)
		return nvStatus;

	mInputBuffer = createInputBuffer.inputBuffer;

	void * nvRegisteredResource = nullptr;
	auto stride   = width * 4;
	nvStatus = m_pNvHWEncoder->NvEncRegisterResource(NV_ENC_INPUT_RESOURCE_TYPE_DIRECTX, mSurface,
													width, height, stride, NV_ENC_BUFFER_FORMAT_ARGB,
													&nvRegisteredResource);

	// Update mSurface
	auto dst = static_cast<ID3D11Resource*>(mInputBuffer);
	mD3D11ImmediateContext->CopyResource(dst, mSurface); // <<< Exception thrown at 0x00000002 in nvenc.exe: 0xC0000005: Access violation executing location 0x00000002.
	return m_pNvHWEncoder->NvEncEncodeFrame(...);
}

The problem is in the RegisterResource function. (The underlying nvEncRegisterResource API itself takes only 2 arguments: the encoder handle and an NV_ENC_REGISTER_RESOURCE struct; the sample’s wrapper takes more.)

The texture description is fine, but there is a problem with this call:

nvStatus = m_pNvHWEncoder->NvEncRegisterResource(NV_ENC_INPUT_RESOURCE_TYPE_DIRECTX, mSurface, width, height, stride, NV_ENC_BUFFER_FORMAT_ARGB, &nvRegisteredResource);

Check your API method calls.

As the API documentation says, you must register buffers of the same type as will be used as the source. Check out the NVIDIA encoder API sample NvEncoderD3DInterop, but use a D3D11 texture instead of a D3D9 surface.
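To make that concrete, a hedged sketch of the D3D11 registration path (struct and field names from nvEncodeAPI.h; `m_pEncodeAPI`, `m_hEncoder`, `mSurface`, and `mD3D11ImmediateContext` are borrowed from the code above, and `pFrameTexture` is a hypothetical source texture). The key points are that CopyResource targets the registered ID3D11Texture2D, never an NV_ENC_INPUT_PTR, and that the mapped handle is what goes into the picture parameters:

```cpp
// Sketch of the zero-copy D3D11 path. Assumes an encoder session opened with
// NV_ENC_DEVICE_TYPE_DIRECTX on the same ID3D11Device that owns the textures.
NV_ENC_REGISTER_RESOURCE reg = {};
reg.version            = NV_ENC_REGISTER_RESOURCE_VER;
reg.resourceType       = NV_ENC_INPUT_RESOURCE_TYPE_DIRECTX;
reg.resourceToRegister = mSurface;                  // ID3D11Texture2D*
reg.width              = width;
reg.height             = height;
reg.bufferFormat       = NV_ENC_BUFFER_FORMAT_ABGR; // matches DXGI_FORMAT_R8G8B8A8_UNORM
nvStatus = m_pEncodeAPI->nvEncRegisterResource(m_hEncoder, &reg);

// Per frame: GPU-to-GPU copy into the registered texture, then map it.
mD3D11ImmediateContext->CopyResource(mSurface, pFrameTexture);

NV_ENC_MAP_INPUT_RESOURCE map = {};
map.version            = NV_ENC_MAP_INPUT_RESOURCE_VER;
map.registeredResource = reg.registeredResource;
nvStatus = m_pEncodeAPI->nvEncMapInputResource(m_hEncoder, &map);
// map.mappedResource is the NV_ENC_INPUT_PTR to place in
// NV_ENC_PIC_PARAMS::inputBuffer; call nvEncUnmapInputResource after encoding.
```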

We are successfully using ID3D11Texture2D textures with NVENC in our project.
The textures passed to NVENC are in DXGI_FORMAT_B8G8R8A8_UNORM format (as received from DWM). No conversion is performed; simply passing the texture to NVENC works just fine.
But we still haven’t managed to make it work on Win7. On Win8.1/Win10 it works; on Win7 we get the same NV_ENC_ERR_UNIMPLEMENTED from nvEncRegisterResource as you do.

SlavikKrasnitskiy and ArderBlackard thanks for your help!

I also found that nvEncRegisterResource accepts only off-screen D3D9 surfaces, at least on Windows 7.
But I can’t copy data from a render target to an off-screen surface unless the device is Direct3D9Ex.
Is there any way to register and use a (D3D9) render target with the encoder?
Can I transfer data from a render target to an off-screen surface in plain Direct3D9 (not Ex)?
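For the last question, one documented path in plain D3D9 is IDirect3DDevice9::GetRenderTargetData, though note that it requires a D3DPOOL_SYSTEMMEM destination, i.e. it is a GPU-to-CPU readback rather than a video-memory copy (a sketch, assuming `pDevice` and `pRenderTarget` already exist):

```cpp
// Hedged sketch: on plain D3D9 (non-Ex), GetRenderTargetData copies a render
// target into an off-screen plain surface, but the destination must live in
// D3DPOOL_SYSTEMMEM, so this stalls the GPU and goes through system memory.
IDirect3DSurface9* pSysmem = nullptr;
HRESULT hr = pDevice->CreateOffscreenPlainSurface(
    width, height, D3DFMT_A8R8G8B8, D3DPOOL_SYSTEMMEM, &pSysmem, nullptr);
if (SUCCEEDED(hr))
    hr = pDevice->GetRenderTargetData(pRenderTarget, pSysmem);
```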

Unfortunately I can’t help with your question directly, but if you don’t mind using D3D11+CUDA, this is a working approach.
After all the attempts to register D3D11 resources directly in NVENC, we implemented conversion of the D3D11 RGB textures into CUDA buffers containing frame data in NV12 format (the conversion itself is amazingly fast with CUDA). NVENC, when initialized with a CUDA context, produces correct video output on Win7 as well as on Win10 (we haven’t tried Win8 yet).
So the final pipeline we’ve got is ID3D11Texture2D -> CUdeviceptr -> encoded H.264 data.
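The D3D11-to-CUDA leg of such a pipeline can be sketched with the CUDA runtime’s D3D11 interop API from cuda_d3d11_interop.h (variable names here are illustrative, and the RGB→NV12 kernel itself is omitted):

```cpp
// Sketch of mapping a D3D11 texture into CUDA for conversion, per the
// pipeline described above. pTexture is an assumed ID3D11Texture2D*.
cudaGraphicsResource* cudaRes = nullptr;
cudaGraphicsD3D11RegisterResource(&cudaRes, pTexture,
                                  cudaGraphicsRegisterFlagsNone);

// Per frame:
cudaGraphicsMapResources(1, &cudaRes);
cudaArray_t array = nullptr;
cudaGraphicsSubResourceGetMappedArray(&array, cudaRes, 0, 0);
// ... launch an RGBA->NV12 kernel that reads `array` (e.g. via a texture
//     object) and writes into the CUdeviceptr registered with NVENC ...
cudaGraphicsUnmapResources(1, &cudaRes);
```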

Hi guys,
Is there any GPU model limitation on registering an ID3D11Texture2D in DXGI_FORMAT_R8G8B8A8_UNORM format as input? I am using a GTX 660 Ti on Windows 10 64-bit with Video_Codec_SDK_7.1.9, and nvEncRegisterResource always returns INVALID_PARAMETER no matter what I try.

I am doing this:

    D3D11_TEXTURE2D_DESC tex2DDesc;
    ZeroMemory(&tex2DDesc, sizeof(tex2DDesc));
    tex2DDesc.Width = 1280;
    tex2DDesc.Height = 720;
    tex2DDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
    tex2DDesc.SampleDesc.Count = 1;
    tex2DDesc.SampleDesc.Quality = 0;
    tex2DDesc.Usage = D3D11_USAGE_DEFAULT;
    tex2DDesc.CPUAccessFlags = 0;
    tex2DDesc.BindFlags = D3D11_BIND_RENDER_TARGET;
    tex2DDesc.ArraySize = 1;
    tex2DDesc.MipLevels = 1;
    ID3D11Texture2D * pTexture = nullptr;
    HRESULT hres = _pMyDevice->CreateTexture2D(&tex2DDesc, NULL, &pTexture);

    NV_ENC_REGISTER_RESOURCE registerResParams;
    memset(&registerResParams, 0, sizeof(registerResParams));

    registerResParams.version = NV_ENC_REGISTER_RESOURCE_VER;
    registerResParams.resourceType = NV_ENC_INPUT_RESOURCE_TYPE_DIRECTX;
    registerResParams.resourceToRegister = (void*)pTexture;
    registerResParams.width = width;
    registerResParams.height = height;
    registerResParams.pitch = width * 4;
    registerResParams.bufferFormat = NV_ENC_BUFFER_FORMAT_ABGR;

    nvStatus = m_pEncodeAPI->nvEncRegisterResource(m_hEncoder, &registerResParams);


SOLVED!!: Sorry, I was silly enough to try to register this resource on a HW encoder initialized with NV_ENC_DEVICE_TYPE_CUDA. Using NV_ENC_DEVICE_TYPE_DIRECTX, the resource is now properly registered.