Performance of encoding while rendering

I have an app open in the background that is doing some heavy rendering. In another app I only encode a video using NVENC. When I don't render in the background, encoding a 4K frame takes about 6-7 ms. When I do render in the background, it takes an unstable 9-16 ms.

It appears that the performance of hardware encoding depends on the rendering workload running on the GPU. Is this to be expected? Is it because GPU memory bandwidth is "shared" between rendering and the hardware encoder?

For this test I encoded an empty buffer created with nvEncCreateInputBuffer.
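
To be concrete, this is roughly what the "empty buffer" path looks like. It is a simplified sketch with error checking omitted; the NV12 format, the 3840x2160 size, and the `nvenc`/`encoder` handles (the NV_ENCODE_API_FUNCTION_LIST and an already-initialized encode session) are just assumptions for illustration:

```cpp
#include <cstring>
#include <nvEncodeAPI.h>

// Sketch: allocate an encoder-managed 4K input buffer and zero-fill it (the "empty" frame).
// Assumes 'nvenc' and 'encoder' were set up earlier and nvEncInitializeEncoder has been called.
NV_ENC_INPUT_PTR CreateEmptyInputBuffer(NV_ENCODE_API_FUNCTION_LIST& nvenc, void* encoder)
{
    NV_ENC_CREATE_INPUT_BUFFER_PARAMS createParams = { NV_ENC_CREATE_INPUT_BUFFER_PARAMS_VER };
    createParams.width = 3840;                           // 4K frame (assumed resolution)
    createParams.height = 2160;
    createParams.bufferFmt = NV_ENC_BUFFER_FORMAT_NV12;  // assumed pixel format
    nvenc.nvEncCreateInputBuffer(encoder, &createParams);
    NV_ENC_INPUT_PTR inputBuffer = createParams.inputBuffer;

    // Lock the buffer to get a CPU-visible pointer, clear it, and unlock it again.
    NV_ENC_LOCK_INPUT_BUFFER lockParams = { NV_ENC_LOCK_INPUT_BUFFER_VER };
    lockParams.inputBuffer = inputBuffer;
    nvenc.nvEncLockInputBuffer(encoder, &lockParams);
    // NV12 is 1.5 bytes per pixel; assume luma and chroma planes are contiguous at the returned pitch.
    std::memset(lockParams.bufferDataPtr, 0, (size_t)lockParams.pitch * 2160 * 3 / 2);
    nvenc.nvEncUnlockInputBuffer(encoder, inputBuffer);

    return inputBuffer;
}
```

The returned buffer is then passed as the inputBuffer of NV_ENC_PIC_PARAMS for each timed nvEncEncodePicture call.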

Hi maxest,

Can you elaborate on the hardware configuration, OS version, and driver version you used?

What does the heavy rendering consist of? Did you compare CPU and GPU utilization with and without the issue reproducing?

Could you provide detailed reproduction steps so we can reproduce this issue internally?

Thanks.

Windows 10.
An Alienware with a GeForce RTX 2080.
Driver date: 4/9/2019
Driver version: 25.21.14.2531 (we can't use the newest one because our app crashes on it)

I made a repro where one thread (the main thread) does the rendering and another does the encoding. I create the encoder in NV_ENC_DEVICE_TYPE_CUDA mode (in the actual app, though not in this repro, our input buffers are Vulkan buffers mapped to CUDA pointers).
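
Roughly, the session setup in the repro looks like the sketch below (error checking omitted; the `cuContext` argument and the helper name are placeholders, and in a real build NvEncodeAPICreateInstance is resolved from the NVENC runtime library):

```cpp
#include <cuda.h>
#include <nvEncodeAPI.h>

// Sketch: open an NVENC session on an existing CUDA context (NV_ENC_DEVICE_TYPE_CUDA mode).
// Assumes cuInit/cuCtxCreate have already been called for the target GPU.
void* OpenNvencSessionOnCuda(CUcontext cuContext)
{
    // Load the NVENC entry points into a function list.
    // (In real code this list is kept around for all the later nvEnc* calls.)
    NV_ENCODE_API_FUNCTION_LIST nvenc = { NV_ENCODE_API_FUNCTION_LIST_VER };
    NvEncodeAPICreateInstance(&nvenc);

    // Open the encode session; in CUDA mode 'device' is the CUcontext.
    NV_ENC_OPEN_ENCODE_SESSION_EX_PARAMS sessionParams = { NV_ENC_OPEN_ENCODE_SESSION_EX_PARAMS_VER };
    sessionParams.device = cuContext;
    sessionParams.deviceType = NV_ENC_DEVICE_TYPE_CUDA;
    sessionParams.apiVersion = NVENCAPI_VERSION;

    void* encoder = nullptr;
    nvenc.nvEncOpenEncodeSessionEx(&sessionParams, &encoder);
    return encoder;
}
```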

Case with rendering off (the render function returns early, without rendering or even presenting anything):
Encode time: 6.5 ms
Video Encode utilization: 90%

Case with rendering on:
Encode time: 23 ms
Video Encode utilization: 30%

Encoding and rendering are not tied to each other in any way; they do not touch each other's resources.

The only reason I can think of for rendering affecting encoding performance is GPU memory bandwidth. On the other hand, I would expect a buffer created with nvEncCreateInputBuffer to live in the encoder's "private memory", or something along those lines.

I can send you a link to the repro app in a private message.

Hi maxest,

Thank you for helping out with the reproduction. We have reproduced the issue, and it is being tracked internally as 200539272.

Thanks.

Thank you.

Hi Mandar,

Do you have any new info on the issue?