H.264 video encoder latency

I am trying to encode video with ultra-low-latency settings and I am seeing an effect for which I don't have a good explanation.

The environment: NVIDIA GeForce GTX 1080 Ti GPU, a 1920x1080@60 video signal, NV_ENC_PRESET_LOW_LATENCY_DEFAULT_GUID and other low-latency-oriented settings (my impression is that fine-tuning individual settings does not matter much here).
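For context, the session initialization is roughly the following - a minimal C++ sketch using the structures from nvEncodeAPI.h, assuming an already opened encode session (hEncoder) and a populated NV_ENC_ENCODE_API function list named nvenc; error handling omitted:

    // Low-latency H.264 session setup with the preset mentioned above.
    NV_ENC_INITIALIZE_PARAMS initParams = { NV_ENC_INITIALIZE_PARAMS_VER };
    initParams.encodeGUID   = NV_ENC_CODEC_H264_GUID;
    initParams.presetGUID   = NV_ENC_PRESET_LOW_LATENCY_DEFAULT_GUID;
    initParams.encodeWidth  = 1920;
    initParams.encodeHeight = 1080;
    initParams.frameRateNum = 60;
    initParams.frameRateDen = 1;
    initParams.enablePTD    = 1;       // let the encoder decide picture types
    initParams.enableEncodeAsync = 1;  // asynchronous mode with completion events
    NVENCSTATUS status = nvenc.nvEncInitializeEncoder(hEncoder, &initParams);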

I am encoding on Windows 10, using the asynchronous version of the API, and I wait for the event that signals completion of the bitstream. (It is not directly related, but I also tried polling and subframe readback, and the behavior is consistent across all these modes.)
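The event setup itself is straightforward (a sketch under the same assumptions as above; the handle is later attached to each frame's NV_ENC_PIC_PARAMS):

    // Register a Win32 event that NVENC signals when a frame's bitstream
    // is ready (asynchronous mode).
    HANDLE completionEvent = CreateEvent(nullptr, FALSE, FALSE, nullptr);
    NV_ENC_EVENT_PARAMS eventParams = { NV_ENC_EVENT_PARAMS_VER };
    eventParams.completionEvent = completionEvent;
    nvenc.nvEncRegisterAsyncEvent(hEncoder, &eventParams);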

The problem is this: during the first few seconds of encoding I see a latency of around 3 ms per frame, then it goes up to 7 ms, and after a while it stabilizes at the 10 ms level. Sometimes it drops back to 7 ms for some time and then jumps up to 10 ms again.

By encoding latency I mean the time between the NvEncEncodePicture call and the event triggered on encoding completion.
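For clarity, the measurement amounts to roughly the following (a sketch; picParams stands for the frame's NV_ENC_PIC_PARAMS, other names as in the snippets above):

    // Per-frame latency: time from NvEncEncodePicture to the signaled
    // completion event.
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);

    picParams.completionEvent = completionEvent;
    QueryPerformanceCounter(&t0);
    nvenc.nvEncEncodePicture(hEncoder, &picParams);
    WaitForSingleObject(completionEvent, INFINITE);  // bitstream is ready
    QueryPerformanceCounter(&t1);

    double latencyMs = 1000.0 * (t1.QuadPart - t0.QuadPart) / freq.QuadPart;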

The encoding is rock solid in terms of stability, and so is this latency pattern. The hardware evidently can encode a frame in 3 ms, but it switches to a slower mode, perhaps [I am guessing] as a result of thermal self-balancing. Is there any way, programmatically or via interactive setup, to affect this behavior?

Other things I tried:

  1. The same code on a GTX 750 system encodes at a constant 15 ms latency, without going up and down.

  2. A synthetic test that produces a video stream using Direct2D and encodes it on the 1080 Ti as fast as possible (the effective frame rate is much higher than 60! - about 10x) results in 6-7 ms latency.

  3. A synthetic test that produces a video stream using Direct2D on another GPU, transfers it across GPUs, and encodes it on the 1080 Ti as fast as possible (the effective frame rate is still much higher than 60 - about 8x) results in 4 ms latency for the encoding step.

So the GPU encoder holds a stable low latency when the GPU is otherwise idle, and the encoder is clearly capable of operating fast. However, in the scenario with a live 60 fps signal, a mixed load (encoding plus frontend application activity such as a 3D app or a video playback application), and a generally underloaded GPU, the encoder degrades to 10 ms latency for no clear reason.

Is there something I could try, or is there perhaps a well-known reason for such behavior?

Thanks!

Things go better with pictures, so here we go - the blue line shows how the encoding latency changes as encoding proceeds:

[chart: encoding latency over time]

Hi,

Could you please share the following information:

  1. Driver and SDK version you are using
  2. To be clear, the issue you are concerned about is the fluctuation in GTX 1080 Ti latency when operating under mixed load, is that correct? Secondly, are you saying that the latency on the GTX 750 is consistent, with no fluctuation, even when used under mixed load, is that correct?
  3. Is it possible to share your application? Even a binary would help. We need to look at the behavior locally for the engineering team to investigate.
  4. What are the encode API parameters you are using?
  5. Also, if possible, please share your input content.

Thanks,
Ryan Park

Thank you for your attention to our problem.

The primary system I am using for testing reports the following:

MY RIG
Geforce GTX 1080 Ti
Driver Version 391.01
Intel… i7-8700K CPU @ 3.70 GHz
15.86 GB RAM
2560 x 1440, 144 Hz

We are building against Video Codec SDK 6.0.1. To the best of my knowledge the issue was also present when I was using SDK 8.0.14, though it might be worth a re-check.

Yes, the issue is related to mixed load. The originally posted issue shows up under the following load:

  • there is a media player application playing back a 60 fps video clip
  • a capturing application uses the Desktop Duplication API to obtain a copy of the desktop with the media player running in the foreground, then scales down or simply crops a 1920x1080 portion of the feed into the encoder (the capture step is sketched below)
  • the encoder latency is recorded

The image posted above corresponds to this load.
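For reference, the capture step amounts to roughly the following (a sketch assuming an IDXGIOutputDuplication obtained via IDXGIOutput1::DuplicateOutput; duplication, context, and encoderInputTex are placeholder names for the duplication interface, the D3D11 immediate context, and the 1920x1080 texture fed to the encoder):

    // Acquire a desktop frame and crop a 1920x1080 region into the
    // encoder input texture. Error handling omitted.
    DXGI_OUTDUPL_FRAME_INFO frameInfo = {};
    IDXGIResource* desktopResource = nullptr;
    if (SUCCEEDED(duplication->AcquireNextFrame(16, &frameInfo, &desktopResource)))
    {
        ID3D11Texture2D* desktopTex = nullptr;
        desktopResource->QueryInterface(IID_PPV_ARGS(&desktopTex));

        // Copy the top-left 1920x1080 portion of the desktop image
        // (box fields: left, top, front, right, bottom, back).
        D3D11_BOX box = { 0, 0, 0, 1920, 1080, 1 };
        context->CopySubresourceRegion(encoderInputTex, 0, 0, 0, 0,
                                       desktopTex, 0, &box);

        desktopTex->Release();
        desktopResource->Release();
        duplication->ReleaseFrame();
    }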

Additionally, I tested encoding generated content (simple Direct2D output into a Direct3D 11 texture that is then used as encoder input), with encoding running as fast as possible. If I generate the video on the 1080 Ti itself, the encoder runs at a stable 7 ms/frame. If I use the integrated Intel GPU for Direct2D and then pass the prepared texture to the 1080 Ti, encoding runs at a constant 2-3 ms/frame.
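For completeness, the usual way to hand a D3D11 texture to NVENC is roughly the following (a sketch based on the SDK's resource registration; note the bufferFormat field comes from newer SDK headers, so availability in 6.0.1 may differ):

    // Register the D3D11 texture as an NVENC input resource.
    NV_ENC_REGISTER_RESOURCE regRes = { NV_ENC_REGISTER_RESOURCE_VER };
    regRes.resourceType       = NV_ENC_INPUT_RESOURCE_TYPE_DIRECTX;
    regRes.resourceToRegister = encoderInputTex;  // ID3D11Texture2D*
    regRes.width        = 1920;
    regRes.height       = 1080;
    regRes.bufferFormat = NV_ENC_BUFFER_FORMAT_ARGB;
    nvenc.nvEncRegisterResource(hEncoder, &regRes);
    // Per frame: nvEncMapInputResource on regRes.registeredResource, then
    // pass the mapped handle as NV_ENC_PIC_PARAMS::inputBuffer.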

I will see what test app I can provide and follow up on it.

Having built a reproduction app (rypark, I messaged you about it), I see that the problem is not specific to the GTX 1080 Ti. When I measure more accurately, I can see the problem on the GTX 750 as well:

[chart: GTX 750 encoding latency over time]

What was the conclusion of this problem?
What causes the fluctuation?
