Hi,
I am building a low-latency encoding and streaming server, and as a starting point I based it on your NvEncoder sample. I noticed that, for example at 1080p, you build a buffer queue of 16 frames when no B-frames are used, and numB + 4 otherwise. I am wondering what the point of this is. The comment at line 800 says “min buffers is numb + 1 + 3 pipelining” → why? Wouldn’t it be reasonable to feed data to the encoder as soon as it’s available?
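For reference, the sizing logic I am talking about looks roughly like this. This is paraphrased from the sample as I read it, not verbatim; the MAX_ENCODE_QUEUE value and the macroblock threshold are from memory and may not match your code exactly:

```cpp
#include <cstdint>

// Paraphrased from the NvEncoder sample as I understand it (not verbatim).
// With B frames: numB + 4 buffers; without: a resolution-dependent fraction
// of MAX_ENCODE_QUEUE (larger resolutions get smaller fractions; omitted here).
uint32_t ChooseEncodeBufferCount(uint32_t numB, uint32_t maxWidth, uint32_t maxHeight)
{
    const uint32_t MAX_ENCODE_QUEUE = 32;            // assumption: queue capacity used by the sample

    if (numB > 0)
        return numB + 4;                             // "min buffers is numb + 1 + 3 pipelining"

    uint32_t numMBs = ((maxHeight + 15) >> 4) * ((maxWidth + 15) >> 4);
    if (numMBs >= 8160)                              // roughly 1920x1080 and above
        return MAX_ENCODE_QUEUE / 2;                 // -> the 16 buffers I am asking about
    return MAX_ENCODE_QUEUE;
}
```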
What also seems strange to me: in NVENC_VideoEncoder_API_ProgGuide.pdf (version 7.1), Table 1, you recommend the following NVENC settings for low-latency use cases (a sketch of how I apply them follows the list):
- Low-Latency High Quality preset
- Rate control mode = Two-pass CBR
- Very low VBV buffer size (Single frame)
- No B Frames
- Infinite GOP length
- Adaptive quantization (AQ) enabled
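This is roughly how I map those recommendations onto the API structs in my code. It is only a sketch: the enum I picked for “two-pass CBR” and the single-frame VBV computation are my own interpretation of the table, so please correct me if that is not what the document means:

```cpp
#include <cstdint>
#include "nvEncodeAPI.h"

// Sketch of how I translate Table 1 into NVENC settings (SDK 7.1 headers assumed).
void ConfigureLowLatency(NV_ENC_INITIALIZE_PARAMS& initParams, NV_ENC_CONFIG& config,
                         uint32_t bitrateBps, uint32_t fps)
{
    initParams.presetGUID = NV_ENC_PRESET_LOW_LATENCY_HQ_GUID;         // Low-Latency HQ preset

    config.gopLength      = NVENC_INFINITE_GOPLENGTH;                  // infinite GOP
    config.frameIntervalP = 1;                                         // IPP... only, no B frames

    config.rcParams.rateControlMode = NV_ENC_PARAMS_RC_2_PASS_QUALITY; // "two-pass CBR" as I read it
    config.rcParams.averageBitRate  = bitrateBps;
    config.rcParams.maxBitRate      = bitrateBps;
    config.rcParams.vbvBufferSize   = bitrateBps / fps;                // ~one frame worth of VBV
    config.rcParams.vbvInitialDelay = bitrateBps / fps;
    config.rcParams.enableAQ        = 1;                               // adaptive quantization
}
```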
However, when I change these settings I cannot see any difference, whereas modifying the length of the buffer queue is very noticeable. Even in the NvEncoderLowLatency sample, you create a queue of 3 buffers and only call CNvHWEncoder::ProcessOutput once all buffers are occupied, causing a latency of 4 frames. I have now modified my code to process the output immediately after feeding a frame in, i.e. right after calling CNvHWEncoder::NvEncEncodeFrame, and that brought the frame delay down to 1 (sketch below). I wonder why you wait until a fourth frame arrives before you feed the first one through the encoder; I am sure there is a reason. Is what I am doing not recommended, and if so, why?
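This is the shape of my modified loop, heavily simplified. It runs inside the sample’s encoder class, so the member names come from there; LoadInputFrame is my own helper and GetAvailable stands in for however the sample picks the next free buffer:

```cpp
// Simplified version of my encode loop: submit one frame and drain its output
// immediately, instead of waiting until the whole buffer queue is occupied.
// LoadInputFrame() is my own helper; buffer recycling and error checks are omitted.
for (uint32_t frame = 0; frame < numFramesToEncode; ++frame)
{
    EncodeBuffer* pEncodeBuffer = m_EncodeBufferQueue.GetAvailable();   // next free buffer
    LoadInputFrame(pEncodeBuffer, frame);                               // upload/convert the raw frame

    m_pNvHWEncoder->NvEncEncodeFrame(pEncodeBuffer, NULL, width, height);

    // Drain right away: 1 frame of delay, at the cost of stalling here
    // until this frame's bitstream is ready.
    m_pNvHWEncoder->ProcessOutput(pEncodeBuffer);
}
```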
Thanks!