FFmpeg encoding time decreasing at repeated experiments

I’m measuring the encoding time of NVENC over various test sequences for different parameters such as preset, rate control mode, etc. While performing the experiments I noticed a rather strange thing: when I run the exact same FFmpeg command back-to-back, I get significantly improved encoding times on the second and even the third run. After that the encoding speed seems to reach its best value and saturate. This happens for all combinations of parameters; please see here an example log file showing the FFmpeg output from three back-to-back runs with the NVENC HEVC encoder. In summary, the encoding speed is 38 fps on the first run, 57 fps on the second, and 161 fps on the third.

I also used the sample app AppEncPerf from the Video Codec SDK samples to measure the maximum attainable performance:

./AppEncPerf -i FoodMarket4_3840x2160_60fps_8bit_420.yuv -if iyuv -s 3840x2160 -fps 60 -codec hevc -profile main -preset ll_hp -bitrate 5M

and obtained the following result:

[WARN ][13:55:13] File is too large - only 100% is loaded
[INFO ][13:55:13] Encoding Parameters:
        codec        : hevc
        preset       : ll_hp
        profile      : main(hevc)
        chroma       : yuv420
        bitdepth     : 8
        rc           : constqp (P,B,I=28,31,25)
        fps          : 60/1
        gop          : INF
        bf           : 0
        size         : 3840x2160
        bitrate      : 5000000
        maxbitrate   : 0
        vbvbufsize   : 0
        vbvinit      : 0
        aq           : disabled
        temporalaq   : disabled
        lookahead    : disabled
        cq           :
        qmin         : P,B,I=0,0,0
        qmax         : P,B,I=0,0,0
        initqp       : P,B,I=0,0,0
nTotal=2000, time=9.27 seconds, FPS=215.8

AppEncPerf creates two threads and runs a separate encoding session on each, so I suppose it’s normal that it achieves a better encoding speed than FFmpeg, also considering FFmpeg’s software overhead. However, I’m confused about why FFmpeg shows different (and improving) results for the same encoding parameters and the same command.

Any help is appreciated, Thanks.

Hi chronosynclastic,

Looking at your ffmpeg performance numbers, it seems likely that I/O is the bottleneck and that file-system caching is giving you better results on the later runs. Your input stream is about ~750 MB/s of raw YUV, so the performance of the first run would indicate that your I/O is limited to around ~450 MB/s, which would be in line with the throughput I would expect from a normal SSD.
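For reference, here is how those bandwidth figures fall out of the clip parameters in the thread (3840x2160, 8-bit 4:2:0, 60 fps, and the three reported run speeds) — just a back-of-the-envelope sketch, not a measurement:

```python
# Raw 8-bit 4:2:0 YUV carries 1.5 bytes per pixel (Y plane + quarter-size U and V).
width, height = 3840, 2160
bytes_per_frame = width * height * 3 // 2   # 12,441,600 bytes (~12.4 MB per frame)

MB = 1_000_000
source_fps = 60
print(f"source data rate: {bytes_per_frame * source_fps / MB:.0f} MB/s")  # ~746 MB/s

# Read bandwidth implied by the encoding speed of each of the three runs:
for run_fps in (38, 57, 161):
    print(f"{run_fps} fps -> {bytes_per_frame * run_fps / MB:.0f} MB/s read")
```

The first run works out to ~473 MB/s (plausibly disk-limited), while the third run implies ~2 GB/s, which only makes sense if the file is already being served from the page cache.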

AppEncPerf preloads the whole file into memory, so I/O does not affect the encoding performance it reports.

Best regards,