The Application Note for Video SDK 13.0 shows achievable encoding FPS for different GPU families, presets, rate-control modes, and tunings:
Beneath the chart it states “Above measurements are made using the following GPUs: GTX 1060 for Pascal, RTX 8000 for Turing, RTX 3090 for Ampere, and RTX 4090 for Ada. All measurements are done at the highest video clocks as reported by nvidia-smi (i.e. 1708 MHz, 1950 MHz, 1950 MHz, 2415 MHz for GTX 1060, RTX 8000, RTX 3090, and RTX 4090 respectively). The performance should scale according to the video clocks as reported by nvidia-smi for other GPUs of every individual family. Information on nvidia-smi can be found at https://developer.nvidia.com/nvidia-system-management-interface.”
Does that mean that a 5080 with a boost clock of 2.617 GHz can encode faster, on a single NVENC, than a 5090 with a boost clock of 2.407 GHz? In other words, should I scale the FPS values for a 5080 by 2617/2407 ≈ 1.0872? (Those are the advertised boost clocks; I'm not sure whether they match the video clocks that nvidia-smi reports, which is what the note says to scale by.)
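To make the scaling concrete, here is a minimal Python sketch of the calculation I have in mind. The boost-clock numbers are from the spec sheets and may not be the video clocks the note refers to, and the 500 FPS reference figure is made up for illustration:

```python
def scale_fps(reference_fps: float, reference_clock_mhz: float,
              target_clock_mhz: float) -> float:
    """Scale a published encoding-FPS figure by the ratio of clock speeds,
    per the Application Note's claim that performance scales with clock."""
    return reference_fps * (target_clock_mhz / reference_clock_mhz)

# Advertised boost clocks (may differ from the video clocks nvidia-smi
# reports, which is what the Application Note says to use):
rtx5090_boost_mhz = 2407
rtx5080_boost_mhz = 2617

ratio = rtx5080_boost_mhz / rtx5090_boost_mhz
print(f"scale factor: {ratio:.4f}")  # ~1.0872

# A hypothetical 500 FPS chart value for the 5090 would then scale to:
print(f"scaled FPS: {scale_fps(500, rtx5090_boost_mhz, rtx5080_boost_mhz):.0f}")
```

This is just the proportional scaling described beneath the chart; whether boost clock is the right input is exactly my question.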
Thanks in advance for any help.