I am currently trying to understand NVIDIA encoding performance on various GPUs. We are hitting heavy encoding performance issues and need to choose the best GPU for our needs.
I assume that modern GPU architectures have better encode/decode engines and therefore better performance. However, how exactly does the number of NVENC chips affect encoding performance, assuming we have no other bottleneck?
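For what it's worth, my working mental model (an assumption, not something confirmed by NVIDIA docs) is that a single encode session runs on one NVENC engine, and extra engines only help when several sessions run in parallel and the driver load-balances across them. Under that assumption, aggregate throughput would look roughly like this (the function name and fps figures are purely illustrative):

```python
def aggregate_fps(per_engine_fps: float, num_engines: int, num_streams: int) -> float:
    """Estimate total encode throughput under a simple load-balancing model.

    Assumes each stream is pinned to one engine and streams spread
    evenly across engines -- an assumption, not measured behavior.
    """
    if num_streams <= 0:
        return 0.0
    # A single stream cannot span engines, so only min(engines, streams)
    # engines contribute; each contributes its full per-engine rate
    # when it has at least one stream to work on.
    usable_engines = min(num_engines, num_streams)
    return per_engine_fps * usable_engines

# One 1080p stream: a 3-engine card is no faster than a 1-engine card.
print(aggregate_fps(500, 3, 1))  # 500.0
print(aggregate_fps(500, 1, 1))  # 500.0

# Six parallel streams: the 3-engine card pulls ahead.
print(aggregate_fps(500, 3, 6))  # 1500.0
print(aggregate_fps(500, 1, 6))  # 500.0
```

If this model is right, it would explain why a single-stream benchmark shows little difference between cards with different engine counts: the extra NVENC chips only pay off under concurrent load.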
I am using this chart to get the number of NVENC chips embedded in each GPU: https://developer.nvidia.com/video-encode-decode-gpu-support-matrix
I raise this question because we ran some benchmarks on an old Quadro M6000 and a 2080 Ti, and the M6000, although far older, seems to perform just as well as, if not better than, the 2080 Ti at encoding.
We are trying to identify reliable criteria on which to base our solution's architecture. Can we assume, for instance, that a Titan Volta would perform exceptionally well at encoding given its 3 NVENC chips?