The question was “Q1. Did Turing cease to support the field encoding function in H.264?” and the answer is NO. Although English is not my native language, to me this answer means “Turing did not cease to support field encoding in H.264.” That’s why I asked for proof.
Hi, NVIDIA and everyone.
Sorry, I’m not good at English.
I’ll give a supplementary explanation about my question.
“Field encoding: no” means NV_ENC_CAPS_SUPPORT_FIELD_ENCODING returns 0.
I think this might be a driver bug, because I don’t believe NVIDIA would remove the H.264 field encoding function, which is useful and already implemented.
Is this a bug, or is it not a bug?
I want to hear an answer from NVIDIA clarifying whether Turing supports the H.264 field encoding function or not.
Does Turing support H.264 field encoding?
Or does Turing not support H.264 field encoding?
And if Turing doesn’t support this function, please tell me why NVIDIA removed this useful feature.
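For anyone who wants to reproduce the check, this is roughly the call sequence (a pseudocode sketch only — the real structures and GUIDs come from the NVENC SDK’s nvEncodeAPI.h, and the session-setup details are omitted):

```
// Pseudocode sketch of the NVENC capability query behind "Field encoding: no".
// Assumes an encode session was already opened via NvEncOpenEncodeSessionEx.
capsParam = NV_ENC_CAPS_PARAM {
    version:     NV_ENC_CAPS_PARAM_VER,
    capsToQuery: NV_ENC_CAPS_SUPPORT_FIELD_ENCODING,
}
capsVal = 0
NvEncGetEncodeCaps(encoder, NV_ENC_CODEC_H264_GUID, &capsParam, &capsVal)
// capsVal == 0  -> driver reports no field encoding support ("Field encoding: no")
// capsVal != 0  -> field encoding supported
```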
I think only Turing supports “HEVC B-frames”.
But I want to hear an answer from NVIDIA clarifying whether older GPUs (Pascal etc.) support “HEVC B-frames” with the new SDK and driver, or not.
The new comparison between Pascal and Turing covers only one NVENC engine, so all Pascal GPUs with 2 NVENC engines (1070 and above) will be faster than any Turing GPU when transcoding more than one stream.
And from those specs, Turing will not be able to encode 8K@30fps in real time; it can only transcode 2x 4:2:0 4K@30fps streams in real time, and 8K would require at least 4 streams. Also, there is no longer any information about 8K transcoding.
Now it is official!
There is also more info about decoding, which is much faster than Pascal, and Quadro RTX cards have 2 NVDEC engines.
The charts do not match current performance: Turing with preset slow is much slower than Pascal. Turing with preset medium is indeed faster than Pascal and also gives better encoding quality. Of course it is not 200% faster, as NVIDIA claimed before.
Increased decoding performance is more important, IMO. My Quadro P2000’s NVDEC is always maxed out when transcoding 4K, while NVENC load is around 30%.
There is better quality for H.264, PSNR +0.2 dB (8%), but encoding performance is only 56% of the Pascal generation!
When comparing Pascal preset hq and Turing preset fast, quality is better by +0.1 dB (4%) and encoding speed is the same (1 Turing NVENC = 2 Pascal NVENCs).
When we compare the high quality preset, encoding performance of Turing is only 24% of Pascal!!! => Not suitable for any 4K content, but quality is almost the same as SW libx265, PSNR +1 dB (40%)!!!
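The dB-to-percent figures in these posts look like a linear rule of thumb of roughly 40% bitrate equivalence per 1 dB of PSNR (my reading of the quoted numbers, not an official formula):

```python
# Assumed rule of thumb inferred from the figures above:
# ~40% bitrate equivalence per 1 dB of PSNR gain (not an official formula).
PCT_PER_DB = 40

def psnr_gain_to_bitrate_pct(delta_db: float) -> float:
    """Convert a PSNR delta (dB) to an approximate bitrate-equivalent %."""
    return delta_db * PCT_PER_DB

print(psnr_gain_to_bitrate_pct(0.1))  # 4.0  (matches the +0.1 dB ~ 4% above)
print(psnr_gain_to_bitrate_pct(0.2))  # 8.0  (+0.2 dB ~ 8%)
print(psnr_gain_to_bitrate_pct(1.0))  # 40.0 (+1 dB ~ 40%)
```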
So I still don’t get why NVIDIA didn’t put 2x NVENC on the Quadro RTX 5000/6000; it could have been a killer product.
Performance for SW encoding was measured on 1 CPU core, so for example on some dual-socket Epyc with 64 CPU cores (128 threads), performance could be the same as for 1 NVIDIA RTX GPU :))))
IMO, PSNR should not be used to compare quality between SW and HW encodes, because x264/x265 lean toward perceptual optimization. Netflix’s VMAF is a better metric.
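For context, PSNR is a pure pixel-error metric (the standard 10·log10(MAX²/MSE) definition), which is exactly why encoders tuned for perceptual quality can score worse on it while looking better; a minimal sketch:

```python
import math

def psnr(mse: float, max_val: int = 255) -> float:
    """Standard PSNR in dB for 8-bit video: 10 * log10(MAX^2 / MSE).
    It only measures mean squared pixel error, so perceptual tools in
    x264/x265 (psy-rd etc.) can lower PSNR while improving visual quality."""
    return 10 * math.log10(max_val ** 2 / mse)

print(round(psnr(6.5025), 1))  # 40.0 dB for an average squared error of ~6.5
```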
For SW transcoding, this mapping of presets to libx264 and libx265 is used:
fast = fast
hq = medium
slow = slower
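The mapping above, spelled out as a lookup (preset names as accepted by libx264/libx265; the NVENC-side names are the ones used in these benchmarks):

```python
# NVENC preset -> libx264/libx265 preset used for the SW comparison above.
NVENC_TO_X26X_PRESET = {
    "fast": "fast",
    "hq":   "medium",
    "slow": "slower",
}

print(NVENC_TO_X26X_PRESET["hq"])  # medium
```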
H264 profile MAIN - 2.70% → 3.00 Mbit/s rate on RTX = 3.08 Mbit/s on GTX
H264 profile HIGH - 16.48% → 3.00 Mbit/s rate on RTX = 3.49 Mbit/s on GTX
H265 - 25.78% → 3.00 Mbit/s rate on RTX = 3.77 Mbit/s on GTX
H265 with B-frames - 45.64% → 3.00 Mbit/s rate on RTX = 4.37 Mbit/s on GTX
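The equivalent GTX bitrates above follow from applying each savings percentage to the 3.00 Mbit/s RTX rate; a quick check of the arithmetic (using the percentages quoted above):

```python
# Bitrate a GTX (Pascal) encode would need to match a 3.00 Mbit/s RTX
# (Turing) encode, given the quality gains quoted above.
RTX_RATE = 3.00  # Mbit/s

gains_pct = {
    "H264 MAIN":       2.70,
    "H264 HIGH":       16.48,
    "H265":            25.78,
    "H265 + B-frames": 45.64,
}

for name, pct in gains_pct.items():
    gtx_rate = RTX_RATE * (1 + pct / 100)
    print(f"{name}: {gtx_rate:.2f} Mbit/s")
# H264 MAIN: 3.08, H264 HIGH: 3.49, H265: 3.77, H265 + B-frames: 4.37
```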
There is a very minor difference between libx265 and NVENC for H265, around 1.2% without B-frames and 10% with B-frames, so the only room for improvement for now is B-adaptive, B-pyramid and B-refs, which the current SDK doesn’t support for H265.
NVENC with the HIGH profile is even better than libx264 medium, by around 23%, so use NVENC!! :)))
I also did tests with other parameters, and the improvement for H265 is always around 42-47% for any bitrate (1.5 Mbit/s, 3 Mbit/s, 5 Mbit/s).
If only NVIDIA released the NVENC/NVDEC chip as separate hardware for professional video services. We don’t need those damn ray tracing and tensor cores.
I’m not sure I understand what that Linux beta does - if the driver is adding support for SDK 9, wouldn’t ffmpeg or whatever need rebuilding with SDK 9 to take advantage? Or do you have early access, Thunderm?
Your results look really great though, thanks again for sharing them.