Now that Turing has been publicly announced, would it be possible for Nvidia to share more detailed information about the NVENC implementation?
My particular interest is whether the HEVC encoder now also supports B-frames and whether 4:2:2 support has been added, but performance comparisons and any other information would also be of interest.
I’ve read that under Turing HEVC achieves the same quality at a 25% lower bitrate. I’d like to know what technologies have been added (B-frames, maybe?) and whether this improvement works out of the box, or whether FFmpeg etc. need to add support.
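In the meantime, one way to check what a given FFmpeg build and driver already expose is to query the encoder options and then try to force B-frames. This is just a sketch (input.mp4 is a placeholder, and whether -bf is actually honoured depends on the FFmpeg version, driver and Video Codec SDK):

ffmpeg -hide_banner -h encoder=hevc_nvenc
ffmpeg -i input.mp4 -c:v hevc_nvenc -preset slow -bf 3 -b:v 8M -an -y hevc_bframe_test.mp4

The first command lists the options and pixel formats the local hevc_nvenc encoder supports; the second requests 3 B-frames, and on hardware or drivers without HEVC B-frame support the encoder should report an error rather than silently produce them.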
I would also be interested to know whether NVDEC has improved, for example whether 10- and 12-bit VP9 decode is supported on all GPUs, and whether the GPU can assist with decoding the new AV1 codec in any way.
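On the decode side, a rough way to test this yourself, assuming an FFmpeg build with CUVID/NVDEC support and a 10-bit VP9 sample file (the filename here is a placeholder), is:

ffmpeg -hide_banner -decoders | grep cuvid
ffmpeg -hwaccel cuvid -c:v vp9_cuvid -i vp9_10bit_sample.webm -f null -

The first command lists which CUVID decoders the build was compiled with (use findstr instead of grep on Windows); the second attempts a hardware decode and discards the frames, so it fails quickly if the GPU or driver can’t handle that bit depth.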
Not really the depth I want, but it’s what NVIDIA seem to have released so far.
Turing GPUs also ship with an enhanced NVENC encoder unit that adds support for H.265 (HEVC) 8K encode at 30 fps. The new NVENC encoder provides up to 25% bitrate savings for HEVC and up to 15% bitrate savings for H.264.
Turing’s new NVDEC decoder has also been updated to support decoding of HEVC YUV444 10/12b HDR at 30 fps, H.264 8K, and VP9 10/12b HDR.
Turing improves encoding quality compared to prior generation Pascal GPUs and compared to software encoders. Figure 11 shows that on common Twitch and YouTube streaming settings, Turing’s video encoder exceeds the quality of the x264 software-based encoder using the fast encode settings, with dramatically lower CPU utilization. 4K streaming is too heavy a workload for encoding on typical CPU setups, but Turing’s encoder makes 4K streaming possible.
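The whitepaper doesn’t spell out the exact settings behind Figure 11, but a rough way to run that kind of comparison yourself is to encode the same clip with x264 fast and with h264_nvenc at a Twitch-like 1080p60 / 6 Mbps target and compare the outputs. The clip name, bitrate and presets below are only assumptions for illustration:

ffmpeg -i source_1080p60.mp4 -c:v libx264 -preset fast -b:v 6M -maxrate 6M -bufsize 12M -an -y x264_fast.mp4
ffmpeg -i source_1080p60.mp4 -c:v h264_nvenc -preset slow -b:v 6M -maxrate 6M -bufsize 12M -an -y turing_nvenc.mp4

Watching the two outputs side by side, or running an objective metric over them, gives a reasonable feel for whether the claimed quality advantage holds on your own content.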
Thanks - it’s not clear whether these improvements will be automatic or whether they require software changes and an updated SDK, which apparently hasn’t been released yet.
I had a few members of a Greek forum test their new RTX 2080 and RTX 2080 Ti cards with NVENC. The results are really bad. Performance is lower than a GTX 1070 Ti for a single encode (240-260 fps for Turing vs 310 fps for Pascal / 1070 Ti) with the command:

ffmpeg.exe -hwaccel cuvid -c:v h264_cuvid -f mpegts -i HD-h264.ts -vcodec h264_nvenc -preset slow -c:a copy -f mpegts -y output.ts
Input file is from ffmpeg samples, url: https://samples.ffmpeg.org/V-codecs/h264/HD-h264.ts
Tests were done on Windows, since the owners got the cards for gaming and don’t have Linux.
And the worst part: both the RTX 2080 and 2080 Ti drop to half performance with two concurrent encodes, which indicates there is only one NVENC unit.
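If anyone wants to reproduce the concurrent test, the idea is simply to launch two identical NVENC transcodes at the same time and compare the per-instance fps to a single run. A bash sketch using the same sample (on Windows you can start the same command in two separate prompts):

ffmpeg -hwaccel cuvid -c:v h264_cuvid -f mpegts -i HD-h264.ts -vcodec h264_nvenc -preset slow -c:a copy -f mpegts -y out1.ts &
ffmpeg -hwaccel cuvid -c:v h264_cuvid -f mpegts -i HD-h264.ts -vcodec h264_nvenc -preset slow -c:a copy -f mpegts -y out2.ts &
wait

If each instance runs at roughly half the single-encode fps, the two encodes are being serialized on one NVENC unit; on cards with two units each instance should stay close to full speed.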
So, if you need NVENC performance, you should stick with Pascal, at least for now. Epic fail for Turing.
How about quality? Personally, I’m more interested in what the encode quality is like vs Pascal.
Clearly they’ve only included one NVENC unit, so parallel-encode performance is not going to be as good, but do your members see any difference in quality?
Quality has been mentioned a few posts above; currently it seems similar to Pascal.
Performance is worse even with a single encode, although in my case the difference is not that big (250 vs 310 fps, around 20% worse) compared to what Thunderm reported (150 vs 300 fps).
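If anyone wants to put a number on "similar to Pascal", FFmpeg’s built-in ssim (or psnr) filter can compare each card’s output against the same source; the output filenames here are placeholders:

ffmpeg -i turing_nvenc_out.ts -i HD-h264.ts -lavfi ssim -f null -
ffmpeg -i pascal_nvenc_out.ts -i HD-h264.ts -lavfi ssim -f null -

Each run prints an average SSIM score at the end, so the two encodes can be compared directly, provided both used the same bitrate and preset settings.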
What is the point of having high speed with poor quality?
It seems that Nvidia is not going to develop video encoding any further, which would explain why we get no feedback about NVENC from Nvidia.
Nvidia, say something, please. Nvidia, #Nvidia, @Nvidia