Video Codec SDK 9

We are still waiting for the SDK 9 release for Turing. It has been more than 3 months since the release of Turing and nothing has happened. Is there a release date?

I also joined the early access program but got no response from Nvidia.

Yeah I signed up too but didn’t hear anything. Ah well.

Apparently the early access is “by invitation only” even though the site allows you to apply. I’m usually a staunch supporter of nVidia, but here I think they are letting down the community. I have been coding for CUVID/NVDec for many years and provided much useful feedback to nVidia over the years, but I’m inexplicably not eligible for this early access. It’s a slap in the face. And not even bothering to reply to people who apply? Disgraceful.

I just applied as well, but if Donald Graft couldn’t get access to it, then I shouldn’t have bothered.

Video Codec SDK 9-ready drivers are out: the 418.30 BETA drivers.

Who cares? It’s the SDK 9 that we need and want. What good is driver support for it if we can’t get it?

nVidia has rudely ignored our applications for access, which they solicited (!), and our forum administrator Ryan has gone absent when we need him the most.

That is for Linux only; the Windows version is not released yet.

This is the worst soft-launch of a physical product ever.

NVIDIA launched the hardware in September 2018, and it may finally work as advertised in April 2019.

Video SDK 9 is now available!

So let me get this straight – Turing is slower than 2nd-gen Maxwell and Pascal? Like between 1.4x and 4.2x slower?

That is, if I am reading Table 3 (NVENC performance) correctly?

Basically, since the RTX 2080 Ti (TU102) has one NVENC while the 1080 Ti (GP102) has two, the latest and greatest (and most expensive so far) will be two times slower than the previous generation, at least when looking at the high-quality two-pass preset for H.264?

Footnote 1 in the table here:

is telling me that one Turing NVENC is equivalent to two Pascal NVENCs because of improvements in performance and quality.

And yet Table 3 from the NVENC application note in SDK 9.0 says that the Quadro RTX 8000 pushes 306 FPS on ONE NVENC engine, while the Quadro P2000, which is not even in the same price range or directly comparable, pushes 432 FPS (1.4x faster), also with ONE NVENC engine?!?

Not to mention that you don’t give numbers for the Quadro P5000 or P6000, which have TWO NVENC engines and would thus push at least 864 FPS (2.8x faster!), if not more?!?

And if you consider that GP100 or GV100 have 3 NVENC engines, the way this is presented really turns into an outright insult to our intellect.
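For what it’s worth, the ratios quoted above do check out arithmetically (the FPS figures are from Table 3 of the NVENC application note; the per-board engine counts are from public spec sheets):

```python
# Sanity-check the throughput ratios claimed in the posts above.
rtx_8000_fps = 306    # Quadro RTX 8000, one Turing NVENC (Table 3)
p2000_fps = 432       # Quadro P2000, one Pascal NVENC (Table 3)
p6000_engines = 2     # Quadro P5000/P6000 carry two NVENC engines

single_engine_ratio = p2000_fps / rtx_8000_fps
dual_engine_ratio = (p2000_fps * p6000_engines) / rtx_8000_fps

print(round(single_engine_ratio, 2))  # ~1.41x
print(round(dual_engine_ratio, 2))    # ~2.82x
```

So the “1.4x” and “2.8x” figures follow directly from the published numbers, assuming a second engine scales throughput linearly.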

I understand that NVIDIA is trying to make a good all-around product and that some sacrifices had to be made to make room for RT and Tensor cores. But you absolutely should not go below the current highest performance with your next-generation product while jacking up prices and claiming the new parts are, in fact, faster, without real evidence to support that claim. Or you do what you did, but then you can’t fault people for not buying your arguments or your products.

What a joke NVIDIA has become as of late. Shame on you guys.

…and wasn’t it nice of Nvidia to announce this to all those people who registered for “early access”…!

The latest FFmpeg Zeranoe Windows builds now include SDK 9 and the new “b_ref_mode” flag.

Although whenever I set b_ref_mode to middle using the latest Zeranoe Windows build, I get a huge stream of “Invalid DTS: xxxxx, invalid PTS: xxxxx in output stream 0:0, replacing by guess” messages…
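For context, a minimal invocation that exercises the new flag looks something like this (the file names and bitrate are placeholders; this assumes an SDK 9-capable driver, 418.30 or later, and a Turing GPU, since “middle” requires hardware support for B-frames as references):

```shell
# h264_nvenc with B-frames usable as references ("middle" = the middle
# B-frame in each group can serve as a reference).
# Placeholders: input.mp4, output.mp4, 5M bitrate.
ffmpeg -i input.mp4 -c:v h264_nvenc -preset slow -b_ref_mode middle -b:v 5M output.mp4
```

On GPUs or drivers without support, h264_nvenc rejects the mode rather than silently ignoring it, so this is also a quick way to check whether your build and driver actually expose the SDK 9 feature.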

We understand that the Video Codec SDK 9.0 release was a bit delayed and we appreciate your patience. We will try to improve the gap between the hardware and the SDK release in the future.

About the early-access notification: as you now know, we decided to go directly to a public release of SDK 9.0, which is why many of you did not get any response from us despite applying for early access. We have also had some staffing issues in monitoring the forums, so Ryan and other NVIDIA staff have not been very active here for the past few weeks.

However, rest assured that we do monitor these forums. Even if you don’t get a response to each query, we are taking every piece of feedback and every question and internalizing it to prioritize the engineering roadmap.

Thank you.

Thanks for the answer. I understand there was a lot of work involved with the next-gen GPU. I have one very important question: the quality of Turing NVENC is very good, but can we expect any significant performance improvements in the current RTX implementation of NVENC? This question is important for us because we have been designing video encoding systems based on Nvidia GPUs with our partners for almost 3 years.

[never mind]

Hello Thunderm,

As you may be aware, performance and quality are a trade-off. In Turing, the major focus has been on improving NVENC encoding quality, so at higher quality, some loss of performance (compared to older GPUs) is expected. At the higher quality settings, further NVENC performance improvements are unlikely on Turing GPUs.

May I ask if there is a difference in performance between the Tesla T4 and the Quadro RTX 5000?

SDK 9.0.18 -> 9.0.20

9.0.20 removes the mention of TU117 from “NVENC_Application_Note.pdf” and “NVENC_VideoEncoder_API_ProgGuide.pdf”.

GEFORCE GTX 1650 (TU117)

-> NVIDIA Encoder (NVENC): Yes