Video Encode and Decode GPU Support Matrix

Hello @Benutzer799 and welcome to the NVIDIA developer forums!

All supported formats are listed in the matrix, so 4:2:2 is not natively supported. Since there is a considerable difference between 4:4:4 (no chroma subsampling) and 4:2:2, you should not assume its inclusion here.
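To illustrate the gap, here is a minimal Python sketch (the frame size is just an example): 4:4:4 keeps full chroma resolution, 4:2:2 halves it horizontally, and 4:2:0 halves it in both dimensions.

```python
# Chroma plane dimensions for a 1920x1080 frame under common
# subsampling schemes (planar YUV; illustrative sketch only).
w, h = 1920, 1080
schemes = {
    "4:4:4": (w, h),           # full chroma resolution
    "4:2:2": (w // 2, h),      # half horizontal chroma resolution
    "4:2:0": (w // 2, h // 2), # half horizontal and vertical
}
for name, (cw, ch) in schemes.items():
    print(f"{name}: luma {w}x{h}, each chroma plane {cw}x{ch}")
```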

Hope that helps.

Could the RTX 40 series GPUs and the L40 GPU be added to the matrix?

Why does the RTX 40 Ada Lovelace lineup lack support for 4:2:2 HEVC formats? Please clarify why we are expected to rely on Intel’s iGPU (Quick Sync) for decoding these codecs when NVIDIA’s own dGPU is not equipped to do so. NVIDIA’s failure to support the latest codecs forces us to narrow our choices in the CPU department. How long do I have to continue buying outdated 10nm++++ Intel CPUs?

Also, the 3080 Ti Mobile GPU is incorrectly labelled as ‘GA106’ when it is actually GA103S, so please amend this.

Hello,
Is this table up to date? Can you check this issue?

Thank you

I’m trying to better understand columns 5-7 of the decode matrix here. What’s the difference between “# OF CHIPS”, “# OF NVDEC/CHIP”, and “Total # OF NVDEC”?

For example, in the new RTX Ada/Quadro line, there are some that are 1-4-1, some that are 1-2-2, etc. I can’t find the correlation beyond simply “more is better”.

Hi there @bbuckley4444 and welcome to the NVIDIA developer forums!

I thought I could answer this immediately, but for some combinations, like the “1 4 2” combo, I cannot. I reached out internally to get some clarification.

Thanks!

Hello again @bbuckley4444,

There will shortly be an update to the support matrix tables which will make this clearer.

And for GeForce and Workstation cards it is safe to go with the “more is better” approach.

Thanks!

Thanks for the update @MarkusHoHo. Looks like that’s been changed now.

I guess this turns into more of a product question now, but how does that look in practice for the total number of decode streams? Is it simply double the number of streams if the Total # of NVDEC is 2 instead of 1? (My use case is ingesting 8+ 1080p60 capture card inputs in OBS.)

The interesting thing is they all say 1 chip now, including on the higher end Ada cards that are usually described as having dual encoder chips. So do the higher end Ada cards just have a single but larger encode/decode chip? I’ve messed around with the dual encoding on my 4090 but am looking at getting an RTX 4000 SFF for a portable rig.

Please, could you add Jetson devices to this matrix?

@bbuckley4444 If you choose a GPU with 2 NVDEC instances then, depending on available memory and bandwidth of course, you will theoretically be able to decode double the number of streams. In practice your mileage will vary due to other possible overhead.
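If you want to verify this empirically, one rough approach (a sketch, assuming an ffmpeg build with CUDA/NVDEC support; the test clip name is a placeholder) is to launch several hardware decodes in parallel and check whether they keep up:

```python
# Rough probe: decode N streams in parallel on the GPU and time it.
import subprocess
import time

N = 8  # number of simultaneous 1080p60 decodes to attempt
cmd = ["ffmpeg", "-hwaccel", "cuda", "-i", "test_1080p60.mp4",
       "-f", "null", "-"]  # decode only, discard the output

start = time.time()
procs = [subprocess.Popen(cmd, stdout=subprocess.DEVNULL,
                          stderr=subprocess.DEVNULL) for _ in range(N)]
for p in procs:
    p.wait()
elapsed = time.time() - start
print(f"Decoded {N} streams in {elapsed:.1f}s")
# If elapsed stays below the clip's duration, all N streams together
# decoded faster than real time, i.e. the GPU still has headroom.
```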

The “Chip” count means the number of physical GPU dies on the board. Since no consumer cards exist that have more than one GPU die, you will not find anything but 1 there. Of course, some server setups have 8 or 16 GPUs, while the old Tesla M10, for example, had 4.

But GPUs of the same generation will also have the same generation of NVENC/NVDEC engines, with no differences.
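Put differently, the three columns simply multiply out; a tiny Python illustration with made-up values:

```python
# Illustrative values only, not taken from any specific SKU.
num_chips = 1        # "# OF CHIPS": physical GPU dies on the board
nvdec_per_chip = 2   # "# OF NVDEC/CHIP": decode engines per die
total_nvdec = num_chips * nvdec_per_chip  # "Total # OF NVDEC" -> 2
```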

@masip85 I don’t think that would make much sense, since Jetson hardware uses different video technology than the GPUs in this matrix. For video capabilities it is easier to check the detailed tech specs on the Jetson pages, for example Jetson Orin.

I hope this helps.

On the NVIDIA specifications webpage, AV1 is not present.

But here, AV1 is specified:

Here too:

Is that correct?

Here it works… but is it hardware accelerated?

Yes, as far as I understand there is a HW-accelerated AV1 encoder with AGX Orin (only).
See:
https://docs.nvidia.com/jetson/archives/r35.1/DeveloperGuide/text/SD/Multimedia/AcceleratedGstreamer.html#gstreamer-1-0-plugin-reference

You can find this in the NVIDIA download center as well. For example, searching for all Jetson Orin variants will list the data sheets, which show that Orin Nano supports AV1 encode through software, while Orin NX lists it as supported by the SoC.
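If you want to try the hardware path on AGX Orin, here is a minimal sketch based on the AcceleratedGstreamer guide linked above (the nvv4l2av1enc element is documented there; the synthetic source and output file name are placeholders):

```python
# Run a GStreamer AV1 encode pipeline from Python; nvv4l2av1enc is the
# hardware AV1 encoder element per the guide linked above.
import subprocess

pipeline = (
    "gst-launch-1.0 videotestsrc num-buffers=300 "
    "! 'video/x-raw, width=(int)1920, height=(int)1080, format=(string)I420' "
    "! nvvidconv "
    "! 'video/x-raw(memory:NVMM), format=(string)I420' "
    "! nvv4l2av1enc bitrate=8000000 "
    "! filesink location=test.av1"
)
subprocess.run(pipeline, shell=True, check=True)
```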

Could the Blackwell GPUs be added to the Video Encode and Decode GPU Support Matrix?

They will, as soon as there are actual SKUs available to buy.

Thanks for your patience!

Environment: RTX 4080 Ti, Video Codec SDK 12.0, CUDA 11.1, OpenCV
How can I work around the GPU encoding resolution limit? I have 8x 4K video streams that need to be composited into a single super-high-resolution stream of 2 rows and 4 columns (3840 × 4 by 2160 × 2, i.e. 15360 × 4320) for encoding and streaming. However, according to the support matrix, the maximum resolution for H.264 encoding is 4K and for H.265 encoding is 8K. How can I get past the encoding resolution limit and GPU-encode this ultra-high resolution (15360 × 4320) or higher, or what other methods could achieve this?
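For reference, one workaround that stays within the published limits (a sketch, assuming matrix values of roughly 4096×4096 for H.264 and 8192×8192 for HEVC; not an official recommendation) is to split the composited canvas into tiles, encode each tile as its own stream, and reassemble them on the receiving side:

```python
# Sketch: split an oversized canvas into the fewest tiles that each fit
# within the assumed HEVC NVENC limit, encoding each tile separately.
CANVAS_W, CANVAS_H = 3840 * 4, 2160 * 2   # 15360 x 4320 composite
MAX_W, MAX_H = 8192, 8192                 # assumed HEVC NVENC limit

cols = -(-CANVAS_W // MAX_W)  # ceil division: tiles needed horizontally
rows = -(-CANVAS_H // MAX_H)  # ceil division: tiles needed vertically
tile_w, tile_h = CANVAS_W // cols, CANVAS_H // rows  # assumes even split

print(f"Encode as {cols} x {rows} tiles of {tile_w} x {tile_h} each")
# -> 2 x 1 tiles of 7680 x 4320, each within the 8192 x 8192 limit;
# the receiver stitches the tile streams back together for display.
```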