Video Encode and Decode GPU Support Matrix

Hello @Benutzer799 and welcome to the NVIDIA developer forums!

All supported formats are listed in the matrix, so 4:2:2 is not natively supported. Since there is a considerable difference between 4:4:4 (no chroma subsampling) and 4:2:2, you should not assume that support for one implies the other.

Hope that helps.

Could the RTX 40 series GPUs and the L40 GPU be added to the matrix?

Why does the RTX 40 Ada Lovelace lineup lack support for 4:2:2 HEVC formats? Please clarify why we are expected to rely on Intel's iGPU (Quick Sync) for decoding these formats when NVIDIA's own dGPU is not equipped to do so. NVIDIA's failure to support the latest codecs forces us to narrow our choices in the CPU department. How long do I have to continue buying outdated 10nm++++ Intel CPUs?

Also, the 3080 Ti Mobile GPU is incorrectly labelled as 'GA106' when it is actually GA103S, so please amend this.

Hello,
Is this table up to date? Can you check this issue?

Thank you

I'm trying to better understand columns 5-7 of the decode matrix here. What's the difference between "# OF CHIPS", "# OF NVDEC/CHIP", and "Total # OF NVDEC"?

For example, in the new RTX Ada/Quadro line, there are some that are 1-4-1, some are 1-2-2, etc. I can't find the correlation beyond simply "more is better".

Hi there @bbuckley4444 and welcome to the NVIDIA developer forums!

I thought I could answer this immediately, but for the "1 4 2" combination, for example, I cannot. I reached out internally to get some clarification.

Thanks!

Hello again @bbuckley4444,

There will shortly be an update to the support matrix tables which should make this clearer.

And for GeForce and Workstation cards it is safe to go with the ā€œmore is betterā€ approach.

Thanks!

Thanks for the update @MarkusHoHo. Looks like that's been changed now.

I guess this turns into more of a product question now, but how does that look in practice for the total number of decode streams? Is it just double the number of streams if there are 2 vs 1 Total # of NVDEC? (my use case is ingesting 8+ 1080p60 capture card inputs in OBS).

The interesting thing is they all say 1 chip now, including on the higher-end Ada cards that are usually described as having dual encoder chips. So do the higher-end Ada cards just have a single but larger encode/decode chip? I've messed around with the dual encoding on my 4090 but am looking at getting an RTX 4000 SFF for a portable rig.

Please, could you add Jetsons to this matrix?

@bbuckley4444 If you choose a GPU with 2 NVDEC instances then, depending on available memory and bandwidth of course, you will theoretically be able to decode double the number of streams. In practice your mileage will vary due to other possible overhead.
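
If you want a rough feel for how many streams your particular card sustains, a quick sketch like the one below might help: it launches several decode-only ffmpeg jobs in parallel and times them. It assumes an ffmpeg build with CUDA/NVDEC support; the clip name and stream count are just placeholders for your own test material.

```python
# Rough NVDEC throughput check: run N decode-only ffmpeg jobs in parallel and time them.
# Assumes ffmpeg was built with CUDA/NVDEC support; "test_1080p60.mp4" is a placeholder clip.
import subprocess
import time

N_STREAMS = 8
CLIP = "test_1080p60.mp4"

cmd = [
    "ffmpeg", "-v", "error",
    "-hwaccel", "cuda",   # use NVDEC for decoding
    "-i", CLIP,
    "-f", "null", "-",    # decode only, discard the frames
]

start = time.time()
procs = [subprocess.Popen(cmd) for _ in range(N_STREAMS)]
for p in procs:
    p.wait()
print(f"Decoded {N_STREAMS} copies of {CLIP} in {time.time() - start:.1f}s")
```

If the total time drops roughly in half on a card with 2 NVDEC instances compared to one with 1, that matches the "double the number of streams" expectation above.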

The ā€œChipā€ count means the number of physical GPU dies on the Board. Since no consumer cards exist that have more than one GPU die, you will not find anything but 1 there. Of course some server setups have 8 or 16 GPUs, while the old Tesla M10 had 4 for example.

But GPUs of the same generation will also have the same generation of NVENC/NVDEC chips, with no differences between them.

@masip85 I don't think that would make much sense since Jetson hardware uses different video technology than the GPUs in this matrix. For video capabilities it is easier to check the detailed tech specs on the Jetson pages, for example Jetson Orin.

I hope this helps.

On the NVIDIA specifications webpage, AV1 is not present.

But here, AV1 is specified:

Here too:

Is that correct?

Here it works… but is it hardware accelerated?

Yes, as far as I understand there is a HW-accelerated AV1 encoder on AGX Orin (only).
See:
https://docs.nvidia.com/jetson/archives/r35.1/DeveloperGuide/text/SD/Multimedia/AcceleratedGstreamer.html#gstreamer-1-0-plugin-reference

You can find this in the NVIDIA download center as well. For example, searching for all Jetson Orin variants will list the data sheets, which show that Orin Nano supports AV1 encode through software, while Orin NX lists it as supported by the SoC.
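
For reference, here is a minimal sketch of what a hardware AV1 encode test on AGX Orin could look like using the GStreamer Python bindings. The element names (nvvidconv, nvv4l2av1enc) and the Matroska muxing are based on the AcceleratedGstreamer guide linked above, so please verify them against your JetPack release; properties such as bitrate are just example values.

```python
# Minimal sketch of HW AV1 encoding on AGX Orin via GStreamer (PyGObject).
# Element names follow the AcceleratedGstreamer guide; verify against your JetPack version.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

pipeline = Gst.parse_launch(
    "videotestsrc num-buffers=300 "
    "! video/x-raw,width=1920,height=1080,framerate=30/1 "
    "! nvvidconv "                          # copy frames into NVMM (device) memory
    "! video/x-raw(memory:NVMM),format=NV12 "
    "! nvv4l2av1enc bitrate=8000000 "       # HW AV1 encoder (AGX Orin only)
    "! matroskamux ! filesink location=test_av1.mkv"
)
pipeline.set_state(Gst.State.PLAYING)

# Block until the stream finishes or an error is reported, then clean up.
bus = pipeline.get_bus()
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                       Gst.MessageType.EOS | Gst.MessageType.ERROR)
pipeline.set_state(Gst.State.NULL)
```

The same pipeline description should also work as a one-line gst-launch-1.0 command if you just want a quick check from the terminal.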

Could the Blackwell GPUs be added to the Video Encode and Decode GPU Support Matrix?

They will, as soon as there are actual SKUs available to buy.

Thanks for your patience!

Environment: RTX 4080 Ti, Video Codec SDK 12.0, CUDA 11.1, OpenCV
How can I work around the GPU encoding resolution limitation? I have 8x 4K video streams that need to be composited into a super-high-resolution video stream of 2 rows and 4 columns (3840 × 4 by 2160 × 2, i.e. 15360 × 4320) for encoding and streaming. However, according to the support matrix for the GPU, the maximum supported resolution for H.264 encoding is 4K and the maximum for H.265 encoding is 8K. How can I get past the encoding resolution limit and perform GPU encoding at ultra-high resolutions (15360 × 4320) or higher, or what other methods could be used to achieve this?

If the application controls both encoder and decoder, this could be done with separate streams. Like 4x 4K streams independently encoded in 4 different encoding sessions. As long as the decode side of the application knows how to display it/compose the frames this should be fine.
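
Just to illustrate the idea (not a tested pipeline): the sketch below crops a 15360x4320 composite into four 3840x4320 columns and gives each its own hevc_nvenc session from a single ffmpeg process. The input name and RTSP endpoints are placeholders, and the exact flags will need adapting to your source and streaming server.

```python
# Sketch: split one 15360x4320 composite into 4 column tiles and encode each tile
# in its own NVENC session. Input and endpoints are placeholders; flags need adapting.
import subprocess

SRC = "composite_15360x4320.mp4"       # placeholder composite source
TILE_W, TILE_H, COLS = 3840, 4320, 4   # each tile stays within the HEVC NVENC limit

# Build a filtergraph: split the input, then crop each branch to one column.
labels = "".join(f"[v{i}]" for i in range(COLS))
graph = f"[0:v]split={COLS}{labels};" + ";".join(
    f"[v{i}]crop={TILE_W}:{TILE_H}:{i * TILE_W}:0[t{i}]" for i in range(COLS)
)

cmd = ["ffmpeg", "-i", SRC, "-filter_complex", graph]
for i in range(COLS):
    cmd += [
        "-map", f"[t{i}]",
        "-c:v", "hevc_nvenc",
        "-f", "rtsp", f"rtsp://example-server/tile{i}",  # placeholder endpoints
    ]
subprocess.run(cmd, check=True)
```

Because all four outputs come from the same process reading the same input, their timestamps are derived from one source, which should make it easier to keep the tiles in sync on the receiving side.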

Hope this helps.

Thanks for your reply.

Yes, the current idea is to split the large-resolution image into segments (the large-resolution image has to be assembled in advance because an image algorithm needs it, together with the algorithm's results), and then use multiple encoding sessions to push the streams separately.

But there are two problems with this:

  1. The resolution is actually very large: it is a mosaic of more than ten 4K cameras, which requires many encoding sessions for RTSP streaming.
  2. After the multiple streams are pushed out, the receiving application's decoder needs to keep each video tile synchronized, which is also difficult to control.

It would be nice if the NVIDIA encoder could directly support large-resolution encoding; theoretically, the computing power of the 4080 Ti Super should be able to meet the needs of large-resolution encoding, but unfortunately the API is limited.

I used FFmpeg for testing: with the CPU encoder, large-resolution encoding works, but it is extremely slow, and specifying the NVIDIA encoder reports an encoding parameter limit error (for the resolution parameter).

Congratulations on the release of the RTX 50 series!

Could they be added to the Video Encode and Decode GPU Support Matrix?

Thank you @EwoutH.

And yes, they will be added in time. We will try to remember to announce it here as soon as they are.