Does DeepStream Support Blackwell?

Please provide complete information as applicable to your setup.

• dGPU
• DeepStream 7.1
• Question
• Is NVIDIA DeepStream able to run on an NVIDIA Blackwell architecture GPU (5080 or 5070)?

Thanks

Currently no.

Thanks.

The main thing I would like is the ability to decode H.264 with a 4:2:2 profile, and Blackwell is the only NVDEC architecture that can do that. Is there a way to rebuild the GStreamer NVDEC plugin against the new Video Codec SDK so that it can decode the new stream types?

No. It is not supported at the moment.

@Fiona.Chen Following up here. Is there any timeline for supporting the SM 12.0 architecture (Blackwell)? Some of our clients require DeepStream on the RTX 5070 Ti, and the hardware has already been procured.


Please be patient and wait for the future release.

Any news?


There will be an announcement in the forum when the new version is available.


Can we get a hint on timeframe? Would it be measured in weeks or months? With no Blackwell Jetsons, is this a generation that could be skipped? Do we have to wait for Rubin?


The Blackwell Jetson (Jetson Thor) is supposed to be released sometime later this year:

Oh, I didn’t know that, and it does narrow things down a bit. It would almost need to be less than 7 months away.

Is there an expected release timeframe you can give us? Is it going to be Q4 of this year?

That is already all we can tell you.

DeepStream users with Blackwell GPUs (RTX 50 series) will encounter “Unsupported SM” errors.
However, there’s a workaround: DeepStream can work with Blackwell GPUs when using the Triton Inference Server as a backend instead of the native inference engine.

Technical Background

According to NVIDIA documentation, CUDA applications built using CUDA Toolkit versions 2.1 through 12.8 are compatible with Blackwell GPUs, provided they include PTX versions of their kernels.
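
As a rough illustration of the PTX path described above (this is not from the DeepStream documentation): a kernel shipped as PTX is JIT-compiled by the driver for whatever SM the installed GPU reports, which is why toolchains that predate Blackwell can still run on it. The sketch below uses Numba purely because it emits PTX at runtime; the package choice and the numbers are illustrative.

```python
# Illustrative sketch of PTX forward compatibility: Numba emits PTX at run
# time and the CUDA driver JIT-compiles it for the installed GPU's SM, so the
# same code runs on an architecture newer than the toolchain that produced it.
# Requires numpy, numba, and a driver recent enough to recognize the GPU.
import numpy as np
from numba import cuda

@cuda.jit
def scale(out, x, factor):
    i = cuda.grid(1)
    if i < x.size:
        out[i] = x[i] * factor

x = np.arange(1024, dtype=np.float32)
d_x = cuda.to_device(x)
d_out = cuda.device_array_like(d_x)

threads = 256
blocks = (x.size + threads - 1) // threads
scale[blocks, threads](d_out, d_x, 2.0)

print(cuda.get_current_device().compute_capability)  # e.g. (12, 0) on RTX 50xx
print(d_out.copy_to_host()[:4])                       # [0. 2. 4. 6.]
```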

The Problem with gst-nvinfer

When attempting to use the standard gst-nvinfer plugin on Blackwell GPUs, users will encounter:

ERROR: Error Code 1: Internal Error (Unsupported SM: 0xc00)

Root Cause: The TensorRT and CUDA versions bundled with the current DeepStream release don’t include support for Blackwell’s SM (Streaming Multiprocessor) architecture. This results in the inference engine being unable to initialize on RTX 50 series cards.
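
A quick way to confirm you are hitting this is to compare the GPU's compute capability with the TensorRT version visible to Python. The sketch below assumes the pycuda and tensorrt Python packages are installed alongside DeepStream; they are simply one convenient way to read those two values and are not part of any DeepStream documentation.

```python
# Hedged diagnostic sketch for the error above: print the GPU's SM version
# and the TensorRT version, to see whether the "Unsupported SM" failure is
# expected on this stack.
import pycuda.driver as drv
import tensorrt as trt

drv.init()
major, minor = drv.Device(0).compute_capability()
print(f"GPU 0 compute capability: SM {major}.{minor}")  # SM 12.0 on RTX 50xx
print(f"TensorRT version: {trt.__version__}")

trt_major, trt_minor = (int(p) for p in trt.__version__.split(".")[:2])
if (major, minor) >= (12, 0) and (trt_major, trt_minor) < (10, 8):
    print("This TensorRT build predates Blackwell support; "
          "gst-nvinfer is expected to fail with 'Unsupported SM'.")
```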

Solution: Using Triton Server as Separate Backend

Requirements

  • Triton Inference Server Release 25.02 or higher
  • The server should be deployed as a separate backend service

Why This Works

  • Triton Server 25.02+ includes TensorRT 10.8 with CUDA 12.8+ support
  • This combination provides the necessary compatibility layer for the Blackwell architecture (a quick client-side check is sketched below)
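
Before wiring DeepStream to it, it is worth confirming that the separately deployed Triton server is reachable and recent enough. A minimal sketch using the official tritonclient package, assuming the server's gRPC endpoint is at localhost:8001 (adjust for your deployment):

```python
# Minimal sketch: check that the standalone Triton server is live and report
# its version before pointing gst-nvinferserver at it. The gRPC URL
# localhost:8001 is an assumption about your deployment.
import tritonclient.grpc as grpcclient

client = grpcclient.InferenceServerClient(url="localhost:8001")
print("live: ", client.is_server_live())
print("ready:", client.is_server_ready())

meta = client.get_server_metadata()
# The core version string reported here maps to a container release;
# cross-check it against the 25.02+ release notes.
print("Triton server version:", meta.version)
```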

Testing Results

While the gst-nvinferserver plugin is not officially certified with the latest Triton Server versions, my testing shows it works without issues on Blackwell hardware.


Recommendation

For users with Blackwell GPUs who want to test DeepStream:

  1. Deploy Triton Server 25.02+ as a separate backend service
  2. Configure DeepStream to use the Triton backend via gst-nvinferserver (see the sketch after this list)
  3. The setup should work despite the lack of official certification
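
For step 2, a minimal pipeline sketch is below. The file names sample.h264 and config_infer_triton.txt are placeholders; the nvinferserver config file is where the Triton endpoint (e.g. its gRPC URL) and the model name are set, and those details depend on your deployment.

```python
# Minimal sketch: DeepStream pipeline that delegates inference to a remote
# Triton server through gst-nvinferserver. sample.h264 and
# config_infer_triton.txt are placeholders for your own stream and config.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

pipeline = Gst.parse_launch(
    "filesrc location=sample.h264 ! h264parse ! nvv4l2decoder ! m.sink_0 "
    "nvstreammux name=m batch-size=1 width=1920 height=1080 ! "
    "nvinferserver config-file-path=config_infer_triton.txt ! "
    "nvvideoconvert ! nvdsosd ! fakesink"
)

pipeline.set_state(Gst.State.PLAYING)
bus = pipeline.get_bus()
# Block until the stream ends or an error is raised.
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                       Gst.MessageType.EOS | Gst.MessageType.ERROR)
pipeline.set_state(Gst.State.NULL)
```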

Please note: This is an unofficial workaround. For test purposes only.
