Announcing general availability for DeepStream 5.0

Key feature highlights

• New inference capability with Triton Inference Server enables developers to deploy models natively in their training framework.

• Choice of developing DeepStream pipelines in either C/C++ or Python, with comparable performance

• Build and deploy DeepStream apps natively in Red Hat Enterprise Linux (RHEL)

• Enhanced IoT capabilities, including edge/cloud bi-directional messaging and OTA updates
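For context on the Triton integration above: DeepStream 5.0 introduces a Gst-nvinferserver plugin that is driven by a protobuf-style text config file pointing at a Triton model repository. The sketch below is illustrative only; the model name, repository path, and class count are placeholders, and the exact field set should be checked against the sample configs shipped with the SDK.

```
# Hypothetical nvinferserver config sketch (placeholders, not shipped assets)
infer_config {
  unique_id: 1
  gpu_ids: [0]
  max_batch_size: 4
  backend {
    triton {
      model_name: "my_detector"      # placeholder model in the Triton repo
      version: -1                    # -1 = use the latest available version
      model_repo {
        root: "./triton_model_repo"  # local Triton model repository
      }
    }
  }
  postprocess {
    detection { num_detected_classes: 4 }   # placeholder class count
  }
}
input_control {
  process_mode: PROCESS_MODE_FULL_FRAME    # run inference on full frames
  interval: 0                              # infer on every frame
}
```

Because the model lives in a standard Triton repository, the same model files can be served both by standalone Triton and by a DeepStream pipeline without conversion.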

Read more about the announcement and the full list of new features in our developer news article: https://news.developer.nvidia.com/deepstream-5-0-tlt-2-0-ga/

Product page: https://developer.nvidia.com/deepstream-sdk

Getting started: https://developer.nvidia.com/deepstream-getting-started

Pull the DeepStream container from NVIDIA NGC

Watch the new video tutorial “DeepStream SDK: Best Practices for Performance Optimization” on YouTube

Read the updated feature explainer blog “Building Intelligent Video Analytics Apps Using NVIDIA DeepStream 5.0” (updated for GA): https://developer.nvidia.com/blog/building-iva-apps-using-deepstream-5-0-(updated-for-ga)

New webinar: Create Intelligent Places Using NVIDIA Pre-trained Vision Models and DeepStream SDK: https://info.nvidia.com/iva-occupancy-webinar-reg-page.html

Transfer Learning Toolkit (TLT) 2.0 is also generally available – to find out more, see “Announcing general availability for Transfer Learning Toolkit 2.0”


The redaction and reference apps are now live on GitHub for DeepStream 5.0 GA.
Please refer to the links:

Redaction: NVIDIA-AI-IOT/redaction_with_deepstream – an example of using the DeepStream SDK for redaction
Reference apps: NVIDIA-AI-IOT/deepstream_reference_apps – samples for TensorRT/DeepStream on Tesla and Jetson