Over the past couple of years, our open-source team, Neuralet, has focused on developing and improving Computer Vision (CV) models for real-world applications, so we are always looking for tools that help us make our applications more practical. The Nvidia DeepStream SDK is one of the best tools we have worked with in that time. Over the past year, our development team studied this excellent video analytics toolkit and its Python bindings for further customization, and used it to optimize several of our CV models for real-time applications. It has been fantastic for building end-to-end video analytics services: it is fast, compatible with Nvidia GPUs, and well suited to multi-stream and real-time use cases.
To introduce DeepStream and ease the learning process of working with it, especially its Python bindings, we started writing a series of articles about it. In these articles, we introduce the essential components of the Nvidia DeepStream pipeline, walking through it and explaining the important Gst-Elements, Source and Sink Pads, Probes and their capabilities, the Gst-Buffer, Metadata, and more. We also explain how we used DeepStream and the Triton Inference Server to deploy our Adaptive Object Detection on Jetson and x86 devices. Finally, in the last article, we get our hands dirty by building a Face Anonymizer as a use case that shows how the DeepStream Python bindings work in practice.
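For readers who have not seen these terms before, here is a minimal sketch, assuming the standard DeepStream Python bindings (`pyds`) and GStreamer's PyGObject API, of how a probe attached to an element's Sink Pad can read the batch metadata carried by a Gst-Buffer. The element choice and callback names are illustrative, not code taken from the articles:

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds  # DeepStream Python bindings

def osd_sink_pad_buffer_probe(pad, info, user_data):
    # The probe callback receives the Gst-Buffer flowing through the pad.
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK
    # DeepStream attaches batch-level metadata to the buffer.
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        print(f"frame {frame_meta.frame_num}: {frame_meta.num_obj_meta} objects")
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK

Gst.init(None)
# 'nvdsosd' draws boxes and labels; hooking a probe on its sink pad lets us
# inspect the metadata just before it is rendered.
osd = Gst.ElementFactory.make("nvdsosd", "onscreendisplay")
osd_sink_pad = osd.get_static_pad("sink")
osd_sink_pad.add_probe(Gst.PadProbeType.BUFFER, osd_sink_pad_buffer_probe, 0)
```

The articles in the series build on exactly this pattern, from constructing the pipeline elements to reading and modifying metadata inside probes.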
Here you can find a series of questions you'll be able to answer by reading this trilogy: