How can I customize the DeepStream pipeline to process different RTSP streams with different sets of analytics?

Please provide complete information as applicable to your setup.

• Hardware Platform: Jetson Orin Nano
• DeepStream Version: 6.3
• JetPack Version: 5.1.2
• TensorRT Version: 8.5.2
• NVIDIA GPU Driver Version: N/A (Jetson)

Dear NVIDIA developer team,

Can you provide Python examples or references for how to customize the DeepStream pipeline to process different RTSP streams with different sets of analytics? I have 5 RTSP URLs and 5 analytics models (some are pre-trained models from NVIDIA DeepStream). I want to combine all of them into one main Python script. This is my expected scenario, e.g.:

RTSP 1-2: Inference with all 5 analytics models
RTSP 3-4: Inference with only the first, second, and fifth analytics models (no third or fourth model)
RTSP 5: Inference with only the first analytics model
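The per-stream routing described above can be captured as plain data before any pipeline is built. A minimal sketch, assuming hypothetical RTSP URLs and model config-file names (none of these paths come from the original post):

```python
# Map each RTSP source to the subset of model config files it should run.
# All URLs and config-file names below are hypothetical placeholders.
MODEL_CONFIGS = [
    "config_infer_model1.txt",
    "config_infer_model2.txt",
    "config_infer_model3.txt",
    "config_infer_model4.txt",
    "config_infer_model5.txt",
]

STREAM_ROUTING = {
    "rtsp://cam1/stream": [0, 1, 2, 3, 4],  # RTSP 1: all five models
    "rtsp://cam2/stream": [0, 1, 2, 3, 4],  # RTSP 2: all five models
    "rtsp://cam3/stream": [0, 1, 4],        # RTSP 3: models 1, 2, 5
    "rtsp://cam4/stream": [0, 1, 4],        # RTSP 4: models 1, 2, 5
    "rtsp://cam5/stream": [0],              # RTSP 5: model 1 only
}

def configs_for(url):
    """Return the list of model config files to attach to a given source."""
    return [MODEL_CONFIGS[i] for i in STREAM_ROUTING[url]]
```

Keeping the routing as data makes it easy to change which models run on which stream without restructuring the pipeline-building code.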

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

There is only a C/C++ sample for this use case: NVIDIA-AI-IOT/deepstream_parallel_inference_app — a project demonstrating how to use nvmetamux to run multiple models in parallel (github.com).
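That sample runs the models in parallel branches and merges metadata with nvmetamux. A simpler alternative that can be expressed from Python is to chain the selected nvinfer elements sequentially per stream. The sketch below only generates a gst-launch-style pipeline description string (it does not run DeepStream); the element names (rtspsrc, nvv4l2decoder, nvstreammux, nvinfer, nvdsosd) are standard DeepStream/GStreamer elements, but the config paths, resolutions, and sink choice are assumptions, not taken from the original thread:

```python
def build_pipeline_desc(url, config_files):
    """Build a gst-launch-style description that decodes one RTSP source,
    batches it through nvstreammux, and chains one nvinfer element per
    selected model config. Each nvinfer gets a distinct unique-id so its
    metadata can be told apart downstream."""
    infer_chain = " ! ".join(
        f"nvinfer config-file-path={cfg} unique-id={gie_id}"
        for gie_id, cfg in enumerate(config_files, start=1)
    )
    return (
        f"rtspsrc location={url} ! rtph264depay ! h264parse ! "
        f"nvv4l2decoder ! m.sink_0 "
        f"nvstreammux name=m batch-size=1 width=1280 height=720 ! "
        f"{infer_chain} ! nvvideoconvert ! nvdsosd ! fakesink"
    )

# Example: RTSP 5 runs only the first model.
desc = build_pipeline_desc("rtsp://cam5/stream", ["config_infer_model1.txt"])
```

A string like this could then be handed to Gst.parse_launch() in a Python GStreamer application; for true parallel branches per model (rather than a sequential chain), the nvmetamux approach from the C/C++ sample above is still the reference design.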

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.