Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) RTX 3060
• DeepStream Version 6.1
• JetPack Version (valid for Jetson only)
• TensorRT Version 8.4.3-1+cuda11.6
• NVIDIA GPU Driver Version (valid for GPU only) 510.85.02
• Issue Type (questions, new requirements, bugs) questions
Say I have trained a custom detection model. How can I run inference with it in DeepStream 6.1 on multiple RTSP streams, and save a frame from each stream whenever my model detects objects, using Python?
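
To make the question concrete, below is a rough sketch of the kind of pad probe I have in mind for the frame-saving part, based on the deepstream_python_apps samples. Names such as tiler_sink_pad_buffer_probe and the detections folder are placeholders, and I have not verified this end-to-end:

```python
# Sketch of a buffer probe placed on the sink pad of the tiler/OSD, downstream of nvinfer.
# Assumptions (not confirmed for my exact pipeline):
#  - one uridecodebin per RTSP URI feeding an nvstreammux batch,
#  - an nvvideoconvert + capsfilter forcing RGBA before this element
#    (needed for pyds.get_nvds_buf_surface),
#  - on dGPU, nvbuf-memory-type set to CUDA unified memory.

import os
import cv2
import numpy as np
import pyds
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

SAVE_DIR = "detections"  # placeholder output folder
os.makedirs(SAVE_DIR, exist_ok=True)


def tiler_sink_pad_buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)

        # Only save the frame if the detector produced at least one object.
        if frame_meta.num_obj_meta > 0:
            # RGBA numpy view of this source's frame within the batch.
            frame_image = pyds.get_nvds_buf_surface(hash(gst_buffer),
                                                    frame_meta.batch_id)
            frame_bgr = cv2.cvtColor(np.array(frame_image, copy=True),
                                     cv2.COLOR_RGBA2BGR)
            # One file per source (RTSP stream) and frame number.
            out_path = os.path.join(
                SAVE_DIR,
                f"source_{frame_meta.source_id}_frame_{frame_meta.frame_num}.jpg")
            cv2.imwrite(out_path, frame_bgr)

        try:
            l_frame = l_frame.next
        except StopIteration:
            break

    return Gst.PadProbeReturn.OK
```

My main uncertainty is how to wire the multiple RTSP sources (one uridecodebin per URI into nvstreammux) and whether saving frames per source this way is the recommended approach.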
Thanks in advance