Please provide complete information as applicable to your setup.
• Hardware Platform (GPU)
• DeepStream Version 7.1
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only) 12.6
I am working on a DeepStream-based application for a multi-camera setup and need advice on designing the pipeline. Below is the structure of the job configuration our system uses and the challenges we face.
Job Template (Overview):
Each job consists of:
- Job Metadata:
  - Job ID (`id`): Unique identifier for the job.
  - Name (`name`): Descriptive name for the task, e.g., PPE Demo.
  - Associated Site (`site_id`): ID linking this job to a physical location or project.
  - Status (`status`): Tracks the lifecycle of the job (e.g., initializing, active).
  - Created and Updated Timestamps (`created_at`, `updated_at`).
- Task Modules:
- A set of enabled or disabled modules defining the types of tasks to perform. Examples:
- PPE: Personal protective equipment detection (enabled in this job).
- Forklift: Forklift detection (disabled).
- Work at Height: Detection of hazardous work conditions (disabled).
- Streams:
- Multiple video streams can be associated with a job. Each stream has:
  - Stream ID (`id`) and Name (`name`): Unique identifiers and labels.
  - RTSP URL (`rtsp_url`): The video stream to be processed.
  - Status (`status`): Indicates the stream’s state (e.g., active).
  - Task ID (`task_id`): N/A
  - Task Status (`task_status`): N/A
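For concreteness, here is a hypothetical JSON rendering of one such job. The field names come from the template above; all values (IDs, names, URLs, timestamps) are purely illustrative:

```json
{
  "id": "job-1",
  "name": "PPE Demo",
  "site_id": "site-7",
  "status": "active",
  "created_at": "2025-01-10T08:00:00Z",
  "updated_at": "2025-01-10T09:30:00Z",
  "modules": {
    "ppe": true,
    "forklift": false,
    "work_at_height": false
  },
  "streams": [
    {
      "id": "cam-01",
      "name": "Loading Dock",
      "rtsp_url": "rtsp://192.168.1.10/stream1",
      "status": "active",
      "task_id": null,
      "task_status": null
    }
  ]
}
```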
Challenges:
- Single Pipeline vs. Multiple Pipelines:
- Should we implement a single pipeline to handle all streams and modules, or separate pipelines for each job/stream?
- In a single pipeline, how can we dynamically enable or disable models (e.g., skip PPE detection if “PPE” is disabled)?
- Routing and Scalability:
- For jobs with multiple streams and modules, what’s the best way to route streams to specific models (e.g., one stream processes only PPE, another processes multiple tasks)?
- Can a single pipeline scale to handle 10–20 streams with varied tasks?
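To make the routing question concrete, below is a minimal sketch of the per-stream routing logic we have in mind, assuming a single shared pipeline with one inference branch per module. The `JOBS` table, module names, and stream IDs are illustrative, not part of any DeepStream API; in the actual pipeline this lookup would back a pad probe on each inference branch that drops buffers (e.g., `Gst.PadProbeReturn.DROP`) for streams whose job has that module disabled:

```python
# Hypothetical routing table mapping jobs -> enabled modules and streams.
# All names here are illustrative placeholders.
JOBS = {
    "job-1": {
        "modules": {"ppe": True, "forklift": False, "work_at_height": False},
        "streams": ["cam-01", "cam-02"],
    },
    "job-2": {
        "modules": {"ppe": True, "forklift": True, "work_at_height": False},
        "streams": ["cam-03"],
    },
}

def branches_for_stream(stream_id: str) -> set:
    """Return the set of enabled model branches for a given stream.

    A pad probe on each inference branch could consult this set and
    drop the buffer when the stream's job has the module disabled,
    so a single pipeline can serve streams with different task mixes.
    """
    for job in JOBS.values():
        if stream_id in job["streams"]:
            return {m for m, enabled in job["modules"].items() if enabled}
    return set()  # unknown stream: run no models

print(sorted(branches_for_stream("cam-01")))  # ['ppe']
print(sorted(branches_for_stream("cam-03")))  # ['forklift', 'ppe']
```

The open question is whether this probe-and-drop approach scales cleanly to 10–20 streams, or whether separate pipelines per job are the more robust design.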