Hi @fanzh,
I’ve made significant progress and many of the proposed scenarios are fully achievable. I also improved some points I considered important.
I implemented improvements in the DeepStream server based on nvmultiurisrcbin and nvurisrcbin. Most of it is working well; I am still dealing with some bugs or situations that have not been fully tested yet. The work focuses on:
- Full HLS support in nvurisrcbin and nvmultiurisrcbin
- Per-source parameters — each source uses its own overrides instead of static global parameters
- Analytics at source creation — nvdsanalytics configuration applied automatically when the stream becomes active (after stream_id is assigned)
- Metadata downstream — use of an existing field to carry add/update information to the downstream pipeline, accessible in the probe
- New API to update analytics at runtime
The main APIs changed or extended are:
- POST /api/v1/stream/add — add stream with HLS, per-source overrides, analytics, and metadata
- POST /api/v1/analytics/update — update analytics and/or metadata for a stream at runtime (NEW)
1. HLS Support (nvurisrcbin and nvmultiurisrcbin)
- Before: Limited or no HLS support in the multi-uri source.
- Now: Full HLS support in nvurisrcbin and nvmultiurisrcbin.
- For http(s)://...m3u8 URLs, the pipeline uses uridecodebin3 / hlsdemux2 internally as appropriate.
- Each HLS source can have its own overrides via the hls object in the add body (timeout, retries, auth, proxy, user-agent, extra_headers, etc.).
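As a minimal sketch (the helper name is mine, not the actual server code), the routing decision described above comes down to a check like this:

```python
from urllib.parse import urlparse

def is_hls_uri(uri: str) -> bool:
    # Heuristic: http(s) scheme and a path ending in .m3u8 takes the
    # HLS path (uridecodebin3 / hlsdemux2); everything else keeps the
    # regular nvurisrcbin handling. Query strings are ignored on purpose.
    parsed = urlparse(uri)
    return parsed.scheme in ("http", "https") and parsed.path.lower().endswith(".m3u8")
```

Using the path (not the full URI) means URLs with tokens such as `stream.m3u8?token=abc` are still detected as HLS.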
Example — Add HLS source with per-source configuration:
POST /api/v1/stream/add
{
  "key": "sensor",
  "value": {
    "camera_id": "hls001",
    "camera_name": "CDN Stream",
    "camera_url": "https://cdn.example.com/live/stream.m3u8",
    "change": "camera_add",
    "hls": {
      "is_live": true,
      "timeout": 15,
      "retries": 5,
      "user_id": "user",
      "user_pw": "secret",
      "user_agent": "DVP-Client/1.0",
      "extra_headers": { "Referer": "https://example.com" }
    }
  }
}
2. Per-Source Parameters (overrides on add)
- Before: The nvmultiurisrcbin plugin always created one nvurisrcbin instance per source, but all instances used the same (static/global) parameters.
- Now: The parameters sent when adding the source act as overrides for that source, so each nvurisrcbin instance can have different RTSP, HLS, decoder, or file settings.
- That is: in POST /api/v1/stream/add, the optional objects rtsp, hls, decoder, and file apply only to that source, allowing different RTSP, HLS, decoder, and file settings per camera/stream.
Override types supported on add:
| Source type | Body object | Description |
| --- | --- | --- |
| RTSP | rtsp | protocol, latency, reconnect_*, udp_buffer_size, drop_on_latency |
| HLS | hls | is_live, timeout, retries, auth, proxy, user_agent, extra_headers, etc. |
| Any | decoder | num_extra_surfaces, drop_frame_interval, skip_frames, low_latency_mode, disable_audio |
| File | file | loop |
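A sketch of the override semantics (names and defaults are illustrative, not the server's actual values): each per-source object is layered on top of the bin's global defaults, and only the keys present in the request body change.

```python
# Illustrative global defaults; the real nvmultiurisrcbin defaults differ.
GLOBAL_RTSP_DEFAULTS = {
    "protocol": "tcp",
    "latency": 100,
    "reconnect_interval": 10,
    "drop_on_latency": False,
}

def effective_params(defaults, overrides=None):
    # Per-source overrides win; anything not sent keeps the global value.
    merged = dict(defaults)
    merged.update(overrides or {})
    return merged
```

So a body containing only `"rtsp": {"latency": 300}` changes latency for that one source and leaves every other parameter, and every other source, untouched.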
3. Analytics at Source Creation
- Before: Analytics (nvdsanalytics) were not configured directly when adding the source, or depended on a global config file.
- Now:
- In POST /api/v1/stream/add you can send analytics_config inside value.
- The source is created and, when the pad reports the stream_id (assigned by nvstreammux), the server automatically applies the analytics configuration for that stream.
- No config file path is required in the request; the server uses the nvdsanalytics element already in the pipeline.
- Behavior: create source → wait for stream_id → apply analytics_config. Analytics becomes active as soon as the stream is active.
Example — Add stream with analytics at creation:
POST /api/v1/stream/add
{
  "key": "sensor",
  "value": {
    "camera_id": "cam001",
    "camera_name": "entrance",
    "camera_url": "rtsp://192.168.1.100/stream",
    "change": "add",
    "analytics_config": {
      "roi_filtering": {
        "enable": 1,
        "class_id": [-1],
        "inverse_roi": 0,
        "zones": { "road_area": "100,200;1800,200;1800,900;100,900" }
      },
      "line_crossing": {
        "enable": 1,
        "class_id": [0, 2],
        "zones": {
          "entry_line": "950,550;950,650;200,600;1700,600",
          "exit_line": "950,450;950,350;200,400;1700,400"
        }
      },
      "overcrowding": {
        "enable": 1,
        "object_threshold": 10,
        "time_threshold_ms": 2000,
        "class_id": [0],
        "zones": { "intersection": "400,300;1500,300;1500,800;400,800" }
      }
    }
  }
}
The response may include source_id (stream_id), and zones/ROI are applied once the stream is active.
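The zone strings in the example follow a "x,y;x,y;…" vertex format. A small sketch of how such a string decodes into polygon vertices (the helper is mine, for illustration):

```python
def parse_zone(zone_str: str):
    # "100,200;1800,200;1800,900;100,900" -> [(100, 200), (1800, 200), ...]
    points = []
    for pair in zone_str.split(";"):
        x, y = pair.split(",")
        points.append((int(x), int(y)))
    return points
```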
4. Metadata Downstream (probe) - Improved Version
- Before: A field existed in the request body but was never used to carry data downstream.
- Now:
- The metadata field (free-form JSON object) is used consistently:
- In POST /api/v1/stream/add: value.metadata is sent when adding the source.
- In POST /api/v1/analytics/update: metadata can be sent to update/merge with the existing metadata for that stream.
- This information is available downstream in the pipeline for the user to access in a probe (e.g., via NvDsCameraInfoMeta / NVIDIA.NVDS_CAMERA_INFO_META).
- Typical use: identify camera, location, preset, tags, etc., in later processing (OSD, recording, business rules) without relying only on camera_id/url.
Example — Add with metadata:
{
  "key": "sensor",
  "value": {
    "camera_id": "cam_entrance_01",
    "camera_name": "Entrance",
    "camera_url": "rtsp://192.168.1.100/stream",
    "change": "add",
    "metadata": {
      "location": "Building A",
      "floor": 3,
      "zone": "entrance_principal",
      "tags": ["security", "24h"]
    }
  }
}
5. New API: Update Analytics at Runtime
- Before: Changing zones, ROIs, line crossings, or overcrowding settings was limited to a config reload; there was no per-stream runtime update.
- Now: The POST /api/v1/analytics/update API allows updating at runtime for a specific stream:
  - stream_id (required): Stream ID (e.g., the source_id returned on add).
  - analytics_config (optional): Same structure as on add (roi_filtering, line_crossing, overcrowding, etc.). The YAML config file used by nvdsanalytics is updated and a reload is triggered.
  - metadata (optional): Merged with the existing metadata for that stream.
- At least one of analytics_config or metadata must be provided.
This makes the pipeline more dynamic: change presets, zones, and metadata without stopping the stream.
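The validation rules above can be sketched as follows (the function and reason strings are mine, not the server's actual error codes):

```python
def validate_update_request(body: dict):
    # Returns (ok, reason) for a /api/v1/analytics/update body:
    # stream_id is required, and at least one of analytics_config /
    # metadata must be present.
    if "stream_id" not in body:
        return False, "STREAM_ID_MISSING"
    if "analytics_config" not in body and "metadata" not in body:
        return False, "NO_UPDATE_FIELDS"
    return True, "OK"
```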
Example — Analytics only:
POST /api/v1/analytics/update
{
  "stream_id": 0,
  "analytics_config": {
    "roi_filtering": {
      "enable": 1,
      "zones": { "zone1": "100,100;500,100;500,400;100,400" }
    },
    "line_crossing": { "enable": 1, "zones": { "line1": "..." } }
  }
}
Example — Analytics + metadata:
POST /api/v1/analytics/update
{
  "stream_id": 0,
  "analytics_config": { "roi_filtering": { "enable": 1, "zones": { "road_area": "..." } } },
  "metadata": {
    "preset_id": 2,
    "updated_at": "2025-02-01T10:00:00Z"
  }
}
Success response:
{
  "status": "HTTP/1.1 200 OK",
  "reason": "ANALYTICS_UPDATE_SUCCESS",
  "stream_id": 0
}
Summary of API Changes
| API | Method | Main changes |
| --- | --- | --- |
| /api/v1/stream/add | POST | Full HLS; per-source overrides (rtsp, hls, decoder, file); analytics_config at creation; metadata for downstream. |
| /api/v1/analytics/update | POST | New/extended: runtime update of analytics_config and/or metadata by stream_id. |
Technical Implementation and Data Flow
This section describes how analytics and metadata flow through the pipeline and how configuration application is synchronized with stream_id.
Note: Several components in this flow (e.g. NvDsStreamMetadataManager, pending analytics config storage, NvDsCameraInfoMeta attachment in the probe) did not exist in the original DeepStream / nvmultiurisrcbin—they were added in this improved version to support per-stream metadata and analytics config via API.
6.1. Mechanism: GST_NVEVENT_PAD_ADDED
The stream_id (source_id) is only known when the new source’s pad is linked to nvstreammux. DeepStream already provides the GST_NVEVENT_PAD_ADDED event: nvstreammux emits this event downstream when a new sink pad is linked, and the event carries the correct source_id.
- GStreamer API: gst_nvevent_new_pad_added(guint source_id) (emit) and gst_nvevent_parse_pad_added(GstEvent*, guint* source_id) (parse).
- Guaranteed order: Analytics configuration is applied after the pad is linked, avoiding race conditions (config applied before the source exists).
- Consistency: The same mechanism is used by nvinfer, nvtracker, and nvinferserver to create source_info / addSource(source_id).
6.2. Flow: Add stream with analytics_config
Data flow from POST /api/v1/stream/add with analytics_config to application in nvdsanalytics:
1. REST Server (nvds_rest_server / inside nvmultiurisrcbin)
- Parses JSON.
- If analytics_config is present, stores it in pending_analytics_map[camera_id].
- Also stores metadata (value.metadata) in StreamMetadataManager by camera_id.
2. Stream API / Source Creator (gst_nvmultiurisrcbincreator)
- Calls add_source() with request parameters (uri, rtsp/hls/decoder overrides, etc.).
- Generates source_id (0, 1, 2, …) and creates a new nvurisrcbin instance for that source.
- Registers camera_id ↔ source_id mapping (for later lookup).
- Links nvurisrcbin src pad to nvstreammux sink pad.
3. nvstreammux
- When the new sink pad is linked, emits GST_NVEVENT_PAD_ADDED(source_id) on its src pad.
- The event propagates downstream: nvinfer → nvtracker → nvdsanalytics → …
4. nvdsanalytics (or handler that feeds nvdsanalytics)
- Receives GST_NVEVENT_PAD_ADDED in sink_event.
- Parses source_id with gst_nvevent_parse_pad_added().
- Looks up pending config: source_id → camera_id (mapping) → pending_analytics_map[camera_id].
- Converts analytics_config (JSON) to per-stream YAML sections:
- roi-filtering-stream-{source_id}
- line-crossing-stream-{source_id}
- overcrowding-stream-{source_id}
- (direction-detection-stream-{source_id} if applicable)
- Updates the YAML file used by nvdsanalytics (config-file property) and triggers reload, or applies in memory per implementation.
- Removes the entry from pending_analytics_map for that camera_id.
5. Response to client
- May return source_id and indicate that analytics will be applied after the pad is active (analytics_applied after PAD_ADDED).
Summary: The source is created first; analytics application happens in reaction to PAD_ADDED, when stream_id is already defined, ensuring config is applied to the correct stream.
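Step 4's JSON-to-YAML conversion can be sketched like this (function and mapping names are mine; the section-name pattern follows the text above):

```python
# Maps analytics_config keys to per-stream YAML section names, following
# the roi-filtering-stream-{id} / line-crossing-stream-{id} pattern.
SECTION_NAMES = {
    "roi_filtering": "roi-filtering-stream-{sid}",
    "line_crossing": "line-crossing-stream-{sid}",
    "overcrowding": "overcrowding-stream-{sid}",
    "direction_detection": "direction-detection-stream-{sid}",
}

def config_to_sections(source_id: int, analytics_config: dict) -> dict:
    # Produces {yaml_section_name: section_body} for this source_id only.
    sections = {}
    for key, body in analytics_config.items():
        template = SECTION_NAMES.get(key)
        if template is None:
            continue  # unknown block: skipped rather than failing the request
        sections[template.format(sid=source_id)] = body
    return sections
```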
6.3. Flow: POST /api/v1/analytics/update (runtime)
To update analytics for an already active stream:
1. REST Server
- Validates stream_id (must exist in the list of active sources).
- Validates presence of analytics_config and/or metadata.
2. Analytics Config Manager
- Loads current YAML (same file used by nvdsanalytics).
- Updates or inserts only the sections for the given stream_id (roi-filtering-stream-N, line-crossing-stream-N, overcrowding-stream-N, etc.).
- Preserves sections for other streams (isolation by stream_id).
- If metadata was sent, merges it with the metadata already stored for that stream (source_id).
3. Persistence and reload
- Writes updated YAML (or to a temp file and swap).
- Sends GST_NVEVENT_ANALYTICS_RELOAD_CONFIG_UPDATE(config_path) event to the pipeline.
4. nvdsanalytics
- Handles reload event, rereads YAML file, and reapplies configuration.
- Per-stream contexts are updated; only the changed stream_id is affected.
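The isolation rule in step 2 can be sketched as follows, modeling the YAML as a flat {section_name: body} dict (names are mine, for illustration): only sections with the target stream's suffix are replaced, everything else is preserved.

```python
def update_stream_sections(all_sections: dict, stream_id: int, new_sections: dict) -> dict:
    # Drop the target stream's existing sections, keep all other streams'
    # sections, then insert the freshly generated sections.
    suffix = f"-stream-{stream_id}"
    kept = {name: body for name, body in all_sections.items()
            if not name.endswith(suffix)}
    kept.update(new_sections)
    return kept
```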
6.4. Metadata Flow (downstream / probe)
Metadata provided on add or analytics/update must reach frames for use in probes:
- On add: The value.metadata field is stored by camera_id (pending). After add_source, the source_id ↔ camera_id mapping is registered. A probe in the pipeline (after nvstreammux, e.g. in the same multi-uri bin or a downstream element) uses each frame's source_id to look up the stored metadata and attach NvDsUserMeta to the frame (type NVIDIA.NVDS_CAMERA_INFO_META, a structure with camera_id, camera_name, camera_url, source_id, and a free-form JSON field, e.g. custom_metadata, up to 1024 characters).
- On analytics/update: If the body includes metadata, it is merged with the existing metadata for that stream_id. The same storage is used for the probe, so the updated content is available on subsequent frames.
- Access in probe: Iterate frame_meta->frame_user_meta_list, identify user_meta->base_meta.meta_type == NVDS_CAMERA_INFO_META_TYPE, and read (NvDsCameraInfoMeta*)user_meta->user_meta_data (camera_id, source_id, custom_metadata, etc.).
6.5. Flow Diagram (overview)
POST /stream/add (camera_id, analytics_config, metadata)
│
┌──────────────────────────────────┼──────────────────────────────────┐
│ ▼ │
│ pending_analytics_map[camera_id] = config │
│ StreamMetadataManager.setPendingMetadata(camera_id, metadata) │
│ │ │
│ ▼ │
│ add_source() → new nvurisrcbin → link pad → nvstreammux │
│ register_source_mapping(camera_id, source_id) │
│ │ │
└──────────────────────────────────┼──────────────────────────────────┘
│
▼
nvstreammux emits GST_NVEVENT_PAD_ADDED(source_id)
│
┌─────────────────────────────┼─────────────────────────────┐
▼ ▼ ▼
nvinfer nvtracker nvdsanalytics
source_info[source_id] addSource(source_id) lookup pending by source_id
→ JSON→YAML
→ update config
→ reload
│ │ │
└─────────────────────────────┴─────────────────────────────┘
│
▼
Probe: per frame (source_id) attaches NvDsCameraInfoMeta
(camera_id, name, url, source_id, custom_metadata)
│
▼
Downstream: OSD, Kafka, recording, etc. read metadata in probe
6.6. Technical Summary
| Aspect | Implementation |
| --- | --- |
| When stream_id is known | When the pad is linked in nvstreammux; GST_NVEVENT_PAD_ADDED(source_id) propagates downstream. Thanks @junshengy |
| When analytics is applied on add | In the PAD_ADDED handler in nvdsanalytics (or the component that updates nvdsanalytics config): lookup by source_id → camera_id → pending config → JSON→YAML → update file and reload. |
| nvdsanalytics config file | Path obtained from the pipeline (nvdsanalytics element's config-file property); not sent in the API body. |
| Per-stream isolation | YAML has sections *-stream-{id}; update/delete affects only that stream_id's sections. |
| Metadata on frame | Stored by camera_id/source_id; probe attaches NvDsCameraInfoMeta (NvDsUserMeta) per frame; analytics/update with metadata merges into the same storage. |
Demo