Per-Source Configuration for nvmultiurisrcbin REST API - Improvements

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
GPU
• DeepStream Version
8.0
• JetPack Version (valid for Jetson only)
• TensorRT Version
10.9
• NVIDIA GPU Driver Version (valid for GPU only)
591.74
• Issue Type( questions, new requirements, bugs)
new requirements

• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

Overview

I am developing an extension to the DeepStream nvmultiurisrcbin REST API (/api/v1/stream/add) that supports per-source configuration overrides. It is still in the development and testing phase, but I believe this approach could be valuable for future DeepStream releases.

The Problem

In the current implementation, properties like RTSP latency, RTP protocol, and decoder settings are configured globally on nvmultiurisrcbin. All sources share the same configuration, which becomes limiting when working with:

  • Mixed source types (e.g., RTSP, HLS, and file in the same pipeline)
  • Different network conditions per camera
  • Varying latency requirements per stream

My Solution

I extended the REST API to accept optional per-source configuration objects:

POST /api/v1/stream/add
{
  "value": {
    "camera_id": "cam001",
    "camera_url": "rtsp://192.168.1.100/stream",
    "change": "camera_add",
    "rtsp": {
      "protocol": 4,
      "latency": 300,
      "drop_on_latency": true
    },
    "decoder": {
      "low_latency_mode": true,
      "skip_frames": 0
    }
  }
}

If a property is not specified, the global default is used. This gives users the flexibility to fine-tune each source individually while maintaining sensible defaults.
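As a rough sketch of this fallback behavior (the names `resolve_source_config` and `GLOBAL_RTSP_DEFAULTS` are illustrative, not part of the DeepStream API):

```python
# Illustrative sketch, not DeepStream API: per-source overrides win,
# anything unspecified falls back to the global defaults.

GLOBAL_RTSP_DEFAULTS = {
    "protocol": 7,            # AUTO (assumed global default)
    "latency": 100,           # ms
    "drop_on_latency": False,
}

def resolve_source_config(global_defaults, per_source_overrides):
    """Return the effective configuration for a single source."""
    effective = dict(global_defaults)
    effective.update(per_source_overrides or {})
    return effective

# Overriding only latency keeps the global protocol and drop_on_latency:
cfg = resolve_source_config(GLOBAL_RTSP_DEFAULTS, {"latency": 300})
```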


Configuration Objects

RTSP Configuration (rtsp)

Applies only to rtsp:// sources (rtspsrc element).

| Property | Type | Description |
| --- | --- | --- |
| protocol | int | RTP protocol: 1=UDP, 2=UDP_MCAST, 4=TCP, 7=AUTO |
| latency | int | Jitterbuffer size in milliseconds |
| reconnect_interval | int | Seconds before reconnect attempt (0=disable) |
| reconnect_attempts | int | Max reconnect attempts (-1=infinite) |
| udp_buffer_size | int | UDP buffer size in bytes |
| drop_on_latency | bool | Drop buffers when max latency is reached |

HLS Configuration (hls)

Applies only to HLS streams (.m3u8).

| Property | Type | Description |
| --- | --- | --- |
| is_live | bool | TRUE if the stream is live (cannot seek) |
| timeout | int | HTTP timeout in seconds |
| retries | int | Number of HTTP retries |
| bandwidth_ratio | float | Bandwidth ratio (0.0-1.0) |
| max_bitrate | int | Maximum bitrate in bps |
| min_bitrate | int | Minimum bitrate in bps |
| inactivity_timeout | int | Stream-dead timeout in seconds |
| user_id | string | HTTP Basic Auth username |
| user_pw | string | HTTP Basic Auth password |
| proxy | string | Proxy URL (e.g., "http://proxy:8080") |
| proxy_id | string | Proxy auth username |
| proxy_pw | string | Proxy auth password |
| user_agent | string | Custom User-Agent header |

Decoder Configuration (decoder)

Applies to all source types (nvv4l2decoder settings).

| Property | Type | Description |
| --- | --- | --- |
| num_extra_surfaces | int | Extra decoder surfaces (1-N, higher for 4K streams) |
| drop_frame_interval | int | Drop every Nth frame (0-30, 0=no drop) |
| skip_frames | int | Skip frames: 0=decode_all, 1=non-ref, 2=key-only |
| low_latency_mode | bool | Low-latency decoder mode (for I/IPPP streams) |
| disable_audio | bool | Disable audio processing for this stream |

File Configuration (file)

Applies only to file:// sources.

| Property | Type | Description |
| --- | --- | --- |
| loop | bool | Loop file playback continuously |

HLS Implementation

Related to the HLS issues reported in the community:

The original nvurisrcbin does not include dedicated HLS handling. I implemented HLS support using uridecodebin3 with hlsdemux2 and have had success with HLS sources in my testing.


Examples

RTSP with TCP and low latency decoder:

curl -XPOST 'http://localhost:9000/api/v1/stream/add' -d '{
  "value": {
    "camera_id": "cam001",
    "camera_url": "rtsp://192.168.1.100/stream",
    "change": "camera_add",
    "rtsp": {
      "protocol": 4,
      "latency": 200,
      "drop_on_latency": true
    },
    "decoder": {
      "low_latency_mode": true
    }
  }
}'

HLS with authentication:

curl -XPOST 'http://localhost:9000/api/v1/stream/add' -d '{
  "value": {
    "camera_id": "hls001",
    "camera_url": "https://cdn.example.com/live.m3u8",
    "change": "camera_add",
    "hls": {
      "timeout": 15,
      "retries": 5,
      "user_id": "user",
      "user_pw": "secret"
    },
    "decoder": {
      "disable_audio": true
    }
  }
}'

File with loop:

curl -XPOST 'http://localhost:9000/api/v1/stream/add' -d '{
  "value": {
    "camera_id": "file001",
    "camera_url": "file:///data/test.mp4",
    "change": "camera_add",
    "file": {
      "loop": true
    }
  }
}'

Suggestion for NVIDIA

This per-source configuration approach could be a welcome addition to future DeepStream versions. The community would benefit from:

  1. Per-source flexibility - Configure each stream independently via REST API
  2. Native HLS support - Dedicated HLS handling in nvurisrcbin with proper retry/recovery
  3. Source-type-aware configs - Different property sets for RTSP, HLS, File, IPC sources

I am still testing and refining this implementation, but wanted to share the concept with the community and NVIDIA team.

Note: The source code is not available at this time, but may be released in the future once I complete testing and validation.


Thanks for sharing! We will discuss it internally. Let me know if you have any questions.

Hi @fanzh,
I’ve made significant progress and many of the proposed scenarios are fully achievable. I also improved some points I considered important.
I implemented improvements in the DeepStream server based on nvmultiurisrcbin and nvurisrcbin. Most of it is working well; I am still dealing with some bugs or situations that have not been fully tested yet. The work focuses on:

  1. Full HLS support in nvurisrcbin and nvmultiurisrcbin
  2. Per-source parameters — each source uses its own overrides instead of static global parameters
  3. Analytics at source creation — nvdsanalytics configuration applied automatically when the stream becomes active (after stream_id is assigned)
  4. Metadata downstream — use of an existing field to carry add/update information to the downstream pipeline, accessible in the probe
  5. New API to update analytics at runtime

The main APIs changed or extended are:

  • POST /api/v1/stream/add — add stream with HLS, per-source overrides, analytics, and metadata
  • POST /api/v1/analytics/update — update analytics and/or metadata for a stream at runtime (NEW)

1. HLS Support (nvurisrcbin and nvmultiurisrcbin)

  • Before: Limited or no HLS support in the multi-uri source.
  • Now: Full HLS support in nvurisrcbin and nvmultiurisrcbin.
  • For http(s)://...m3u8 URLs, the pipeline uses uridecodebin3 / hlsdemux2 internally as appropriate.
  • Each HLS source can have its own overrides via the hls object in the add body (timeout, retries, auth, proxy, user-agent, extra_headers, etc.).

Example — Add HLS source with per-source configuration:

POST /api/v1/stream/add
{
  "key": "sensor",
  "value": {
    "camera_id": "hls001",
    "camera_name": "CDN Stream",
    "camera_url": "https://cdn.example.com/live/stream.m3u8",
    "change": "camera_add",
    "hls": {
      "is_live": true,
      "timeout": 15,
      "retries": 5,
      "user_id": "user",
      "user_pw": "secret",
      "user_agent": "DVP-Client/1.0",
      "extra_headers": { "Referer": "https://example.com" }
    }
  }
}

2. Per-Source Parameters (overrides on add)

  • Before: The nvmultiurisrcbin plugin always created one nvurisrcbin instance per source, but all instances used the same (static/global) parameters.
  • Now: The parameters sent when adding the source act as overrides for that source, so each nvurisrcbin instance can have different RTSP, HLS, decoder, or file settings.
  • That is: in POST /api/v1/stream/add, the optional objects rtsp, hls, decoder, file apply only to that source, allowing different RTSP, HLS, decoder, and file settings per camera/stream.

Override types supported on add:

| Source type | Body object | Description |
| --- | --- | --- |
| RTSP | rtsp | protocol, latency, reconnect_*, udp_buffer_size, drop_on_latency |
| HLS | hls | is_live, timeout, retries, auth, proxy, user_agent, extra_headers, etc. |
| Any | decoder | num_extra_surfaces, drop_frame_interval, skip_frames, low_latency_mode, disable_audio |
| File | file | loop |
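A client-side helper can keep these objects truly optional. This is a hypothetical sketch; `build_stream_add_body` is not part of any DeepStream client library:

```python
import json

def build_stream_add_body(camera_id, camera_url, **overrides):
    """Build a /api/v1/stream/add body (hypothetical helper).

    Only the per-source objects actually supplied (rtsp, hls, decoder,
    file) are included, so every omitted object falls back to the
    global nvmultiurisrcbin defaults on the server side.
    """
    value = {
        "camera_id": camera_id,
        "camera_url": camera_url,
        "change": "camera_add",
    }
    for key in ("rtsp", "hls", "decoder", "file"):
        if key in overrides:
            value[key] = overrides[key]
    return {"value": value}

body = build_stream_add_body(
    "cam001", "rtsp://192.168.1.100/stream",
    rtsp={"protocol": 4, "latency": 200, "drop_on_latency": True},
    decoder={"low_latency_mode": True},
)
payload = json.dumps(body)  # ready to POST with curl/requests
```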

3. Analytics at Source Creation

  • Before: Analytics (nvdsanalytics) were not configured directly when adding the source, or depended on a global config file.
  • Now:
    • In POST /api/v1/stream/add you can send analytics_config inside value.
    • The source is created and, when the pad reports the stream_id (assigned by nvstreammux), the server automatically applies the analytics configuration for that stream.
    • No config file path is required in the request; the server uses the nvdsanalytics element already in the pipeline.
  • Behavior: create source → wait for stream_id → apply analytics_config. Analytics becomes active as soon as the stream is active.

Example — Add stream with analytics at creation:

POST /api/v1/stream/add
{
  "key": "sensor",
  "value": {
    "camera_id": "cam001",
    "camera_name": "entrance",
    "camera_url": "rtsp://192.168.1.100/stream",
    "change": "add",
    "analytics_config": {
      "roi_filtering": {
        "enable": 1,
        "class_id": [-1],
        "inverse_roi": 0,
        "zones": { "road_area": "100,200;1800,200;1800,900;100,900" }
      },
      "line_crossing": {
        "enable": 1,
        "class_id": [0, 2],
        "zones": {
          "entry_line": "950,550;950,650;200,600;1700,600",
          "exit_line": "950,450;950,350;200,400;1700,400"
        }
      },
      "overcrowding": {
        "enable": 1,
        "object_threshold": 10,
        "time_threshold_ms": 2000,
        "class_id": [0],
        "zones": { "intersection": "400,300;1500,300;1500,800;400,800" }
      }
    }
  }
}

The response may include source_id (stream_id), and zones/ROI are applied once the stream is active.


4. Metadata Downstream (probe) - Improved Version

  • Before: An existing field was available but was not used to carry data downstream.
  • Now:
    • The metadata field (free-form JSON object) is used consistently:
      • In POST /api/v1/stream/add: value.metadata is sent when adding the source.
      • In POST /api/v1/analytics/update: metadata can be sent to update/merge with the existing metadata for that stream.
    • This information is available downstream in the pipeline for the user to access in a probe (e.g., via NvDsCameraInfoMeta / NVIDIA.NVDS_CAMERA_INFO_META).
  • Typical use: identify camera, location, preset, tags, etc., in later processing (OSD, recording, business rules) without relying only on camera_id/url.

Example — Add with metadata:

{
  "key": "sensor",
  "value": {
    "camera_id": "cam_entrance_01",
    "camera_name": "Entrance",
    "camera_url": "rtsp://192.168.1.100/stream",
    "change": "add",
    "metadata": {
      "location": "Building A",
      "floor": 3,
      "zone": "entrance_principal",
      "tags": ["security", "24h"]
    }
  }
}

5. New API: Update Analytics at Runtime

  • Before: Changing zones, ROI, line-crossing, or overcrowding settings was limited to editing the config file and triggering a reload.
  • Now: The POST /api/v1/analytics/update API allows updating at runtime for a specific stream:
    • stream_id (required): Stream ID (e.g., the source_id returned on add).
    • analytics_config (optional): Same structure as on add (roi_filtering, line_crossing, overcrowding, etc.). The YAML config file used by nvdsanalytics is updated and a reload is triggered.
    • metadata (optional): Merge with the existing metadata for that stream.
    • At least one of analytics_config or metadata must be provided.

This makes the pipeline more dynamic: change presets, zones, and metadata without stopping the stream.
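The merge and validation rules above could look roughly like this (function names are illustrative; the actual server-side implementation may differ):

```python
def merge_stream_metadata(existing, update):
    """Assumed shallow-merge semantics: keys in the update replace
    existing keys; all other stored keys are preserved."""
    merged = dict(existing or {})
    merged.update(update or {})
    return merged

def validate_update_request(body):
    """Mirror of the rules above: stream_id is required, plus at
    least one of analytics_config / metadata."""
    if "stream_id" not in body:
        raise ValueError("stream_id is required")
    if not (body.get("analytics_config") or body.get("metadata")):
        raise ValueError("provide analytics_config and/or metadata")
```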

Example — Analytics only:

POST /api/v1/analytics/update
{
  "stream_id": 0,
  "analytics_config": {
    "roi_filtering": {
      "enable": 1,
      "zones": { "zone1": "100,100;500,100;500,400;100,400" }
    },
    "line_crossing": { "enable": 1, "zones": { "line1": "..." } }
  }
}

Example — Analytics + metadata:

POST /api/v1/analytics/update
{
  "stream_id": 0,
  "analytics_config": { "roi_filtering": { "enable": 1, "zones": { "road_area": "..." } } },
  "metadata": {
    "preset_id": 2,
    "updated_at": "2025-02-01T10:00:00Z"
  }
}

Success response:

{
  "status": "HTTP/1.1 200 OK",
  "reason": "ANALYTICS_UPDATE_SUCCESS",
  "stream_id": 0
}

Summary of API Changes

| API | Method | Main changes |
| --- | --- | --- |
| /api/v1/stream/add | POST | Full HLS; per-source overrides (rtsp, hls, decoder, file); analytics_config at creation; metadata for downstream. |
| /api/v1/analytics/update | POST | New/extended: runtime update of analytics_config and/or metadata by stream_id. |

Technical Implementation and Data Flow

This section describes how analytics and metadata flow through the pipeline and how configuration application is synchronized with stream_id.

Note: Several components in this flow (e.g. NvDsStreamMetadataManager, pending analytics config storage, NvDsCameraInfoMeta attachment in the probe) did not exist in the original DeepStream / nvmultiurisrcbin—they were added in this improved version to support per-stream metadata and analytics config via API.

6.1. Mechanism: GST_NVEVENT_PAD_ADDED

The stream_id (source_id) is only known when the new source’s pad is linked to nvstreammux. DeepStream already provides the GST_NVEVENT_PAD_ADDED event: nvstreammux emits this event downstream when a new sink pad is linked, and the event carries the correct source_id.

  • GStreamer API: gst_nvevent_new_pad_added(guint source_id) (emit) and gst_nvevent_parse_pad_added(GstEvent*, guint* source_id) (parse).
  • Guaranteed order: Analytics configuration is applied after the pad is linked, avoiding race conditions (config applied before the source exists).
  • Consistency: The same mechanism is used by nvinfer, nvtracker, and nvinferserver to create source_info / addSource(source_id).
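The pending-config pattern driven by PAD_ADDED can be sketched as plain Python (the real handler lives in a GStreamer sink_event callback; the class and method names here are hypothetical):

```python
class PendingAnalyticsStore:
    """Toy model of the ordering guarantee: config is applied only in
    the PAD_ADDED handler, after stream_id (source_id) exists."""

    def __init__(self):
        self.pending_by_camera = {}   # camera_id -> analytics_config
        self.camera_by_source = {}    # source_id -> camera_id
        self.applied = {}             # source_id -> analytics_config

    def on_stream_add(self, camera_id, analytics_config):
        # REST server: store config before the source even exists.
        self.pending_by_camera[camera_id] = analytics_config

    def on_source_created(self, camera_id, source_id):
        # Source creator: register the camera_id <-> source_id mapping.
        self.camera_by_source[source_id] = camera_id

    def on_pad_added(self, source_id):
        # PAD_ADDED handler: now, and only now, apply the pending config.
        camera_id = self.camera_by_source[source_id]
        config = self.pending_by_camera.pop(camera_id, None)
        if config is not None:
            self.applied[source_id] = config

store = PendingAnalyticsStore()
store.on_stream_add("cam001", {"roi_filtering": {"enable": 1}})
store.on_source_created("cam001", 0)
store.on_pad_added(0)
```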

6.2. Flow: Add stream with analytics_config

Data flow from POST /api/v1/stream/add with analytics_config to application in nvdsanalytics:

1. REST Server (nvds_rest_server / inside nvmultiurisrcbin)
   - Parses JSON.
   - If analytics_config is present, stores it in pending_analytics_map[camera_id].
   - Also stores metadata (value.metadata) in StreamMetadataManager by camera_id.

2. Stream API / Source Creator (gst_nvmultiurisrcbincreator)
   - Calls add_source() with request parameters (uri, rtsp/hls/decoder overrides, etc.).
   - Generates source_id (0, 1, 2, …) and creates a new nvurisrcbin instance for that source.
   - Registers camera_id ↔ source_id mapping (for later lookup).
   - Links nvurisrcbin src pad to nvstreammux sink pad.

3. nvstreammux
   - When the new sink pad is linked, emits GST_NVEVENT_PAD_ADDED(source_id) on its src pad.
   - The event propagates downstream: nvinfer → nvtracker → nvdsanalytics → …

4. nvdsanalytics (or handler that feeds nvdsanalytics)
   - Receives GST_NVEVENT_PAD_ADDED in sink_event.
   - Parses source_id with gst_nvevent_parse_pad_added().
   - Looks up pending config: source_id → camera_id (mapping) → pending_analytics_map[camera_id].
   - Converts analytics_config (JSON) to per-stream YAML sections:
     - roi-filtering-stream-{source_id}
     - line-crossing-stream-{source_id}
     - overcrowding-stream-{source_id}
     - (direction-detection-stream-{source_id} if applicable)
   - Updates the YAML file used by nvdsanalytics (config-file property) and triggers reload, or applies in memory per implementation.
   - Removes the entry from pending_analytics_map for that camera_id.

5. Response to client
   - May return source_id and indicate that analytics will be applied after the pad is active (analytics_applied after PAD_ADDED).

Summary: The source is created first; analytics application happens in reaction to PAD_ADDED, when stream_id is already defined, ensuring config is applied to the correct stream.
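Step 4's JSON-to-YAML section mapping, sketched with illustrative names:

```python
# Section-name templates follow the per-stream convention listed in
# step 4 above.
SECTION_TEMPLATES = {
    "roi_filtering": "roi-filtering-stream-{sid}",
    "line_crossing": "line-crossing-stream-{sid}",
    "overcrowding": "overcrowding-stream-{sid}",
    "direction_detection": "direction-detection-stream-{sid}",
}

def analytics_json_to_sections(source_id, analytics_config):
    """Map each analytics_config feature to its per-stream YAML section."""
    sections = {}
    for feature, cfg in analytics_config.items():
        template = SECTION_TEMPLATES.get(feature)
        if template is not None:   # silently skip unknown features
            sections[template.format(sid=source_id)] = cfg
    return sections

sections = analytics_json_to_sections(3, {
    "roi_filtering": {"enable": 1},
    "line_crossing": {"enable": 1},
})
```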

6.3. Flow: POST /api/v1/analytics/update (runtime)

To update analytics for an already active stream:

1. REST Server
   - Validates stream_id (must exist in the list of active sources).
   - Validates presence of analytics_config and/or metadata.

2. Analytics Config Manager
   - Loads current YAML (same file used by nvdsanalytics).
   - Updates or inserts only the sections for the given stream_id (roi-filtering-stream-N, line-crossing-stream-N, overcrowding-stream-N, etc.).
   - Preserves sections for other streams (isolation by stream_id).
   - If metadata was sent, merges it with the metadata already stored for that stream (source_id).

3. Persistence and reload
   - Writes updated YAML (or to a temp file and swap).
   - Sends GST_NVEVENT_ANALYTICS_RELOAD_CONFIG_UPDATE(config_path) event to the pipeline.

4. nvdsanalytics
   - Handles reload event, rereads YAML file, and reapplies configuration.
   - Per-stream contexts are updated; only the changed stream_id is affected.
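The per-stream isolation in step 2 can be sketched as a pure-Python update over the parsed YAML mapping (illustrative, assuming the *-stream-{id} naming convention; a real implementation would parse the stream id out of the section name rather than match suffixes):

```python
def update_stream_sections(yaml_doc, stream_id, new_sections):
    """Replace only the sections belonging to stream_id; every other
    stream's sections are carried over untouched."""
    suffix = f"-stream-{stream_id}"
    updated = {k: v for k, v in yaml_doc.items() if not k.endswith(suffix)}
    updated.update(new_sections)
    return updated

doc = {
    "roi-filtering-stream-0": {"enable": 0},
    "roi-filtering-stream-1": {"enable": 1},
}
doc = update_stream_sections(doc, 0, {"roi-filtering-stream-0": {"enable": 1}})
```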

6.4. Metadata Flow (downstream / probe)

Metadata provided on add or analytics/update must reach frames for use in probes:

  • On add: The value.metadata field is stored by camera_id (pending). After add_source, the source_id ↔ camera_id mapping is registered. A probe in the pipeline (after nvstreammux, e.g. in the same multi-uri bin or a downstream element) uses each frame’s source_id to look up the stored metadata and attach NvDsUserMeta to the frame (type NVIDIA.NVDS_CAMERA_INFO_META, structure with camera_id, camera_name, camera_url, source_id, and a free-form JSON field, e.g. custom_metadata up to 1024 characters).
  • On analytics/update: If the body includes metadata, it is merged with the existing metadata for that stream_id. The same storage is used for the probe, so the updated content is available on subsequent frames.
  • Access in probe: Iterate frame_meta->frame_user_meta_list, identify user_meta->base_meta.meta_type == NVDS_CAMERA_INFO_META_TYPE, and read (NvDsCameraInfoMeta*)user_meta->user_meta_data (camera_id, source_id, custom_metadata, etc.).
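A minimal sketch of this store-and-attach flow (pure Python; in DeepStream the attach step would allocate NvDsUserMeta on frame_meta instead of tagging a dict, and the class name is hypothetical):

```python
class StreamMetadataStore:
    """Per-stream metadata keyed by source_id; the probe reads from
    it on every frame."""

    def __init__(self):
        self._by_source = {}

    def set_initial(self, source_id, metadata):
        # Populated on /stream/add once the camera_id -> source_id
        # mapping is known.
        self._by_source[source_id] = dict(metadata)

    def merge(self, source_id, update):
        # /analytics/update with metadata merges into the same storage.
        self._by_source.setdefault(source_id, {}).update(update)

    def attach_to_frame(self, frame):
        # Stand-in for allocating NvDsUserMeta and appending it to
        # frame_meta->frame_user_meta_list in the real probe.
        frame["camera_info_meta"] = dict(self._by_source.get(frame["source_id"], {}))
        return frame

store = StreamMetadataStore()
store.set_initial(0, {"location": "Building A", "floor": 3})
store.merge(0, {"preset_id": 2})
frame = store.attach_to_frame({"source_id": 0})
```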

6.5. Flow Diagram (overview)

                    POST /stream/add (camera_id, analytics_config, metadata)
                                        │
     ┌──────────────────────────────────┼──────────────────────────────────┐
     │                                  ▼                                  │
     │  pending_analytics_map[camera_id] = config                          │
     │  StreamMetadataManager.setPendingMetadata(camera_id, metadata)      │
     │                                  │                                  │
     │                                  ▼                                  │
     │  add_source() → new nvurisrcbin → link pad → nvstreammux            │
     │  register_source_mapping(camera_id, source_id)                      │
     │                                  │                                  │
     └──────────────────────────────────┼──────────────────────────────────┘
                                        │
                                        ▼
                    nvstreammux emits GST_NVEVENT_PAD_ADDED(source_id)
                                        │
          ┌─────────────────────────────┼─────────────────────────────┐
          ▼                             ▼                             ▼
    nvinfer                      nvtracker                    nvdsanalytics
    source_info[source_id]       addSource(source_id)         lookup pending by source_id
                                                                  → JSON→YAML
                                                                  → update config
                                                                  → reload
          │                             │                             │
          └─────────────────────────────┴─────────────────────────────┘
                                        │
                                        ▼
                    Probe: per frame (source_id) attaches NvDsCameraInfoMeta
                    (camera_id, name, url, source_id, custom_metadata)
                                        │
                                        ▼
                    Downstream: OSD, Kafka, recording, etc. read metadata in probe

6.6. Technical Summary

| Aspect | Implementation |
| --- | --- |
| When stream_id is known | When the pad is linked in nvstreammux; GST_NVEVENT_PAD_ADDED(source_id) propagates downstream. Thanks @junshengy |
| When analytics is applied on add | In the PAD_ADDED handler in nvdsanalytics (or the component that updates nvdsanalytics config): lookup by source_id → camera_id → pending config → JSON→YAML → update file and reload. |
| nvdsanalytics config file | Path obtained from the pipeline (the nvdsanalytics element's config-file property); not sent in the API body. |
| Per-stream isolation | The YAML has sections named *-stream-{id}; update/delete affects only that stream_id's sections. |
| Metadata on frame | Stored by camera_id/source_id; a probe attaches NvDsCameraInfoMeta (NvDsUserMeta) per frame; analytics/update with metadata merges into the same storage. |

Demo