Dynamic Stream Management - where, how, why?


After viewing the webinar I’ve been looking for details on the dynamic stream management capabilities of DeepStream. So far I haven’t found anything that mentions where or how to use this feature.

Does anyone know anything about this? Can anyone provide some greater detail beyond a mention of its existence?

To be more specific, I’m looking into the following…

  • how to build/configure a DeepStream application to enable dynamic stream management
  • how to add/remove a stream in a running DeepStream app
  • how the app manages the streams currently being processed
  • the process by which a new stream is handled by the DeepStream app
    • frame batching (can I dynamically increase the batch size?)
    • object tracking
    • metadata for objects in the new stream
    • OSD output (will it update to include the new stream?)
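To make the “managing streams” part concrete, here is a toy sketch of the bookkeeping I imagine such an app doing (plain Python, all names hypothetical — this is not a DeepStream API): the muxer hands out request pads named sink_0, sink_1, …, and the batch size tracks the number of connected sources.

```python
# Toy sketch (NOT DeepStream code): hypothetical bookkeeping for a
# dynamic stream manager. Pad names follow GStreamer's sink_%u
# convention; everything else here is made up for illustration.

class StreamBookkeeping:
    def __init__(self):
        self._sources = {}   # pad name -> source URI
        self._next_pad = 0   # monotonically increasing, so names stay unique

    def add_stream(self, uri: str) -> str:
        """Register a new source and return the muxer pad name it would use."""
        pad_name = f"sink_{self._next_pad}"
        self._next_pad += 1
        self._sources[pad_name] = uri
        return pad_name

    def remove_stream(self, pad_name: str) -> None:
        """Forget a source; its pad name is never reused."""
        self._sources.pop(pad_name, None)

    @property
    def batch_size(self) -> int:
        # batch size is typically kept equal to the number of live sources
        return len(self._sources)

mgr = StreamBookkeeping()
pad = mgr.add_stream("rtsp://camera-0/stream")
print(pad, mgr.batch_size)  # prints: sink_0 1
```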



You can refer to this post: https://devtalk.nvidia.com/default/topic/1037582/deepstream-for-tesla/restarting-pipeline-on-deepstream2-0/post/5276883/#5276883

Can i dynamically increase the batch size?
Why do you have this request?

Do any more details exist on this topic? Official documentation or demo source code for multiple dynamic streams would be appreciated, thanks! Our team is working on dynamic stream addition/removal.

Hi ZhouZhi:
Have you found a method for dynamic stream management? I face the same problem, and not many tutorials cover this topic.

Thank you!

My notes on batch-size are:

  • you can’t dynamically set the batch size on nvinfer, at least in the PLAYING state (e.g. I haven’t tried PAUSED). Ideally it should be set to the number of sources.

  • on nvinfer, the engine file also depends on the batch size, so there currently needs to be one engine file generated per batch size, and you need to figure out its filename and set that at runtime to override the config file (or else it will take minutes at a time to rebuild).

  • on the stream-muxer element you absolutely can set the sources in the PAUSED state. I am setting it in my pad-added callback when a source is linked. Yesterday I experienced some odd flickering when I tried to set the batch size to 4 before connecting all 4 sources. Instead, I am setting the batch size to the number of sink pads on the stream muxer in my on_source_pad_added callback (called on the “pad-added” signal from uridecodebin):

... rest of Genie class ...
		def _on_src_pad_added(src:Gst.Element, src_pad:Gst.Pad)
			debug(@"got new pad $(src_pad.name) from $(src.name)")
			// if not a video/NVMM pad, reject it
			// https://valadoc.org/gstreamer-1.0/Gst.Pad.query_caps.html
			src_caps:Gst.Caps = src_pad.query_caps(null)
			src_pad_struct:weak Gst.Structure = src_caps.get_structure(0)
			src_pad_type:string = src_pad_struct.get_name()
			if not src_pad_type.has_prefix("video/x-raw")
				debug(@"$(src_pad.name) is not a video pad. skipping.")
				// return before taking the lock, so it can never be left held
				return

			// without this lock it's possible to request multiple identical pads like:
			// Padname sink_0 is not unique in element muxer, not adding
			debug(@"getting muxer lock for $(src.name)")
			self._muxer_lock.lock()  // _muxer_lock: a GLib.Mutex member guarding pad requests
			debug(@"got muxer lock for $(src.name)")

			sink_pad:Gst.Pad = self._muxer.get_request_pad(@"sink_$(self._muxer.numsinkpads)")
			if sink_pad == null
				error("could not request sink pad from muxer")

			self._try_linking(src_pad, sink_pad)

			// this needs to be updated on pad added or flickering occurs with the osd
			self._muxer.set_property("batch-size", self._muxer.numsinkpads)
			debug(@"releasing muxer lock for $(src.name)")
			self._muxer_lock.unlock()
			debug(@"released muxer lock for $(src.name)")

Warning: the above function locks up about half the time with many sources. I believe it may be deadlocking. I am not familiar enough with GLib or this sort of concurrency yet to be sure. If you know and can spot the error, I would greatly appreciate it being pointed out. Likewise, please don’t hesitate to correct me, Nvidia, if there is any misinformation here.
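One classic cause of this kind of intermittent lock-up, in any language, is a code path that takes a non-recursive mutex and then returns (or errors out) without releasing it, so the next pad-added callback blocks forever. Here is a small sketch of the guard pattern that avoids it, in plain Python threading rather than GLib (the Muxer class and its members are hypothetical stand-ins, not GStreamer objects):

```python
import threading

class Muxer:
    """Toy stand-in for the stream muxer (hypothetical, not GStreamer)."""
    def __init__(self):
        self._lock = threading.Lock()  # non-reentrant, like GLib.Mutex
        self.numsinkpads = 0

    def on_pad_added(self, is_video: bool) -> bool:
        # Reject non-video pads BEFORE taking the lock, so an early
        # return can never leave the mutex held.
        if not is_video:
            return False
        with self._lock:  # released automatically, even on exceptions
            # ... request pad, link, update batch-size here ...
            self.numsinkpads += 1
        return True

mux = Muxer()
mux.on_pad_added(False)  # skipped; the lock is never touched
mux.on_pad_added(True)
mux.on_pad_added(True)   # would block forever here if an earlier call
                         # had returned while still holding the lock
print(mux.numsinkpads)   # prints: 2
```

The `with` block is the Python analogue of pairing every `lock()` with an `unlock()` on every exit path; in the Genie/GLib version the equivalent discipline is to return before locking (as the skip branch above does) and to make sure every remaining path reaches the unlock.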