• Hardware Platform (Jetson / GPU): Jetson NX Dev Kit
• DeepStream Version: 6.0
• JetPack Version (valid for Jetson only): 4.6 GA
Currently I am trying to run the DeepStream Python example deepstream_test_3.py.
Everything works fine. There is just one thing I am curious about.
This file uses many queues; part of the code is as follows:
queue1 = Gst.ElementFactory.make("queue", "queue1")
queue2 = Gst.ElementFactory.make("queue", "queue2")
queue3 = Gst.ElementFactory.make("queue", "queue3")
queue4 = Gst.ElementFactory.make("queue", "queue4")
queue5 = Gst.ElementFactory.make("queue", "queue5")
pipeline.add(queue1)
pipeline.add(queue2)
pipeline.add(queue3)
pipeline.add(queue4)
pipeline.add(queue5)
And the queues are linked between the pipeline elements:
streammux.link(queue1)
queue1.link(pgie)
pgie.link(queue2)
queue2.link(tiler)
tiler.link(queue3)
queue3.link(nvvidconv)
nvvidconv.link(queue4)
queue4.link(nvosd)
if is_aarch64():
    nvosd.link(queue5)
    queue5.link(transform)
    transform.link(sink)
else:
    nvosd.link(queue5)
    queue5.link(sink)
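As I understand it, each queue decouples the elements before and after it into separate streaming threads, so pgie, tiler, etc. can process different buffers in parallel. A plain-Python analogy (not the GStreamer or DeepStream API; the stage names and lambdas are made up for illustration) might look like this:

```python
# Plain-Python analogy of queue-based decoupling: each queue.Queue
# plays the role of a Gst queue element, and each stage runs in its
# own thread, the way a Gst queue starts a streaming thread on its
# source pad.
import queue
import threading

def stage(inbox, outbox, work):
    # Consume buffers from the upstream queue, process them,
    # and push the result downstream.
    while True:
        item = inbox.get()
        if item is None:          # end-of-stream marker
            outbox.put(None)
            return
        outbox.put(work(item))

# "source -> queue1 -> stageA -> queue2 -> stageB -> queue3" in miniature.
q1, q2, q3 = queue.Queue(), queue.Queue(), queue.Queue()
threading.Thread(target=stage, args=(q1, q2, lambda b: b * 2)).start()
threading.Thread(target=stage, args=(q2, q3, lambda b: b + 1)).start()

for buf in [1, 2, 3]:
    q1.put(buf)                   # the "source" pushes buffers
q1.put(None)

results = []
while (item := q3.get()) is not None:
    results.append(item)
print(results)                    # [3, 5, 7]
```

Without the queues, one thread would have to run every stage back to back for each buffer; with them, the stages overlap in time.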
I am wondering: is there any specific reason for using queue elements in this scenario, compared to not using them?
I tried to search for the definition of queue on the official GStreamer website and got this:
Data is queued until one of the limits specified by the max-size-buffers, max-size-bytes and/or max-size-time properties has been reached. Any attempt to push more buffers into the queue will block the pushing thread until more space becomes available. The queue will create a new thread on the source pad to decouple the processing on sink and source pad. You can query how many buffers are queued by reading the current-level-buffers property. You can track changes by connecting to the notify::current-level-buffers signal (which like all signals will be emitted from the streaming thread). The same applies to the current-level-bytes and current-level-time properties. The default queue size limits are 200 buffers, 10MB of data, or one second worth of data, whichever is reached first.
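If I read this correctly, the blocking part can be sketched in plain Python (again just an analogy, not GStreamer API): a bounded queue makes the pushing thread wait once the size limit is hit, the way max-size-buffers does.

```python
# Plain-Python sketch of the blocking behavior the docs describe:
# once the bounded queue is full, put() blocks the pushing thread
# until the consumer drains some space (like max-size-buffers).
import queue
import threading
import time

q = queue.Queue(maxsize=2)        # analogous to max-size-buffers=2
pushed = []

def producer():
    for i in range(4):
        q.put(i)                  # blocks while the queue is full
        pushed.append(i)

t = threading.Thread(target=producer)
t.start()
time.sleep(0.2)                   # give the producer time to fill the queue
blocked_at = len(pushed)          # only 2 fit; the 3rd put() is blocked
print(blocked_at)                 # 2

q.get()
q.get()                           # the consumer drains, freeing space
t.join()
print(len(pushed))                # 4
```

So the limits bound the memory/latency of each hop, and the blocking provides backpressure on the upstream element.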
So, just curious: will queue be helpful for real-time video performance, compared to not using it (in this scenario)?
Thank you so much for your help.