Please provide complete information as applicable to your setup.
• Hardware Platform: GPU
• DeepStream Version: 6.1
• TensorRT Version: 8.4.3-1+cuda11.6
• NVIDIA GPU Driver Version (valid for GPU only): 510.47.03
• Issue Type: Question
• How to reproduce the issue? I'm using the following code to create a multi-stream DeepStream Python app:
for i in range(len(self.sources)):
    self.source_bin.append(self._create_source_bin(index=i))
self.streammux = self._create_streammux()
After this I'm linking them as in deepstream-test3 of deepstream_python_apps, refactored to my use case:
for i in range(len(self.sources)):
    padname = "sink_%u" % i
    sinkpad = self.streammux.get_request_pad(padname)
    if not sinkpad:
        self.logger.error("Unable to get the sink pad of streammux")
    srcpad = self.source_bin[i].get_static_pad("src")
    if not srcpad:
        self.logger.error("Unable to get source pad of decoder")
    srcpad.link(sinkpad)
The rest of the elements are linked sequentially. The following is a snippet of the logs from an app run:
INFO:src.pipeline_test.Pipeline:Linking elements in the Pipeline: source-bin-00 -> source-bin-01 -> stream-muxer -> primary-inference -> tracker -> analytics -> convertor1 -> capsfilter1 -> nvtiler -> convertor2
-> onscreendisplay -> queue3 -> nveglglessink
INFO:src.pipeline_test.Pipeline:Decodebin child added: source
INFO:src.pipeline_test.Pipeline:Decodebin child added: decodebin0
INFO:src.pipeline_test.Pipeline:Decodebin child added: source
INFO:src.pipeline_test.Pipeline:Decodebin child added: decodebin1
INFO:src.pipeline_test.Pipeline:Decodebin child added: qtdemux0
INFO:src.pipeline_test.Pipeline:Decodebin child added: qtdemux1
INFO:src.pipeline_test.Pipeline:Decodebin child added: multiqueue0
INFO:src.pipeline_test.Pipeline:Decodebin child added: multiqueue1
INFO:src.pipeline_test.Pipeline:Decodebin pad added
INFO:src.pipeline_test.Pipeline:Decodebin pad added
* Fps of stream 0, is 15.2
* Fps of stream 1, is 15.2
...
The issue I'm facing is that the PGIE is not running on the second stream, or rather running very sparsely. I've tried tinkering with interval and batch-size and comparing my code with the examples, yet I cannot locate the problem. It is probably in my own logic; I just can't see it, and would appreciate any help or a nudge in the right direction. I'm also attaching the pgie config that I'm using.
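For context (this is a generic sketch, not the attached file), the interval and batch-size knobs mentioned above live in the [property] group of an nvinfer config:

```ini
[property]
# number of frames batched together per inference call;
# typically set equal to the number of sources
batch-size=2
# 0 = infer on every frame; N = skip N frames between inferences
interval=0
```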
Thank you
Your code may have a problem. You can add multiple source streams into one source_bin, but you are creating multiple source_bins, which is wrong. You can refer to deepstream-test3:
for i in range(number_sources):
    print("Creating source_bin ", i, " \n ")
    uri_name = args[i]
    if uri_name.find("rtsp://") == 0:
        is_live = True
    source_bin = create_source_bin(i, uri_name)
    if not source_bin:
        sys.stderr.write("Unable to create source bin \n")
    pipeline.add(source_bin)
    padname = "sink_%u" % i
    sinkpad = streammux.get_request_pad(padname)
    if not sinkpad:
        sys.stderr.write("Unable to create sink pad bin \n")
    srcpad = source_bin.get_static_pad("src")
    if not srcpad:
        sys.stderr.write("Unable to create src pad bin \n")
    srcpad.link(sinkpad)
I've tried creating the source_bin as described in deepstream-test3, yet the issue persists. I'm attaching my pipeline graph for reference. FYI, the issue is that nvinfer is not running on one of the streams.
Yes, I'm also checking while setting up the pipeline that batch-size matches the number of sources. Other than this, I've tried upgrading pyds to 1.14 as well. The only difference between test3 and my pipeline is that I use object-oriented creation, linking, and addition of probes. I've modified the code at the following GitHub to support multiple streams as mentioned in the first post.
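To sanity-check the mux wiring independently of GStreamer, the sink-pad naming and the batch-size relationship from the linking loops above can be sketched in plain Python (only the naming/count logic; element creation and actual pad requests are omitted):

```python
def streammux_sink_pads(num_sources):
    """Request-pad names nvstreammux expects, one per source: sink_0, sink_1, ..."""
    return ["sink_%u" % i for i in range(num_sources)]

num_sources = 2
pads = streammux_sink_pads(num_sources)
print(pads)  # ['sink_0', 'sink_1']
# streammux batch-size (and usually the pgie batch-size) should equal num_sources
assert len(pads) == num_sources
```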
A workaround instead was setting batch-size=1 in the pgie config. Setting batch-size=2 in the config causes erroneous behavior from the model (YOLOR) in my case.
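The workaround above amounts to this change in the nvinfer config's [property] group (a sketch; the rest of the file stays unchanged):

```ini
[property]
# run inference per stream instead of batching both streams together
batch-size=1
```

Note that nvinfer may rebuild or re-select the TensorRT engine when batch-size changes, so the first run after this edit can take longer.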