Retrieving frames in DeepStream Python

• Hardware Platform (Jetson / GPU) : GPU
• DeepStream Version : 7.0
• TensorRT Version : 8.6.1
• NVIDIA GPU Driver Version (valid for GPU only) : 535.171.04
• Issue Type (questions, new requirements, bugs) : questions

Hello!

I would like to ask about retrieving current frames in the DeepStream SDK (Python bindings).

I have taken deepstream-test-3 as the basis of my code, but I couldn't manage to access frames from the probe function; I always got the error "Currently we only support RGBA color Format".

After some experimenting, I found that the structure used here (tiler, filter, nvvideoconverts) works: I can get the frame from the tiler's probe in my Python code. So far, so good, but I have a few questions:

  • Is it possible to use only one probe? Can I use just the nvosd's probe to save images? (Even when I put an nvvideoconvert and a filter before nvosd, I still got the "non-RGBA" error there.)
  • Ever since implementing the tiler and the second probe, the nvosd's probe reports frame_id=0 for all frames. Why is this happening?

Thank you for the help!

Of course. The buffer accessed through get_nvds_buf_surface is an NvBufSurface, and it can only be saved as an image from Python once its format has been converted to RGBA.
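For reference, a minimal sketch of what this looks like in practice; the element names and the probe function here are illustrative, following the pattern of the deepstream_imagedata-multistream.py sample:

import pyds
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

# Force RGBA upstream of the probe so get_nvds_buf_surface can map the buffer.
nvvidconv1 = Gst.ElementFactory.make("nvvideoconvert", "convertor1")
filter1 = Gst.ElementFactory.make("capsfilter", "filter1")
filter1.set_property(
    "caps", Gst.Caps.from_string("video/x-raw(memory:NVMM), format=RGBA"))
# ... then link: pgie -> nvvidconv1 -> filter1 -> nvosd ...

def buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        # This call succeeds only because the caps filter above guarantees RGBA;
        # it returns a NumPy view on the surface, so copy it before saving.
        n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK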

Because the tiler merges multiple streams into a single frame, the per-stream frame_id is also erased.
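So if per-stream numbering matters, read it in a probe attached upstream of the tiler; inside the metadata loop sketched above, for example:

        # Still per-stream here; downstream of nvmultistreamtiler the
        # batch is merged into a single frame and this identity is lost.
        print(frame_meta.source_id, frame_meta.frame_num)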

Ahh, thank you, this makes complete sense.

I see. And how can I do this? I have tried placing a filter and an nvvideoconvert before the osd (which has the probe), but the data still was not in RGBA format.
I have searched through these docs, but I could not find a function that would do the conversion.
If there is no such function, what is the correct way to find out the format of the data in the NvBufSurface, so that I can convert it?
Thank you!
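For what it's worth, one way to answer the format question is to read the caps negotiated on the probe's pad. A minimal sketch, assuming a standard GStreamer pad probe:

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

def format_probe(pad, info, u_data):
    # Inspect the caps negotiated on this pad to see the actual pixel format,
    # e.g. "NV12" before conversion or "RGBA" after.
    caps = pad.get_current_caps()
    if caps is not None:
        structure = caps.get_structure(0)
        print("negotiated format:", structure.get_value("format"))
    return Gst.PadProbeReturn.OK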

There may be a problem in your code. The following patch, modified from deepstream_imagedata-multistream.py, works fine; you can refer to it.

diff --git a/apps/deepstream-imagedata-multistream/deepstream_imagedata-multistream.py b/apps/deepstream-imagedata-multistream/deepstream_imagedata-multistream.py
index bb314b2..64b2a15 100755
--- a/apps/deepstream-imagedata-multistream/deepstream_imagedata-multistream.py
+++ b/apps/deepstream-imagedata-multistream/deepstream_imagedata-multistream.py
@@ -336,14 +336,14 @@ def main(args):
     if not filter1:
         sys.stderr.write(" Unable to get the caps filter1 \n")
     filter1.set_property("caps", caps1)
-    print("Creating tiler \n ")
-    tiler = Gst.ElementFactory.make("nvmultistreamtiler", "nvtiler")
-    if not tiler:
-        sys.stderr.write(" Unable to create tiler \n")
-    print("Creating nvvidconv \n ")
-    nvvidconv = Gst.ElementFactory.make("nvvideoconvert", "convertor")
-    if not nvvidconv:
-        sys.stderr.write(" Unable to create nvvidconv \n")
+    # print("Creating tiler \n ")
+    # tiler = Gst.ElementFactory.make("nvmultistreamtiler", "nvtiler")
+    # if not tiler:
+    #     sys.stderr.write(" Unable to create tiler \n")
+    # print("Creating nvvidconv \n ")
+    # nvvidconv = Gst.ElementFactory.make("nvvideoconvert", "convertor")
+    # if not nvvidconv:
+    #     sys.stderr.write(" Unable to create nvvidconv \n")
     print("Creating nvosd \n ")
     nvosd = Gst.ElementFactory.make("nvdsosd", "onscreendisplay")
     if not nvosd:
@@ -379,10 +379,10 @@ def main(args):
         pgie.set_property("batch-size", number_sources)
     tiler_rows = int(math.sqrt(number_sources))
     tiler_columns = int(math.ceil((1.0 * number_sources) / tiler_rows))
-    tiler.set_property("rows", tiler_rows)
-    tiler.set_property("columns", tiler_columns)
-    tiler.set_property("width", TILED_OUTPUT_WIDTH)
-    tiler.set_property("height", TILED_OUTPUT_HEIGHT)
+    # tiler.set_property("rows", tiler_rows)
+    # tiler.set_property("columns", tiler_columns)
+    # tiler.set_property("width", TILED_OUTPUT_WIDTH)
+    # tiler.set_property("height", TILED_OUTPUT_HEIGHT)
 
     sink.set_property("sync", 0)
     sink.set_property("qos", 0)
@@ -392,7 +392,7 @@ def main(args):
         # can be easily accessed on CPU in Python.
         mem_type = int(pyds.NVBUF_MEM_CUDA_UNIFIED)
         streammux.set_property("nvbuf-memory-type", mem_type)
-        nvvidconv.set_property("nvbuf-memory-type", mem_type)
+        # nvvidconv.set_property("nvbuf-memory-type", mem_type)
         if platform_info.is_wsl():
             #opencv functions like cv2.line and cv2.putText is not able to access NVBUF_MEM_CUDA_UNIFIED memory
             #in WSL systems due to some reason and gives SEGFAULT. Use NVBUF_MEM_CUDA_PINNED memory for such
@@ -403,12 +403,12 @@ def main(args):
             nvvidconv1.set_property("nvbuf-memory-type", vc_mem_type)
         else:
             nvvidconv1.set_property("nvbuf-memory-type", mem_type)
-        tiler.set_property("nvbuf-memory-type", mem_type)
+        # tiler.set_property("nvbuf-memory-type", mem_type)
 
     print("Adding elements to Pipeline \n")
     pipeline.add(pgie)
-    pipeline.add(tiler)
-    pipeline.add(nvvidconv)
+    # pipeline.add(tiler)
+    # pipeline.add(nvvidconv)
     pipeline.add(filter1)
     pipeline.add(nvvidconv1)
     pipeline.add(nvosd)
@@ -418,9 +418,10 @@ def main(args):
     streammux.link(pgie)
     pgie.link(nvvidconv1)
     nvvidconv1.link(filter1)
-    filter1.link(tiler)
-    tiler.link(nvvidconv)
-    nvvidconv.link(nvosd)
+    filter1.link(nvosd)
+    # filter1.link(tiler)
+    # tiler.link(nvvidconv)
+    # nvvidconv.link(nvosd)
     nvosd.link(sink)
 
     # create an event loop and feed gstreamer bus mesages to it
@@ -429,7 +430,7 @@ def main(args):
     bus.add_signal_watch()
     bus.connect("message", bus_call, loop)
 
-    tiler_sink_pad = tiler.get_static_pad("sink")
+    tiler_sink_pad = nvosd.get_static_pad("sink")
     if not tiler_sink_pad:
         sys.stderr.write(" Unable to get src pad \n")
     else:

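In short, the patch removes the tiler and the second nvvideoconvert, links filter1 (whose caps in this sample already force RGBA) directly to nvosd, and attaches the probe to nvosd's sink pad instead of the tiler's, so a single probe receives RGBA frames with the per-stream frame numbers intact.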
There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

Sorry for the delay.
Thank you, I will try this!
