Push the frame buffer back to the pipeline with Python bindings

Continuing the discussion from How to modify frame buffer returned by `pyds.get_nvds_buf_surface`?:

Any updates on how to put the buffer back into the pipeline inside a probe with the Python bindings?

As mentioned in How to modify frame buffer returned by pyds.get_nvds_buf_surface? - Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums, there is already an NvBufSurface read/write sample in C/C++; you just need to implement the same approach in Python with gst-python and pyds.


OK, thank you for the reply, but I'm not sure I understand what works in Python and what doesn't. If I'm not wrong, some functions, for example NvBufSurfTransform, do not have a Python counterpart?

The sample deepstream_python_apps/deepstream_imagedata-multistream.py at master · NVIDIA-AI-IOT/deepstream_python_apps · GitHub shows how to get the NvBufSurface array, draw bounding boxes on it, and write the frame back.


And please upgrade the DeepStream SDK version to 6.1.1 GA. DeepStream 5.0.1 is too old.


Thank you for your answer. I tested your example, but when I apply it to my use case, no change is reflected in the downstream images. I am getting a mask in tmp and converting it to a heat map as follows:
tmp = cv2.applyColorMap(tmp, cv2.COLORMAP_JET)
tmp = cv2.cvtColor(tmp, cv2.COLOR_RGB2RGBA)
n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
n_frame = cv2.addWeighted(tmp, 0.4, n_frame, 0.6, 0)
Isn’t this supposed to work?

Does the sample deepstream_python_apps/deepstream_imagedata-multistream.py at master · NVIDIA-AI-IOT/deepstream_python_apps · GitHub work for you? Have you upgraded to DeepStream 6.1.1?


Yes, it works! In fact, if I use cv2.line like in your sample, I can see the image buffer has been changed.

It seems addWeighted() allocates new memory (a new array) for "dst", so the "n_frame" in your code is no longer the array you got from "get_nvds_buf_surface()".
cv2.line() (see OpenCV: Drawing Functions) is different: it just draws something on the image, so the memory (array) stays the same one.

It is not a DeepStream issue.
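The difference can be demonstrated with plain NumPy (OpenCV images are NumPy arrays); a minimal sketch, independent of DeepStream:

```python
import numpy as np

# Stand-in for the array returned by pyds.get_nvds_buf_surface():
# what matters is the underlying memory, not the variable name.
frame = np.zeros((4, 4), dtype=np.uint8)
overlay = np.full((4, 4), 200, dtype=np.uint8)

# Computing a blend without an explicit output buffer allocates a new
# array (this is what cv2.addWeighted does when dst is not given), so
# the original buffer is left untouched.
blended = (0.4 * overlay + 0.6 * frame).astype(np.uint8)
print(np.shares_memory(blended, frame))      # False: new allocation

# Writing the result back through the existing array keeps the same
# memory (this is what cv2.line does, and what passing dst=n_frame to
# cv2.addWeighted achieves).
address_before = frame.ctypes.data
frame[:] = blended
print(frame.ctypes.data == address_before)   # True: same buffer
print(int(frame[0, 0]))                      # 80, i.e. 0.4 * 200
```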

1 Like

OK, is there a way to push it back into the same memory?

Maybe you need to search online or ask on the OpenCV forum.


So to answer my own question here: we can pass "dst", the output-array argument of cv2.addWeighted, to store the resulting image in the same memory:
tmp = cv2.applyColorMap(tmp, cv2.COLORMAP_JET)
tmp = cv2.cvtColor(tmp, cv2.COLOR_RGB2RGBA)
n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
_ = cv2.addWeighted(tmp, 0.4, n_frame, 0.6, 0.0, n_frame)
Thank you for your help !
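For reference, this write-back pattern can be wrapped in a small helper. The function name overlay_in_place is hypothetical, and plain NumPy is used here so the sketch stays self-contained; in a real probe, n_frame would be the array returned by pyds.get_nvds_buf_surface(), and the blend could equally be done with cv2.addWeighted(..., dst=n_frame):

```python
import numpy as np

def overlay_in_place(n_frame, heatmap, alpha=0.4):
    """Blend heatmap into n_frame without reallocating n_frame.

    n_frame is assumed to be the NumPy view of the NvBufSurface; the
    slice assignment writes through that view, so the change reaches
    the pipeline. Rebinding the name (n_frame = ...) would not.
    """
    blended = alpha * heatmap + (1.0 - alpha) * n_frame
    n_frame[:] = blended.astype(n_frame.dtype)   # in-place write-back
    return n_frame

# Example with a dummy RGBA frame:
frame = np.zeros((2, 2, 4), dtype=np.uint8)
overlay_in_place(frame, np.full((2, 2, 4), 100, dtype=np.uint8))
print(int(frame[0, 0, 0]))   # 40, i.e. 0.4 * 100
```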

Hi, I am using the Python sample deepstream_imagedata-multistream. I want to redraw the boxes in a different way, plus circles, on the frame using OpenCV. I tried the code below:

n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
n_frame = draw_bounding_boxes(n_frame, obj_meta, obj_meta.confidence) # line 123 in script

but it's not working for me. @m.elkateb, how did you solve the issue?
One more thing: for now I am getting the default bounding boxes on the display. How can I deactivate the boxes drawn automatically by DeepStream? I have attached my py file deepstream_imagedata-multistream.py.
Finally, I want to add video streams from 4 cameras rather than MP4 video files. Can this be solved with a minimal change in the Python code, i.e. just changing the source type like we do in a config file by setting type to 1 in [source0], as below?

#Type - 1=CameraV4L2 2=URI 3=MultiURI

Can anyone help me please? Thanks.
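For reference, in a deepstream-app style config a V4L2 camera source looks roughly like the fragment below (key names follow the deepstream-app reference configs; the device node and resolution are placeholders). Note, however, that the Python samples build their source bins in code, so changing a config file alone is not enough for them:

```
[source0]
enable=1
# Type - 1=CameraV4L2 2=URI 3=MultiURI
type=1
camera-width=1280
camera-height=720
camera-fps-n=30
camera-fps-d=1
camera-v4l2-dev-node=0
```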

  • I don’t know what draw_bounding_boxes is doing, but make sure that the functions you use do not create a copy of your n_frame memory, as mentioned earlier in this topic. All the functions need to apply their changes to the buffer in-place.
  • Adding the 4 camera streams shouldn’t be that hard, depending on the cameras. You have to adjust your pipeline, and you can take inspiration from the example deepstream_python_apps/deepstream_test_1_usb.py at master · NVIDIA-AI-IOT/deepstream_python_apps · GitHub. Basically, you have to add the plugins that read the stream from your camera, with the necessary caps and conversions, before streammux.
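As a sketch of what such a source branch looks like, here is a gst-launch style description built in Python. The element chain follows deepstream_test_1_usb.py, but the device path, resolution, and mux pad name are assumptions to adapt; in the Python apps the same chain is built with Gst.ElementFactory.make() and linked programmatically.

```python
def usb_camera_branch(device="/dev/video0", width=1280, height=720,
                      fps=30, pad="sink_0"):
    """Return a gst-launch style description of one USB-camera branch
    feeding an nvstreammux element assumed to be named 'mux'."""
    return (
        f"v4l2src device={device} ! "
        f"video/x-raw,width={width},height={height},framerate={fps}/1 ! "
        "videoconvert ! nvvideoconvert ! "
        "video/x-raw(memory:NVMM),format=NV12 ! "
        f"mux.{pad}"
    )

# One branch per camera, each on its own streammux sink pad:
for i in range(4):
    print(usb_camera_branch(device=f"/dev/video{i}", pad=f"sink_{i}"))
```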

Did you implement the buffer modification? If yes, could you give any outline of how to implement this?
By the way, if we are not able to draw anything on a DeepStream frame, then how should we implement AI applications like people-safety applications, where it's mandatory to draw polygons and other things like distance measurements?

To put it simply: the following call gives you the frame
n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
and as long as the modifications you make to it are done in-place, without changing its memory, your changes will be reflected in your application's output.
Let's say you take the multistream example, comment out the draw_bounding_boxes function, and just do
n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
#n_frame = draw_bounding_boxes(n_frame, obj_meta, obj_meta.confidence)
then add an n_frame = cv2.line(n_frame, … ) with your own parameters:
your change will be reflected.
So there is nothing special to do on your side, as long as the cv2 functions (or whatever functions you use) do not change n_frame's memory location.

@m.elkateb Sorry for my very late reply. I put this app aside for some time, but now I am back.

Thanks for the explanation. To explain my goal further: I want to make changes to each displayed frame. For example, if I have 4 videos coming from different sources, then I want to draw 4 different polygons, one on each source's frames (without missing any of them). I tried what you suggested, but I didn't get any change on the frame. I don't know where I am wrong. I am also sharing my
deepstream_imagedata-multistream.py (17.1 KB). If you can look it over, I'll be very thankful.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.