Find blobs on stream using GPU

Please provide complete information as applicable to your setup.

• Hardware Platform: Jetson Nano
• DeepStream Version: 5.0
• JetPack Version (valid for Jetson only): 4.4
• TensorRT Version: 7.1.3.0
• NVIDIA GPU Driver Version (valid for GPU only):
• Issue Type (questions, new requirements, bugs): questions

Hi all, following this post: Deepstream 5.1 docker can't run the python bindings
I’ve managed to run DeepStream and load the plugins.
I wanted to ask what the best way is (using python-gi) to grab a USB camera stream as greyscale, run it through a “find blobs” algorithm, and save the output as a vector, hopefully using only the GPU of my Jetson Nano.

Thanks.

Hi,

Could you share more information about the ‘find blobs’ algorithm?
Do you mean a DeepStream sample or an API?

In general, you can update the camera format to grayscale as in the source below:

...
print("Playing cam %s " %args[1])
# Request GRAY8 frames from the camera source
caps_v4l2src.set_property('caps', Gst.Caps.from_string("video/x-raw,format=GRAY8,framerate=30/1"))
# Convert to NV12 before the frames enter the DeepStream part of the pipeline
caps_vidconvsrc.set_property('caps', Gst.Caps.from_string("video/x-raw,format=NV12"))
...
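
For context, the two caps filters above sit in a source chain like the following sketch. The element names follow the deepstream-test1-usbcam Python sample and are illustrative; details may differ in your setup:

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)

# Sketch of the source chain around the two caps filters shown above.
source = Gst.ElementFactory.make("v4l2src", "usb-cam-source")
caps_v4l2src = Gst.ElementFactory.make("capsfilter", "v4l2src_caps")        # GRAY8 caps set here
vidconvsrc = Gst.ElementFactory.make("videoconvert", "convertor_src1")      # CPU-side format conversion
nvvidconvsrc = Gst.ElementFactory.make("nvvideoconvert", "convertor_src2")  # moves frames into GPU (NVMM) memory
caps_vidconvsrc = Gst.ElementFactory.make("capsfilter", "nvmm_caps")        # NV12 caps set here

pipeline = Gst.Pipeline()
for elem in (source, caps_v4l2src, vidconvsrc, nvvidconvsrc, caps_vidconvsrc):
    pipeline.add(elem)
source.link(caps_v4l2src)
caps_v4l2src.link(vidconvsrc)
vidconvsrc.link(nvvidconvsrc)
nvvidconvsrc.link(caps_vidconvsrc)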

Thanks

Hi, Thanks.
By blob detection I mean it in the simplest way:

Blob Detection Using OpenCV ( Python, C++ )

My purpose is to grab a frame from the camera, convert it to greyscale, and find blobs (connected areas) on the surface, and I would like all of this to happen on the GPU as much as possible, without using the CPU.
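
For concreteness, something along the lines of this minimal sketch from that tutorial (note that SimpleBlobDetector runs on the CPU, which is exactly what I’d like to avoid):

import cv2
import numpy as np

# Minimal blob-detection sketch in the spirit of the LearnOpenCV tutorial above.
gray = np.full((480, 640), 255, dtype=np.uint8)  # stand-in for a greyscale camera frame
cv2.circle(gray, (320, 240), 20, 0, -1)          # one dark "blob" for the demo

params = cv2.SimpleBlobDetector_Params()
params.filterByArea = True
params.minArea = 50                              # ignore tiny specks (illustrative value)

detector = cv2.SimpleBlobDetector_create(params)
keypoints = detector.detect(gray)

# The per-frame "vector" output I'm after: (x, y, diameter) for each blob found.
blobs = [(kp.pt[0], kp.pt[1], kp.size) for kp in keypoints]
print(blobs)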

I tried to use the DeepStream pipeline, but I didn’t understand how to grab the frame from it and keep processing it on the GPU.

In general, at which step does the frame transfer from the GPU to the CPU?

Is there any recommended way?
Thanks.

Hi,

DeepStream doesn’t copy the memory buffer.
Instead, it maps the data with a zero-copy technique to avoid memory copies.

Here is a sample to access the frame buffer.
You can modify it and pass n_frame to the blob detection function:
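
For illustration, a minimal sketch of such a probe, based on the DeepStream Python image-data sample (the find_blobs call is a hypothetical placeholder for your own function):

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst
import pyds

def buffer_probe(pad, info, u_data):
    # Runs once per buffer; frames stay in NVMM memory and mapping is zero-copy.
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        # n_frame is a NumPy view of the frame. get_nvds_buf_surface expects
        # the buffer in RGBA format (add nvvideoconvert + capsfilter upstream,
        # as in the image-data sample).
        n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
        # Hand the array to your own routine, e.g.:
        # blobs = find_blobs(n_frame)   # find_blobs is a placeholder
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK

You can attach it to a pad downstream of nvstreammux, e.g. sinkpad.add_probe(Gst.PadProbeType.BUFFER, buffer_probe, 0).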

Thanks.

Thank you, I’ll take a look!