Please provide complete information as applicable to your setup.
• Hardware Platform: Jetson Nano
• DeepStream Version: 5.0
• JetPack Version (valid for Jetson only): 4.4
• TensorRT Version: 7.1.3.0
• NVIDIA GPU Driver Version (valid for GPU only): —
• Issue Type (questions, new requirements, bugs): questions
Hi all, following this post: Deepstream 5.1 docker can't run the python bindings
I've managed to run DeepStream and load the plugins.
I wanted to ask: what is the best way (using python-gi) to grab a USB camera stream as grayscale, run it through a "find blobs" algorithm, and save the output as a vector, ideally using only the GPU of my Jetson Nano?
My goal is to grab a frame from the camera, convert it to grayscale, and find blobs (connected areas) in the image, and I would like as much of this as possible to happen on the GPU, without using the CPU.
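To be concrete about what I mean by "find blobs", here is a plain-Python CPU reference of the step I want to move to the GPU: 4-connected component labeling on a thresholded grayscale frame (the threshold value 128 is just an example):

```python
# CPU reference of the "find blobs" step: label 4-connected bright regions
# in a grayscale frame (list of rows of pixel values 0-255).
from collections import deque

def find_blobs(gray, threshold=128):
    """Return a list of blobs; each blob is a list of (row, col) pixels."""
    h, w = len(gray), len(gray[0])
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for r in range(h):
        for c in range(w):
            if gray[r][c] >= threshold and not seen[r][c]:
                # Flood-fill one connected component with a BFS.
                blob, queue = [], deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    blob.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and gray[ny][nx] >= threshold and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                blobs.append(blob)
    return blobs

# Tiny example frame: two separate bright regions.
frame = [
    [0, 200, 0, 0],
    [0, 200, 0, 255],
    [0, 0, 0, 255],
]
print(len(find_blobs(frame)))  # → 2
```

The output I'd like to save is essentially this list of pixel vectors per blob, but computed on the GPU.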
I tried to use the DeepStream pipeline, but I didn't understand how to grab the frame after it and keep processing it on the GPU.
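For reference, this is roughly the pipeline description I've been experimenting with via `Gst.parse_launch`. The element names are from the DeepStream plugin manual, but the caps (especially whether GRAY8 works in NVMM memory) and where to attach a probe are my own guesses, and may be exactly what I'm getting wrong:

```python
# Sketch of the pipeline string I'm trying (python-gi / Gst.parse_launch).
# Assumptions: v4l2src for the USB camera, nvvideoconvert to move frames
# into NVMM (GPU) memory; GRAY8-in-NVMM support is unconfirmed on my side.
pipeline_desc = (
    "v4l2src device=/dev/video0 ! "
    "video/x-raw,format=YUY2,width=640,height=480 ! "
    "nvvideoconvert ! "
    "video/x-raw(memory:NVMM),format=GRAY8 ! "
    "fakesink name=sink"  # I would attach a pad probe here to reach the frame
)
print(pipeline_desc)
```

The pipeline itself launches for me; what I can't figure out is where to hook in so the buffer stays in GPU memory for the blob step.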
In general, at which step does the frame transfer from the GPU to the CPU?