**• Hardware Platform (GPU): NVIDIA GeForce RTX 3060**
**• DeepStream Version: 6.1.1**
**• Ubuntu: 20.04**
**• TensorRT Version: 8.4.1.5**
**• NVIDIA GPU Driver Version: 525.105.17**
**• Issue Type: Question**
Hello,
I’ve just started with computer vision, so I am not very good at understanding the documentation.
What I want to do is connect to multiple IP cameras (around 20 or more) and run detection with YOLOv5 or YOLOv8. I tried using OpenCV VideoCapture to get frames, but that way I couldn’t get the latest frames. When I looked into this, I found that I could use GStreamer with OpenCV (using appsink drop=true works for me), so I built OpenCV with GStreamer enabled. But that way the CPU usage was too high. After more research I decided to use DeepStream in order to do the work on the GPU.
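For context, the pipeline string I was passing to cv2.VideoCapture (with cv2.CAP_GSTREAMER) looked roughly like this; the URI is a placeholder and the exact elements may differ slightly from what I actually ran:

```
# Rough reconstruction of my OpenCV capture pipeline (URI is a placeholder).
# appsink drop=true max-buffers=1 discards queued frames so read() returns the newest one.
rtspsrc location=rtsp://{uri} latency=100 ! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! video/x-raw,format=BGR ! appsink drop=true max-buffers=1
```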
Now I can get frames using the GPU, but if there is movement in the frame, it gets corrupted. Also, for 10 cameras GPU memory usage goes up to about 600 MB, which is too much. I want to reduce GPU memory usage and get uncorrupted frames.
Also, when I researched GStreamer’s plugins I saw there are lots of decoder plugins, but I don’t know which ones to use. Not every plugin works with all of my cameras.
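From what I’ve read (and I may be wrong), nvv4l2decoder is the NVDEC hardware decoder that DeepStream ships, so a GPU-decode variant of my first pipeline might look something like this; I haven’t verified it against all of my cameras:

```
# Hypothetical GPU-decode variant using nvv4l2decoder (NVDEC); not tested on all cameras.
gst-launch-1.0 -e rtspsrc location={uri} latency=100 ! \
  rtph264depay ! h264parse ! nvv4l2decoder ! \
  nvvideoconvert ! video/x-raw,format=BGRx ! appsink drop=true
```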
The last thing I want to add is that I thought about using the nvstreammux plugin, but I couldn’t figure out how. I think this may be better than using just nvvideoconvert (see my sketch after the pipelines below).
Pipelines that I used (the URIs start with rtsp://):

GPU memory usage is 122 MB per camera, but it doesn’t work for all of my cameras:

```
gst-launch-1.0 -e rtspsrc location={uri1} latency=100 ! rtph264depay ! h264parse ! avdec_h264 ! nvvideoconvert output-buffers=1 ! videoscale ! video/x-raw,width=1024,height=720,format=BGR ! appsink drop=true
```

GPU memory usage is 156 MB per camera and it works for all cameras; for 10 cameras it reaches 622 MB:

```
gst-launch-1.0 -e rtspsrc location={uri} latency=100 ! decodebin3 ! nvvideoconvert ! appsink drop=true
```
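And this is the kind of nvstreammux pipeline I’ve been trying to put together to batch two cameras (a rough sketch based on the plugin documentation; the property values are guesses). My understanding is that nvstreammux outputs a batched buffer, so I’ve added nvmultistreamtiler to compose it back into a single frame before appsink; for separate per-camera frames I’d probably need nvstreamdemux instead, but I’m not sure:

```
# Rough sketch: batch two RTSP sources with nvstreammux, tile them into one frame,
# then hand BGRx frames to appsink. Property values are guesses, not tuned.
gst-launch-1.0 -e \
  nvstreammux name=mux batch-size=2 width=1280 height=720 \
              batched-push-timeout=40000 live-source=1 ! \
  nvmultistreamtiler rows=1 columns=2 ! \
  nvvideoconvert ! video/x-raw,format=BGRx ! appsink drop=true \
  rtspsrc location={uri1} latency=100 ! rtph264depay ! h264parse ! nvv4l2decoder ! mux.sink_0 \
  rtspsrc location={uri2} latency=100 ! rtph264depay ! h264parse ! nvv4l2decoder ! mux.sink_1
```

Is this roughly the right direction?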