I’m currently using DeepStream 3.0 on a Jetson Xavier with JetPack 4.1. I’ve edited deepstream-yolo-app to take input from a USB camera via v4l2src. However, when I run two instances, each with a different source and a different nvoverlaysink, the two overlays do render next to each other, but both streams are very slow (~1 fps).
How would I go about changing the dimensions of the video from my webcam? Without the NV plugins, I can run gst-launch-1.0 v4l2src device=/dev/video0 ! 'video/x-raw,width=640,height=360' ! xvimagesink and the video renders at the smaller size with minimal latency, which is what I want.
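For reference, the NV-plugin equivalent I’m trying to arrive at looks roughly like this (a sketch only — I haven’t gotten this shape working inside the app; element and caps names follow the Jetson accelerated-GStreamer conventions):

```shell
# Downscale at the source, then convert into NVMM memory for the NV sink.
# Sketch only -- assumes the camera can deliver 640x360 raw frames.
gst-launch-1.0 v4l2src device=/dev/video0 \
  ! 'video/x-raw,width=640,height=360' \
  ! nvvidconv \
  ! 'video/x-raw(memory:NVMM),width=640,height=360' \
  ! nvoverlaysink
```

nvvidconv can do the scaling and the raw-to-NVMM copy in one step, so a separate videoscale element may not be needed on this path.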
However, when I try this in deepstream-yolo-app.cpp, I’m not sure where to put the resize caps or the videoscale element. I’ve tried placing it before nvvidconv, after nvvidconv, and after the yolo element, but the pipeline doesn’t work: it either reports an internal data stream error, or freezes at GstSystemClock or on the first frame.
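In code, what I’m attempting amounts to inserting a capsfilter between v4l2src and nvvidconv, roughly like this (a sketch; the variable names source, nvvidconv, and pipeline are placeholders for whatever the app actually uses — the caps syntax itself is standard GStreamer):

```c
/* Sketch: force 640x360 from the camera before nvvidconv.
 * Variable names are placeholders, not the app's actual identifiers. */
GstElement *capsfilter = gst_element_factory_make ("capsfilter", "src-caps");
GstCaps *caps = gst_caps_from_string ("video/x-raw, width=640, height=360");
g_object_set (G_OBJECT (capsfilter), "caps", caps, NULL);
gst_caps_unref (caps);

gst_bin_add (GST_BIN (pipeline), capsfilter);
/* Link: v4l2src -> capsfilter -> nvvidconv -> ... */
if (!gst_element_link_many (source, capsfilter, nvvidconv, NULL)) {
  g_printerr ("Failed to link source -> capsfilter -> nvvidconv\n");
}
```

Is this the right place for the caps, or do they need to go on the NVMM side (after nvvidconv) instead?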
Any help would be greatly appreciated!