GStreamer: one nvcamerasrc and two applications.


I was wondering if anyone knows a GStreamer pipeline that lets one camera source be used by two applications?

We have tried running two different programs with the same camera, with the following commands:

The first one uses darknet (YOLO), a neural-network detection program:

$ ./darknet detector demo data/ cfg/yolov3-tiny-obj.cfg yolov3-tiny-obj_200000.weights "nvcamerasrc ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)I420, framerate=(fraction)30/1 ! nvvidconv flip-method=0 ! video/x-raw, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink"

The second one is a tracking application that uses the following pipeline for video loading:

    "nvcamerasrc ! video/x-raw(memory:NVMM), format=(string)I420, width=640, height=480, framerate=30/1 ! nvvidconv flip-method=0 ! video/x-raw, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink"

…but when these are run simultaneously we get the following error:

Setting pipeline to PAUSED …
Socket read error. Camera Daemon stopped functioning…
gst_nvcamera_open() failed ret=0
ERROR: Pipeline doesn't want to pause.
ERROR: from element /GstPipeline:pipeline0/GstNvCameraSrc:nvcamerasrc0: GStreamer error: state change failed and some element failed to post a proper error message with the reason for the failure.
Additional debug info:
gstbasesrc.c(3354): gst_base_src_start (): /GstPipeline:pipeline0/GstNvCameraSrc:nvcamerasrc0:
Failed to start
Setting pipeline to NULL …
Freeing pipeline …

We assume this is because the camera source can only be opened once. We would prefer using a tee, but haven't yet found a working setup. Any help would be greatly appreciated.


An easy solution would be to use v4l2loopback. Check this post for installation.

Then you may use tee in the first application pipeline and send its second output to v4l2loopback, and use the virtual v4l2 node for feeding the second application.
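As a rough sketch of that layout (the helper name, defaults, and caps here are my own assumptions, patterned on the pipelines in this thread, not code from either application), the teed launch string could be composed like this:

```python
# Hypothetical helper composing a teed GStreamer launch string: one branch
# feeds the local appsink, the other a v4l2loopback node. Caps and device
# path are assumptions, not taken from either application's real code.
def teed_pipeline(width=1280, height=720, fps=30, device="/dev/video1"):
    capture = (
        f"nvcamerasrc ! video/x-raw(memory:NVMM), width=(int){width}, "
        f"height=(int){height}, format=(string)I420, "
        f"framerate=(fraction){fps}/1 "
        "! nvvidconv ! video/x-raw, format=BGRx "
        "! videoconvert ! video/x-raw, format=BGR"
    )
    # Each tee branch gets its own queue so one slow consumer cannot
    # stall the other.
    return (f"{capture} ! tee name=t ! queue ! appsink "
            f"t. ! queue ! v4l2sink device={device}")

print(teed_pipeline())
```

The per-branch queues matter: without them, a tee pushes buffers synchronously to all branches and one blocked sink can stall the whole pipeline.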

Assuming you only have the onboard camera and use v4l2loopback to create the virtual device /dev/video1, you may use these pipelines. First app (YOLO):

nvcamerasrc ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720,format=(string)I420, framerate=(fraction)30/1 ! nvvidconv ! video/x-raw, format=BGRx ! videoconvert ! video/x-raw, format=BGR ! tee name=t ! queue ! appsink    t. ! queue ! v4l2sink device=/dev/video1

and second one (tracker):

v4l2src device=/dev/video1 ! video/x-raw, format=BGR ! appsink

Thank you for your reply, it was very helpful :)

Now the tracker and detector work together, but the implementation is somewhat unstable (it won't always open). We only sometimes get the following error:

OpenCV Error: Assertion failed (size.width>0 && size.height>0) in imshow, file /home/nvidia/opencv-3.4.0/modules/highgui/src/window.cpp, line 331
terminate called after throwing an instance of 'cv::Exception'
what(): /home/nvidia/opencv-3.4.0/modules/highgui/src/window.cpp:331: error: (-215) size.width>0 && size.height>0 in function imshow

./ line 13: 32614 Aborted
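That OpenCV assertion fires when an empty frame reaches imshow, typically because a read returned nothing before the loopback node was streaming. A minimal, hedged sketch of guarding against that (here `read` is a stand-in for `cv2.VideoCapture.read`, and the retry limit is an arbitrary assumption, not something from this thread):

```python
# Hedged sketch: skip empty frames instead of passing them to imshow.
# `read` stands in for cv2.VideoCapture.read(); max_failures is an
# arbitrary bound on consecutive empty reads before giving up.
def safe_frames(read, max_failures=30):
    failures = 0
    while failures < max_failures:
        ok, frame = read()
        if not ok or frame is None:
            failures += 1          # empty read: retry instead of crashing
            continue
        failures = 0
        yield frame                # only non-empty frames reach the caller

# Example with a fake reader that fails twice, then delivers two frames:
fake = iter([(False, None), (False, None), (True, "f0"), (True, "f1")])
frames = []
for f in safe_frames(lambda: next(fake, (False, None)), max_failures=5):
    frames.append(f)
```

In the real application the loop body would call imshow only on the yielded frames, so a slow-starting loopback node just delays the first displayed frame instead of aborting.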

Do you have any idea what may cause this? Our tracker implementation is very similar to the one you presented in this post

Also, when we run the tracker through the loopback it lags quite a bit. Is this because it skips frames, or is there another reason?
The loopback seems to add even more latency to the system, so we first tried running a single standalone pipeline:

gst-launch-1.0 nvcamerasrc ! 'video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)I420, framerate=(fraction)30/1' ! nvvidconv ! video/x-raw, format=BGRx ! videoconvert ! video/x-raw, format=BGR ! v4l2sink device=/dev/video1

Then we wanted to initialize both the tracker and the detector with the following pipeline:

"v4l2src device=/dev/video1 ! appsink"

But we got the following error message with --gst-debug-level=2:

0:00:00.133633321 23142 0x2a5b6cf0 ERROR v4l2allocator gstv4l2allocator.c:1299:gst_v4l2_allocator_dqbuf:v4l2src0:pool:src:allocator buffer 1 was not queued, this indicate a driver bug.

Note that both the tracker and the detector opened the first frame, but after that no more buffers could be read.
We tried running the second pipe with xvimagesink instead, and this worked:

gst-launch-1.0 v4l2src device=/dev/video1 ! xvimagesink

We tried using a tee as well, but didn't find a working solution for that either.

It seems the read failed to get a frame. In the example, this is caught at line 28. Do you get the same error with the example?

I also see some short, periodic lag. It seems better after boosting performance with:

sudo nvpmodel -m 0  
sudo /home/nvidia/

Furthermore, IIRC YOLO was not able to achieve high framerates on Jetson (2-3 fps, and less than 20 fps with tiny YOLO).

In my case, I need to insert a tee before v4l2sink otherwise it cannot start:

gst-launch-1.0 nvcamerasrc ! 'video/x-raw(memory:NVMM), width=1280, height=720, format=I420, framerate=(fraction)30/1' ! nvvidconv ! 'video/x-raw, format=BGRx' ! videoconvert ! 'video/x-raw, format=BGR' ! tee ! v4l2sink device=/dev/video1

I suppose it fixes a synchronization issue, but I have no details on that.

You cannot use the same capture node from two applications… that is the purpose of tee.
If you want both applications to be independent of each other, you can try creating two virtual nodes. First make sure the nodes are not in use by any application, then:

sudo rmmod v4l2loopback   # unload driver
sudo modprobe v4l2loopback devices=2 video_nr=1,2    # reload driver managing 2 nodes
gst-launch-1.0   nvcamerasrc ! 'video/x-raw(memory:NVMM), width=1280, height=720,format=I420, framerate=30/1' ! nvvidconv ! 'video/x-raw, format=BGRx' ! videoconvert ! 'video/x-raw, format=BGR' ! tee name=t ! queue ! v4l2sink device=/dev/video1   t. ! queue ! v4l2sink device=/dev/video2

Then you can use one node for each application (yolo reads /dev/video1 and tracker /dev/video2).
Of course, this may duplicate v4l2loopback overhead.
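For illustration (an assumption on my side, not the applications' actual code), each reader pipeline then differs only in the device path:

```python
# Hypothetical sketch: per-application reader pipelines for the two
# v4l2loopback nodes. Forcing BGR caps is an assumption; by default the
# loopback node may negotiate another format such as YV12.
def reader_pipeline(device):
    return f"v4l2src device={device} ! video/x-raw, format=BGR ! appsink"

yolo_pipe = reader_pipeline("/dev/video1")     # detector reads node 1
tracker_pipe = reader_pipeline("/dev/video2")  # tracker reads node 2
```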

Note that in this case v4l2src uses format YV12, not BGR. To be sure BGR is used, you may add caps, but you'll also need videoconvert for xvimagesink:

gst-launch-1.0 v4l2src device=/dev/video1 ! 'video/x-raw, format=BGR' ! videoconvert ! xvimagesink