The second one is a tracking application that uses the following pipeline for video loading:
"nvcamerasrc ! video/x-raw(memory:NVMM), format=(string)I420, width=640, height=480, framerate=30/1 ! nvvidconv flip-method=0 ! video/x-raw, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink"
…but when these are run simultaneously we get the following error:
Setting pipeline to PAUSED …
Socket read error. Camera Daemon stopped functioning…
gst_nvcamera_open() failed ret=0
ERROR: Pipeline doesn’t want to pause.
ERROR: from element /GstPipeline:pipeline0/GstNvCameraSrc:nvcamerasrc0: GStreamer error: state change failed and some element failed to post a proper error message with the reason for the failure.
Additional debug info:
gstbasesrc.c(3354): gst_base_src_start (): /GstPipeline:pipeline0/GstNvCameraSrc:nvcamerasrc0:
Failed to start
Setting pipeline to NULL …
Freeing pipeline …
We assume this is because nvcamerasrc can only be opened once. We would prefer to use a tee, but haven't found a working setup yet. Any help would be greatly appreciated.
An easy solution would be to use v4l2loopback. Check this post for installation.
Then you can use tee in the first application's pipeline, send its second branch to v4l2loopback, and use the virtual v4l2 node to feed the second application.
Assuming you only have the onboard camera and use v4l2loopback to create the virtual device /dev/video1, you may use these pipelines (first app: yolo).
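The pipelines are not reproduced above; a rough sketch of what they could look like, assuming gst-launch-1.0 syntax and /dev/video1 as the loopback node (the real applications would use appsink instead of the display sink), is:

```shell
# First app (yolo) -- hypothetical sketch: tee splits the camera stream,
# one branch goes to the application, the other feeds /dev/video1.
gst-launch-1.0 nvcamerasrc \
  ! 'video/x-raw(memory:NVMM), format=I420, width=640, height=480, framerate=30/1' \
  ! nvvidconv ! 'video/x-raw, format=I420' ! tee name=t \
    t. ! queue ! videoconvert ! xvimagesink \
    t. ! queue ! v4l2sink device=/dev/video1

# Second app (tracker) -- hypothetical sketch: read from the virtual node only.
gst-launch-1.0 v4l2src device=/dev/video1 ! videoconvert ! xvimagesink
```

Note the queue after each tee branch; without it the branches can stall each other.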
Now the tracker and detector work together, but the setup is somewhat unstable (it doesn't always open). We only sometimes get the following error:
OpenCV Error: Assertion failed (size.width>0 && size.height>0) in imshow, file /home/nvidia/opencv-3.4.0/modules/highgui/src/window.cpp, line 331
terminate called after throwing an instance of 'cv::Exception'
what(): /home/nvidia/opencv-3.4.0/modules/highgui/src/window.cpp:331: error: (-215) size.width>0 && size.height>0 in function imshow
Also, when we run the tracker through the loopback it lags quite a bit. Is this because frames are skipped, or is there another reason?
The loopback seems to add even more latency to the system, so we first tried running an individual pipe:
Then we wanted to initialize both the tracker and the detector with the following pipeline:
“v4l2src device=/dev/video1 ! appsink”
But we got the following error message with gst-debug-level=2:
0:00:00.133633321 23142 0x2a5b6cf0 ERROR v4l2allocator gstv4l2allocator.c:1299:gst_v4l2_allocator_dqbuf:v4l2src0:pool:src:allocator buffer 1 was not queued, this indicate a driver bug.
Notice that both the tracker and the detector opened the first frame, but then the buffer wasn't able to load any more…
We tried to run the second pipe with xvimagesink and this worked:
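Such a test pipe might look like this (a sketch, assuming /dev/video1 is the loopback node):

```shell
# Sketch: same loopback source, but xvimagesink instead of appsink.
# -v prints the negotiated caps, useful for checking the actual format.
gst-launch-1.0 -v v4l2src device=/dev/video1 ! videoconvert ! xvimagesink
```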
I suppose it fixes a synchronization issue, but I have no details on that.
You cannot use the same capture node from two applications; this is the purpose of tee.
What you can try, if you want both applications to be independent of each other, is to create two virtual nodes. Be sure the nodes are not used by any application first, and then:
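A typical invocation for that, assuming the module is installed (`devices` and `video_nr` are standard v4l2loopback module parameters):

```shell
# Remove the module if it is already loaded, then create two virtual
# nodes, /dev/video1 and /dev/video2 (assumes v4l2loopback is installed).
sudo modprobe -r v4l2loopback
sudo modprobe v4l2loopback devices=2 video_nr=1,2
```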
Then you can use one node for each application (yolo reads /dev/video1 and tracker /dev/video2).
Of course, this may duplicate v4l2loopback overhead.
Note that in this case v4l2src uses format YV12, not BGR. To be sure BGR is used, you may add caps, but then you'll also need videoconvert for xvimagesink:
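For example (a sketch, assuming the loopback node is fed 640x480 BGR at 30 fps by the first application):

```shell
# Force BGR caps after v4l2src; videoconvert is then required because
# xvimagesink does not accept BGR input directly.
gst-launch-1.0 v4l2src device=/dev/video1 \
  ! 'video/x-raw, format=BGR, width=640, height=480, framerate=30/1' \
  ! videoconvert ! xvimagesink
```

For the appsink-based applications, the same caps filter can replace xvimagesink's branch so OpenCV receives BGR frames directly.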