Looking to take one camera and duplicate its output N times for separate programs. Since the device can only be opened once, I need an approach that opens the real device once and then gets the frames to the other programs.
One way to do this is to create multiple virtual devices (/dev/video*) and use GStreamer to send the frames to them with v4l2sink. I have accomplished this using v4l2loopback, but am interested in alternatives. I am looking for a way that gives optimal performance and makes sense from a design perspective.
The way I am hoping to create multiple devices is through the device tree for the camera. I honestly don’t know if this is possible, but if I could add devices there it would make more sense than using v4l2loopback. Is this physically possible using the device tree?
As an alternative to creating multiple devices: I already have a program that uses the V4L2 API to stream from the camera, so I am considering copying each frame into shared memory for other processes as I grab it. This would avoid the need to create virtual devices and feels like it makes more sense from a design perspective. However, I have not yet tested the difference in latency between these options. Every millisecond counts.
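A minimal sketch of that shared-memory handoff, assuming Python's `multiprocessing.shared_memory` and a fixed frame size; the segment name `cam0_frame` and the UYVY 1080p size are illustrative assumptions, not anything from your setup:

```python
# One producer copies each captured frame into a named shared-memory
# segment; consumers attach by name and read it. In a real system you
# would add a semaphore or a ring of segments so readers never see a
# half-written frame.
from multiprocessing import shared_memory

FRAME_SIZE = 1920 * 1080 * 2  # e.g. one UYVY 1080p frame (assumption)

# Producer side: create the segment once, then copy each frame in as grabbed.
shm = shared_memory.SharedMemory(name="cam0_frame", create=True, size=FRAME_SIZE)
frame = bytes(FRAME_SIZE)  # stand-in for a buffer dequeued via VIDIOC_DQBUF
shm.buf[:FRAME_SIZE] = frame

# Consumer side (normally a separate process): attach by name and read.
reader = shared_memory.SharedMemory(name="cam0_frame")
copy = bytes(reader.buf[:FRAME_SIZE])

reader.close()
shm.close()
shm.unlink()
```

Since every consumer reads the same mapping, the only per-frame cost is the single copy out of the V4L2 buffer, which is what makes this attractive latency-wise.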
If you know a good alternative, feel free to suggest.
You may also have a look at the tee plugin in the GStreamer framework. You can feed several v4l2loopback nodes with one camera, or use GStreamer directly to get your frames into the various applications.
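A sketch of what that tee pipeline looks like, here built as a `gst-launch-1.0` command string; the device paths are assumptions, and you would normally insert your camera's caps after v4l2src:

```python
# Build a gst-launch-1.0 command that opens the real camera once and
# tees the stream into several v4l2loopback devices. A queue after each
# tee branch decouples the branches so a slow sink cannot stall the rest.
def tee_pipeline(src="/dev/video0", sinks=("/dev/video1", "/dev/video2")):
    parts = [f"v4l2src device={src} ! tee name=t"]
    for dev in sinks:
        parts.append(f"t. ! queue ! v4l2sink device={dev}")
    return "gst-launch-1.0 " + " ".join(parts)

print(tee_pipeline(sinks=("/dev/video1", "/dev/video2", "/dev/video3")))
```

The loopback nodes themselves would come from something like `modprobe v4l2loopback devices=3` beforehand.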
If you have control over how the applications access the frames, i.e. you can modify the applications' source code, then you can use either the tee element or interpipes.
If the applications are separate processes and you cannot, or don't want to, modify them, then a trick like v4l2loopback might work for you.
If the client applications are not hard-coded to use a video node, then maybe you can just receive the video stream from the local network loopback and send the video using GStreamer in multicast mode.
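A sketch of that multicast variant as a pair of `gst-launch-1.0` command strings; the group address 224.1.1.1 and port 5000 are arbitrary assumptions, and note that the H.264 encode/decode shown here adds latency you may not be able to afford, in which case raw RTP payloading is the alternative:

```python
# Sender multicasts the camera once; any number of clients subscribe with
# their own udpsrc, so no extra video nodes are needed.
MCAST, PORT = "224.1.1.1", 5000

sender = (
    "gst-launch-1.0 v4l2src device=/dev/video0 ! videoconvert "
    "! x264enc tune=zerolatency ! rtph264pay "
    f"! udpsink host={MCAST} port={PORT} auto-multicast=true"
)

receiver = (
    f"gst-launch-1.0 udpsrc address={MCAST} port={PORT} auto-multicast=true "
    'caps="application/x-rtp,media=video,encoding-name=H264,payload=96" '
    "! rtph264depay ! avdec_h264 ! videoconvert ! autovideosink"
)

print(sender)
print(receiver)
```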
I have been able to use the tee element to duplicate the camera stream from /dev/video0 into 4 virtual devices created using v4l2loopback.
This works well, but I have not yet had a chance to test how this affects glass-to-glass latency. I have complete control over the entire system here (I wrote the camera driver and device tree, and will write every app that accesses the camera stream).
I haven’t used GStreamer extensively. Are you saying that, as an alternative to multiple devices, I can create a custom sink that sends the stream data from the camera directly to individual processes using tee (or interpipes)?
I’m really aiming for the lowest latency physically possible (even a 5-10 ms difference is significant), and I have a feeling I will just need to try each method and measure the glass-to-glass latency. Briefly looking at the GstInterpipe plugin, I see that it can transfer buffers without copying, so that could help shave off a bit of latency if tee does copy.
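For reference, a sketch of the GstInterpipe wiring as pipeline description strings; the node name `cam0` is an assumption. One caveat to verify for your case: interpipes connect pipelines, not OS processes, so separate processes typically go through GStreamer Daemon (gstd) or still need some other transport:

```python
# The producer pipeline exposes the camera under a node name via
# interpipesink; each consumer pipeline attaches with
# interpipesrc listen-to=<name>, and buffers are handed between the
# pipelines rather than routed through a device node.
producer = "v4l2src device=/dev/video0 ! interpipesink name=cam0 sync=false"

consumer = "interpipesrc listen-to=cam0 is-live=true ! videoconvert ! autovideosink"

print(producer)
print(consumer)
```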