I'm looking to take one camera and duplicate its output N times for separate programs. Since the device can only be opened once, I need an approach that opens the real device once and then gets the frames to the other programs.
One way to do this is to create multiple virtual devices (/dev/video*) and use GStreamer's v4l2sink to send the frames to them. I have accomplished this with v4l2loopback, but I'm interested in alternatives that give optimal performance and make sense from a design perspective.
The way I am hoping to create multiple devices is through the device tree for the camera. I honestly don't know whether this is possible, but if I could add the devices there it would make more sense to me than using v4l2loopback. Is this actually possible via the device tree?
As an alternative to creating multiple devices, I already have a program that uses the V4L2 API to stream from the camera so I am considering copying the frame data to shared memory for other processes as I grab it. This would avoid the need to create virtual devices and feels like it makes more sense from a design perspective. However, I have not yet tested the difference in latency between these options. Every millisecond counts.
If you know a good alternative, feel free to suggest it.