We are looking into the possibility of using two OV9281 cameras on the TX2, on a dual-lane MIPI CSI-2 port.
We're considering an aggregation chip that stitches together multiple input images (e.g. the OV680) to combine the two single-lane inputs from the cameras into the dual-lane port.
Practically, I was wondering what the best approach would be to tackle this development.
Assuming we obtain all the necessary details on the OmniVision chip, what would the firmware development impact be on the L4T code?
From reading through the code, the most basic scenario I can envision is to create a V4L2 pipeline with the OV680 aggregator as a virtual sensor (V4L2 slave device), and to set up everything behind it myself in a static way. (This would map onto the documented examples of adding your own camera driver.)
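To make the "virtual sensor" idea concrete, a device tree fragment along these lines is what I have in mind: the OV680 appears to L4T as a single I2C sensor on the dual-lane CSI brick, and the two OV9281s behind it are invisible to the kernel. This is only a sketch under assumptions -- the node name, I2C address (0x36), and the `omnivision,ov680` compatible string are placeholders for a custom driver, not taken from any datasheet or existing L4T code:

```dts
/* Hypothetical fragment: OV680 aggregator presented as one sensor.
 * Address, compatible string, and endpoint labels are invented for
 * illustration; a real node also needs mode@0 properties, clocks,
 * regulators, and a matching tegra-camera-platform entry. */
i2c@3180000 {
	ov680_a@36 {
		compatible = "omnivision,ov680";	/* custom driver */
		reg = <0x36>;
		ports {
			port@0 {
				ov680_out: endpoint {
					port-index = <0>;
					bus-width = <2>;	/* dual-lane output */
					remote-endpoint = <&csi_in0>;
				};
			};
		};
	};
};
```

The custom driver would then statically program the OV680 and both OV9281s over I2C at stream-on, so the kernel only ever sees one sensor.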
Ideally, though, the camera controls of the underlying sub-devices should be exposed through V4L2 as well.
Is there a similar use case already worked out (a hierarchy where multiple camera feeds converge into a single CSI-2 port)? Does my proposal make sense, or am I missing something?
For reference, I did find that “virtual channel support” should be available in the latest releases (https://devtalk.nvidia.com/default/topic/1048665/jetson-tx2/tx2-mipi-virtual-channel-support-/), but due to lack of documentation it is not really clear to me what exactly this encompasses. Is this anybit similar to the use-case I am trying to work out?
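My understanding so far (please correct me if wrong): if the aggregator tags each camera's stream with a distinct CSI-2 virtual channel ID, recent L4T releases can describe that in the device tree with a `vc-id` property per sensor endpoint, as NVIDIA's GMSL reference drivers appear to do. A hedged sketch, assuming the OV680 can emit two VCs and that `vc-id` is the right property (labels and structure are mine, not from a working setup):

```dts
/* Hypothetical fragment: two streams on one CSI brick, separated
 * by CSI-2 virtual channel ID instead of by physical port. */
ports {
	port@0 {
		cam0_out: endpoint {
			vc-id = <0>;		/* first OV9281 stream */
			port-index = <0>;
			bus-width = <2>;
			remote-endpoint = <&csi_in0>;
		};
	};
	port@1 {
		cam1_out: endpoint {
			vc-id = <1>;		/* second OV9281 stream */
			port-index = <0>;
			bus-width = <2>;
			remote-endpoint = <&csi_in1>;
		};
	};
};
```

If that reading is right, each camera would show up as its own /dev/video node, which seems closer to what we want than the single-virtual-sensor approach.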
Thanks for any feedback regarding this!