I have one captureSession, which should manage up to three cameras and supply two frame consumers.
One consumer renders a preview to the screen and the other saves the current frame(s) as JPEG.
As long as only one camera is connected, everything works fine. Both outputStreams (one stream per consumer) receive the frames, and the consumers can process them correctly.
It also works if one consumer manages multiple streams from multiple connected cameras: the consumer can read the frames from all streams and process them.
However, if I activate two outputStreams on each of two cameras (four streams in total, with each consumer getting one stream from each camera), I don't get any frames at all on any outputStream.
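For reference, the failing topology corresponds roughly to the call sequence below. This is only a sketch modeled on the MMAPI multi-camera sample, assuming the Argus 0.98 headers; consumer wiring (e.g. EGLStream::FrameConsumer) and all error handling are omitted, so it is not runnable as-is.

```cpp
#include <Argus/Argus.h>
#include <vector>
using namespace Argus;

void runFourStreams()
{
    UniqueObj<CameraProvider> provider(CameraProvider::create());
    ICameraProvider *iProvider = interface_cast<ICameraProvider>(provider);

    std::vector<CameraDevice*> devices;
    iProvider->getCameraDevices(&devices);  // expecting >= 2 devices here

    // One capture session managing both cameras.
    UniqueObj<CaptureSession> session(iProvider->createCaptureSession(devices));
    ICaptureSession *iSession = interface_cast<ICaptureSession>(session);

    UniqueObj<OutputStreamSettings> settings(
        iSession->createOutputStreamSettings(STREAM_TYPE_EGL));
    IOutputStreamSettings *iSettings =
        interface_cast<IOutputStreamSettings>(settings);

    // Two streams per camera: one for the preview consumer, one for JPEG.
    std::vector<UniqueObj<OutputStream>> streams;
    for (CameraDevice *dev : devices)
    {
        iSettings->setCameraDevice(dev);
        streams.push_back(UniqueObj<OutputStream>(
            iSession->createOutputStream(settings.get())));  // preview stream
        streams.push_back(UniqueObj<OutputStream>(
            iSession->createOutputStream(settings.get())));  // JPEG stream
    }

    // One request enabling all four streams.
    UniqueObj<Request> request(iSession->createRequest());
    IRequest *iRequest = interface_cast<IRequest>(request);
    for (auto &s : streams)
        iRequest->enableOutputStream(s.get());

    iSession->repeat(request.get());  // in my setup: no frames on any stream
}
```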
Is this an inherent limitation of libargus or is there something else I have to consider to get frames in this scenario?
There shouldn't be any limitation of this kind.
You can simulate the scenario by running argus_camera --device=0 and argus_camera --device=1, each with a JPEG output stream enabled.
I am using JetPack 4.6.2 with L4T 32.7.2.
I used the MMAPI samples which were installed with the API via the SDK Manager.
The Argus camera API is version 0.98 (according to the RELEASE.TXT file).
I don't know where the 3.40282e+38 values come from (that is FLT_MAX); the supported range should be [0, 21.8].
However, with my application I can access both cameras (even simultaneously) as long as I use only one stream per camera.
But I still can't connect two streams each to two (or three) cameras.