I have a three-camera setup and I want to set the same exposure for all cameras, based on the average of the three.
I've created an application with the LibArgus API, similar to the autoUserExposure Argus sample. The main difference is that I have three captureSessions, one for each camera.
while (captureMoreFrames) {
    std::thread waitForCompleteThread1(waitForCompleteEvent, iEventProvider1, queue1.get(), iQueue1);
    std::thread waitForCompleteThread2(waitForCompleteEvent, iEventProvider2, queue2.get(), iQueue2);
    std::thread waitForCompleteThread3(waitForCompleteEvent, iEventProvider3, queue3.get(), iQueue3);
    waitForCompleteThread1.join();
    waitForCompleteThread2.join();
    waitForCompleteThread3.join();
    ...
    ...
    // Get frame metadata from each camera etc.
    ...
    ...
    iCaptureSession1->capture(request1.get());
    iCaptureSession2->capture(request2.get());
    iCaptureSession3->capture(request3.get());
}
However, it seems that multiple IEventProvider::waitForEvents() calls slow down the system. The cameras are hardware synced, so I would assume that the EVENT_TYPE_CAPTURE_COMPLETE events for each queue would arrive almost simultaneously.
Is this a limitation of the API, or should I be doing something differently?
You may want to implement the app differently.
Since your cameras are hardware synced, having both hardware and software wait on frames for each camera separately is redundant.
Please also see the Argus sample syncSensor, which demonstrates using a single capture session for multiple cameras (dual-camera).
I've run into some issues using one captureSession for all three cameras. If I do something like this:
while (captureMoreFrames) {
    const uint64_t FIVE_SECONDS = 5000000000;
    iEventProvider->waitForEvents(queue.get(), FIVE_SECONDS);
    if (iQueue->getSize() == 0) // timed out without receiving an event
        continue;
    const Event* event = iQueue->getEvent(iQueue->getSize() - 1);
    const IEvent* iEvent = interface_cast<const IEvent>(event);
    if (!iEvent) {
        KSDEBUG("Error: Failed to get IEvent interface\n");
        continue;
    }
    if (iEvent->getEventType() == EVENT_TYPE_CAPTURE_COMPLETE) {
        // DO SOME STUFF
        iCaptureSession->capture(request.get());
    } else if (iEvent->getEventType() == EVENT_TYPE_CAPTURE_STARTED) {
        continue;
    }
}
Then for some reason the iEventProvider->waitForEvents(queue.get(), FIVE_SECONDS) call takes considerably more time when waiting for an EVENT_TYPE_CAPTURE_STARTED event than when, for example, only one or two cameras are involved in the captureSession.
Commenting out the event types EVENT_TYPE_ERROR and EVENT_TYPE_CAPTURE_STARTED seemed to do the trick:
std::vector<EventType> eventTypes;
eventTypes.push_back(EVENT_TYPE_CAPTURE_COMPLETE);
// eventTypes.push_back(EVENT_TYPE_ERROR);
/* There seems to be a bug in Argus which drops EVENT_TYPE_ERROR if all
   three event types are not set. Set it for now. */
// eventTypes.push_back(EVENT_TYPE_CAPTURE_STARTED);
UniqueObj<EventQueue> queue(iEventProvider->createEventQueue(eventTypes));
IEventQueue* iQueue = interface_cast<IEventQueue>(queue);
EVENT_TYPE_ERROR is used for error handling. EVENT_TYPE_CAPTURE_STARTED is followed by EVENT_TYPE_CAPTURE_COMPLETE, so by default the loop has to wait twice for each frame request.
BTW,
I've checked your code snippets in comment #4 again.
There's Argus::ICaptureSession::repeat for repeating a capture request.
You may try using this API to capture frames continuously.