I have a Leopard Imaging carrier board with 6 cameras.
I want to capture from all 6 cameras simultaneously.
GStreamer is out of the question.
So between Argus and V4L2, which should I use?
Is there a way to set up 6 memory buffers once and have the camera captures
dumped into those buffers at each capture “tick”?
Is there sample code for 6-camera capture somewhere?
Thanks in advance.
PS: please excuse my English, it’s not my first language.
there’s an example that captures multiple cameras and composites them into one frame.
please get the [L4T Multimedia API] from the Jetson Download Center, and check [Multimedia API Sample Applications] -> [13_multi_camera] for more details,
Thank you, I knew about that example.
But is it mandatory to create an NvBuffer and map it for each captured frame?
Is it not possible to create the buffer once and have every subsequent EGLStream frame mapped to that memory address?
Also, another question: what triggers the actual camera capture? Is it the sendRequest function, or the acquireFrame call on the stream? The execution time of acquireFrame seems large for a memory transfer, and it varies with lighting conditions (approx. 10-50 ms).
may I know what your major concern is?
are you looking to shorten the capture-to-display latency?
I don’t want to display the captured images, only process them.
So you could say my major concern is shortening the capture-to-unified-memory latency.
The image processing will then be a multi-stage pipeline (I wanted to use EGLStream for the image-transmission part, but I don’t yet know how to produce a stream).
Thank you for your help.
you may refer to the [Multimedia API Sample Applications] and build your own implementation,
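for instance, the samples use the EGLStream::NV::IImageNativeBuffer interface, which lets you create the NvBuffer on the first frame and copy each later frame into it instead of allocating per frame. A rough, non-compile-checked sketch of that pattern (verify the exact names and signatures against the headers of your L4T release):

```cpp
// Sketch only: assumes the session/stream setup from 13_multi_camera.
// The NvBuffer is created once and reused for every subsequent frame.
int dmabuf_fd = -1;
while (capturing)
{
    Argus::UniqueObj<EGLStream::Frame> frame(
        iFrameConsumer->acquireFrame());
    EGLStream::IFrame *iFrame =
        Argus::interface_cast<EGLStream::IFrame>(frame);
    EGLStream::NV::IImageNativeBuffer *iNativeBuffer =
        Argus::interface_cast<EGLStream::NV::IImageNativeBuffer>(
            iFrame->getImage());
    if (dmabuf_fd == -1)
        dmabuf_fd = iNativeBuffer->createNvBuffer(streamSize,
            NvBufferColorFormat_ABGR32, NvBufferLayout_Pitch);
    else
        iNativeBuffer->copyToNvBuffer(dmabuf_fd);
    // ... process the frame via dmabuf_fd here ...
}
```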