Hi all,
I’m trying to implement a simple application that captures an image from four cameras in sync. I have tried to combine the oneShot and syncSensor code from the jetson_multimedia_api samples; see the attached code (sync4cams.cpp).
I’m using the Leopard Imaging LI-XAVIER-KIT-IMX477CS-Q kit with four IMX477-based cameras.
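For context, the gist of sync4cams.cpp is roughly the following (a condensed sketch, with error checking and the consumer/EGL setup stripped out):

```cpp
#include <Argus/Argus.h>
#include <vector>

using namespace Argus;

int main()
{
    // One provider, one multi-sensor capture session (syncSensor style).
    UniqueObj<CameraProvider> provider(CameraProvider::create());
    ICameraProvider *iProvider = interface_cast<ICameraProvider>(provider);

    std::vector<CameraDevice*> devices;
    iProvider->getCameraDevices(&devices); // expecting the 4 IMX477s

    UniqueObj<CaptureSession> session(iProvider->createCaptureSession(devices));
    ICaptureSession *iSession = interface_cast<ICaptureSession>(session);

    UniqueObj<Request> request(iSession->createRequest());
    IRequest *iRequest = interface_cast<IRequest>(request);

    // One EGL output stream per sensor, each bound to its device.
    std::vector<OutputStream*> streams;
    for (CameraDevice *dev : devices)
    {
        UniqueObj<OutputStreamSettings> settings(
            iSession->createOutputStreamSettings(STREAM_TYPE_EGL));
        interface_cast<IOutputStreamSettings>(settings)->setCameraDevice(dev);
        streams.push_back(iSession->createOutputStream(settings.get()));
        iRequest->enableOutputStream(streams.back());
    }

    // Single capture request across all sensors (oneShot style).
    iSession->capture(request.get());
    return 0; // stream/session teardown omitted in this sketch
}
```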
When adding only two cameras to the capture, I do get results, but with the following issues:
- ~120 msec delay between the two captures (measured by comparing sensor timestamps; sketch after the log below)
- Noticeable difference in hue between simultaneous captures from the two cameras
- The following error messages:
NvCaptureStatusErrorDecode Stream 2.0 failed: sof_ts 0 eof_ts 6035105617568 frame 0 error 2 data 0x000000a2
NvCaptureStatusErrorDecode Capture-Error: CSIMUX_FRAME (0x00000002)
CsimuxFrameError_Regular : 0x000000a2
Stream ID [ 2: 0]: 2
VPR state from fuse block [ 3]: 0
Frame end (FE) [ 5]: 1
A frame end has been found on a regular mode stream.
FS_FAULT [ 7]: 1
A FS packet was found for a virtual channel that was already in frame. An errored FE packet was injected before FS was allowed through.
Binary VC number [3:2] [27:26]: 0
To get full binary VC number, user need to concatenate VC[3:2] and VC[1:0] together.
SCF: Error InvalidState: Capture error with status 2 (channel 0) (in src/services/capture/NvCaptureViCsiHw.cpp, function waitCsiFrameEnd(), line 880)
(Argus) Objects still active during exit: [CameraProvider (0x5591075f00): refs: 1, cref: 0]
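For reference, the ~120 msec figure comes from comparing per-frame sensor timestamps, roughly like this (a sketch with a hypothetical helper name, assuming the acquired frames expose Argus capture metadata; error checks omitted):

```cpp
#include <Argus/Argus.h>
#include <EGLStream/EGLStream.h>
#include <cstdint>
#include <cstdio>

using namespace Argus;

// Hypothetical helper: given one frame from each camera's stream,
// print the difference of their start-of-exposure timestamps.
static void printDelta(EGLStream::Frame *frameA, EGLStream::Frame *frameB)
{
    CaptureMetadata *mA =
        interface_cast<EGLStream::IArgusCaptureMetadata>(frameA)->getMetadata();
    CaptureMetadata *mB =
        interface_cast<EGLStream::IArgusCaptureMetadata>(frameB)->getMetadata();

    uint64_t tA = interface_cast<ICaptureMetadata>(mA)->getSensorTimestamp(); // ns
    uint64_t tB = interface_cast<ICaptureMetadata>(mB)->getSensorTimestamp(); // ns

    printf("capture delta: %.3f ms\n",
           (tB > tA ? tB - tA : tA - tB) / 1e6);
}
```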
When trying with all four cameras, all four output streams return the same image, captured from the first camera added (the streams are read back as in the consumer sketch after this log), and the following error is shown:
ClipHelper
allowIspClipping: true
maxIspDownscale: 4.0:1 4096
maxIspOutWidth: 6144,4096
ispIn: (4056 x 3040)
PRU enabled: false, interleaved input: (0 x 0)
postProcessingSize: (4056 x 3040)
postIspClip: (0.00,0.00, 1.00,1.00)
ispOut[0]: (4056 x 3040)
ispClip[0]: (0.00,0.00, 1.00,1.00)
ispOut[1]: (0 x 0)
ispClip[1]: (0.00,0.00, 1.00,1.00)
out[0] 4056x3040 req (0.00,0.00, 1.00,1.00) final (0.00,0.00, 1.00,1.00) isp from isp[0]
StageGroup 0x7f28000d00 parent=(nil) 4056x3040 (1 exposure) obufMask=f finalMask=0
stages[0] = 35 SensorCaptureStage(in = 12, outA= 6, outB = 12, outThumb = 12, outMeta = 7, outStats = 12) routed
StageGroup 0x7f280018d0 parent=0x7f28000d00 4056x3040 (1 exposure) obufMask=f finalMask=f
stages[0] = 27 MemoryToISPCaptureStage(in = 6, outA= 0, outB = 12, outThumb = 4, outMeta = 12, outStats = 5) routed
m_bufStates[0] = 0 attached output done readOrder=0 writeOrder=2 group=0x7f280018d0 fbs=isp0
4056x3040 BL U8_V8_ER 420SP
m_bufStates[1] = 1 attached output done readOrder=0 writeOrder=2 group=0x7f28000d00 fbs=none
4056x3040 BL U8_V8_ER 420SP
m_bufStates[2] = 2 attached output done readOrder=0 writeOrder=2 group=0x7f28000d00 fbs=none
4056x3040 BL U8_V8_ER 420SP
m_bufStates[3] = 3 attached output done readOrder=0 writeOrder=2 group=0x7f28000d00 fbs=none
4056x3040 BL U8_V8_ER 420SP
m_bufStates[4] = 4 attached readOrder=0 writeOrder=2 group=0x7f280018d0 AF fbs=none
640x480 Pitch U8_V8_ER 420SP
m_bufStates[5] = 5 readOrder=0 writeOrder=2 group=0x7f280018d0 fbs=none
524288x1 Pitch NonColor8
m_bufStates[6] = 6 readOrder=1 writeOrder=1 group=0x7f28000d00 fbs=none
4056x3040 Pitch BayerS16RGGB
m_bufStates[7] = 7 readOrder=0 writeOrder=1 group=0x7f28000d00 fbs=none
4056x1 Pitch NonColor8
GraphHelper blit pixel count=73981440 != ClipHelper blit pixel count=0
(Argus) Objects still active during exit: [CameraProvider (0x5578a15f00): refs: 1, cref: 0]
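For completeness, each output stream is read back and saved like this (condensed from the attached code; the consumer is created before capture() is issued, and the filename here is a placeholder):

```cpp
#include <Argus/Argus.h>
#include <EGLStream/EGLStream.h>

using namespace Argus;

// Attach a FrameConsumer to one OutputStream, pull a single frame,
// and write it out as JPEG (as in the oneShot sample).
static void saveOneFrame(OutputStream *stream, const char *path)
{
    UniqueObj<EGLStream::FrameConsumer> consumer(
        EGLStream::FrameConsumer::create(stream));
    EGLStream::IFrameConsumer *iConsumer =
        interface_cast<EGLStream::IFrameConsumer>(consumer);

    UniqueObj<EGLStream::Frame> frame(iConsumer->acquireFrame());
    EGLStream::IFrame *iFrame = interface_cast<EGLStream::IFrame>(frame);

    EGLStream::Image *image = iFrame->getImage();
    EGLStream::IImageJPEG *iJPEG =
        interface_cast<EGLStream::IImageJPEG>(image);
    iJPEG->writeJPEG(path); // e.g. "cam0.jpg" (placeholder)
}
```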
Version numbers are:
Linux agx 4.9.140 #1 SMP PREEMPT Sat Jan 16 23:56:01 WIB 2021 aarch64 aarch64 aarch64 GNU/Linux
(only change to the stock kernel is Leopard Imaging’s camera driver and DTB)
R32 (release), REVISION: 4.4, GCID: 23942405, BOARD: t186ref, EABI: aarch64, DATE: Fri Oct 16 19:37:08 UTC 2020
Argus Version: 0.97.3 (single-process)
Please help answer the following questions:
a) Is a time difference between cameras on the order of ~100 msec the best achievable with this approach to syncing? (I’m still waiting to hear from LI whether they support any hardware-based option.)
b) According to the syncSensor code, auto-correction is supposed to apply the same correction to all sensors, calculated from the first sensor added. Why, then, is there a difference in hue between images from two cameras sitting right next to each other?
c) What do the error messages above mean?
d) Is having four cameras in a single capture session supported? If not, what is the best approach here, especially regarding auto-correction?
e) Will the ISP be able to handle real-time video capture from four cameras at full (12 MP) resolution at 30 fps, once I move on to video capture?
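(For scale on question e: 4 sensors × 4056 × 3040 pixels × 30 fps is roughly 1.48 Gpix/s through the ISP, if my arithmetic is right.)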
Thanks,
Andy
sync4cams.cpp (5.7 KB)