Xavier AGX Synchronized Capture with 4 Cameras

As I mentioned, I’m still waiting to hear back from the vendor regarding the feasibility of hw sync.

The more pressing issues are that 4 cameras don’t seem to work at all, auto-correction seems to be wrong, and potential limitations of the ISP.
Could you please answer my questions a-e above? These are valid and important irrespective of whether hw sync is supported or not.

I don’t understand questions a-c.
For d: we have only verified two sensors in a single session, so I can’t confirm whether four sensors will work; multiple sessions work without problems.
For e: we only guarantee 4K at 30 fps.

Let me clarify:
a) What is the maximum expected time difference between frames captured using the syncSensor method? Our goal is to capture frames from all cameras at exactly the same time, so they can be stitched together and also be used for stereo vision applications.

b) If we are using the same camera / sensor, and the same auto-correction parameters are applied by the ISP to all captures, why is there a difference in the color of images captured by different cameras at the same time? They should look exactly the same, shouldn’t they?

c) In my original post, I have included the messages shown when running the capture program (also attached). These look like errors to me, can you take a look and explain why they are shown and how to resolve them?

d) Without hardware sync, we need to use the syncSensor method from the NVIDIA samples. In order for this to work, all 4 cameras have to be in the same capture session, no?

e) To clarify, we’re talking 4 x 4k 30fps, right?

a. You need to sync by timestamp; please refer to the related topic in my earlier comment.
b. Yes, they should be the same. A wrong ISP setting caused by the device tree might be the reason.
c. I think those messages already tell you the information.
d. syncSensor needs a hardware sync design; otherwise the frames won’t be synchronized.
e. Yes, 4 x 4K @ 30 fps is supported.

a) I can’t see the topic linked in the original comment, please resend the link.
b) I have checked, there doesn’t seem to be a difference. Any other ideas?
c) OK, but I have no idea what they mean. :) Can you explain? Also, do you have any suggestions on how to fix them?

Check this reference topic: Keeping camera synchronization at software level - #5 by JerryChang

I would suggest consulting with the vendor about image tuning.

I have reviewed the topic and the ones linked from it. Your colleague ultimately suggested that with hardware sync, “you should launch all your cameras with separate sessions, to synchronize outside the driver using sensor timestamps.”

This is the exact opposite of what the syncSensor example does, but since we don’t currently have hw sync option, we have to stick with the syncSensor way of keeping all cameras in one capture session. Also, making sure that the ISP applies the same white-balance and other auto-correction values to all cameras is critical, because this is the only way to make the images match for stitching. So, it sounds like we’re stuck with the single-capture-session approach.
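
To illustrate the white-balance point with a toy example (deliberately simplified arithmetic, not Argus or real ISP code, and the gain values below are invented): if two cameras see the same scene but their AWB settles on different channel gains, the rendered colors cannot match, which is exactly what ruins stitching seams.

```cpp
#include <array>

using RGB = std::array<double, 3>;  // linear r, g, b

// Simplified AWB step: scale red and blue by per-camera gains, the way an
// ISP's white-balance stage effectively does before color correction.
RGB applyAwb(RGB raw, double redGain, double blueGain) {
    return {raw[0] * redGain, raw[1], raw[2] * blueGain};
}
```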

As I mentioned before, this does not work for four cameras (only two). Is there anything we can do to make 4 cameras work without using multiple sessions?

hello andy36,

you may have misunderstood my point.
it’s suggested to have both a hardware approach and a software approach to frame synchronization.
as you can see, the syncSensor sample is a software-based approach to synchronization.
please also check Topic 111355 for reference.
thanks

@JerryChang OK, but how do we make syncSensor work with 4 cameras?

As I said before, our code doesn’t work when there are 4 cameras added to the same capture session. I have attached the source in the initial post, feel free to take a look. If you limit the cameraDevices vector in my code to contain only two cameras, it is able to capture correctly, but if you run it as-is, all four output files end up having the image from camera #0.

Also, there are lots of error messages being displayed while running the program (these are also included in my initial post).

hello andy36,

according to the CSI and USB Camera Features documentation, 4K preview is only validated with dual cameras.
since you’re working with four IMX477-based cameras, could you please try scaling the resolution down to 1080p?
BTW,
you may also disable the preview rendering to narrow down the issue. How about doing image captures from these four cameras concurrently?
thanks

I have tried scaling down all the way to 692 × 520, but still have the same issue: all 4 files have the image from camera #0.

This is the complete output of the execution:

OFParserListModules: module list: /proc/device-tree/tegra-camera-platform/modules/module0

OFParserListModules: module list: /proc/device-tree/tegra-camera-platform/modules/module1

OFParserListModules: module list: /proc/device-tree/tegra-camera-platform/modules/module2

OFParserListModules: module list: /proc/device-tree/tegra-camera-platform/modules/module3

NvPclHwGetModuleList: WARNING: Could not map module to ISP config string

NvPclHwGetModuleList: No module data found

NvPclHwGetModuleList: WARNING: Could not map module to ISP config string

NvPclHwGetModuleList: No module data found

NvPclHwGetModuleList: WARNING: Could not map module to ISP config string

NvPclHwGetModuleList: No module data found

NvPclHwGetModuleList: WARNING: Could not map module to ISP config string

NvPclHwGetModuleList: No module data found

OFParserGetVirtualDevice: NVIDIA Camera virtual enumerator not found in proc device-tree

---- imager: Found override file [/var/nvidia/nvcam/settings/camera_overrides.isp]. ----

CAM: serial no file already exists, skips storing again---- imager: Found override file [/var/nvidia/nvcam/settings/camera_overrides.isp]. ----

CAM: serial no file already exists, skips storing again---- imager: Found override file [/var/nvidia/nvcam/settings/camera_overrides.isp]. ----

CAM: serial no file already exists, skips storing again---- imager: Found override file [/var/nvidia/nvcam/settings/camera_overrides.isp]. ----

CAM: serial no file already exists, skips storing again
CAM: serial no file already exists, skips storing againClipHelper

allowIspClipping: true

maxIspDownscale: 4.0:1 4096

maxIspOutWidth: 6144,4096

ispIn: (4056 x 3040)

PRU enabled: false, interleaved input: (0 x 0)

postProcessingSize: (1014 x 760)

postIspClip: (0.00,0.00, 1.00,1.00)

ispOut[0]: (1014 x 760)

ispClip[0]: (0.00,0.00, 1.00,1.00)

ispOut[1]: (0 x 0)

ispClip[1]: (0.00,0.00, 1.00,1.00)

out[0] 692x520 req (0.00,0.00, 1.00,1.00) final (0.00,0.00, 1.00,1.00) isp from isp[0]

StageGroup 0x7f48000d00 parent=(nil) 4056x3040 (1 exposure) obufMask=f finalMask=0

stages[0] = 35 SensorCaptureStage(in = 12, outA= 7, outB = 12, outThumb = 12, outMeta = 8, outStats = 12) routed

StageGroup 0x7f480018d0 parent=0x7f48000d00 1014x760 (1 exposure) obufMask=f finalMask=f

stages[0] = 27 MemoryToISPCaptureStage(in = 7, outA= 5, outB = 12, outThumb = 4, outMeta = 12, outStats = 6) routed

m_bufStates[0] = 0 attached output done readOrder=0 writeOrder=2 group=0x7f48000d00 fbs=none

692x520 BL U8_V8_ER 420SP

m_bufStates[1] = 1 attached output done readOrder=0 writeOrder=2 group=0x7f48000d00 fbs=none

692x520 BL U8_V8_ER 420SP

m_bufStates[2] = 2 attached output done readOrder=0 writeOrder=2 group=0x7f48000d00 fbs=none

692x520 BL U8_V8_ER 420SP

m_bufStates[3] = 3 attached output done readOrder=0 writeOrder=2 group=0x7f48000d00 fbs=none

692x520 BL U8_V8_ER 420SP

m_bufStates[4] = 4 attached readOrder=0 writeOrder=2 group=0x7f480018d0 AF fbs=none

640x480 Pitch U8_V8_ER 420SP

m_bufStates[5] = 5 attached readOrder=0 writeOrder=2 group=0x7f480018d0 fbs=isp0

1014x760 BL U8_V8_ER 420SP

m_bufStates[6] = 6 readOrder=0 writeOrder=2 group=0x7f480018d0 fbs=none

524288x1 Pitch NonColor8

m_bufStates[7] = 7 readOrder=1 writeOrder=1 group=0x7f48000d00 fbs=none

4056x3040 Pitch BayerS16RGGB

m_bufStates[8] = 8 readOrder=0 writeOrder=1 group=0x7f48000d00 fbs=none

4056x1 Pitch NonColor8

GraphHelper blit pixel count=4521920 != ClipHelper blit pixel count=1130480

(Argus) Objects still active during exit: [CameraProvider (0x558f16cf00): refs: 1, cref: 0]

Argus Version: 0.97.3 (single-process)

Camera count: 4

DONE!
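
One line in that output stands out: GraphHelper blit pixel count=4521920 != ClipHelper blit pixel count=1130480. The two counts differ by exactly a factor of four, and 1130480 is exactly one camera’s worth of blit pixels at this configuration (the 692×520 output plus the 1014×760 ispOut[0] intermediate), which would be consistent with the capture graph routing all four outputs through a single camera’s path. The arithmetic, checked:

```cpp
// Pixel counts reconstructed from the values in the log above.
constexpr long outputPixels = 692L * 520;    // out[0] buffer
constexpr long ispOutPixels = 1014L * 760;   // ispOut[0] intermediate
constexpr long perCameraBlit = outputPixels + ispOutPixels;
```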

Could you attach the binary and the source code so we can verify?

Also, I’m not rendering any previews, please refer to the code attached in my initial post.

With gstreamer, I have been able to stream from all 4 cameras concurrently without issues, although not at 4k full resolution:

gst-launch-1.0 \
nvarguscamerasrc sensor-id=2 ! "video/x-raw(memory:NVMM),width=960,height=540,format=(string)NV12,framerate=30/1" ! c.sink_0 \
nvarguscamerasrc sensor-id=3 ! "video/x-raw(memory:NVMM),width=960,height=540,format=(string)NV12,framerate=30/1" ! c.sink_1 \
nvarguscamerasrc sensor-id=0 ! "video/x-raw(memory:NVMM),width=960,height=540,format=(string)NV12,framerate=30/1" ! c.sink_2 \
nvarguscamerasrc sensor-id=1 ! "video/x-raw(memory:NVMM),width=960,height=540,format=(string)NV12,framerate=30/1" ! c.sink_3 \
nvcompositor name=c \
sink_0::width=960 sink_0::height=540 sink_0::xpos=0 sink_0::ypos=0 \
sink_1::width=960 sink_1::height=540 sink_1::xpos=960 sink_1::ypos=0 \
sink_2::width=960 sink_2::height=540 sink_2::xpos=0 sink_2::ypos=540 \
sink_3::width=960 sink_3::height=540 sink_3::xpos=960 sink_3::ypos=540 \
! nvoverlaysink \
-e

@ShaneCCC this is the 3rd time I’m saying it, the code is attached to my first post in this thread. :)

Thanks for confirming. Could you attach the binary here?

Thanks

Sure, see attached.

There are two binaries:
sync4cams - four cameras in session: doesn’t work, all four outputs are from camera #0
sync2cams - two cameras in session: seems OK

Cameras are configured to mode 5, which is 692 × 520 resolution on my sensors.

sync2cams (86.1 KB) sync4cams (85.7 KB)

I can’t run these two binaries on r32.4.3.
What’s your BSP version? cat /etc/nv_tegra_release

Also listed in my first post… ;)

You can also very easily compile the source from my first post with:

c++ -I/usr/src/jetson_multimedia_api/include -L/usr/lib/aarch64-linux-gnu/tegra -o sync4cams sync4cams.cpp -lnvargus

Hi andy36,
The current version can only support up to 3 cameras in a single session.
Please confirm whether 3 cameras work.

If I put #0, #1, and #2 in a session, it fails with an error message about unsupported output buffer format (see below) for some resolutions, but works OK for others.

E.g., 4056×3040 works OK, but 3840×2160 and 692×520 both fail.
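
A guess at why the resolution matters, based on the earlier log (ispIn: 4056 x 3040, maxIspDownscale: 4.0:1): outputs the ISP can’t produce directly, either because they exceed the 4:1 downscale limit or because they change the aspect ratio, appear to need an extra scaling blit, and findBlitSource() is exactly where the error below is raised. The helper is only a sketch of that rule of thumb, not anything taken from the Argus source:

```cpp
#include <cmath>

// Sensor active area and ISP downscale limit, taken from the log above.
constexpr double kSensorW = 4056.0, kSensorH = 3040.0, kMaxDownscale = 4.0;

// Heuristic: does a requested output force a post-ISP scaling blit?
// True when the downscale ratio exceeds 4:1 on either axis, or when the
// aspect ratio differs from the sensor's (so the ISP output must be
// scaled/cropped again).
bool needsExtraBlit(double w, double h) {
    const bool tooSmall = (kSensorW / w > kMaxDownscale) ||
                          (kSensorH / h > kMaxDownscale);
    const bool aspectChanged =
        std::fabs(w / h - kSensorW / kSensorH) > 0.01;
    return tooSmall || aspectChanged;
}
```

On this rule of thumb, 4056×3040 needs no blit while 692×520 (downscale ≈ 5.9:1) and 3840×2160 (16:9 vs. the sensor’s 4:3) both do, matching which cases fail here.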

This is output when it fails:

OFParserListModules: module list: /proc/device-tree/tegra-camera-platform/modules/module0

OFParserListModules: module list: /proc/device-tree/tegra-camera-platform/modules/module1

OFParserListModules: module list: /proc/device-tree/tegra-camera-platform/modules/module2

OFParserListModules: module list: /proc/device-tree/tegra-camera-platform/modules/module3

NvPclHwGetModuleList: WARNING: Could not map module to ISP config string

NvPclHwGetModuleList: No module data found

NvPclHwGetModuleList: WARNING: Could not map module to ISP config string

NvPclHwGetModuleList: No module data found

NvPclHwGetModuleList: WARNING: Could not map module to ISP config string

NvPclHwGetModuleList: No module data found

NvPclHwGetModuleList: WARNING: Could not map module to ISP config string

NvPclHwGetModuleList: No module data found

OFParserGetVirtualDevice: NVIDIA Camera virtual enumerator not found in proc device-tree

---- imager: Found override file [/var/nvidia/nvcam/settings/camera_overrides.isp]. ----

CAM: serial no file already exists, skips storing again---- imager: Found override file [/var/nvidia/nvcam/settings/camera_overrides.isp]. ----

CAM: serial no file already exists, skips storing again---- imager: Found override file [/var/nvidia/nvcam/settings/camera_overrides.isp]. ----

CAM: serial no file already exists, skips storing again---- imager: Found override file [/var/nvidia/nvcam/settings/camera_overrides.isp]. ----

CAM: serial no file already exists, skips storing again
CAM: serial no file already exists, skips storing again
SCF: Error NotSupported: Output buffer format not supported: 692x520 BL U8_V8_ER 420SP (in src/components/GraphHelper.cpp, function findBlitSource(), line 1449)

SCF: Error NotSupported: (propagating from src/components/GraphHelper.cpp, function addScalingBlit(), line 2025)

SCF: Error NotSupported: (propagating from src/components/GraphHelper.cpp, function addOutputScalingBlits(), line 1984)

SCF: Error NotSupported: (propagating from src/components/CaptureSetupEngineImpl.cpp, function genInstructionsCoordinatedCamera(), line 1626)

SCF: Error NotSupported: (propagating from src/components/CaptureSetupEngineImpl.cpp, function doGetInstructions(), line 2211)

SCF: Error NotSupported: (propagating from src/components/CaptureSetupEngine.cpp, function getInstructionList(), line 300)

SCF: Error NotSupported: (propagating from src/components/CaptureSetupEngine.cpp, function setupCC(), line 214)

SCF: Error NotSupported: (propagating from src/api/Session.cpp, function capture(), line 815)

(Argus) Error NotSupported: (propagating from src/api/ScfCaptureThread.cpp, function run(), line 109)

Failed to get IFrame interface #0

Argus Version: 0.97.3 (single-process)

Camera count: 3

(Argus) Objects still active during exit: [CameraProvider (0x558a399f00): refs: 1, cref: 0]