I have two separate cameras connected to CSI0 and CSI1, which share NVCSI Brick A. The cameras work individually, but seem to be mutually exclusive - when one is streaming, the other times out with:
tegra-camrtc-capture-vi tegra-capture-vi: uncorr_err: request timed out after 5000 ms
If Depth camera (CSI0) is streaming → IR camera (CSI1) times out
If IR camera (CSI1) is streaming → Depth camera (CSI0) fails to start
The interfaces are channels. If you are using two lanes at CSI0, you need to skip serial_b, i.e. allocate serial_c to the IR camera and serial_d to the RGB camera. You can see this in the various two-camera setups, for instance nvidia/platform/t19x/common/kernel-dts/t19x-common-modules/tegra194-camera-imx274-dual.dtsi.
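Assuming the usual Jetson NVCSI endpoint layout, skipping serial_b would look roughly like this in the device tree. This is only a sketch: the node names, the channel index, and the remote-endpoint label (&ir_cam_out) are illustrative, not taken from an actual BSP file.

```dts
/* NVCSI channel for the second camera: with a 2-lane sensor on serial_a
 * (CSI A, port-index 0) occupying brick A, the next camera is placed on
 * serial_c (CSI C, port-index 2) rather than serial_b. */
channel@1 {
	reg = <1>;
	ports {
		port@0 {
			reg = <0>;
			endpoint@2 {
				port-index = <2>;	/* serial_c */
				bus-width = <2>;	/* 2 D-PHY data lanes */
				remote-endpoint = <&ir_cam_out>;
			};
		};
	};
};
```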
Also, I’m not sure whether this kind of configuration is supported: the Design Guide, Chapter 10, only lists a 4x2-lane and a 2x4-lane configuration.
tegra194 is on Jetson Xavier, and it’s different from tegra234 used in Orin NX.
In BSP 36.4, I see a few other examples of multi-camera use.
I found NVIDIA’s example tegra234-camera-e3333-a00.dtsi, which shows 6 cameras using serial_a through serial_g simultaneously, including serial_a and serial_b on the same NVCSI brick.
This shows that serial_a + serial_b simultaneous operation (with 2 lanes each) should be supported.
Our configuration difference:

|          | e3333 (BSP example) | Our setup (not working) |
|----------|---------------------|-------------------------|
| serial_a | 2 lanes             | 2 lanes                 |
| serial_b | 2 lanes             | 1 lane                  |
Could the issue be related to mixed lane counts (2+1 lanes) on the same brick?
Additionally, our depth camera (IRS2975C) uses a driver where start_streaming and stop_streaming callbacks are dummy implementations - the sensor is controlled directly via I2C by a userspace SDK (Royale). Could this affect CSI/VI resource allocation when another camera on the same brick tries to stream?
Any guidance on what configuration might be preventing simultaneous streaming would be appreciated.
Well, it’s also serial_a and serial_c, with two lanes each, in tegra234-camera-rbpcv3-imx477.dtsi, and I needed to use serial_a and serial_c on my Orin NX to get two imx708 cameras to work.
Did you test it with skipping serial_b?
Like I said, maybe your 2+1+4 setup is not supported. It is at least not in the list of explicitly supported setups: 4x2 lane and 2x4 lane.
Maybe you will have to reorder your cameras, so you are doing 4+2+1 or something like that.
Regarding your driver, I don’t know, so I say maybe but not likely.
Well, the thing is that it’s a new use case, and this hardware is already in mass production so I can’t change it.
The main use case, RGBIR (ox05b, 4 lanes) + IR (imx296, 1 lane) simultaneously works. Also every camera separately was validated to work. But the depth camera was never tested simultaneously with all other cameras streaming (due to another software issue in the depth camera SDK). Now we’re integrating it to allow for a new usage scenario and I want to know how to make this configuration work.
So I really hope for a way to support this CSI config.
According to the Orin TRM, Section 7.2.1.2 (NVCSI SCIL), brick AB supports “1x 2 lanes + 1x 1 lane” D-PHY configuration with two simultaneous streams. Our configuration (serial_a: 2 lanes, serial_b: 1 lane) falls exactly within this supported mode. Brick CD handles our 4-lane RGB camera separately.
The hardware should support this - can you confirm?
Then the issue appears to be in the software/driver layer. Could there be a configuration part we’re missing, or a known issue with the tegracam framework when using mixed lane counts on the same brick?
Please also check that you’re able to run them simultaneously via v4l2 IOCTL.
For instance:
v4l2-ctl -d /dev/video0 --set-fmt-video=width=1920,height=1080,pixelformat=RG10 --set-ctrl bypass_mode=0 --stream-mmap --stream-count=10000 -V
v4l2-ctl -d /dev/video1 --set-fmt-video=width=1920,height=1080,pixelformat=RG10 --set-ctrl bypass_mode=0 --stream-mmap --stream-count=10000 -V
Hi @JerryChang , unfortunately, I cannot simply use v4l2-ctl --stream-mmap to test the depth camera (IRS2975C) because its kernel driver has dummy start_streaming/stop_streaming callbacks — the sensor is controlled entirely from userspace by the proprietary Royale SDK (from PMD/Infineon), which programs the imager directly via I2C. V4L2 is only used to receive the MIPI CSI frame data, not to control the sensor.
This means running v4l2-ctl --stream-mmap on the depth camera’s /dev/video node will result in a timeout, since the sensor never actually begins transmitting.
I am currently preparing a standalone test using the Royale SDK’s sampleRetrieveData sample to start the depth camera, while simultaneously running v4l2-ctl --stream-mmap on the IR eye camera (IMX296, which has a standard V4L2 driver). I will report the results once I have them.
In the meantime, could you please confirm whether the above hardware configuration is supported for simultaneous streaming on Jetson Orin NX?
Brick AB:
CSI0 / serial_a: 2 lanes (D-PHY) — depth camera (IRS2975C)
CSI1 / serial_b: 1 lane (D-PHY) — IR eye camera (IMX296)
Brick CD:
CSI2–CSI3 / serial_c: 4 lanes (D-PHY) — RGB camera (OX05B1S)
Test 1 — IR eye first, then depth (IR eye wins, depth loses):
Result: IR eye streams at 53–60 fps. Depth camera reports Bridge 65531 frames dropped, FC 5 frames dropped, 0 frames delivered. dmesg shows uncorr_err: request timed out after 5000 ms.
Test 2 — Depth first, then IR eye (depth wins, IR eye loses):
# Terminal 1: Start depth camera first
sudo LD_LIBRARY_PATH=/usr/lib timeout 20 ./sampleRetrieveData

# Terminal 2: After ~6 seconds (once depth is streaming), start IR eye
v4l2-ctl -d /dev/video-ir-eye \
  --set-fmt-video=width=1440,height=1080,pixelformat=RG10 \
  --set-ctrl bypass_mode=0 --stream-mmap --stream-count=50 --stream-poll -V
Result: Depth camera streams normally. IR eye reports select timeout and captures zero frames. dmesg shows 1 uncorr_err.
I dug a bit deeper into how these bricks are configured on Jetson.
The problem seems to be between kernel and Jetson’s RCE firmware.
As I understand there are two layers:
Kernel driver (csi5_fops.c) – constructs IVC messages and sends them via vi_capture_control_message(). This is in nvidia-oot source tree.
RCE firmware (Real-time Camera Engine) – runs on a dedicated ARM Cortex-R5 core inside the Tegra234 SoC. The device tree node is rtcpu@bc00000 with compatible = "nvidia,tegra194-rce" and nvidia,cpu-name = "rce". The firmware is part of the NVIDIA bootloader, and I do not have its source code. The RCE firmware receives IVC commands from the kernel driver and configures the actual NVCSI PHY hardware.
The issue is in how csi5_start_streaming() in nvidia-oot/drivers/media/platform/tegra/camera/nvcsi/csi5_fops.c configures the NVCSI brick via IVC messages to the Camera RTCPU (RCE).
When a stream starts, csi5_start_streaming() calls:
csi5_stream_set_config() — sends CAPTURE_CSI_STREAM_SET_CONFIG_REQ to RCE with a brick_config struct
csi5_stream_open() — sends CAPTURE_PHY_STREAM_OPEN_REQ to RCE
The CAPTURE_CSI_STREAM_SET_CONFIG_REQ message includes config_flags = NVCSI_CONFIG_FLAG_BRICK | NVCSI_CONFIG_FLAG_CIL | NVCSI_CONFIG_FLAG_ERROR, which instructs the RCE firmware to reconfigure the entire brick’s PHY.
Each stream sends its own brick_config independently. There is no coordination between streams sharing the same brick. When the second stream sends its CAPTURE_CSI_STREAM_SET_CONFIG_REQ, the RCE reconfigures the brick PHY for the new stream’s parameters, which disrupts the first stream’s active data reception.
I was able to fix it on the Jetson CPU side, without having to modify the RCE firmware.
Now cameras can be used in parallel!
This patch fixes the bug by tracking on the Jetson kernel side what brick configuration was sent to the RCE/RTCPU, and reusing the saved brick_config for subsequent streams on the same brick instead of reconfiguring the PHY and disrupting the already-active stream.
Also, the polarity needs to be common for all sensors on the same brick (in this case CSI0 and CSI1), as the first sensor to start streaming configures the entire brick’s polarity via the RCE firmware.