Multiple Camera Frame Support Using MIPI-CSI Virtual Channels

Dear community,

We are working on driver support for a custom camera with a MIPI-CSI interface, connected to a Jetson Orin Nano Dev-Kit running NVIDIA Jetson Linux 35.4.1 (Linux kernel 5.10).
The MIPI-CSI backend is an IP core from Xilinx (https://docs.amd.com/r/en-US/pg260-mipi-csi2-tx/MIPI-CSI-2-Transmitter-Subsystem-v2.2-Product-Guide) configured for a 600 Mbps data rate per lane with two active lanes. The image is 640x480 at 60 fps with a 16-bit pixel width. Currently, this is the only mode we have configured.
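As a quick sanity check (my own arithmetic, not from the product guide), this mode fits comfortably within the configured link:

```latex
\begin{aligned}
\text{pixel rate} &= 640 \times 480 \times 60 \approx 18.4\ \text{Mpix/s}\\
\text{bit rate} &\approx 18.4\ \text{Mpix/s} \times 16\ \text{bpp} \approx 295\ \text{Mbps}\\
\text{lane capacity} &= 2 \times 600\ \text{Mbps} = 1200\ \text{Mbps}
\end{aligned}
```

So even two such streams per camera stay well below the lane capacity.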

Unlike the standard drivers in the NVIDIA Linux kernel and your example code, our camera has the following constraints:

  • No register access via I2C. The camera only offers a UART CLI, which is reached through an I2C-to-UART bridge.
  • No start/stop streaming control via GPIO or the UART CLI. The camera starts streaming immediately after power-up.
  • Camera registration/deregistration is done from a user-space daemon. During boot, the camera is not registered at all.

The development kit has two camera interfaces, and the device tree should describe two cameras with two streams each, one camera per interface. The driver works so far if:

  • Two cameras are configured in the device tree, each with only one stream available.
  • One camera is configured at one camera interface with two streams available, using the vc-id property to manage stream binding.
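To illustrate the working single-camera case, the per-stream binding looks roughly like the sketch below (node names, the compatible string, and the property syntax are simplified placeholders in the style of our overlay, not taken from any official binding):

```dts
/* Sketch only: one camera on one CSI port, two streams.
 * Each stream is a separate mode bound to its own VI channel
 * via the vc-id property. All names are placeholders. */
cam0: camera@0 {
        compatible = "custom,camera";

        mode0 { /* main stream */
                vc-id = "0";
                active_w = "640";
                active_h = "480";
        };
        mode1 { /* secondary stream */
                vc-id = "1";
                active_w = "640";
                active_h = "480";
        };
};
```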

If I configure two cameras with two streams each in the device tree, I get no stream on either camera. It does not matter whether only one camera interface is used or cameras are connected to both interfaces.

Here is the devicetree overlay:
custom-camera-overlay.txt (42.3 KB)

I do not use any GMSL functionalities at all.
I suspect a configuration error in the tegra-camera-platform device tree properties:

			max_lane_speed = <600000>;
			min_bits_per_pixel = <16>;
			vi_peak_byte_per_pixel = <2>;
			vi_bw_margin_pct = <25>;
			isp_peak_byte_per_pixel = <5>;
			isp_bw_margin_pct = <25>;
			max_pixel_rate = <184320>;
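For context, these properties sit in a node that, in our overlay, looks roughly like the sketch below (the num_csi_lanes value and the unit comments are my assumptions based on my reading of the NVIDIA camera development guide):

```dts
/* Sketch only: enclosing tegra-camera-platform node. */
tegra-camera-platform {
        compatible = "nvidia, tegra-camera-platform";

        num_csi_lanes = <4>;          /* assumption: 2 cameras x 2 lanes */
        max_lane_speed = <600000>;    /* kbps per lane */
        min_bits_per_pixel = <16>;
        vi_peak_byte_per_pixel = <2>;
        vi_bw_margin_pct = <25>;
        isp_peak_byte_per_pixel = <5>;
        isp_bw_margin_pct = <25>;
        max_pixel_rate = <184320>;    /* kpix/s, assumed unit */
};
```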

Is there a recommendation on how to set up the tegra-camera-platform device tree properties? Or do you have any suggestions as to why this may not work?

Thanks in advance.

werneazc

How do you connect two cameras to the Orin Nano without GMSL?
I suppose you need an aggregator to connect two sensors to the Orin.

Dear ShaneCCC,

One camera is connected to the CAM0 or CAM1 interface, respectively. Each camera sends two streams marked with a virtual channel ID corresponding to the vc-id property set in the device tree. I have two video devices (/dev/video[0,1]) available in user space and can see the video streams using GStreamer.

Then what’s the problem?

Can the vc-id = 0/1 streams be controlled individually?
If not, I think that could be the problem.

This only works if I describe only one camera in the device tree at a time. If I add two cameras to the device tree but connect only one (or both, it does not matter), I get no stream anymore. The VI core reports the standard error for a missing valid frame:

[ 103.904814] tegra-camrtc-capture-vi tegra-capture-vi: uncorr_err: request timed out after 2500 ms
[ 103.905124] tegra-camrtc-capture-vi tegra-capture-vi: err_rec: attempting to reset the capture channel
[ 103.906197] (NULL device *): vi_capture_control_message: NULL VI channel received

If I have two cameras, each on one CAM interface, do I need more vc-ids, e.g.:

  • CAM0: main stream: vc-id=0; secondary stream: vc-id=1
  • CAM1: main stream: vc-id=2; secondary stream: vc-id=3

Does the vc-id need to be unique for all streams available in the system?
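If system-wide uniqueness were required, the assignment from the bullets above would translate into something like this sketch (node names and property syntax are illustrative only, not from an official binding):

```dts
/* Sketch only: unique virtual channel IDs across both CSI ports. */
cam0: camera@0 {                     /* on CAM0 */
        mode0 { vc-id = "0"; };      /* main stream */
        mode1 { vc-id = "1"; };      /* secondary stream */
};

cam1: camera@1 {                     /* on CAM1 */
        mode0 { vc-id = "2"; };      /* main stream */
        mode1 { vc-id = "3"; };      /* secondary stream */
};
```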

I have attached trace and kernel log files for better understanding:

kernel_log.txt (5.4 KB)
trace_log.txt (235.0 KB)

It looks like your hardware streams both vc-id = 0/1 at the same time.
You need to control streaming via the stream_on function in the sensor driver.

CAM1: main stream: vc-id=0; secondary stream: vc-id=1

As mentioned above:

  • We cannot control the start and stop of streaming at all.
  • I can see both streams when a single camera is defined in the device tree. I can also see two camera streams when two cameras are defined in the device tree and each one sends a single stream. The issue only occurs when two cameras are defined with two streams each.

In this case, it does not matter whether two cameras or only one is connected to the Dev-Kit. Only the definition in the device tree causes the issue. So it seems we need to fix something in the device tree, and in my opinion it points to the tegra-camera-platform configuration.

So question again:
Is there a recommendation on how to set up tegra-camera-platform device tree properties? Do you have any suggestions as to why this may not work?

No, I don’t think there is any configuration to fix this kind of case.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.