Use the Jetson Nano MIPI interface to connect a MAX9286 (with four GMSL cameras)

As the title says, does the Jetson Nano support the above solution?

Yes, the Nano can support GMSL cameras, but without virtual channels.

Dear Shane,

Thank you for your reply.
I have implemented a camera driver for the MAX9286 by referring to the guide below:
https://docs.nvidia.com/jetson/archives/l4t-archived/l4t-3231/index.html#page/Tegra%2520Linux%2520Driver%2520Package%2520Development%2520Guide%2Fcamera_sensor_prog.html%23

Is this the right way to develop a driver for the MAX9286?
What is the meaning of “without virtual channel”? Could you give me some more information about it?

By the way, our hardware connection is Jetson Nano <–MIPI–> MAX9286 coax dev kit <– 4 GMSL cameras (each with a MAX96705 serializer, plus AP0101 & AR0143).

Thanks a lot.

One more thing:
Does the Jetson Nano support virtual channels?
As the guide says:
" •Jetson AGX Xavier series and Jetson TX2 series also support the MIPI CSI virtual channel feature. The virtual channel is a unique channel identifier used for multiplexed sensor streams sharing the same CSI port/brick and CSI stream through supported GMSL (Gigabit Multimedia Serial Link) aggregators.

•GMSL + VC capture is validated on Jetson AGX Xavier series and Jetson TX2 series using the nvgstcapture application. The reference GMSL module (MAX9295-serializer/MAX9296-deserializer/IMX390-sensor) is used for validation purposes."

Does it mean that only the “Jetson AGX Xavier series and Jetson TX2 series” support virtual channels, and is the reference “MAX9295-serializer/MAX9296-deserializer/IMX390-sensor” enough for my development?

Thanks .

TX1/Nano (t210) does not support VC; the others support it.

OK, I see.

Do you think the reference “MAX9295-serializer/MAX9296-deserializer/IMX390-sensor” is enough for my development?
Actually, I can't find any dts-related modifications for the above SerDes solution.

Thanks a lot.

The reference MAX9295/MAX9296 drivers are for virtual-channel use. You may need to consult the vendor to get the non-VC driver configuration.

Do you mean the serializer/deserializer vendor?

Yes

It seems the serializer/deserializer vendor only provides register configurations, not a platform driver.
Currently, I am developing the driver for the MAX9286 on the Jetson Nano by following the IMX219 driver (treating the MAX9286 as a sensor; the AR0143 and AP0101 don't need any configuration).
Do you think this is a feasible approach for my situation?

Hoping for your kind reply.
Thanks a lot.

Yes, I think there should be no problem if the SER/DES are configured well.

OK, thanks.
By the way, how can I identify whether a driver uses VC or not?

The driver may have some information about the vc-id.
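For example, in the VC-capable reference device tree (IMX390 on Xavier/TX2), each CSI endpoint carries a `vc-id` property; a driver without such a property in its endpoints is most likely a non-VC driver. A sketch of what to look for (node and label names here are illustrative, not copied from the actual reference dtsi):

```dts
imx390_csi_in0: endpoint@0 {
	port-index = <0>;
	bus-width = <4>;
	vc-id = <0>;	/* virtual-channel ID multiplexed on this CSI brick */
	remote-endpoint = <&imx390_out0>;
};
```

Grepping the kernel device-tree sources for `vc-id` is a quick way to find the VC-based references.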

After configuring the MAX9286 and MAX96705 registers as normal and trying to use v4l2 to capture video from /dev/video0,
a timeout occurs.
Checking the kernel log shows:

[ 80.517278] max9286 6-0048: max9286_set_mode:
[ 80.517284] max9286 6-0048: max9286_start_streaming:
[ 80.517653] ## [max9286_i2c_write():max9286.c:143 ] addr:0x90, reg:0x1b, read_val:0xf
[ 80.526174] ## [max9286_i2c_write():max9286.c:143 ] addr:0x90, reg:0x32, read_val:0xaa
[ 80.534894] ## [max9286_i2c_write():max9286.c:143 ] addr:0x90, reg:0x33, read_val:0xaa
[ 80.543692] ## [max9286_i2c_write():max9286.c:143 ] addr:0x8e, reg:0x04, read_val:0x87
[ 80.658744] ## [max9286_i2c_write():max9286.c:143 ] addr:0x90, reg:0x15, read_val:0x8b
[ 80.890440] video4linux video0: frame start syncpt timeout!0
[ 81.098339] video4linux video0: frame start syncpt timeout!0
[ 81.306209] video4linux video0: frame start syncpt timeout!0
[ 81.514251] video4linux video0: frame start syncpt timeout!0
[ 81.722109] video4linux video0: frame start syncpt timeout!0
[ 81.934084] video4linux video0: frame start syncpt timeout!0
[ 82.142114] video4linux video0: frame start syncpt timeout!0
[ 82.350044] video4linux video0: frame start syncpt timeout!0
[ 82.557960] video4linux video0: frame start syncpt timeout!0

Could you help find any clues in the above logs to debug the issue?
Thanks a lot.

You need to enable the debug prints in csi2_fops.c/vi2_fops.c.

After enabling DEBUG in csi2_fops.c/vi2_fops.c and trying to capture video, the log shows the following.

[ 154.904539] video4linux video0: TEGRA_VI_CSI_ERROR_STATUS 0x00000000
[ 154.904601] vi 54080000.vi: TEGRA_CSI_PIXEL_PARSER_STATUS 0x00000000
[ 154.904651] vi 54080000.vi: TEGRA_CSI_CIL_STATUS 0x00000010
[ 154.904697] vi 54080000.vi: TEGRA_CSI_CILX_STATUS 0x00040041
[ 154.904919] vi 54080000.vi: cil_settingtime was autocalculated
[ 154.904959] vi 54080000.vi: csi clock settle time: 13, cil settle time: 10
[ 155.106576] video4linux video0: frame start syncpt timeout!0

Is there any incorrect configuration of the CSI/VI?

The log shows data- and clock-lane control errors.
Check the TRM for detailed information.

Dear Shane ,

Do you mean setting the pixel clock for the MAX9286?
Or some other clock configuration?

Thanks a lot.

It could be a CSI lane configuration problem.

Dear Shane,
Checking the TRM, I found that TEGRA_CSI_CILX_STATUS 0x00040041 means:
CILA_CLK_LANE_CTRL_ERR
CILA_DATA_LANE0_CTRL_ERR
CILA_DATA_LANE1_CTRL_ERR
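The set bits in 0x00040041 can be decoded with a short script. The bit positions below are my reading of the t210 TRM for this register and should be double-checked against it before relying on them:

```python
# Decode TEGRA_CSI_CILX_STATUS error bits.
# Bit positions are illustrative (assumed from the TRM reading above);
# verify them against the t210 TRM before relying on this mapping.
CILX_BITS = {
    0: "CILA_DATA_LANE0_CTRL_ERR",
    6: "CILA_DATA_LANE1_CTRL_ERR",
    18: "CILA_CLK_LANE_CTRL_ERR",
}

def decode_cilx(status):
    """Return the names of all error bits set in the status value."""
    return [name for bit, name in sorted(CILX_BITS.items()) if status & (1 << bit)]

print(decode_cilx(0x00040041))
# -> all three control errors listed above
```

Control errors on both data lanes plus the clock lane usually point to a lane-count or clock-mode mismatch between the sensor side and the CSI receiver, rather than a single bad lane.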

But where should I configure the CSI lanes on the Jetson Nano?
Attached is my dts file, which refers to the IMX219 dts.
Do I need to modify something in the dtsi?

#include <dt-bindings/media/camera.h>
#include <dt-bindings/platform/t210/t210.h>

/ {
  host1x {
  	vi_base: vi {
  		num-channels = <1>;
  		ports {
  			#address-cells = <1>;
  			#size-cells = <0>;
  			vi_port0: port@0 {
  				reg = <0>;
  				max9286_vi_in0: endpoint {
  					port-index = <0>;
  					bus-width = <2>;
  					remote-endpoint = <&max9286_csi_out0>;
  				};
  			};
  		};
  	};

  csi_base: nvcsi {
  	num-channels = <1>;
  	#address-cells = <1>;
  	#size-cells = <0>;
  	csi_chan0: channel@0 {
  		reg = <0>;
  		ports {
  			#address-cells = <1>;
  			#size-cells = <0>;
  			csi_chan0_port0: port@0 {
  				reg = <0>;
  				max9286_csi_in0: endpoint@0 {
  					port-index = <0>;
  					bus-width = <2>;
  					remote-endpoint = <&max9286_out0>;
  				};
  			};
  			csi_chan0_port1: port@1 {
  				reg = <1>;
  				max9286_csi_out0: endpoint@1 {
  					remote-endpoint = <&max9286_vi_in0>;
  				};
  			};
  		};
  	};
  };

  i2c@546c0000 {
  	max9286_single_cam0: max9286_a@48 {
  		compatible = "maxim,max9286";
  		/* I2C device address */
  		reg = <0x48>;

  		/* V4L2 device node location */
  		devnode = "video0";

  		/* Physical dimensions of sensor */
  		physical_w = "3.680";
  		physical_h = "2.760";

  		sensor_model = "max9286";

  		use_sensor_mode_id = "true";

  		/**
  		* ==== Modes ====
  		* A modeX node is required to support v4l2 driver
  		* implementation with NVIDIA camera software stack
  		*
  		* == Signal properties ==
  		*
  		* phy_mode = "";
  		* PHY mode used by the MIPI lanes for this device
  		*
  		* tegra_sinterface = "";
  		* CSI Serial interface connected to tegra
  		* Incase of virtual HW devices, use virtual
  		* For SW emulated devices, use host
  		*
  		* pix_clk_hz = "";
  		* Sensor pixel clock used for calculations like exposure and framerate
  		*
  		* readout_orientation = "0";
  		* Based on camera module orientation.
  		* Only change readout_orientation if you specifically
  		* Program a different readout order for this mode
  		*
  		* == Image format Properties ==
  		*
  		* active_w = "";
  		* Pixel active region width
  		*
  		* active_h = "";
  		* Pixel active region height
  		*
  		* pixel_t = "";
  		* The sensor readout pixel pattern
  		*
  		* line_length = "";
  		* Pixel line length (width) for sensor mode.
  		*
  		* == Source Control Settings ==
  		*
  		* Gain factor used to convert fixed point integer to float
  		* Gain range [min_gain/gain_factor, max_gain/gain_factor]
  		* Gain step [step_gain/gain_factor is the smallest step that can be configured]
  		* Default gain [Default gain to be initialized for the control.
  		*     use min_gain_val as default for optimal results]
  		* Framerate factor used to convert fixed point integer to float
  		* Framerate range [min_framerate/framerate_factor, max_framerate/framerate_factor]
  		* Framerate step [step_framerate/framerate_factor is the smallest step that can be configured]
  		* Default Framerate [Default framerate to be initialized for the control.
  		*     use max_framerate to get required performance]
  		* Exposure factor used to convert fixed point integer to float
  		* For convenience use 1 sec = 1000000us as conversion factor
  		* Exposure range [min_exp_time/exposure_factor, max_exp_time/exposure_factor]
  		* Exposure step [step_exp_time/exposure_factor is the smallest step that can be configured]
  		* Default Exposure Time [Default exposure to be initialized for the control.
  		*     Set default exposure based on the default_framerate for optimal exposure settings]
  		*
  		* gain_factor = ""; (integer factor used for floating to fixed point conversion)
  		* min_gain_val = ""; (ceil to integer)
  		* max_gain_val = ""; (ceil to integer)
  		* step_gain_val = ""; (ceil to integer)
  		* default_gain = ""; (ceil to integer)
  		* Gain limits for mode
  		*
  		* exposure_factor = ""; (integer factor used for floating to fixed point conversion)
  		* min_exp_time = ""; (ceil to integer)
  		* max_exp_time = ""; (ceil to integer)
  		* step_exp_time = ""; (ceil to integer)
  		* default_exp_time = ""; (ceil to integer)
  		* Exposure Time limits for mode (sec)
  		*
  		* framerate_factor = ""; (integer factor used for floating to fixed point conversion)
  		* min_framerate = ""; (ceil to integer)
  		* max_framerate = ""; (ceil to integer)
  		* step_framerate = ""; (ceil to integer)
  		* default_framerate = ""; (ceil to integer)
  		* Framerate limits for mode (fps)
  		*
  		* embedded_metadata_height = "";
  		* Sensor embedded metadata height in units of rows.
  		* If sensor does not support embedded metadata value should be 0.
  		*/
  		mode0 { /* MAX9286_MODE_1280x720_120FPS */
  			mclk_khz = "24000";
  			num_lanes = "2";
  			tegra_sinterface = "serial_a";
  			phy_mode = "DPHY";
  			discontinuous_clk = "yes";
  			dpcm_enable = "false";
  			cil_settletime = "0";

  			active_w = "1280";
  			active_h = "720";
  			mode_type = "yuv";
  			csi_pixel_bit_depth = "8";
  			pixel_phase = "uyvy";
  			readout_orientation = "90";
  			line_length = "3448";
  			inherent_gain = "1";
  			mclk_multiplier = "9.33";
  			pix_clk_hz = "169600000";

  			gain_factor = "16";
  			framerate_factor = "1000000";
  			exposure_factor = "1000000";
  			min_gain_val = "16"; /* 1.00x */
  			max_gain_val = "170"; /* 10.66x */
  			step_gain_val = "1";
  			default_gain = "16"; /* 1.00x */
  			min_hdr_ratio = "1";
  			max_hdr_ratio = "1";
  			min_framerate = "2000000"; /* 2.0 fps */
  			max_framerate = "120000000"; /* 120.0 fps */
  			step_framerate = "1";
  			default_framerate = "120000000"; /* 120.0 fps */
  			min_exp_time = "13"; /* us */
  			max_exp_time = "683709"; /* us */
  			step_exp_time = "1";
  			default_exp_time = "2495"; /* us */

  			embedded_metadata_height = "2";
  		};

  		ports {
  			#address-cells = <1>;
  			#size-cells = <0>;

  			port@0 {
  				reg = <0>;
  				max9286_out0: endpoint {
  					port-index = <0>;
  					bus-width = <2>;
  					remote-endpoint = <&max9286_csi_in0>;
  				};
  			};
  		};
  	};
  };
  };
};

/ {
tcp: tegra-camera-platform {
compatible = "nvidia, tegra-camera-platform";

  /**
  * Physical settings to calculate max ISO BW
  *
  * num_csi_lanes = <>;
  * Total number of CSI lanes when all cameras are active
  *
  * max_lane_speed = <>;
  * Max lane speed in Kbit/s
  *
  * min_bits_per_pixel = <>;
  * Min bits per pixel
  *
  * vi_peak_byte_per_pixel = <>;
  * Max byte per pixel for the VI ISO case
  *
  * vi_bw_margin_pct = <>;
  * Vi bandwidth margin in percentage
  *
  * max_pixel_rate = <>;
  * Max pixel rate in Kpixel/s for the ISP ISO case
  *
  * isp_peak_byte_per_pixel = <>;
  * Max byte per pixel for the ISP ISO case
  *
  * isp_bw_margin_pct = <>;
  * Isp bandwidth margin in percentage
  */
  num_csi_lanes = <2>;
  max_lane_speed = <1500000>;
  min_bits_per_pixel = <8>;
  vi_peak_byte_per_pixel = <2>;
  vi_bw_margin_pct = <25>;
  max_pixel_rate = <240000>;
  isp_peak_byte_per_pixel = <5>;
  isp_bw_margin_pct = <25>;

  /**
   * The general guideline for naming badge_info contains 3 parts, and is as follows,
   * The first part is the camera_board_id for the module; if the module is in a FFD
   * platform, then use the platform name for this part.
   * The second part contains the position of the module, ex. "rear" or "front".
   * The third part contains the last 6 characters of a part number which is found
   * in the module's specsheet from the vendor.
   */
  modules {
  	cam_module0: module0 {
  		badge = "porg_front_RBPCV2";
  		position = "front";
  		orientation = "1";
  		cam_module0_drivernode0: drivernode0 {
  			pcl_id = "v4l2_sensor";
  			devname = "max9286 6-0048";
  			proc-device-tree = "/proc/device-tree/host1x/i2c@546c0000/max9286_a@48";
  		};
  	};
  };

};
};
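One thing worth checking in the dts above: pix_clk_hz, line_length, and the exposure/framerate limits look copied straight from the IMX219 mode and may not match what the MAX9286 actually outputs. As a rough sanity check that the 2-lane link itself is fast enough, assuming UYVY at 16 bits per pixel on the wire:

```python
# Rough CSI bandwidth check using the values from the dts above.
# Assumption: UYVY carries 16 bits per pixel (8 bits per component, 2 per pixel).
pix_clk_hz = 169_600_000               # pix_clk_hz from mode0 (copied from IMX219)
bits_per_pixel = 16                    # UYVY
num_lanes = 2                          # num_lanes / num_csi_lanes
max_lane_speed_bps = 1_500_000 * 1000  # max_lane_speed is given in Kbit/s

per_lane_bps = pix_clk_hz * bits_per_pixel / num_lanes
print(f"required per-lane rate: {per_lane_bps / 1e9:.3f} Gbps "
      f"(limit {max_lane_speed_bps / 1e9:.1f} Gbps)")
# -> required per-lane rate: 1.357 Gbps (limit 1.5 Gbps)
```

So the declared pixel clock fits within the declared lane speed, but the real MAX9286 output rate for 4x 1280x720 aggregated streams should be recomputed and substituted, since a pix_clk_hz that disagrees with the actual stream can also produce capture timeouts.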