TX1 CSI without I2C

Thank you Shane. But my camera is still not connected while I do these tests.

Can you please tell me the correct commands to test if /dev/video0 works correctly, for both with camera and without camera.

You can’t test without a camera.

Check the doc below for the command to capture an image from the video input device.

https://docs.nvidia.com/jetson/l4t/Tegra%20Linux%20Driver%20Package%20Development%20Guide/camera_sensor_prog.html#wwpID0E0HF0HA
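
For reference, a minimal capture sanity check from the command line might look like the sketch below. The 2592x1944 size and the BG10 (10-bit Bayer BGGR) pixel format are assumptions taken from the mode table later in this thread; adjust them to whatever --list-formats-ext reports. With no camera connected the capture step is expected to fail or time out, so it mainly confirms that /dev/video0 and the driver are registered.

# List video nodes and the formats the driver actually registered
v4l2-ctl --list-devices
v4l2-ctl -d /dev/video0 --list-formats-ext

# Try to grab one frame to a file (size/format are assumptions, see above).
# Without a camera attached this is expected to fail or time out.
v4l2-ctl -d /dev/video0 --set-fmt-video=width=2592,height=1944,pixelformat=BG10 \
         --stream-mmap --stream-count=1 --stream-to=frame.raw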

nvidia@tegra-ubuntu:~$ v4l2-ctl --list-devices
VIDIOC_QUERYCAP: failed: Inappropriate ioctl for device
VIDIOC_QUERYCAP: failed: Inappropriate ioctl for device
vi-output, my_in 6-0036 (platform:54080000.vi:2):
          /dev/video0
          /dev/v4l-subdev1
          /dev/v4l-subdev0

Is this normal for a pre-configured camera? (The camera is still not connected; I’m trying to figure out the drivers without it.)

The i2c errors are all gone now, but I still get:

vi 54080000.vi: mipi calibration failed
vi 54080000.vi: calibration failed -110 error
tegra_mipi_cal 700e3000.mipical: Mipi cal timeout,val:67a1, lanes:400000

You can ignore the calibration message as long as your output resolution is not very high.
You can also ignore the “VIDIOC_QUERYCAP: failed: Inappropriate ioctl for device” message.

I want to change pixel_t = “bayer_bggr” to a yuv422 format,
and the output depth from 10-bit to 8-bit.

How do I configure the device tree? Which files do I need to modify?

Refer to the topics below for YUV sensors.

https://devtalk.nvidia.com/default/topic/972192
https://devtalk.nvidia.com/default/topic/976709
https://devtalk.nvidia.com/default/topic/977871
https://devtalk.nvidia.com/default/topic/981601

Thanks Shane. I checked them, but still have some questions.

  1. Which parameters absolutely must be correct for /dev/video0 to be configured properly? There are a lot of parameters here and in the Sensor Programming Guide, but I don’t think all of them are necessary.

i2c@546c0000 {
			// avdd_dsi_csi-supply = <&max77620_ldo0>;

			status = "okay";

			#address-cells = <1>;
			#size-cells = <0>;

			my_video_in@01 {
				compatible = "my_video_in";
				/* I2C device address */
				reg = <0x36>; 

				/* V4L2 device node location */
				devnode = "video0";

				/* Physical dimensions of sensor */
				physical_w = "3.674";
				physical_h = "2.738";

				/* Sensor output flip settings */
				vertical-flip = "true";

				/* Define any required hw resources needed by driver */
				/* ie. clocks, io pins, power sources */
				avdd-reg = "vana";
				iovdd-reg = "vif";
				status = "okay";

				/**
				* A modeX node is required to support v4l2 driver
				* implementation with NVIDIA camera software stack
				*
				* mclk_khz = "";
				* Standard MIPI driving clock, typically 24MHz
				*
				* num_lanes = "";
				* Number of lane channels sensor is programmed to output
				*
				* tegra_sinterface = "";
				* The base tegra serial interface lanes are connected to
				*
				* discontinuous_clk = "";
				* The sensor is programmed to use a discontinuous clock on MIPI lanes
				*
				* dpcm_enable = "true";
				* The sensor is programmed to use a DPCM modes
				*
				* cil_settletime = "";
				* MIPI lane settle time value.
				* A "0" value attempts to autocalibrate based on mclk_multiplier
				*
				*
				*
				*
				* active_w = "";
				* Pixel active region width
				*
				* active_h = "";
				* Pixel active region height
				*
				* pixel_t = "";
				* The sensor readout pixel pattern
				*
				* readout_orientation = "0";
				* Based on camera module orientation.
				* Only change readout_orientation if you specifically
				* Program a different readout order for this mode
				*
				* line_length = "";
				* Pixel line length (width) for sensor mode.
				* This is used to calibrate features in our camera stack.
				*
				* mclk_multiplier = "";
				* Multiplier to MCLK to help time hardware capture sequence
				* TODO: Assign to PLL_Multiplier as well until fixed in core
				*
				* pix_clk_hz = "";
				* Sensor pixel clock used for calculations like exposure and framerate
				*
				*
				*
				*
				* inherent_gain = "";
				* Gain obtained inherently from mode (ie. pixel binning)
				*
				* min_gain_val = ""; (floor to 6 decimal places)
				* max_gain_val = ""; (floor to 6 decimal places)
				* Gain limits for mode
				*
				* min_exp_time = ""; (ceil to integer)
				* max_exp_time = ""; (ceil to integer)
				* Exposure Time limits for mode (us)
				*
				*
				* min_hdr_ratio = "";
				* max_hdr_ratio = "";
				* HDR Ratio limits for mode
				*
				* min_framerate = "";
				* max_framerate = "";
				* Framerate limits for mode (fps)
				*/
				mode0 { // OV5693_MODE_2592X1944
					mclk_khz = "24000";
					num_lanes = "2";
					tegra_sinterface = "serial_c";
					discontinuous_clk = "yes";
					dpcm_enable = "false";
					cil_settletime = "0";

					active_w = "2592";
					active_h = "1944";
					pixel_t = "bayer_bggr";
					readout_orientation = "90";
					line_length = "2688";
					inherent_gain = "1";
					mclk_multiplier = "6.67";
					pix_clk_hz = "160000000";

					min_gain_val = "1.0";
					max_gain_val = "16";
					min_hdr_ratio = "1";
					max_hdr_ratio = "64";
					min_framerate = "1.816577";
					max_framerate = "30";
					min_exp_time = "34";
					max_exp_time = "550385";
				};

				mode1 { //OV5693_MODE_2592X1458
					mclk_khz = "24000";
					num_lanes = "2";
					tegra_sinterface = "serial_c";
					discontinuous_clk = "yes";
					dpcm_enable = "false";
					cil_settletime = "0";

					active_w = "2592";
					active_h = "1458";
					pixel_t = "bayer_bggr";
					readout_orientation = "90";
					line_length = "2688";
					inherent_gain = "1";
					mclk_multiplier = "6.67";
					pix_clk_hz = "160000000";

					min_gain_val = "1.0";
					max_gain_val = "16";
					min_hdr_ratio = "1";
					max_hdr_ratio = "64";
					min_framerate = "1.816577";
					max_framerate = "30";
					min_exp_time = "34";
					max_exp_time = "550385";
				};

				mode2 { //OV5693_MODE_1280X720
					mclk_khz = "24000";
					num_lanes = "2";
					tegra_sinterface = "serial_c";
					discontinuous_clk = "no";
					dpcm_enable = "false";
					cil_settletime = "0";

					active_w = "1280";
					active_h = "720";
					pixel_t = "bayer_bggr";
					readout_orientation = "90";
					line_length = "1752";
					inherent_gain = "1";
					mclk_multiplier = "6.67";
					pix_clk_hz = "160000000";

					min_gain_val = "1.0";
					max_gain_val = "16";
					min_hdr_ratio = "1";
					max_hdr_ratio = "64";
					min_framerate = "2.787078";
					max_framerate = "120";
					min_exp_time = "22";
					max_exp_time = "358733";
				};

				ports {
					#address-cells = <1>;
					#size-cells = <0>;

					port@0 {
						reg = <0>;
						status = "okay";
						e3326_ov5693_out0: endpoint {
							csi-port = <2>;
							bus-width = <2>;
							status = "okay";
							remote-endpoint = <&e3326_csi_in0>;
						};
					};
				};
			};
		};
	};

	e3326_lens_ov5693@P5V27C {
		min_focus_distance = "0.0";
		hyper_focal = "0.0";
		focal_length = "2.67";
		f_number = "2.0";
		aperture = "2.0";
	};

	tegra-camera-platform {
		compatible = "nvidia, tegra-camera-platform";
		/**
		* Physical settings to calculate max ISO BW
		*
		* num_csi_lanes = <>;
		* Total number of CSI lanes when all cameras are active
		*
		* max_lane_speed = <>;
		* Max lane speed in Kbit/s
		*
		* min_bits_per_pixel = <>;
		* Min bits per pixel
		*
		* vi_peak_byte_per_pixel = <>;
		* Max byte per pixel for the VI ISO case
		*
		* vi_bw_margin_pct = <>;
		* Vi bandwidth margin in percentage
		*
		* isp_peak_byte_per_pixel = <>;
		* Max byte per pixel for the ISP ISO case
		*
		* isp_bw_margin_pct = <>;
		* Isp bandwidth margin in percentage
		*/
		num_csi_lanes = <4>;
		max_lane_speed = <1500000>;
		min_bits_per_pixel = <10>;
		vi_peak_byte_per_pixel = <2>;
		vi_bw_margin_pct = <25>;
		isp_peak_byte_per_pixel = <2>;
		isp_bw_margin_pct = <25>;

		/**
		* The general guideline for naming badge_info contains 3 parts, and is as follows,
		* The first part is the camera_board_id for the module; if the module is in a FFD
		* platform, then use the platform name for this part.
		* The second part contains the position of the module, ex. “rear” or “front”.
		* The third part contains the last 6 characters of a part number which is found
		* in the module's specsheet from the vender.
		*/
		modules {
			module0 {
				badge = "e3326_front_P5V27C";
				position = "rear";
				orientation = "1";
				status = "okay";
				drivernode0 {
					/* Declare PCL support driver (classically known as guid)  */
					pcl_id = "v4l2_sensor";
					/* Driver's v4l2 device name */
					devname = "ov5693 6-0036";
					/* Declare the device-tree hierarchy to driver instance */
					proc-device-tree = "/proc/device-tree/host1x/i2c@546c0000/my_video_in@01";
				};
				drivernode1 {
					/* Declare PCL support driver (classically known as guid)  */
					pcl_id = "v4l2_lens";
					proc-device-tree = "/proc/device-tree/e3326_lens_ov5693@P5V27C/";
				};
			};
		};
	};

I think the “tegra-camera-platform” device-tree node is what matters, right? Are the i2c@546c0000 -> my_video_in@01 parameters only used for configuring the camera, or are they also used in defining /dev/video0?

  2. If the modeX nodes matter not just for configuring the camera but also for configuring the video stream, how is the mode decided? Do I need to explicitly set the mode somewhere, or does the Jetson TX1 select the most appropriate mode itself?

Thank you very much.

For your case, the parameters below must be correct.

num_lanes = "2";
tegra_sinterface = "serial_c";
discontinuous_clk = "yes";
dpcm_enable = "false";
cil_settletime = "0";

active_w = "2592";
active_h = "1458";
pixel_t = "bayer_bggr";
readout_orientation = "90";

mclk_multiplier = "6.67";
pix_clk_hz = "160000000";

Hello Shane. Thank you again.

I gave numbers to each question; if you can answer them by referring to the numbers, I’d be really happy.
Thanks for all the help and patience you’ve shown to us.

mode0 { // OV5693_MODE_2592X1944
					mclk_khz = "24000";
					num_lanes = "4";
					tegra_sinterface = "serial_a";
					discontinuous_clk = "yes";
					dpcm_enable = "false";
					cil_settletime = "0";

...
...
...
				};

As I said before, we’re using a 4-lane MIPI CSI-2 yuv422 camera input. We’ve checked the Sensor Programming Guide configuration for lanes, and it is serial_a and num_lanes=4 in the modeX.

1) What confuses us is: what are these csi-port and bus-width properties then (in e3326_vi_in0, e3326_csi_in0, e3326_ov5693_out0)? Do we need to modify them as well? (I guess so.)

vi {
			num-channels = <1>;
			ports {
				#address-cells = <1>;
				#size-cells = <0>;
				port@0 {
					reg = <0>;
					status = "okay";
					e3326_vi_in0: endpoint {
						csi-port = <2>;
						bus-width = <2>;
						status = "okay";
						remote-endpoint = <&e3326_csi_out0>;
					};
				};
			};
		};
		nvcsi {
			num-channels = <1>;
			#address-cells = <1>;
			#size-cells = <0>;
			channel@0 {
				reg = <0>;
				status = "okay";
				ports {
					#address-cells = <1>;
					#size-cells = <0>;
					port@0 {
						reg = <0>;
						status = "okay";
						e3326_csi_in0: endpoint@0 {
							csi-port = <2>;
							bus-width = <2>;
							status = "okay";
							remote-endpoint = <&e3326_ov5693_out0>;
						};
					};

e3326_vi_in0, e3326_csi_in0, and e3326_ov5693_out0 all use:

csi-port = <2>;
bus-width = <2>;

2) Is this okay for a 4-lane MIPI CSI-2 camera? I think we need to change them.
3) We still don’t know how to select the mode of our choice. How do we select the mode we want?

The csi-port and bus-width must be correct for the vi/nvcsi scope too.

CSIA is 0 and CSIF is 5; bus-width is the same as the lane number. For your case it should be like below.

csi-port = <0>;
bus-width = <4>;

How does the mode selection work? Any knowledge on that one, Shane? In the worst case, we’ll delete all the other modes and just stick with a single mode0 of our choice.

Thank you.

The width and height can be requested from the application layer, and the vi/sensor driver handles it.
It’s better to support only one mode, to reduce problems.
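
As an illustration (a sketch, not from the thread): the frame size you request from user space is what drives which mode the driver picks, for example with v4l2-ctl. The sizes below come from the mode0/mode2 entries quoted earlier; the YUYV pixel format assumes the YUV sensor case being discussed (use BG10 for the stock Bayer modes).

# Request a size; the sensor driver matches it against its mode table
# (mode0 = 2592x1944, mode2 = 1280x720 in the listing above)
v4l2-ctl -d /dev/video0 --set-fmt-video=width=1280,height=720,pixelformat=YUYV
# Read back what was actually applied
v4l2-ctl -d /dev/video0 --get-fmt-video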

static const struct camera_common_colorfmt camera_common_color_fmts[] = {
	{
		MEDIA_BUS_FMT_SRGGB12_1X12,
		V4L2_COLORSPACE_SRGB,
		V4L2_PIX_FMT_SRGGB12,
	},
	{
		MEDIA_BUS_FMT_SRGGB10_1X10,
		V4L2_COLORSPACE_SRGB,
		V4L2_PIX_FMT_SRGGB10,
	},
	{
		MEDIA_BUS_FMT_SBGGR10_1X10,
		V4L2_COLORSPACE_SRGB,
		V4L2_PIX_FMT_SBGGR10,
	},
	{
		MEDIA_BUS_FMT_SRGGB8_1X8,
		V4L2_COLORSPACE_SRGB,
		V4L2_PIX_FMT_SRGGB8,
	},
	{ //Added this line.
		MEDIA_BUS_FMT_YUYV8_2X8,
		V4L2_COLORSPACE_SRGB,
		V4L2_PIX_FMT_YUYV,
	},

I made this change, but in ov5693.c there is also:

#define OV5693_DEFAULT_MODE	OV5693_MODE_2592X1944
#define OV5693_DEFAULT_HDR_MODE	OV5693_MODE_2592X1944_HDR
#define OV5693_DEFAULT_WIDTH	1280
#define OV5693_DEFAULT_HEIGHT	720
#define OV5693_DEFAULT_DATAFMT	MEDIA_BUS_FMT_YUYV8_2X8
#define OV5693_DEFAULT_CLK_FREQ	24000000

I need to change these as well. Should I go and edit ov5693_mode_tbls.h? (There are a lot of register tables.)

And in the device tree:

pixel_t = "bayer_bggr";

It is still this. I couldn’t find a replacement for it, even though it should be yuv422.

Can I just use,

mode_type = "yuv";
csi_pixel_bit_depth = "???"; (What to enter here?)
pixel_phase = "yuyv";

instead? Does ov5693.c support this? I know imx185 does.

@ShaneCCC any idea? Thank you Shane.

The ov5693_mode_tbls.h is the sensor initialization table; if your device doesn’t need it, you can remove/ignore it.
The configuration below doesn’t need to be exactly correct. It is for ISP pipeline usage, and the ISP pipeline doesn’t support YUV sensors.

pixel_t = "bayer_bggr";
mode_type = "yuv";
csi_pixel_bit_depth = "???"; (What to enter here?)
pixel_phase = "yuyv";

But you said before that these were required in order to work.

How will the TX1 know that my input is “yuyv” then? It will try to capture the stream as if it were bayer_bggr in this case, am I wrong?

Let me correct my earlier comment that the properties below are not needed for a YUV sensor.
Look into kernel/kernel-4.4/drivers/media/platform/tegra/camera/sensor_common.c; both of them need to be configured correctly. Please refer to the doc below for detailed information.

https://docs.nvidia.com/jetson/l4t/index.html#page/Tegra%2520Linux%2520Driver%2520Package%2520Development%2520Guide%2Fcamera_sensor_prog.html%23

pixel_t = "bayer_bggr";
mode_type = "yuv";
csi_pixel_bit_depth = "???"; (What to enter here?)
pixel_phase = "yuyv";
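
One way to double-check what the kernel actually picked up after flashing the device tree (a sketch; the node path is the proc-device-tree path quoted earlier in this thread, and the property names are the ones above) is to read the properties back from /proc/device-tree:

# Read back the mode0 properties the kernel parsed from the device tree
# (path taken from the proc-device-tree property earlier in this thread)
cat /proc/device-tree/host1x/i2c@546c0000/my_video_in@01/mode0/mode_type; echo
cat /proc/device-tree/host1x/i2c@546c0000/my_video_in@01/mode0/pixel_phase; echo
cat /proc/device-tree/host1x/i2c@546c0000/my_video_in@01/mode0/csi_pixel_bit_depth; echo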

I’m connecting to the TX1 over ssh. How can I check that it’s working?

nvidia@tegra-ubuntu:~$ gst-launch-1.0 -v v4l2src device=/dev/video0 ! fakesink
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Device '/dev/video0' cannot capture in the specified format
Additional debug info:
gstv4l2object.c(3481): gst_v4l2_object_set_format_full (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
Tried to capture in YU12, but device returned format YUYV
Execution ended after 0:00:00.000113073
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...

I want this kind of code: one command to publish and one to capture. The TX1 should send the video stream over Ethernet, and I want to watch it from the host PC.
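
A rough starting point (a sketch only; HOST_IP, the port, the 1280x720 caps, and the element choices are assumptions, not a verified pipeline) is to force YUY2 caps on v4l2src so it does not try to negotiate YU12 as in the error above, encode to H.264, and send it as RTP over UDP:

# On the TX1 (sender) - HOST_IP, port, and caps are placeholders
gst-launch-1.0 v4l2src device=/dev/video0 ! \
  'video/x-raw, format=YUY2, width=1280, height=720' ! \
  videoconvert ! 'video/x-raw, format=I420' ! \
  omxh264enc ! h264parse ! rtph264pay pt=96 config-interval=1 ! \
  udpsink host=HOST_IP port=5000

# On the host PC (receiver)
gst-launch-1.0 udpsrc port=5000 \
  caps='application/x-rtp, media=video, encoding-name=H264, payload=96' ! \
  rtph264depay ! avdec_h264 ! videoconvert ! autovideosink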