Unable to view camera frames using GStreamer, but v4l2-ctl frame capture works


I’m using a Jetson Nano dev kit with a camera connected on CAM0.
I’ve also written a camera driver and device tree for the camera. The camera provides 400x400 YUV frames at 30 fps.

Using v4l2-ctl I’m able to fetch YUV frames at 30 fps, but the frame saved from the camera has a resolution of 416x400.
When I view the frame in a YUV player, I see 16 pixels of green at the end of each line.

In dmesg I also see the following errors, but frames still get captured somehow:

[  269.636593] video4linux video0: frame start syncpt timeout!0
[  269.844606] video4linux video0: frame start syncpt timeout!0
[  270.052571] video4linux video0: frame start syncpt timeout!0
[  270.260562] video4linux video0: frame start syncpt timeout!0
[  270.468741] video4linux video0: frame start syncpt timeout!0
[  270.676820] video4linux video0: frame start syncpt timeout!0

Here is the command I used to save a single frame:

v4l2-ctl --set-fmt-video=width=400,height=400,pixelformat=YUYV --set-ctrl bypass_mode=0 --stream-mmap --stream-count=1 -d /dev/video0 --stream-to=frame_1.raw

However, I’m unable to view the frames from GStreamer. Here is the command I’m using:

gst-launch-1.0 -v nvarguscamerasrc sensor-id=0 ! 'video/x-raw(memory:NVMM), width=(int)400, height=(int)400, format=(string)NV12, framerate=(fraction)30/1' ! nvvidconv ! queue ! nvoverlaysink

In dmesg I don’t see any errors while running the GStreamer command.

Could you please help me answer the following questions?
Question 1: Why am I receiving 16 extra pixels in width, when the camera only provides 400x400 frames?
Question 2: I can receive camera frames with the v4l2-ctl command, but why am I unable to receive frames from GStreamer?

For reference, here is the device tree configuration I’m using:

i2c@546c0000 {
	devcam1_a@36 {
		status = "okay";
		compatible = "nvidia,devcam1";
		reg = <0x36>;
		devnode = "video0";
		set_mode_delay_ms = "3000";

		mode0 {
			mclk_khz = "24000";
			num_lanes = "2";
			tegra_sinterface = "serial_a";
			phy_mode = "DPHY";
			discontinuous_clk = "no";
			dpcm_enable = "false";
			cil_settletime = "0";

			active_w = "400";
			active_h = "400";
			mode_type = "yuv";
			pixel_phase = "yuyv";
			csi_pixel_bit_depth = "16";
			readout_orientation = "0";
			line_length = "416";
			inherent_gain = "1";
			mclk_multiplier = "12.5";
			pix_clk_hz = "300000000";

			gain_factor = "1";
			min_gain_val = "1";
			max_gain_val = "255";
			step_gain_val = "1";
			default_gain = "16";
			min_hdr_ratio = "1";
			max_hdr_ratio = "1";
			framerate_factor = "1000000";
			min_framerate = "3000000";
			max_framerate = "30000000";
			step_framerate = "1";
			default_framerate = "30000000";
			exposure_factor = "1000000";
			min_exp_time = "832";
			max_exp_time = "16667";
			step_exp_time = "208";
			default_exp_time = "8000";
			embedded_metadata_height = "0";
		};

		ports {
			status = "okay";
			#address-cells = <1>;
			#size-cells = <0>;

			port@0 {
				status = "okay";
				reg = <0>;
				devcam1_devcam1_out0: endpoint {
					status = "okay";
					port-index = <0>;
					bus-width = <1>;
					remote-endpoint = <&devcam1_csi_in0>;
				};
			};
		};
	};
};

tegra-camera-platform {
	status = "okay";
	compatible = "nvidia, tegra-camera-platform";
	num_csi_lanes = <1>;
	max_lane_speed = <1500000>;
	min_bits_per_pixel = <16>;
	vi_peak_byte_per_pixel = <2>;
	vi_bw_margin_pct = <25>;
	max_pixel_rate = <200000>;
	isp_peak_byte_per_pixel = <5>;
	isp_bw_margin_pct = <25>;

	modules {
		module0 {
			status = "okay";
			badge = "devcam1_front_devcam1";
			position = "front";
			orientation = "1";
			drivernode0 {
				status = "okay";
				pcl_id = "v4l2_sensor";
				devname = "devcam1 6-0036";
				proc-device-tree = "/proc/device-tree/host1x/i2c@546c0000/devcam1_a@36";
			};
		};
	};
};
  1. The VI aligns the line width to a multiple of 32 pixels; that is why there are 16 extra pixels.
  2. nvarguscamerasrc doesn’t support YUV cameras. Try v4l2src and check the command in the document below.
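The stride math behind that first answer can be sketched as follows (assuming the alignment mentioned above is 32 pixels):

```shell
# The VI pads each line's width up to the next multiple of 32 pixels
# (per the alignment note above), so a 400-pixel line becomes 416.
WIDTH=400
ALIGN=32
PADDED=$(( (WIDTH + ALIGN - 1) / ALIGN * ALIGN ))
echo "$PADDED"                # 416
echo $(( PADDED - WIDTH ))    # 16 extra pixels per line
```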


Sorry for the late response, and thank you for the quick reply.

  1. Is there any way we can get rid of those 16 extra pixels at the source?

  2. As per your suggestion, I used v4l2src to render frames with the following command:

gst-launch-1.0 v4l2src device=/dev/video0 ! "video/x-raw, width=400, height=400, format=(string)YUY2" ! xvimagesink -ev

When frames are rendered with the above GStreamer pipeline, the colors don’t look right; the whole frame is covered in green.
But when I open the YUV image saved by v4l2-ctl in a YUV player with YUYV 422 settings, I see proper colors (screenshot shared in my last post).
Could you please let me know if there is something wrong with my GStreamer command?

  3. I came across an example where nvv4l2camerasrc was used with the UYVY pixel phase, so it looks like nvv4l2camerasrc supports YUV input frames, but I couldn’t find any examples with YUY2 as the source.
    Could you please provide some pointers on this?
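One possibility worth testing, purely an assumption at this point, is that the chroma order actually arriving is UYVY rather than YUY2; declaring the caps accordingly and letting videoconvert handle the conversion would then remove the green cast. Untested sketch:

```shell
# Hypothetical pipeline: treat the captured bytes as UYVY instead of YUY2
# and let videoconvert negotiate whatever xvimagesink needs.
gst-launch-1.0 v4l2src device=/dev/video0 ! \
    "video/x-raw, width=400, height=400, format=(string)UYVY" ! \
    videoconvert ! xvimagesink -ev
```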
  1. The player should be able to handle it.
  2. Enable the debug prints in csi2_fops.c/vi2_fops.c to see if you can get more information.


Could you please tell me how to enable the debug prints for csi2_fops.c/vi2_fops.c?

I’m using a Jetson Nano 4GB devkit with JetPack 4.6.2.
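For reference, on kernels built with CONFIG_DYNAMIC_DEBUG the dev_dbg()/pr_debug() statements in those files can usually be switched on at runtime via the kernel’s dynamic debug interface. A sketch, assuming debugfs is mounted at its default path and the kernel has dynamic debug enabled:

```shell
# Enable debug prints for the CSI/VI capture sources at runtime.
# Requires CONFIG_DYNAMIC_DEBUG and root privileges.
echo 'file csi2_fops.c +p' | sudo tee /sys/kernel/debug/dynamic_debug/control
echo 'file vi2_fops.c +p'  | sudo tee /sys/kernel/debug/dynamic_debug/control

# Then watch the kernel log while streaming:
dmesg --follow
```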


I performed a test capturing frames from the camera using the Jetson Nano.

Here are my observations,

  • I configured the device tree’s pixel_phase to YUYV; the frame captured with the v4l2-ctl command was in UYVY format.

  • Later, I configured the device tree’s pixel_phase to UYVY; the frame captured with the v4l2-ctl command was in YUYV format.

The camera I’m using only provides frames in YUYV format, but the captured frame comes out in a different format.

Could you please help me figure out what changes are needed in the device tree or sources to capture the correct pixel format?

It’s configured by the device tree, as you are doing.
You can add print statements in the driver to check. You can also check with v4l2-ctl --list-formats-ext.

					pixel_phase = "yuyv";
					csi_pixel_bit_depth = "16";

I overcame the issue with a workaround: changing the color format at the renderer end.
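Since YUYV and UYVY differ only in the byte order within each 16-bit sample pair (YUYV packs bytes as Y0 U Y1 V, UYVY as U Y0 V Y1), a raw frame captured by v4l2-ctl can also be converted offline before viewing. A small sketch using dd’s swab conversion (file names are placeholders):

```shell
# Swapping every pair of bytes converts YUYV into UYVY (and back).
printf 'YUVA' > sample_yuyv.raw              # stand-in bytes: Y0 U Y1 V
dd if=sample_yuyv.raw of=sample_uyvy.raw conv=swab status=none
cat sample_uyvy.raw                          # prints "UYAV" (pairs swapped)
```

The same command applied to a full 416x400 YUYV capture yields the UYVY equivalent without touching the pixel values themselves.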

Is there any way on Jetson to change the way the frame is read?
That is, if the bit stream received from the MIPI lanes is in reverse order, is there any configuration/flag that specifies how the data should be read, so that the frame’s color information is interpreted correctly?

I don’t think so. The frame goes directly to memory; I don’t see any relevant VI configuration for this case.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.