GStreamer cannot drive CSI camera on Jetson Nano 4GB

Hello guys, I am using the CSI camera on a Jetson Nano for visual computing. I have been testing this camera for days, and GStreamer cannot drive it properly. I have tried several commands and packages, and none of them works.
Running v4l2-ctl --device /dev/video0 --all gives detailed camera info:

Driver Info (not using libv4l2):
	Driver name   : tegra-video
	Card type     : vi-output, imx219 8-0010
	Bus info      : platform:54080000.vi:4
	Driver version: 4.9.140
	Capabilities  : 0x84200001
		Video Capture
		Streaming
		Extended Pix Format
		Device Capabilities
	Device Caps   : 0x04200001
		Video Capture
		Streaming
		Extended Pix Format
Priority: 2
Video input : 0 (Camera 4: ok)
Format Video Capture:
	Width/Height      : 1920/1080
	Pixel Format      : 'RG10'
	Field             : None
	Bytes per Line    : 3840
	Size Image        : 4147200
	Colorspace        : sRGB
	Transfer Function : Default (maps to sRGB)
	YCbCr/HSV Encoding: Default (maps to ITU-R 601)
	Quantization      : Default (maps to Full Range)
	Flags             : 

Camera Controls

                     group_hold 0x009a2003 (bool)   : default=0 value=0 flags=execute-on-write
                    sensor_mode 0x009a2008 (int64)  : min=0 max=0 step=0 default=0 value=2 flags=slider
                           gain 0x009a2009 (int64)  : min=0 max=0 step=0 default=0 value=16 flags=slider
                       exposure 0x009a200a (int64)  : min=0 max=0 step=0 default=0 value=33330 flags=slider
                     frame_rate 0x009a200b (int64)  : min=0 max=0 step=0 default=0 value=30000000 flags=slider
                    bypass_mode 0x009a2064 (intmenu): min=0 max=1 default=0 value=1
                override_enable 0x009a2065 (intmenu): min=0 max=1 default=0 value=1
                   height_align 0x009a2066 (int)    : min=1 max=16 step=1 default=1 value=1
                     size_align 0x009a2067 (intmenu): min=0 max=2 default=0 value=0
               write_isp_format 0x009a2068 (bool)   : default=0 value=0
       sensor_signal_properties 0x009a2069 (u32)    : min=0 max=0 step=0 default=0 flags=read-only, has-payload
        sensor_image_properties 0x009a206a (u32)    : min=0 max=0 step=0 default=0 flags=read-only, has-payload
      sensor_control_properties 0x009a206b (u32)    : min=0 max=0 step=0 default=0 flags=read-only, has-payload
              sensor_dv_timings 0x009a206c (u32)    : min=0 max=0 step=0 default=0 flags=read-only, has-payload
               low_latency_mode 0x009a206d (bool)   : default=0 value=0
               preferred_stride 0x009a206e (int)    : min=0 max=65535 step=1 default=0 value=0
                   sensor_modes 0x009a2082 (int)    : min=0 max=30 step=1 default=30 value=6 flags=read-only

I am using gst-launch-1.0 nvarguscamerasrc sensor_id=0 ! nvoverlaysink. The output does not report any error, but it just gets stuck there without any image or window popping up.


Secondly, I tried gst-launch-1.0 nvarguscamerasrc sensor_id=1 ! 'video/x-raw(memory:NVMM),width=3264, height=2464,framerate=21/1, format=NV12' ! nvvidconv flip-method=0 ! 'video/x-raw, width=816, height=616' ! nvvidconv ! nvegltransform ! nveglglessink -e

The result is weird: the gst-launch window immediately captures the upper-left corner of the screen and no longer shows any image.

I also tried a Python script that calls the OpenCV and GStreamer functions, which still does not work.

import cv2
def gstreamer_pipeline(
    capture_width=1280,
    capture_height=720,
    display_width=1280,
    display_height=720,
    framerate=60,
    flip_method=0,
):
    return (
        "nvarguscamerasrc ! "
        "video/x-raw(memory:NVMM), "
        "width=(int)%d, height=(int)%d, "
        "format=(string)NV12, framerate=(fraction)%d/1 ! "
        "nvvidconv flip-method=%d ! "
        "video/x-raw, width=(int)%d, height=(int)%d, format=(string)BGRx ! "
        "videoconvert ! "
        "video/x-raw, format=(string)BGR ! appsink"
        % (
            capture_width,
            capture_height,
            framerate,
            flip_method,
            display_width,
            display_height,
        )
    )


def show_camera():
    # To flip the image, modify the flip_method parameter (0 and 2 are the most common)
    print(gstreamer_pipeline(flip_method=0))
    cap = cv2.VideoCapture(gstreamer_pipeline(flip_method=0), cv2.CAP_GSTREAMER)
    if cap.isOpened():
        window_handle = cv2.namedWindow("CSI Camera", cv2.WINDOW_AUTOSIZE)
        # Window
        while cv2.getWindowProperty("CSI Camera", 0) >= 0:
            ret_val, img = cap.read()
            if not ret_val:
                # The pipeline returned no frame
                break
            cv2.imshow("CSI Camera", img)
            print("I am showing Image !!!!\n")
            # waitKey also gives the window time to render the frame
            keyCode = cv2.waitKey(30) & 0xFF
            # Stop the program on the ESC key
            if keyCode == 27:
                break
        cap.release()
        cv2.destroyAllWindows()
    else:
        print("Unable to open camera")


if __name__ == "__main__":
    show_camera() 

The results are also weird: the inserted print() call never displays the “showing image” message, and the camera is not opened properly.
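One thing worth ruling out here (assuming python3 and the cv2 module shipped with JetPack) is whether the installed OpenCV build was compiled with GStreamer support at all, since cv2.VideoCapture with cv2.CAP_GSTREAMER cannot open the pipeline without it:

python3 -c "import cv2; print(cv2.getBuildInformation())" | grep -i gstreamer

If this does not report GStreamer: YES, the appsink pipeline will never open, regardless of the camera.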
Do you guys have any help or advice?

---- Update: I replaced the camera with another one that should be identical to the previous one, to make sure the camera itself is not broken. The driver easily detects the camera, but it still cannot display any captured video frames. The terminal output is shown below.

ludafu@ludafu:~$ ls /dev/vi*
/dev/video0
ludafu@ludafu:~$ v4l2-ctl --device /dev/video0 --all
Driver Info (not using libv4l2):
	Driver name   : tegra-video
	Card type     : vi-output, imx219 8-0010
	Bus info      : platform:54080000.vi:4
	Driver version: 4.9.140
	Capabilities  : 0x84200001
		Video Capture
		Streaming
		Extended Pix Format
		Device Capabilities
	Device Caps   : 0x04200001
		Video Capture
		Streaming
		Extended Pix Format
Priority: 2
Video input : 0 (Camera 4: ok)
Format Video Capture:
	Width/Height      : 1920/1080
	Pixel Format      : 'RG10'
	Field             : None
	Bytes per Line    : 3840
	Size Image        : 4147200
	Colorspace        : sRGB
	Transfer Function : Default (maps to sRGB)
	YCbCr/HSV Encoding: Default (maps to ITU-R 601)
	Quantization      : Default (maps to Full Range)
	Flags             : 

Camera Controls

                     group_hold 0x009a2003 (bool)   : default=0 value=0 flags=execute-on-write
                    sensor_mode 0x009a2008 (int64)  : min=0 max=0 step=0 default=0 value=2 flags=slider
                           gain 0x009a2009 (int64)  : min=0 max=0 step=0 default=0 value=16 flags=slider
                       exposure 0x009a200a (int64)  : min=0 max=0 step=0 default=0 value=33330 flags=slider
                     frame_rate 0x009a200b (int64)  : min=0 max=0 step=0 default=0 value=30000000 flags=slider
                    bypass_mode 0x009a2064 (intmenu): min=0 max=1 default=0 value=1
                override_enable 0x009a2065 (intmenu): min=0 max=1 default=0 value=1
                   height_align 0x009a2066 (int)    : min=1 max=16 step=1 default=1 value=1
                     size_align 0x009a2067 (intmenu): min=0 max=2 default=0 value=0
               write_isp_format 0x009a2068 (bool)   : default=0 value=0
       sensor_signal_properties 0x009a2069 (u32)    : min=0 max=0 step=0 default=0 flags=read-only, has-payload
        sensor_image_properties 0x009a206a (u32)    : min=0 max=0 step=0 default=0 flags=read-only, has-payload
      sensor_control_properties 0x009a206b (u32)    : min=0 max=0 step=0 default=0 flags=read-only, has-payload
              sensor_dv_timings 0x009a206c (u32)    : min=0 max=0 step=0 default=0 flags=read-only, has-payload
               low_latency_mode 0x009a206d (bool)   : default=0 value=0
               preferred_stride 0x009a206e (int)    : min=0 max=65535 step=1 default=0 value=0
                   sensor_modes 0x009a2082 (int)    : min=0 max=30 step=1 default=30 value=6 flags=read-only

I have also tried playing a GStreamer test video in a window using the command gst-launch-1.0 videotestsrc ! autovideosink. This works well and the test video is displayed successfully, so I think the real problem is the interaction between the camera and GStreamer.
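A further sanity check that keeps the camera out of the loop but still exercises the Jetson NVMM/EGL display path (a sketch assuming the standard JetPack nvvidconv, nvegltransform and nveglglessink elements) would be:

gst-launch-1.0 videotestsrc ! 'video/x-raw,width=1280,height=720' ! nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12' ! nvegltransform ! nveglglessink

If this test pattern displays correctly, the problem is narrowed down to the nvarguscamerasrc/camera side rather than the display elements.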


Hi @CallMeMax

Can you please test this pipeline in order to see if there are buffers coming from the nvarguscamerasrc element?

gst-launch-1.0 nvarguscamerasrc ! fakesink silent=false -v

It should show information for each buffer.
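If buffers do arrive there, a reasonable next step is to take the display out of the equation by recording a short clip to a file. A minimal sketch, assuming the JetPack nvv4l2h264enc encoder element is available (on older L4T releases omxh264enc can be substituted):

gst-launch-1.0 nvarguscamerasrc num-buffers=300 ! 'video/x-raw(memory:NVMM),width=1920,height=1080,framerate=30/1,format=NV12' ! nvv4l2h264enc ! h264parse ! qtmux ! filesink location=test.mp4 -e

If test.mp4 plays back correctly, the capture path works and the issue is only in displaying the stream.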

Best Regards,
Roberto Gutierrez,
Embedded Software Engineer,


For the first issue, it may be that you are using a DP monitor, while nvoverlaysink expects HDMI by default.
Try adding the property display-id=1 to nvoverlaysink:

gst-launch-1.0 nvarguscamerasrc sensor_id=0 ! nvoverlaysink display-id=1

For the second issue, does this work?

gst-launch-1.0 nvarguscamerasrc sensor_id=0 ! 'video/x-raw(memory:NVMM),width=3264, height=2464,framerate=21/1, format=NV12' ! nvvidconv flip-method=0 ! 'video/x-raw(memory:NVMM), width=816, height=616' ! nvegltransform ! nveglglessink -e


Thank you Roberto, I tried your command; however, it does not work for me. The results are shown below.

ludafu@ludafu:~$ gst-launch-1.0 nvarguscamerasrc ! fakesink silent=false -v
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
/GstPipeline:pipeline0/GstFakeSink:fakesink0: last-message = event   ******* (fakesink0:sink) E (type: stream-start (10254), GstEventStreamStart, stream-id=(string)90f420e770315fa062eaa002fbb0235a, flags=(GstStreamFlags)GST_STREAM_FLAG_NONE, group-id=(uint)1;) 0x556a51cdd0
/GstPipeline:pipeline0/GstNvArgusCameraSrc:nvarguscamerasrc0.GstPad:src: caps = video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1
/GstPipeline:pipeline0/GstFakeSink:fakesink0: last-message = event   ******* (fakesink0:sink) E (type: caps (12814), GstEventCaps, caps=(GstCaps)"video/x-raw\(memory:NVMM\)\,\ width\=\(int\)1920\,\ height\=\(int\)1080\,\ format\=\(string\)NV12\,\ framerate\=\(fraction\)30/1";) 0x556a51ce40
/GstPipeline:pipeline0/GstFakeSink:fakesink0.GstPad:sink: caps = video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1
GST_ARGUS: Creating output stream
CONSUMER: Waiting until producer is connected...
GST_ARGUS: Available Sensor modes :
GST_ARGUS: 3264 x 2464 FR = 21.000000 fps Duration = 47619048 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 3264 x 1848 FR = 28.000001 fps Duration = 35714284 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1920 x 1080 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1640 x 1232 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1280 x 720 FR = 59.999999 fps Duration = 16666667 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1280 x 720 FR = 120.000005 fps Duration = 8333333 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: Running with following settings:
   Camera index = 0 
   Camera mode  = 2 
   Output Stream W = 1920 H = 1080 
   seconds to Run    = 0 
   Frame Rate = 29.999999 
GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
CONSUMER: Producer has connected; continuing.

So there is no video data, and no window pops up…

Hello Honey, thanks for your reply. The first suggestion I tried just creates a GstSystemClock without any video data showing up. The results are shown below; I have to press Ctrl+C to exit the process. The monitor I am using is connected with an HDMI cable, and it is quite cheap, supporting only HDMI and VGA connections. Could you explain further about this issue?

ludafu@ludafu:~$ gst-launch-1.0 nvarguscamerasrc sensor_id=0 ! nvoverlaysink display-id=1
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock

For the second problem, I encountered exactly the same issue described above. Perhaps this screenshot can help you better understand: the window that pops up only captures the upper-left corner of the screen.


Thanks for your time, I really appreciate it.

Thanks for all your help, guys. Something strange happened… I simply replaced the camera with another camera that has the same image sensor (Sony IMX219), and the camera was successfully detected and all the function tests passed…
The process was as follows: there are 3 cameras, numbered 1, 2 and 3. Cam No. 1 is the one I have been debugging and working with; the other two cams have the exact same image sensor (Sony IMX219). Initially I ran the demo code cloned from the JetsonHacks CSI-Camera GitHub repository, and Cam No. 1 did not work; the issues and questions are described above. Things changed when I switched the camera to No. 2 and No. 3: both of them work well for the demo tests. However, when I switched back to No. 1 and then to No. 3 again, neither of them would work. I had to reboot the Jetson Nano and use Camera No. 3 directly for everything to operate successfully.
A really annoying error is shown below. I think the problem is that some threads in the backend keep reading the stream pipeline even after I stop the camera with Ctrl+C. Can anyone give a suggestion or direction? I have noticed that IMX219 sensor driving issues have been reported many times in several forums and blogs.

(Argus) Error EndOfFile: Unexpected error in reading socket (in src/rpc/socket/client/ClientSocketManager.cpp, function recvThreadCore(), line 266)
(Argus) Error EndOfFile: Receive worker failure, notifying 1 waiting threads (in src/rpc/socket/client/ClientSocketManager.cpp, function recvThreadCore(), line 340)
(Argus) Error InvalidState: Argus client is exiting with 1 outstanding client threads (in src/rpc/socket/client/ClientSocketManager.cpp, function recvThreadCore(), line 357)
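When the Argus stack is left in this state after an interrupted capture, restarting the camera daemon (assuming the standard L4T service name nvargus-daemon) usually recovers it without rebooting the whole board:

sudo systemctl restart nvargus-daemon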

If you’re using an HDMI monitor, my first proposal is not appropriate.

Could you tell us more about your setup (carrier board and/or multi-camera adapter, which connector is used…)?
Be sure the 3 cams are really the same… various camera models use the imx219 sensor but are not interchangeable. The driver and device tree provided in standard L4T are intended to be used with the RPi v2 camera on the devkit.

Be sure that no other process is using the camera. If using v4l2-ctl, be sure to always set the bypass_mode control to 0, otherwise it may interfere with the argus daemon.
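For example (assuming /dev/video0 and the bypass_mode control listed above), a quick check-and-reset sequence would be:

# show any process currently using the camera device
sudo fuser -v /dev/video0
# set the bypass_mode control to 0 before any v4l2-ctl capture
v4l2-ctl --device /dev/video0 --set-ctrl bypass_mode=0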

It looks surprising that the first camera is reported as Camera 4, but I have no Nano for checking.
You may post the output of the following commands for further debugging:

dmesg | egrep "i2c|imx|cam"
media-ctl -p

Very helpful advice. I am using a Jetson Nano 4GB with a Raspberry Pi mouse and keyboard connected through the USB ports, and an external monitor connected over HDMI. The camera definitely uses an imx219 sensor, but it is not the official Pi v2 module. The command outputs are shown below.

ludafu@ludafu-desktop:~$ dmesg | egrep "i2c|imx|cam"
[    0.460038] iommu: Adding device 546c0000.i2c to group 22
[    0.461497] tegra-pmc 7000e400.pmc: i2c-thermtrip node not found, emergency thermal reset disabled.
[    0.514459] iommu: Adding device 7000c000.i2c to group 25
[    0.514728] iommu: Adding device 7000c400.i2c to group 26
[    0.515043] iommu: Adding device 7000c500.i2c to group 27
[    0.515299] iommu: Adding device 7000c700.i2c to group 28
[    0.515645] iommu: Adding device 7000d000.i2c to group 29
[    0.515906] iommu: Adding device 7000d100.i2c to group 30
[    0.558619] GPIO line 151 (camera-control-output-low) hogged as output/low
[    0.558640] GPIO line 152 (camera-control-output-low) hogged as output/low
[    0.636801] camchar: rtcpu character device driver loaded
[    1.067230] tegra_camera_platform tegra-camera-platform: tegra_camera_probe:camera_platform_driver probe
[    1.067469] misc tegra_camera_ctrl: tegra_camera_isomgr_register isp_iso_bw=1500000, vi_iso_bw=1500000, max_bw=1500000
[    1.333551] i2c /dev entries driver
[    1.335042] i2c i2c-6: Added multiplexed i2c bus 7
[    1.335344] i2c i2c-6: Added multiplexed i2c bus 8
[    1.335349] i2c-mux-gpio cam_i2cmux: 2 port mux on Tegra I2C adapter adapter
[    1.335916] imx219 7-0010: tegracam sensor driver:imx219_v2.0.6
[    1.361081] imx219 8-0010: tegracam sensor driver:imx219_v2.0.6
[    1.384363] tegra-vii2c 546c0000.i2c: no acknowledge from address 0x10
[    1.384435] imx219 8-0010: imx219_board_setup: error during i2c read probe (-121)
[    1.384558] imx219 8-0010: board setup failed
[    1.384636] imx219: probe of 8-0010 failed with error -121
[    1.538647] vi 54080000.vi: subdev imx219 7-0010 bound

ludafu@ludafu-desktop:~$ media-ctl -p
Media controller API version 0.1.0

Media device information
------------------------
driver          vi
model           NVIDIA Tegra Video Input Device
serial          
bus info        
hw revision     0x3
driver version  0.0.0

Device topology
- entity 1: nvcsi--2 (2 pads, 2 links)
            type V4L2 subdev subtype Unknown flags 0
            device node name /dev/v4l-subdev0
	pad0: Sink
		<- "imx219 7-0010":0 [ENABLED]
	pad1: Source
		-> "vi-output, imx219 7-0010":0 [ENABLED]

- entity 4: imx219 7-0010 (1 pad, 1 link)
            type V4L2 subdev subtype Sensor flags 0
            device node name /dev/v4l-subdev1
	pad0: Source
		[fmt:SRGGB10_1X10/3264x2464 field:none colorspace:srgb]
		-> "nvcsi--2":0 [ENABLED]

- entity 6: vi-output, imx219 7-0010 (1 pad, 1 link)
            type Node subtype V4L flags 0
            device node name /dev/video0
	pad0: Sink
		<- "nvcsi--2":1 [ENABLED]

- entity 18: nvcsi--1 (2 pads, 0 link)
             type V4L2 subdev subtype Unknown flags 0
	pad0: Sink
	pad1: Source


I am quite confused by this returned info; could you please tell me more about how these commands work and what their output means? Also, by swapping the cameras, I noticed that v4l2-ctl --device /dev/video0 --all returns different results, especially the camera number.

Priority: 2
Video input : 0 (Camera 0: no power)
Format Video Capture:
	Width/Height      : 3264/2464
	Pixel Format      : 'RG10'

Here the Video input camera number is 0, but for the first camera it was 4… How should I activate the first camera if it is referred to as Camera 4?

From the kernel logs, one camera has been successfully probed on the first CSI connector.

The numbering may not be relevant, but these lines from your dmesg output look weird:

	tegra-vii2c 546c0000.i2c: no acknowledge from address 0x10
	imx219 8-0010: imx219_board_setup: error during i2c read probe (-121)
	imx219 8-0010: board setup failed

If using a different IMX219 sensor than RPi v2 cam, you may have to adapt driver and/or device tree. Contact your camera vendor for these.


Really appreciate your advice. Thanks for your kindness again.
