Jetson TX2 MIPI CSI-2 opencv error

Hi.

I was trying to use the Jetson TX2 board to test the MIPI CSI-2 signal.
First, I checked the video output with the Jetson TX2 onboard camera.
The relevant source code is shown below.

#!/usr/bin/env python


import sys
import argparse
import cv2
import numpy as np

def parse_cli_args():
    parser = argparse.ArgumentParser()
    parser.add_argument("--video_device", dest="video_device",
                        help="Video device # of USB webcam (/dev/video?) [0]",
                        default=0, type=int)
    arguments = parser.parse_args()
    return arguments

# On versions of L4T previous to L4T 28.1, flip-method=2
# Use the Jetson onboard camera
def open_onboard_camera():
    #return cv2.VideoCapture("nvcamerasrc ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)I420, framerate=(fraction)30/1 ! nvvidconv ! video/x-raw, format=(string)I420 ! videoconvert ! video/x-raw, format=(string)BGR ! appsink")
    return cv2.VideoCapture("nvcamerasrc ! "
                            "video/x-raw(memory:NVMM), width=(int)640, height=(int)480, "
                            "format=(string)I420, framerate=(fraction)30/1 ! "
                            "nvvidconv ! video/x-raw, format=(string)I420 ! "
                            "videoconvert ! video/x-raw, format=(string)BGR ! appsink")

# Open an external usb camera /dev/videoX
def open_camera_device(device_number):
    return cv2.VideoCapture(device_number)
   

def read_cam(video_capture):
    if video_capture.isOpened():
        windowName = "JetsononboardDemo"
        cv2.namedWindow(windowName, cv2.WINDOW_NORMAL)
        cv2.resizeWindow(windowName, 1280, 720)
        cv2.moveWindow(windowName, 0, 0)
        cv2.setWindowTitle(windowName, "Jetson onboard Demo")
        showWindow = 1  # Show the camera frame by default
        while True:
            if cv2.getWindowProperty(windowName, 0) < 0: # Check to see if the user closed the window
                # This will fail if the user closed the window; nasties get printed to the console
                break
            ret_val, frame = video_capture.read()
            if not ret_val:
                print("frame capture failed")
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

            if showWindow == 1:   # Show camera frame
                displayBuf = frame
            elif showWindow == 2: # Show gray frame
                displayBuf = gray

            cv2.imshow(windowName, displayBuf)

            key = cv2.waitKey(10)
            if key == 27:   # ESC key: quit
                cv2.destroyAllWindows()
                break
            elif key == 49: # 1 key: show frame
                showWindow = 1
            elif key == 50: # 2 key: show gray
                showWindow = 2

    else:
        print("camera open failed")



if __name__ == '__main__':
    arguments = parse_cli_args()
    print("Called with args:")
    print(arguments)
    print("OpenCV version: {}".format(cv2.__version__))
    print("Device Number:",arguments.video_device)
    if arguments.video_device==0:
      video_capture=open_onboard_camera()
    else:
      video_capture=open_camera_device(arguments.video_device)
    read_cam(video_capture)
    video_capture.release()
    cv2.destroyAllWindows()

Referring to the circuit diagram of the Jetson TX2 camera module board, I connected the Jetson TX2 to a Lattice FPGA board.

On the FPGA I generated a 1280 x 720, RGB888-format image and sent that video to the Jetson TX2.

Then I get the following error.

Called with args:
Namespace(video_device=0)
OpenCV version: 3.4.1-dev
('Device Number:', 0)
VIDEOIO ERROR: V4L: device nvcamerasrc ! video/x-raw(memory:NVMM), width=(int)640, height=(int)480, format=(string)I420, framerate=(fraction)30/1 ! nvvidconv ! video/x-raw, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink: Unable to query number of channels
Socket read error. Camera Daemon stopped functioning.....
gst_nvcamera_open() failed ret=0
OpenCV(3.4.1-dev) Error: Unspecified error (GStreamer: unable to start pipeline
) in cvCaptureFromCAM_GStreamer, file /home/nvidia/opencv/modules/videoio/src/cap_gstreamer.cpp, line 890
VIDEOIO(cvCreateCapture_GStreamer (CV_CAP_GSTREAMER_FILE, filename)): raised OpenCV exception:

OpenCV(3.4.1-dev) /home/nvidia/opencv/modules/videoio/src/cap_gstreamer.cpp:890: error: (-2) GStreamer: unable to start pipeline
 in function cvCaptureFromCAM_GStreamer

I think I need to fix the OpenCV VideoCapture part to resolve this error, but I do not know what to change.

I'll wait for your help.

It’s a bit fuzzy to me… Could you post the commands and the logs for both cases?
It seems to me that in the first case your OpenCV build supported GStreamer pipelines, while in the second case either OpenCV did not support GStreamer, or it was trying to use the V4L2 API instead.

Hi.

I ran the program with the Python code above.

I experimented with the MIPI CSI-2 signal from both the onboard camera and the FPGA.

The log I posted above is from the FPGA case.

I had not yet posted the log from running the same program with the onboard camera; it is below.

Called with args:
Namespace(video_device=0)
OpenCV version: 3.4.1-dev
('Device Number:', 0)
VIDEOIO ERROR: V4L: device nvcamerasrc ! video/x-raw(memory:NVMM), width=(int)640, height=(int)480, format=(string)I420, framerate=(fraction)30/1 ! nvvidconv ! video/x-raw, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink: Unable to query number of channels

Available Sensor modes : 
2592 x 1944 FR=30.000000 CF=0x1109208a10 SensorModeType=4 CSIPixelBitDepth=10 DynPixelBitDepth=10
2592 x 1458 FR=30.000000 CF=0x1109208a10 SensorModeType=4 CSIPixelBitDepth=10 DynPixelBitDepth=10
1280 x 720 FR=120.000000 CF=0x1109208a10 SensorModeType=4 CSIPixelBitDepth=10 DynPixelBitDepth=10

NvCameraSrc: Trying To Set Default Camera Resolution. Selected sensorModeIndex = 1 WxH = 2592x1458 FrameRate = 30.000000 ...

I do not have experience with Jetson or Ubuntu.
I want to know whether my OpenCV build supports GStreamer pipelines.
Do you know how to check?
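One way to check is `cv2.getBuildInformation()`, which reports whether OpenCV was compiled with GStreamer. A minimal sketch follows; the `has_gstreamer_support` helper and the sample string are illustrative only — in a real session you would pass the actual output of `cv2.getBuildInformation()`:

```python
# Sketch: scan an OpenCV build-information string for GStreamer support.
# In practice you would pass cv2.getBuildInformation() to this function;
# the sample string below stands in for that output.

def has_gstreamer_support(build_info):
    """Return True if the build info reports 'GStreamer: YES'."""
    for line in build_info.splitlines():
        if "GStreamer" in line:
            return "YES" in line
    return False

# Fragment in the format printed by an OpenCV 3.x build:
sample = """
  Video I/O:
    GStreamer:                   YES (ver 1.8.3)
    v4l/v4l2:                    linux/videodev2.h
"""
print(has_gstreamer_support(sample))  # True for this sample
```

From a shell on the board, `python -c "import cv2; print(cv2.getBuildInformation())" | grep -i gstreamer` gives the same answer. If GStreamer shows `NO`, `VideoCapture` falls back to other backends (such as V4L2), which would explain the `VIDEOIO ERROR: V4L` line in the log.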

It looks a bit clearer to me now. I don’t think this issue is OpenCV-related.

I assume you’re using the onboard camera and trying to emulate it from your FPGA connected to CSI.
Be aware that the onboard camera sends Bayer frames in a 10-bit format, so nvcamerasrc routes them through the ISP for debayering and more (auto-exposure, gain, …).
Providing RGB888 is not suitable for this path without modifications.
You may try to emulate the OV5693 from your FPGA, or use the V4L2 API directly to get your RGB format (using --set-ctrl bypass_mode=0). Some info here.

I’d suggest trying from GStreamer only; no need to add OpenCV to the loop for now.

Those statements seem a bit contradictory to me, or lacking some details:
“I created a 1280 x 720 resolution, RGB888 format image with a Lattice FPGA”
“Jetson TX2 board to test the MIPI CSI-2 signal”
“experimented with the MIPI CSI-2 signal from the onboard camera and FPGA.”
From the statements above it seems to me that you are not using the Jetson devkit’s onboard sensor at all, except for the statement that you experimented with it.
It appears that you generate with the FPGA something that emulates a camera signal and are trying to feed it to the Jetson, presumably for recording? displaying? processing?
I would suggest clarifying whether you are trying to deliver the onboard sensor stream [when it is connected to the Jetson] or whether you generate the stream with the FPGA [and how].

reference thread
https://devtalk.nvidia.com/default/topic/1047177/jetson-tx2/how-to-check-mipi-csi-2-data/

Hi, Honey_Patouceul.

I tried what you suggested with the onboard camera.

gst-launch-1.0 nvcamerasrc ! 'video/x-raw(memory:NVMM), format=I420, width=640, height=480, framerate=30/1' ! nvvidconv ! 'video/x-raw' ! xvimagesink
Setting pipeline to PAUSED ...

Available Sensor modes : 
2592 x 1944 FR=30.000000 CF=0x1109208a10 SensorModeType=4 CSIPixelBitDepth=10 DynPixelBitDepth=10
2592 x 1458 FR=30.000000 CF=0x1109208a10 SensorModeType=4 CSIPixelBitDepth=10 DynPixelBitDepth=10
1280 x 720 FR=120.000000 CF=0x1109208a10 SensorModeType=4 CSIPixelBitDepth=10 DynPixelBitDepth=10
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock

NvCameraSrc: Trying To Set Default Camera Resolution. Selected sensorModeIndex = 1 WxH = 2592x1458 FrameRate = 30.000000 ...

ERROR: from element /GstPipeline:pipeline0/GstXvImageSink:xvimagesink0: Output window was closed
Additional debug info:
xvimagesink.c(555): gst_xv_image_sink_handle_xevents (): /GstPipeline:pipeline0/GstXvImageSink:xvimagesink0
Execution ended after 0:00:03.517184766
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...
v4l2-ctl --set-fmt-video=width=1280,height=720,pixelformat=BG10 --stream-mmap -d /dev/video0 --set-ctrl bypass_mode=0 --stream-count=300 --stream-to=ov5693.raw
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< 121.00 fps
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< 120.50 fps
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<

The result is the same as above.

I also tried to verify that the MIPI CSI-2 data is coming in correctly by connecting the FPGA board to the Jetson in the same way.

gst-launch-1.0 nvcamerasrc ! 'video/x-raw(memory:NVMM), format=I420, width=640, height=480, framerate=30/1' ! nvvidconv ! 'video/x-raw' ! xvimagesink
Setting pipeline to PAUSED ...
Socket read error. Camera Daemon stopped functioning.....
gst_nvcamera_open() failed ret=0
ERROR: Pipeline doesn't want to pause.
ERROR: from element /GstPipeline:pipeline0/GstNvCameraSrc:nvcamerasrc0: GStreamer error: state change failed and some element failed to post a proper error message with the reason for the failure.
Additional debug info:
gstbasesrc.c(3354): gst_base_src_start (): /GstPipeline:pipeline0/GstNvCameraSrc:nvcamerasrc0:
Failed to start
Setting pipeline to NULL ...
Freeing pipeline ...
v4l2-ctl --set-fmt-video=width=1280,height=720,pixelformat=BG10 --stream-mmap -d /dev/video0 --set-ctrl bypass_mode=0 --stream-count=300 --stream-to=ov5693.raw
Failed to open /dev/video0: No such file or directory

The result is the same as above.

I saw this result and concluded that the MIPI CSI-2 data I sent from the FPGA to nvcamerasrc was wrong.

This time I generated the I420 data at 1280 x 720 resolution on the FPGA.

I cannot find any documentation for nvcamerasrc. Where is it specified which MIPI CSI-2 formats or packet types the Jetson accepts?

Is there a way to check the MIPI CSI-2 data coming into the Jetson?

Hi, Andrey1984.

  1. I wanted to develop a reference program to receive MIPI CSI-2 data.

  2. So I designed a reference program to view the MIPI CSI-2 image data by running the Jetson onboard camera.

  3. I used the program to check the video of the onboard camera.

  4. Then I created the MIPI CSI-2 data using the FPGA.

  5. I connected the FPGA board to Jetson and sent the MIPI CSI-2 data from the FPGA board to Jetson.

  6. The waveforms from my FPGA board are at the link below:
    https://www.dropbox.com/sh/3lnwk6b3jjsxyqj/AABjU9ypGk4YLKtWXO9RTq2Ka?dl=0

  7. I tried to check the FPGA image from the reference program.

  8. However, it does not work, with the same result as in my previous post.

Can you find any information about the onboard camera’s waveforms on the Jetson forum?

Looks like the original camera driver failed to probe your FPGA device, so no driver is bound and /dev/video0 is not created. I have no experience with camera drivers; someone else may be able to advise better.

Or maybe the driver on the TX2 cannot interpret the signal you are sending from the FPGA, for example because the signal characteristics do not match what the driver expects.
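A quick way to see whether any driver probed at all is to check for the capture node. A hypothetical diagnostic sketch (not part of the thread's code): when the CSI sensor, or an FPGA emulating one, fails to probe, no `/dev/video*` node exists, which matches the `Failed to open /dev/video0: No such file or directory` error above.

```python
import glob

# List the V4L2 capture nodes present on the system. An empty list means
# no camera driver bound a device, matching the "Failed to open
# /dev/video0" error seen with the FPGA connected.

def list_capture_nodes():
    """Return the /dev/video* device nodes, sorted."""
    return sorted(glob.glob("/dev/video*"))

nodes = list_capture_nodes()
if nodes:
    print("capture nodes:", nodes)
else:
    print("no /dev/video* nodes - the sensor driver did not probe")
```

On the board itself, `dmesg` output from around boot or module-load time usually shows why the sensor driver failed to bind (for example, probe errors from the ov5693 or the Tegra VI/CSI drivers).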