Multi-Cam Image Capturing Gstreamer Jetson Xavier

Hi,

I bought an NVIDIA Jetson Xavier with 4 MIPI cameras connected through an ECON module. I want to capture single-shot images from all cameras in less than a second. I can access all the cameras synchronously at a frame rate of 30 FPS and asynchronously at 120 FPS. I understand that images can be acquired through GStreamer, v4l2, and LibArgus. I found the Tegra multimedia samples difficult to understand, so I implemented the image capture using GStreamer running at the maximum frame rate; the code snippet is given below. It takes around 5 s to capture the images from all the cameras (excluding undistortion). I have three questions:

  1. Is it possible to speed up the image capturing process using GStreamer?

  2. Is it possible to access all the cameras in a single pipeline and capture single-shot images using GStreamer in less than a second?

  3. If the above two are impossible and this can only be done through Argus: is it possible to provide sample code for multi-camera JPEG capture, similar to 09_camera_jpeg_capture from tegra_multimedia_api, since 13_multi_camera does not provide the option to save to disk?

import numpy as np 
import cv2
import sys
import os
import yaml
import time

def gstreamer_pipeline(sensor_id=0, capture_width=1920, capture_height=1080,
                       display_width=1920, display_height=1080,
                       framerate=120, flip_method=0):
    # One parametrized builder replaces the four near-identical copies.
    return ('nvarguscamerasrc sensor-id=%d ! '
            'video/x-raw(memory:NVMM), '
            'width=(int)%d, height=(int)%d, '
            'format=(string)NV12, framerate=(fraction)%d/1 ! '
            'nvvidconv flip-method=%d ! '
            'video/x-raw, width=(int)%d, height=(int)%d, format=(string)BGRx ! '
            'videoconvert ! '
            'video/x-raw, format=(string)BGR ! appsink'
            % (sensor_id, capture_width, capture_height, framerate,
               flip_method, display_width, display_height))

def Img_Capture(K, D, DIM):
    for i in range(4):
        cap = cv2.VideoCapture(gstreamer_pipeline(sensor_id=i), cv2.CAP_GSTREAMER)
        ret_val, img = cap.read()
        cap.release()
        if not ret_val:
            continue
        map1, map2 = cv2.fisheye.initUndistortRectifyMap(
            np.array(K), np.array(D), np.eye(3), np.array(K), DIM, cv2.CV_16SC2)
        undistorted_img = cv2.remap(img, map1, map2,
                                    interpolation=cv2.INTER_LINEAR,
                                    borderMode=cv2.BORDER_CONSTANT)
        img_dir = '/home/fcr/Python_Programs/Images/Stitch/im' + str(i) + '.bmp'
        cv2.imwrite(img_dir, undistorted_img)

if __name__ == '__main__':
    start = time.time()
    with open("/home/fcr/Python_Programs/Fish_eye_Calib.yaml", 'r') as stream:
        data = yaml.safe_load(stream)
    K = data["K"]
    D = data["D"]
    DIM = data["DIM"]
    Img_Capture(K,D,DIM)
    end = time.time()
    print("Time Taken: ",end-start)

Output:

GST_ARGUS: Creating output stream
CONSUMER: Waiting until producer is connected...
GST_ARGUS: Available Sensor modes :
GST_ARGUS: 1920 x 1080 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 31.622776; Exposure Range min 450000, max 400000000;

GST_ARGUS: 1920 x 1080 FR = 59.999999 fps Duration = 16666667 ; Analog Gain range min 1.000000, max 31.622776; Exposure Range min 450000, max 400000000;

GST_ARGUS: 1920 x 1080 FR = 120.000005 fps Duration = 8333333 ; Analog Gain range min 1.000000, max 31.622776; Exposure Range min 450000, max 400000000;

GST_ARGUS: 1280 x 720 FR = 59.999999 fps Duration = 16666667 ; Analog Gain range min 1.000000, max 31.622776; Exposure Range min 450000, max 400000000;

GST_ARGUS: 1280 x 720 FR = 120.000005 fps Duration = 8333333 ; Analog Gain range min 1.000000, max 31.622776; Exposure Range min 450000, max 400000000;

GST_ARGUS: Running with following settings:
   Camera index = 0 
   Camera mode  = 2 
   Output Stream W = 1920 H = 1080 
   seconds to Run    = 0 
   Frame Rate = 120.000005 
GST_ARGUS: PowerService: requested_clock_Hz=108864000
GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
CONSUMER: Producer has connected; continuing.
GST_ARGUS: Cleaning up
CONSUMER: Done Success
GST_ARGUS: Done Success
GST_ARGUS: Creating output stream
CONSUMER: Waiting until producer is connected...
GST_ARGUS: Available Sensor modes :
GST_ARGUS: 1920 x 1080 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 31.622776; Exposure Range min 450000, max 400000000;

GST_ARGUS: 1920 x 1080 FR = 59.999999 fps Duration = 16666667 ; Analog Gain range min 1.000000, max 31.622776; Exposure Range min 450000, max 400000000;

GST_ARGUS: 1920 x 1080 FR = 120.000005 fps Duration = 8333333 ; Analog Gain range min 1.000000, max 31.622776; Exposure Range min 450000, max 400000000;

GST_ARGUS: 1280 x 720 FR = 59.999999 fps Duration = 16666667 ; Analog Gain range min 1.000000, max 31.622776; Exposure Range min 450000, max 400000000;

GST_ARGUS: 1280 x 720 FR = 120.000005 fps Duration = 8333333 ; Analog Gain range min 1.000000, max 31.622776; Exposure Range min 450000, max 400000000;

GST_ARGUS: Running with following settings:
   Camera index = 1 
   Camera mode  = 2 
   Output Stream W = 1920 H = 1080 
   seconds to Run    = 0 
   Frame Rate = 120.000005 
GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
CONSUMER: Producer has connected; continuing.
GST_ARGUS: Cleaning up
CONSUMER: Done Success
GST_ARGUS: Done Success
GST_ARGUS: Creating output stream
CONSUMER: Waiting until producer is connected...
GST_ARGUS: Available Sensor modes :
GST_ARGUS: 1920 x 1080 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 31.622776; Exposure Range min 450000, max 400000000;

GST_ARGUS: 1920 x 1080 FR = 59.999999 fps Duration = 16666667 ; Analog Gain range min 1.000000, max 31.622776; Exposure Range min 450000, max 400000000;

GST_ARGUS: 1920 x 1080 FR = 120.000005 fps Duration = 8333333 ; Analog Gain range min 1.000000, max 31.622776; Exposure Range min 450000, max 400000000;

GST_ARGUS: 1280 x 720 FR = 59.999999 fps Duration = 16666667 ; Analog Gain range min 1.000000, max 31.622776; Exposure Range min 450000, max 400000000;

GST_ARGUS: 1280 x 720 FR = 120.000005 fps Duration = 8333333 ; Analog Gain range min 1.000000, max 31.622776; Exposure Range min 450000, max 400000000;

GST_ARGUS: Running with following settings:
   Camera index = 2 
   Camera mode  = 2 
   Output Stream W = 1920 H = 1080 
   seconds to Run    = 0 
   Frame Rate = 120.000005 
GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
CONSUMER: Producer has connected; continuing.
GST_ARGUS: Cleaning up
CONSUMER: Done Success
GST_ARGUS: Done Success
GST_ARGUS: Creating output stream
CONSUMER: Waiting until producer is connected...
GST_ARGUS: Available Sensor modes :
GST_ARGUS: 1920 x 1080 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 31.622776; Exposure Range min 450000, max 400000000;

GST_ARGUS: 1920 x 1080 FR = 59.999999 fps Duration = 16666667 ; Analog Gain range min 1.000000, max 31.622776; Exposure Range min 450000, max 400000000;

GST_ARGUS: 1920 x 1080 FR = 120.000005 fps Duration = 8333333 ; Analog Gain range min 1.000000, max 31.622776; Exposure Range min 450000, max 400000000;

GST_ARGUS: 1280 x 720 FR = 59.999999 fps Duration = 16666667 ; Analog Gain range min 1.000000, max 31.622776; Exposure Range min 450000, max 400000000;

GST_ARGUS: 1280 x 720 FR = 120.000005 fps Duration = 8333333 ; Analog Gain range min 1.000000, max 31.622776; Exposure Range min 450000, max 400000000;

GST_ARGUS: Running with following settings:
   Camera index = 3 
   Camera mode  = 2 
   Output Stream W = 1920 H = 1080 
   seconds to Run    = 0 
   Frame Rate = 120.000005 
GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
CONSUMER: Producer has connected; continuing.
GST_ARGUS: Cleaning up
CONSUMER: Done Success
GST_ARGUS: Done Success
('Time Taken: ', 5.643635034561157)
GST_ARGUS: 
PowerServiceHwVic::cleanupResources

hello skrishnamoorthi,

The default sample applications do not provide a solution for capturing images from multiple cameras simultaneously.
May I know your use-case for capturing single-shot images from all four cameras?
thanks

Hello JerryChang,

  My application is barcode reading, OCR, and pose estimation of an A3 label. To do so, I have to capture a single image from all four cameras, then stitch them to get the full label and run the processing above. I am limited by time as it is a continuous production process (max 5 s for the whole application), so I want to capture the images as quickly as possible (less than a second). Hence, the post.

Thanks

hello Theesh,

You may work with four Argus sources, one per camera, each doing a single-shot capture.
You should also check the sensor timestamps if there are synchronization requirements.
thanks
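A minimal sketch of the four-Argus-source idea, in Python with OpenCV since the original snippet uses them (all names here are illustrative, not from the thread): read each sensor in its own thread, so the per-pipeline startup cost overlaps across cameras instead of accumulating.

```python
import threading

def build_pipeline(sensor_id, width=1920, height=1080, framerate=120):
    # Same caps as the original post, with sensor-id as a parameter.
    return ('nvarguscamerasrc sensor-id=%d ! '
            'video/x-raw(memory:NVMM), width=(int)%d, height=(int)%d, '
            'format=(string)NV12, framerate=(fraction)%d/1 ! '
            'nvvidconv ! video/x-raw, format=(string)BGRx ! '
            'videoconvert ! video/x-raw, format=(string)BGR ! appsink'
            % (sensor_id, width, height, framerate))

def grab(sensor_id, frames):
    import cv2  # imported here; only needed for the actual capture
    cap = cv2.VideoCapture(build_pipeline(sensor_id), cv2.CAP_GSTREAMER)
    ok, img = cap.read()
    cap.release()
    if ok:
        frames[sensor_id] = img

def capture_all(num_sensors=4):
    # One capture thread per sensor, started together and joined.
    frames = {}
    threads = [threading.Thread(target=grab, args=(i, frames))
               for i in range(num_sensors)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return frames  # sensor_id -> BGR image for each successful read

# On the Jetson: frames = capture_all()
```

Note that this only overlaps the nvarguscamerasrc startup cost; it does not remove it, so whether it gets under 1 s depends on how long a single pipeline takes to reach PLAYING.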

Hello JerryChang,
Thank you for responding. Could you please elaborate on your previous answer? Are you suggesting that I build separate C++ programs for the individual cameras and combine them to achieve synchronicity?

To add on: it is also fine to acquire the images asynchronously, but I want them to be captured from all the cameras in less than a second. Is it possible to open the cameras individually and still achieve that timing?

Is it possible to achieve the above in GStreamer or v4l2?

Thanks

Hi Theesh,

Answering the questions in your original post:

R/ Yes. Since your use case is about speed, I would go with simplicity; it seems gst-launch should suffice for you, at least for some preliminary testing.

R/ Yes. As I said, direct execution of the pipeline in the shell should be faster, and it can give you some insight into how much time it takes to capture a single frame.

R/ Please try pipeline below and share results if possible.

gst-launch-1.0 nvarguscamerasrc sensor-id=0 num-buffers=1 ! 'video/x-raw(memory:NVMM), width=(int)1920,height=(int)1080, format=(string)NV12' ! nvjpegenc ! filesink location=cam0.jpg nvarguscamerasrc sensor-id=1 num-buffers=1 ! 'video/x-raw(memory:NVMM), width=(int)1920,height=(int)1080, format=(string)NV12' ! nvjpegenc ! filesink location=cam1.jpg nvarguscamerasrc sensor-id=2 num-buffers=1 ! 'video/x-raw(memory:NVMM), width=(int)1920,height=(int)1080, format=(string)NV12' ! nvjpegenc ! filesink location=cam2.jpg nvarguscamerasrc sensor-id=3 num-buffers=1 ! 'video/x-raw(memory:NVMM), width=(int)1920,height=(int)1080, format=(string)NV12' ! nvjpegenc ! filesink location=cam3.jpg -e

I tried a similar pipeline with some cameras I have at my disposal; here are my results (note that the pipeline also includes encoding, so just capturing the frames should be faster):

nvidia@nvidia-desktop:~$ time gst-launch-1.0 v4l2src device=/dev/video0 num-buffers=1 ! "video/x-raw, format=(string)UYVY, width=(int)1280, height=(int)800, framerate=(fraction)30/1" ! nvvidconv ! nvjpegenc ! filesink location=cam0.jpg v4l2src device=/dev/video1 num-buffers=1 ! "video/x-raw, format=(string)UYVY, width=(int)1280, height=(int)800, framerate=(fraction)30/1" ! nvvidconv ! nvjpegenc ! filesink location=cam1.jpg v4l2src device=/dev/video2 num-buffers=1 ! "video/x-raw, format=(string)UYVY, width=(int)1280, height=(int)800, framerate=(fraction)30/1" ! nvvidconv ! nvjpegenc ! filesink location=cam2.jpg v4l2src device=/dev/video3 num-buffers=1 ! "video/x-raw, format=(string)UYVY, width=(int)1280, height=(int)800, framerate=(fraction)30/1" ! nvvidconv ! nvjpegenc ! filesink location=cam3.jpg -e
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Got EOS from element "pipeline0".
Execution ended after 0:00:00.101429628
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...

real	0m0,486s
user	0m0,100s
sys	0m0,152s

Hope this info helps.

Regards,

In addition to the previous comments: in your OpenCV application, you may also reorganize your code and your measurements. You are currently timing the YAML parsing (probably loading a library and reading a file from disk), the GStreamer pipeline creation (probably loading many libraries), the frame reading, and the saving to file (which may be slow depending on resolution and disk).
Furthermore, you might not need to recreate map1 and map2 for each read; you may do that once at init time (AFAIK InputArray arguments are references to const _InputArray in the C++ API, so they are unlikely to be modified).

You would have an init function doing the YAML parsing, GStreamer pipeline creation, and map creation, and a loop function first reading frames from each camera, then performing the disparity maps, then saving results to disk. You may measure each of these steps for an accurate understanding.
Also check for any difference between the first loops and loops after a while (caching might be involved).
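The per-step measurement suggested above could be sketched like this (the `timed` helper is illustrative, not from the thread):

```python
import time

def timed(label, fn, *args, **kwargs):
    # Run one stage and report how long it took, so each step of the
    # loop (read, undistort, imwrite) can be measured separately.
    t0 = time.time()
    result = fn(*args, **kwargs)
    print('%s took %.3f s' % (label, time.time() - t0))
    return result

# Hypothetical usage inside the capture loop:
#   ret, img = timed('read cam %d' % i, cap.read)
#   out = timed('remap', cv2.remap, img, map1, map2,
#               interpolation=cv2.INTER_LINEAR)
```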

Hello there,

I hope you don’t mind me joining your discussion with some related questions of my own. I am trying to do something similar. I am using a Jetson Xavier with a Rogue carrier board and 2 MIPI-CSI Leopard Imaging IMX274 cameras. I don’t want to do any encoding/decoding; I just want to capture RAW frames. To start, can someone explain the following pipeline using appsink?

std::string gstreamer_pipeline (int capture_width, int capture_height, int display_width, int display_height, int framerate, int flip_method, int cam_ID) {
	// Note: no "gst-launch-1.0" prefix here; OpenCV parses the pipeline description itself.
	return "nvarguscamerasrc sensor-id=" + std::to_string(cam_ID) + " ! video/x-raw(memory:NVMM), width=(int)" + std::to_string(capture_width) + ", height=(int)" + std::to_string(capture_height) + ", format=(string)NV12, framerate=(fraction)" + std::to_string(framerate) + "/1 ! nvvidconv flip-method=" + std::to_string(flip_method) + " ! video/x-raw, width=(int)" + std::to_string(display_width) + ", height=(int)" + std::to_string(display_height) + ", format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink";
}

I got that code snippet from https://github.com/JetsonHacksNano/CSI-Camera/blob/master/simple_camera.cpp

So, I sort of understand the basics: “nvarguscamerasrc” denotes the source of the video feed, which is configured by

sensor-id=" + std::to_string(cam_ID) + " ! video/x-raw(memory:NVMM), width=(int)" + std::to_string(capture_width) + ", height=(int)" + std::to_string(capture_height) + ", format=(string)NV12, framerate=(fraction)" + std::to_string(framerate) + "/1

Then there’s “nvvidconv”, which converts the source buffers to have the following properties (as the sink):

flip-method=" + std::to_string(flip_method) + " ! video/x-raw, width=(int)" + std::to_string(display_width) + ", height=(int)" + std::to_string(display_height) + ", format=(string)BGRx

However, I don’t understand what the second conversion, with “videoconvert”, is for:

video/x-raw, format=(string)BGR

Why are we converting again? And why does the pipeline break when I try to change the formats (like GRAY16_BE and GRAY8)? I used gst-inspect-1.0 to tell me which formats are allowed, but it always gives some kind of error.

Now, after this, I would like to convert the color image to a grayscale image (which I mentioned that I tried, but it didn’t work). Does anyone have any advice to get it to convert properly through the pipeline? Or do I have to use cv::cvtColor()? I have also tried using that cvtColor() function, but the image is very unclear.

I have some other questions that follow-up on this one, but I would like to understand the base problem first as it might help me fix my other problems. Thank you!

Hi,
Thank you jchaves and Honey_Patouceul for your valuable comments.

jchaves - I followed your suggestions and was successful with the nvarguscamerasrc pipeline but not with v4l2src. The error logs, outputs, and questions are shared below.

Suggestion 1 :

time gst-launch-1.0 nvarguscamerasrc sensor-id=0 num-buffers=1 ! "video/x-raw(memory:NVMM),format=(string)NV12, width=(int)1920, height=(int)1080" ! nvjpegenc ! filesink location=0.jpg nvarguscamerasrc sensor-id=1 num-buffers=1 ! "video/x-raw(memory:NVMM),format=(string)NV12, width=(int)1920, height=(int)1080" ! nvjpegenc ! filesink location=1.jpg nvarguscamerasrc sensor-id=2 num-buffers=1 ! "video/x-raw(memory:NVMM),format=(string)NV12, width=(int)1920, height=(int)1080" ! nvjpegenc ! filesink location=2.jpg nvarguscamerasrc sensor-id=3 num-buffers=1 ! "video/x-raw(memory:NVMM),format=(string)NV12, width=(int)1920, height=(int)1080" ! nvjpegenc ! filesink location=3.jpg


Output:

Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
GST_ARGUS: Creating output stream
GST_ARGUS: Creating output stream
GST_ARGUS: Creating output stream
GST_ARGUS: Creating output stream
CONSUMER: Waiting until producer is connected...
CONSUMER: Waiting until producer is connected...
GST_ARGUS: Available Sensor modes :
GST_ARGUS: Available Sensor modes :
GST_ARGUS: 1920 x 1080 FR = 29,999999 fps Duration = 33333334 ; Analog Gain range min 1,000000, max 31,622776; Exposure Range min 450000, max 400000000;

GST_ARGUS: 1920 x 1080 FR = 29,999999 fps Duration = 33333334 ; Analog Gain range min 1,000000, max 31,622776; Exposure Range min 450000, max 400000000;

GST_ARGUS: 1920 x 1080 FR = 59,999999 fps Duration = 16666667 ; Analog Gain range min 1,000000, max 31,622776; Exposure Range min 450000, max 400000000;

GST_ARGUS: 1920 x 1080 FR = 59,999999 fps Duration = 16666667 ; Analog Gain range min 1,000000, max 31,622776; Exposure Range min 450000, max 400000000;

CONSUMER: Waiting until producer is connected...
GST_ARGUS: 1920 x 1080 FR = 120,000005 fps Duration = 8333333 ; Analog Gain range min 1,000000, max 31,622776; Exposure Range min 450000, max 400000000;

GST_ARGUS: 1920 x 1080 FR = 120,000005 fps Duration = 8333333 ; Analog Gain range min 1,000000, max 31,622776; Exposure Range min 450000, max 400000000;

GST_ARGUS: Available Sensor modes :
GST_ARGUS: 1280 x 720 FR = 59,999999 fps Duration = 16666667 ; Analog Gain range min 1,000000, max 31,622776; Exposure Range min 450000, max 400000000;

CONSUMER: Waiting until producer is connected...
GST_ARGUS: 1920 x 1080 FR = 29,999999 fps Duration = 33333334 ; Analog Gain range min 1,000000, max 31,622776; Exposure Range min 450000, max 400000000;

GST_ARGUS: 1280 x 720 FR = 120,000005 fps Duration = 8333333 ; Analog Gain range min 1,000000, max 31,622776; Exposure Range min 450000, max 400000000;

GST_ARGUS: Available Sensor modes :
GST_ARGUS: 1280 x 720 FR = 59,999999 fps Duration = 16666667 ; Analog Gain range min 1,000000, max 31,622776; Exposure Range min 450000, max 400000000;

GST_ARGUS: 1920 x 1080 FR = 59,999999 fps Duration = 16666667 ; Analog Gain range min 1,000000, max 31,622776; Exposure Range min 450000, max 400000000;

GST_ARGUS: Running with following settings:
   Camera index = 3 
   Camera mode  = 2 
   Output Stream W = 1920 H = 1080 
   seconds to Run    = 0 
   Frame Rate = 120,000005 
GST_ARGUS: 1920 x 1080 FR = 29,999999 fps Duration = 33333334 ; Analog Gain range min 1,000000, max 31,622776; Exposure Range min 450000, max 400000000;

GST_ARGUS: PowerService: requested_clock_Hz=27216000
GST_ARGUS: 1280 x 720 FR = 120,000005 fps Duration = 8333333 ; Analog Gain range min 1,000000, max 31,622776; Exposure Range min 450000, max 400000000;

GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
GST_ARGUS: Running with following settings:
   Camera index = 0 
   Camera mode  = 2 
   Output Stream W = 1920 H = 1080 
   seconds to Run    = 0 
   Frame Rate = 120,000005 
CONSUMER: Producer has connected; continuing.
GST_ARGUS: 1920 x 1080 FR = 59,999999 fps Duration = 16666667 ; Analog Gain range min 1,000000, max 31,622776; Exposure Range min 450000, max 400000000;

GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
GST_ARGUS: 1920 x 1080 FR = 120,000005 fps Duration = 8333333 ; Analog Gain range min 1,000000, max 31,622776; Exposure Range min 450000, max 400000000;

CONSUMER: Producer has connected; continuing.
GST_ARGUS: 1920 x 1080 FR = 120,000005 fps Duration = 8333333 ; Analog Gain range min 1,000000, max 31,622776; Exposure Range min 450000, max 400000000;

GST_ARGUS: 1280 x 720 FR = 59,999999 fps Duration = 16666667 ; Analog Gain range min 1,000000, max 31,622776; Exposure Range min 450000, max 400000000;

GST_ARGUS: 1280 x 720 FR = 120,000005 fps Duration = 8333333 ; Analog Gain range min 1,000000, max 31,622776; Exposure Range min 450000, max 400000000;

GST_ARGUS: 1280 x 720 FR = 59,999999 fps Duration = 16666667 ; Analog Gain range min 1,000000, max 31,622776; Exposure Range min 450000, max 400000000;

GST_ARGUS: Running with following settings:
   Camera index = 1 
   Camera mode  = 2 
   Output Stream W = 1920 H = 1080 
   seconds to Run    = 0 
   Frame Rate = 120,000005 
GST_ARGUS: 1280 x 720 FR = 120,000005 fps Duration = 8333333 ; Analog Gain range min 1,000000, max 31,622776; Exposure Range min 450000, max 400000000;

GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
GST_ARGUS: Running with following settings:
   Camera index = 2 
   Camera mode  = 2 
   Output Stream W = 1920 H = 1080 
   seconds to Run    = 0 
   Frame Rate = 120,000005 
CONSUMER: Producer has connected; continuing.
GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
CONSUMER: Producer has connected; continuing.
Got EOS from element "pipeline0".
Execution ended after 0:00:02.172784192
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
GST_ARGUS: Cleaning up
CONSUMER: Done Success
GST_ARGUS: Done Success
GST_ARGUS: Cleaning up
CONSUMER: Done Success
GST_ARGUS: Done Success
GST_ARGUS: Cleaning up
CONSUMER: Done Success
GST_ARGUS: Done Success
GST_ARGUS: Cleaning up
CONSUMER: Done Success
GST_ARGUS: Done Success
Setting pipeline to NULL ...
Freeing pipeline ...
GST_ARGUS: 
PowerServiceHwVic::cleanupResources

real	0m3,382s
user	0m0,100s
sys	0m0,160s

I had tested the same pipeline before for capturing single-shot images from all cameras, but I had two problems:

  1. I was not able to incorporate the gst-launch command (suggestion 1) into my Python code. Is it possible to include the above shell commands in Python? If yes, can you give me an example or point me to the right documentation?

  2. As you can see, it still takes 3.38 s to capture images from all the cameras, while I want to achieve less than 1 s. So, is this the maximum achievable with the nvarguscamerasrc pipeline?

Suggestion 2:
I saw in your previous comment that you achieved faster image capture with four cameras using a v4l2src pipeline. So, I tried your second suggestion for a single video node but ran into this error:

Code:

time gst-launch-1.0 v4l2src device=/dev/video0 num-buffers=1 ! "video/x-raw, format=(string)UYVY, width=(int)1920, height=(int)1080" ! nvvidconv ! nvjpegenc ! filesink location=cam00.jpg

Output

Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Internal data stream error.
Additional debug info:
gstbasesrc.c(3055): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
streaming stopped, reason not-negotiated (-4)
Execution ended after 0:00:00.000094528
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...

real	0m0,095s
user	0m0,056s
sys	0m0,028s

My MIPI camera connected to node 0 has the following properties
Code:

v4l2-ctl --list-formats-ext -d /dev/video0

Output

ioctl: VIDIOC_ENUM_FMT
	Index       : 0
	Type        : Video Capture
	Pixel Format: 'GB10'
	Name        : 10-bit Bayer GBGB/RGRG
		Size: Discrete 1920x1080
			Interval: Discrete 0.033s (30.000 fps)
		Size: Discrete 1920x1080
			Interval: Discrete 0.033s (30.000 fps)
		Size: Discrete 1920x1080
			Interval: Discrete 0.033s (30.000 fps)
		Size: Discrete 1280x720
			Interval: Discrete 0.017s (60.000 fps)
		Size: Discrete 1280x720
			Interval: Discrete 0.017s (60.000 fps)

	Index       : 1
	Type        : Video Capture
	Pixel Format: 'GB12'
	Name        : 12-bit Bayer GBGB/RGRG
		Size: Discrete 1920x1080
			Interval: Discrete 0.033s (30.000 fps)
		Size: Discrete 1920x1080
			Interval: Discrete 0.033s (30.000 fps)
		Size: Discrete 1920x1080
			Interval: Discrete 0.033s (30.000 fps)
		Size: Discrete 1280x720
			Interval: Discrete 0.017s (60.000 fps)
		Size: Discrete 1280x720
			Interval: Discrete 0.017s (60.000 fps)

I ran into the following error even after changing the image format.

Code:

time gst-launch-1.0 v4l2src device=/dev/video0 num-buffers=1 ! "video/x-bayer, format=(string)gbrg, width=(int)1920, height=(int)1080" ! nvvidconv ! nvjpegenc ! filesink location=cam00.jpg

Output

WARNING: erroneous pipeline: could not link v4l2src0 to nvvconv0, nvvconv0 can't handle caps video/x-bayer, format=(string)gbrg, width=(int)1920, height=(int)1080

or if I ran this

time gst-launch-1.0 v4l2src device=/dev/video0 do-timestamp=true ! 'video/x-bayer, format=(string)rggb, width=(int)1920, height=(int)1080, framerate=(fraction)30/1' ! bayer2rgb ! videoconvert ! xvimagesink -ev

Output

Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Internal data stream error.
Additional debug info:
gstbasesrc.c(3055): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
streaming stopped, reason not-negotiated (-4)
EOS on shutdown enabled -- waiting for EOS after Error
Waiting for EOS...
New clock: GstSystemClock
Got EOS from element "pipeline0".
EOS received - stopping pipeline...
Execution ended after 0:00:00.000504416
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...

real	0m0,130s
user	0m0,068s
sys	0m0,008s
  1. How can I overcome this error?
  2. I have the same question as in suggestion 1: even if this succeeds, how do you incorporate this shell command in Python?

Thanks

Based on my short experience with MIPI and v4l2, v4l2 seems to be mostly used for USB cameras. I haven’t had any luck picking up any MIPI/CSI camera feeds with v4l2. I don’t know why/how it works with

v4l2-ctl --list-formats-ext -d /dev/video0

Hi Theesh,

The pipeline using v4l2src that I shared was intended as a demonstration/example; the sensors I used provide YUV, so there is no need to use the Jetson ISP in my case. I’m not aware of the specific stream format that you are capturing from the image sensor, so it is expected that the pipeline may not work as-is. When capturing with v4l2src you are bypassing NVIDIA’s camera capture libraries, which means you are capturing raw data from the sensor; you also have to be specific about the streaming format you are trying to capture when using the V4L2 interface directly.

To execute a shell command from a Python script, you could use something like this:

import os

pipeline = 'gst-launch-1.0 videotestsrc num-buffers=1 ! "video/x-raw, format=(string)UYVY, width=(int)1280, height=(int)800, framerate=(fraction)30/1" ! nvvidconv ! nvjpegenc ! filesink location=cam0.jpg videotestsrc num-buffers=1 ! "video/x-raw, format=(string)UYVY, width=(int)1280, height=(int)800, framerate=(fraction)30/1" ! nvvidconv ! nvjpegenc ! filesink location=cam1.jpg videotestsrc num-buffers=1 ! "video/x-raw, format=(string)UYVY, width=(int)1280, height=(int)800, framerate=(fraction)30/1" ! nvvidconv ! nvjpegenc ! filesink location=cam2.jpg videotestsrc num-buffers=1 ! "video/x-raw, format=(string)UYVY, width=(int)1280, height=(int)800, framerate=(fraction)30/1" ! nvvidconv ! nvjpegenc ! filesink location=cam3.jpg -e'

os.system(pipeline)
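A small variant of the os.system approach, using subprocess (Python 3), so the script can tell whether the pipeline actually succeeded (the `run_pipeline` helper is illustrative):

```python
import subprocess

def run_pipeline(pipeline):
    # shell=True lets the quoted caps strings be parsed exactly as in a terminal.
    # Returns True when gst-launch-1.0 exits with status 0.
    return subprocess.run(pipeline, shell=True).returncode == 0

# e.g. on the Jetson:
#   if not run_pipeline('gst-launch-1.0 videotestsrc num-buffers=1 ! fakesink'):
#       print('capture failed')
```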