Performance difference between nvarguscamerasrc pipeline and lib_argus api when grabbing images from...

Hi,

I would like to use multiple csi cameras on Xavier. Multiple images from cameras are then processed independently.

There are two approaches to obtain images:

  1. Using multiple nvarguscamerasrc pipelines but with different sensor id;
  2. Following the multi_cameras sample to implement a lib_argus api based application.
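For reference, approach 1 can be sketched as two independent `gst-launch-1.0` pipelines, one per sensor. This is only an illustration; the caps (resolution/framerate) and the `nvoverlaysink` display sink are assumptions and should be adjusted to your sensor modes and use-case:

```shell
# Approach 1 sketch: one nvarguscamerasrc pipeline per camera, selected by
# sensor-id. Caps and the display sink are placeholders; adjust as needed.
gst-launch-1.0 nvarguscamerasrc sensor-id=0 ! \
    'video/x-raw(memory:NVMM), width=1920, height=1080, framerate=30/1' ! \
    nvoverlaysink &
gst-launch-1.0 nvarguscamerasrc sensor-id=1 ! \
    'video/x-raw(memory:NVMM), width=1920, height=1080, framerate=30/1' ! \
    nvoverlaysink &
wait
```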

I am wondering whether there is a performance difference between these two approaches in terms of latency. Thx.

Best,
ywl22

hello ywl22,

may I know how many cameras you would like to use?
according to Jetson AGX Xavier Software Features, Jetson-Xavier supports 6-camera preview at 30-fps.

there’s no huge difference between multiple nvarguscamerasrc pipelines and the multi_cameras sample (Multi-Session), since both create a request and streams for each capture session.

suggest you also contact a Jetson Preferred Partner for multiple-camera solutions.
thanks

Hi Jerry,

Thx for your reply.
The desired number of cameras is 6 but 4 is also acceptable.

  1. Under a 4-camera setup, the argus_camera app grabs images from the cameras simultaneously, while GStreamer fails to do so. That is why I’d like to ask about the performance difference between these two approaches. Also, the camera supplier, an NVIDIA preferred partner, highly recommends using Argus for multiple cameras.

  2. The pipeline used is:

nvarguscamerasrc sensor-id=0 ! video/x-raw(memory:NVMM), width=1920, height=1080, format=(string)NV12, framerate=30/1 ! nvvidconv flip-method=0 ! video/x-raw, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR

Is there any way to accelerate the pipeline?

Thx.

hello ywl22,

that’s right, I suggest working with Argus for multiple-camera solutions.

however, may I know what your use-case is?
your pipeline involves two color format conversions, which could be the performance bottleneck.

suggest you check the L4T Accelerated GStreamer User Guide; see the [gst-nvivafilter] plugin for an implementation reference.
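As an illustration only: an nvivafilter pipeline keeps the buffers in NVMM memory and runs CUDA post-processing on them. The library name below is the sample CUDA post-process library shipped with L4T; substituting your own `.so` is an assumption left to the reader, and the caps/sinks should be adapted to your setup:

```shell
# Sketch of gst-nvivafilter usage (assumes the sample CUDA library
# libnvsample_cudaprocess.so from L4T; caps and sink are placeholders).
gst-launch-1.0 nvarguscamerasrc sensor-id=0 ! \
    'video/x-raw(memory:NVMM), width=1920, height=1080, framerate=30/1' ! \
    nvivafilter cuda-process=true customer-lib-name=libnvsample_cudaprocess.so ! \
    'video/x-raw(memory:NVMM), format=RGBA' ! \
    nvegltransform ! nveglglessink
```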
thanks

My project is based on ROS (https://www.ros.org/). Currently I am using gscam (https://github.com/ros-drivers/gscam) for grabbing images and publishing them as ROS topics. To convert the images into ROS topics (http://docs.ros.org/melodic/api/sensor_msgs/html/msg/Image.html), an RGB format is required.

Any recommendation for an efficient ROS driver? Or any efficient way to convert the color format?
thx.

hello ywl22,

please check the [VIDEO FORMAT CONVERSION WITH GSTREAMER-1.0] chapter of the L4T Accelerated GStreamer User Guide for details and samples on format conversion.
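One commonly discussed variant, sketched below: let nvvidconv do the NV12→BGRx conversion on the hardware converter, keep only the final BGRx→BGR step on the CPU (nvvidconv cannot output BGR directly, but gscam/sensor_msgs needs it), and add a queue so the CPU conversion runs in its own thread. The `appsink` tail stands in for what gscam uses internally; its properties here are an assumption, not a verified gscam configuration:

```shell
# Sketch: hardware NV12->BGRx via nvvidconv, CPU BGRx->BGR via videoconvert,
# decoupled by a queue. The appsink stands in for the gscam consumer.
gst-launch-1.0 nvarguscamerasrc sensor-id=0 ! \
    'video/x-raw(memory:NVMM), width=1920, height=1080, format=NV12, framerate=30/1' ! \
    nvvidconv ! 'video/x-raw, format=BGRx' ! \
    queue ! videoconvert ! 'video/x-raw, format=BGR' ! \
    appsink drop=true max-buffers=2
```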
thanks

Hi ywl22,

Did you find the best pipeline for publishing images in ROS from camera?

Thanks.