Problem with nvcompositor + nvarguscamerasrc

Please try the command below.

gst-launch-1.0 nvcompositor name=comp sink_0::xpos=0 sink_0::ypos=0 sink_0::width=320 sink_0::height=240 sink_1::xpos=320 sink_1::ypos=0 sink_1::width=320 sink_1::height=240 ! nvoverlaysink nvarguscamerasrc sensor-id=0 ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1' ! queue ! nvvidconv ! nvivafilter cuda-process=true customer-lib-name="libnvsample_cudaprocess.so" ! 'video/x-raw(memory:NVMM), format=(string)RGBA' ! comp. -e nvarguscamerasrc sensor-id=1 ! "video/x-raw(memory:NVMM),format=NV12,width=1920,height=1080,framerate=(fraction)30/1" ! queue ! nvivafilter cuda-process=true customer-lib-name="libnvsample_cudaprocess.so" ! 'video/x-raw(memory:NVMM), format=(string)RGBA' ! comp. -e

I tried your command.

Result:

jetson@jetson:~$ gst-launch-1.0 nvcompositor name=comp sink_0::xpos=0 sink_0::ypos=0 sink_0::width=320 sink_0::height=240 sink_1::xpos=320 sink_1::ypos=0 sink_1::width=320 sink_1::height=240 ! nvoverlaysink nvarguscamerasrc sensor-id=0 ! "video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1" ! queue ! nvvidconv ! nvivafilter cuda-process=true customer-lib-name="libnvsample_cudaprocess.so" ! "video/x-raw(memory:NVMM), format=(string)RGBA" ! comp. -e nvarguscamerasrc sensor-id=1 ! "video/x-raw(memory:NVMM),format=NV12,width=1920,height=1080,framerate=(fraction)30/1" ! queue ! nvivafilter cuda-process=true customer-lib-name="libnvsample_cudaprocess.so" ! "video/x-raw(memory:NVMM), format=(string)RGBA" ! comp. -e
nvbuf_utils: Could not get EGL display connection
Setting pipeline to PAUSED …
Pipeline is live and does not need PREROLL …
Setting pipeline to PLAYING …
New clock: GstSystemClock
GST_ARGUS: Creating output stream
GST_ARGUS: Creating output stream
CONSUMER: Waiting until producer is connected…
GST_ARGUS: Available Sensor modes :
GST_ARGUS: 1920 x 1080 FR = 29,999999 fps Duration = 33333334 ; Analog Gain range min 1,000000, max 31,622776; Exposure Range min 59000, max 33333000;

GST_ARGUS: Running with following settings:
Camera index = 1
Camera mode = 0
Output Stream W = 1920 H = 1080
seconds to Run = 0
Frame Rate = 29,999999
GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
CONSUMER: Producer has connected; continuing.
CONSUMER: Waiting until producer is connected…
GST_ARGUS: Available Sensor modes :
GST_ARGUS: 1920 x 1080 FR = 29,999999 fps Duration = 33333334 ; Analog Gain range min 1,000000, max 31,622776; Exposure Range min 59000, max 33333000;

GST_ARGUS: Running with following settings:
Camera index = 0
Camera mode = 0
Output Stream W = 1920 H = 1080
seconds to Run = 0
Frame Rate = 29,999999
GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
CONSUMER: Producer has connected; continuing.
nvbuf_utils: dmabuf_fd -1 mapped entry NOT found
nvbuf_utils: Can not get HW buffer from FD… Exiting…
ERROR: from element /GstPipeline:pipeline0/GstNvArgusCameraSrc:nvarguscamerasrc1: TIMEOUT
Additional debug info:
Argus Error Status
EOS on shutdown enabled – waiting for EOS after Error
Waiting for EOS…

(gst-launch-1.0:12798): GStreamer-CRITICAL **: 10:34:42.727: gst_mini_object_set_qdata: assertion ‘object != NULL’ failed

Your command worked, but it only gives a picture from one camera, and the other half is just a black square

jetson@jetson:~$ gst-launch-1.0 nvcompositor name=comp sink_0::xpos=0 sink_0::ypos=0 sink_0::width=320 sink_0::height=240 sink_1::xpos=320 sink_1::ypos=0 sink_1::width=320 sink_1::height=240 ! nvoverlaysink nvarguscamerasrc sensor-id=0 ! "video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1" ! queue ! nvvidconv ! nvivafilter cuda-process=true customer-lib-name="libnvsample_cudaprocess.so" ! "video/x-raw(memory:NVMM), format=(string)RGBA" ! comp. -e nvarguscamerasrc sensor-id=1 ! "video/x-raw(memory:NVMM),format=NV12,width=1920,height=1080,framerate=(fraction)30/1" ! queue ! nvivafilter cuda-process=true customer-lib-name="libnvsample_cudaprocess.so" ! "video/x-raw(memory:NVMM), format=(string)RGBA" ! comp. -e
Setting pipeline to PAUSED …
Pipeline is live and does not need PREROLL …
Setting pipeline to PLAYING …
New clock: GstSystemClock
Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, execute:649 Invalid camera device specified 1 specified, 0 max index

(gst-launch-1.0:9318): GStreamer-CRITICAL **: 10:50:27.482: gst_mini_object_set_qdata: assertion ‘object != NULL’ failed
GST_ARGUS: Creating output stream
CONSUMER: Waiting until producer is connected…
GST_ARGUS: Available Sensor modes :
GST_ARGUS: 1920 x 1080 FR = 29,999999 fps Duration = 33333334 ; Analog Gain range min 1,000000, max 31,622776; Exposure Range min 59000, max 33333000;

GST_ARGUS: Running with following settings:
Camera index = 0
Camera mode = 0
Output Stream W = 1920 H = 1080
seconds to Run = 0
Frame Rate = 29,999999
GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
CONSUMER: Producer has connected; continuing.
Redistribute latency…

jetson@jetson:~$ gst-launch-1.0 nvcompositor name=comp sink_0::xpos=0 sink_0::ypos=0 sink_0::width=320 sink_0::height=240 sink_1::xpos=320 sink_1::ypos=0 sink_1::width=320 sink_1::height=240 ! nvoverlaysink nvarguscamerasrc sensor-id=0 ! "video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1" ! queue ! nvvidconv ! nvivafilter cuda-process=true customer-lib-name="libnvsample_cudaprocess.so" ! "video/x-raw(memory:NVMM), format=(string)RGBA" ! comp. -e nvarguscamerasrc sensor-id=1 ! "video/x-raw(memory:NVMM),format=NV12,width=1920,height=1080,framerate=(fraction)30/1" ! queue ! nvivafilter cuda-process=true customer-lib-name="libnvsample_cudaprocess.so" ! "video/x-raw(memory:NVMM), format=(string)RGBA" ! comp. -e
nvbuf_utils: Could not get EGL display connection
Setting pipeline to PAUSED …
Pipeline is live and does not need PREROLL …
Setting pipeline to PLAYING …
New clock: GstSystemClock
GST_ARGUS: Creating output stream
GST_ARGUS: Creating output stream
CONSUMER: Waiting until producer is connected…
GST_ARGUS: Available Sensor modes :
GST_ARGUS: 1920 x 1080 FR = 29,999999 fps Duration = 33333334 ; Analog Gain range min 1,000000, max 31,622776; Exposure Range min 59000, max 33333000;

CONSUMER: Waiting until producer is connected…
GST_ARGUS: Running with following settings:
Camera index = 1
Camera mode = 0
Output Stream W = 1920 H = 1080
seconds to Run = 0
Frame Rate = 29,999999
GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
GST_ARGUS: Available Sensor modes :
GST_ARGUS: 1920 x 1080 FR = 29,999999 fps Duration = 33333334 ; Analog Gain range min 1,000000, max 31,622776; Exposure Range min 59000, max 33333000;

GST_ARGUS: Running with following settings:
Camera index = 0
Camera mode = 0
Output Stream W = 1920 H = 1080
seconds to Run = 0
Frame Rate = 29,999999
GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
CONSUMER: Producer has connected; continuing.
CONSUMER: Producer has connected; continuing.
nvbuf_utils: dmabuf_fd -1 mapped entry NOT found
nvbuf_utils: Can not get HW buffer from FD… Exiting…
ERROR: from element /GstPipeline:pipeline0/GstNvArgusCameraSrc:nvarguscamerasrc1: TIMEOUT
Additional debug info:
Argus Error Status
EOS on shutdown enabled – waiting for EOS after Error
Waiting for EOS…

(gst-launch-1.0:9241): GStreamer-CRITICAL **: 18:59:58.932: gst_mini_object_set_qdata: assertion ‘object != NULL’ failed

My guess would be that both streams end up on the same compositor input (sink_0). You may add the sink pad name explicitly:

gst-launch-1.0 nvcompositor name=comp sink_0::xpos=0 sink_0::ypos=0 sink_0::width=320 sink_0::height=240 sink_1::xpos=320 sink_1::ypos=0 sink_1::width=320 sink_1::height=240 ! nvoverlaysink nvarguscamerasrc sensor-id=0 ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1' ! queue ! nvvidconv ! nvivafilter cuda-process=true customer-lib-name="libnvsample_cudaprocess.so" ! 'video/x-raw(memory:NVMM), format=(string)RGBA' ! comp.sink_1 -e      nvarguscamerasrc sensor-id=1 ! "video/x-raw(memory:NVMM),format=NV12,width=1920,height=1080,framerate=(fraction)30/1" ! queue ! nvivafilter cuda-process=true customer-lib-name="libnvsample_cudaprocess.so" ! 'video/x-raw(memory:NVMM), format=(string)RGBA' ! comp.sink_0 -e
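
For scripted tests, the same explicit pad mapping can also be expressed through the GStreamer Python bindings. The sketch below is only illustrative: it assumes the python-gi and GStreamer Python packages are installed, and it omits the nvivafilter stage for brevity.

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib

Gst.init(None)

# Same idea as the gst-launch command above, with explicit comp.sink_0 /
# comp.sink_1 pad names (nvivafilter omitted to keep the sketch short).
PIPELINE = (
    "nvcompositor name=comp "
    "sink_0::xpos=0 sink_0::ypos=0 sink_0::width=320 sink_0::height=240 "
    "sink_1::xpos=320 sink_1::ypos=0 sink_1::width=320 sink_1::height=240 ! nvoverlaysink "
    "nvarguscamerasrc sensor-id=0 ! video/x-raw(memory:NVMM),width=1920,height=1080,format=NV12,framerate=30/1 "
    "! queue ! nvvidconv ! video/x-raw(memory:NVMM),format=RGBA ! comp.sink_0 "
    "nvarguscamerasrc sensor-id=1 ! video/x-raw(memory:NVMM),width=1920,height=1080,format=NV12,framerate=30/1 "
    "! queue ! nvvidconv ! video/x-raw(memory:NVMM),format=RGBA ! comp.sink_1"
)

pipeline = Gst.parse_launch(PIPELINE)
pipeline.set_state(Gst.State.PLAYING)
loop = GLib.MainLoop()
try:
    loop.run()                       # Ctrl-C to stop
except KeyboardInterrupt:
    pass
finally:
    pipeline.set_state(Gst.State.NULL)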

Also check the command below.

gst-launch-1.0 nvarguscamerasrc sensor-id=0 ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080' ! nvvidconv ! 'video/x-raw(memory:NVMM),format=RGBA' ! comp.sink_0  nvarguscamerasrc sensor-id=1 ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080' ! nvvidconv ! 'video/x-raw(memory:NVMM),format=RGBA' ! comp.sink_1  nvcompositor name=comp sink_0::alpha=0.5 sink_1::alpha=0.5 ! nvoverlaysink sync=false -e

I tried it. The output on the screen is black, i.e. there is no picture. This is what it writes to the terminal:

jetson@jetson:~$ gst-launch-1.0 nvarguscamerasrc sensor-id=0 ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080' ! nvvidconv ! 'video/x-raw(memory:NVMM),format=RGBA' ! comp.sink_0 nvarguscamerasrc sensor-id=1 ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080' ! nvvidconv ! 'video/x-raw(memory:NVMM),format=RGBA' ! comp.sink_1 nvcompositor name=comp sink_0::alpha=0.5 sink_1::alpha=0.5 ! nvoverlaysink sync=false -e
nvbuf_utils: Could not get EGL display connection
Setting pipeline to PAUSED …
Pipeline is live and does not need PREROLL …
Setting pipeline to PLAYING …
New clock: GstSystemClock
GST_ARGUS: Creating output stream
GST_ARGUS: Creating output stream
CONSUMER: Waiting until producer is connected…
GST_ARGUS: Available Sensor modes :
GST_ARGUS: 1920 x 1080 FR = 29,999999 fps Duration = 33333334 ; Analog Gain range min 1,000000, max 31,622776; Exposure Range min 59000, max 33333000;

GST_ARGUS: Running with following settings:
Camera index = 0
Camera mode = 0
Output Stream W = 1920 H = 1080
seconds to Run = 0
Frame Rate = 29,999999
GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
CONSUMER: Waiting until producer is connected…
GST_ARGUS: Available Sensor modes :
GST_ARGUS: 1920 x 1080 FR = 29,999999 fps Duration = 33333334 ; Analog Gain range min 1,000000, max 31,622776; Exposure Range min 59000, max 33333000;

GST_ARGUS: Running with following settings:
Camera index = 1
Camera mode = 0
Output Stream W = 1920 H = 1080
seconds to Run = 0
Frame Rate = 29,999999
GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
CONSUMER: Producer has connected; continuing.
CONSUMER: Producer has connected; continuing.
nvbuf_utils: dmabuf_fd -1 mapped entry NOT found
nvbuf_utils: Can not get HW buffer from FD… Exiting…
ERROR: from element /GstPipeline:pipeline0/GstNvArgusCameraSrc:nvarguscamerasrc0: TIMEOUT
Additional debug info:
Argus Error Status
EOS on shutdown enabled – waiting for EOS after Error
Waiting for EOS…

(gst-launch-1.0:10896): GStreamer-CRITICAL **: 09:09:37.795: gst_mini_object_set_qdata: assertion ‘object != NULL’ failed
Redistribute latency…

jetson@jetson:~$ gst-launch-1.0 nvcompositor name=comp sink_0::xpos=0 sink_0::ypos=0 sink_0::width=320 sink_0::height=240 sink_1::xpos=320 sink_1::ypos=0 sink_1::width=320 sink_1::height=240 ! nvoverlaysink nvarguscamerasrc sensor-id=0 ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1' ! queue ! nvvidconv ! nvivafilter cuda-process=true customer-lib-name="libnvsample_cudaprocess.so" ! 'video/x-raw(memory:NVMM), format=(string)RGBA' ! comp.sink_1 -e nvarguscamerasrc sensor-id=1 ! "video/x-raw(memory:NVMM),format=NV12,width=1920,height=1080,framerate=(fraction)30/1" ! queue ! nvivafilter cuda-process=true customer-lib-name="libnvsample_cudaprocess.so" ! 'video/x-raw(memory:NVMM), format=(string)RGBA' ! comp.sink_0 -e
nvbuf_utils: Could not get EGL display connection
Setting pipeline to PAUSED …
Pipeline is live and does not need PREROLL …
Setting pipeline to PLAYING …
New clock: GstSystemClock
GST_ARGUS: Creating output stream
GST_ARGUS: Creating output stream
CONSUMER: Waiting until producer is connected…
GST_ARGUS: Available Sensor modes :
GST_ARGUS: 1920 x 1080 FR = 29,999999 fps Duration = 33333334 ; Analog Gain range min 1,000000, max 31,622776; Exposure Range min 59000, max 33333000;

GST_ARGUS: Running with following settings:
Camera index = 1
Camera mode = 0
Output Stream W = 1920 H = 1080
seconds to Run = 0
Frame Rate = 29,999999
GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
CONSUMER: Waiting until producer is connected…
GST_ARGUS: Available Sensor modes :
GST_ARGUS: 1920 x 1080 FR = 29,999999 fps Duration = 33333334 ; Analog Gain range min 1,000000, max 31,622776; Exposure Range min 59000, max 33333000;

GST_ARGUS: Running with following settings:
Camera index = 0
Camera mode = 0
Output Stream W = 1920 H = 1080
seconds to Run = 0
Frame Rate = 29,999999
GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
CONSUMER: Producer has connected; continuing.
CONSUMER: Producer has connected; continuing.
nvbuf_utils: dmabuf_fd -1 mapped entry NOT found
nvbuf_utils: Can not get HW buffer from FD… Exiting…
ERROR: from element /GstPipeline:pipeline0/GstNvArgusCameraSrc:nvarguscamerasrc1: TIMEOUT
Additional debug info:
Argus Error Status
EOS on shutdown enabled – waiting for EOS after Error
Waiting for EOS…

(gst-launch-1.0:11056): GStreamer-CRITICAL **: 09:15:36.793: gst_mini_object_set_qdata: assertion ‘object != NULL’ failed

Do both cameras work independently and concurrently? (Note this assumes your local monitor can do 1080p.)

gst-launch-1.0 -v nvarguscamerasrc sensor_id=0 num-buffers=150 ! 'video/x-raw(memory:NVMM), width=1920, height=1080, format=(string)NV12, framerate=(fraction)30/1' ! nvvidconv flip-method=2 ! nvoverlaysink

gst-launch-1.0 -v nvarguscamerasrc sensor_id=1 num-buffers=150 ! 'video/x-raw(memory:NVMM), width=1920, height=1080, format=(string)NV12, framerate=(fraction)30/1' ! nvvidconv flip-method=2 ! nvoverlaysink

gst-launch-1.0 -v nvarguscamerasrc sensor_id=0 num-buffers=150 ! 'video/x-raw(memory:NVMM), width=1920, height=1080, format=(string)NV12, framerate=(fraction)30/1' ! nvvidconv flip-method=2 ! nvoverlaysink overlay=1    nvarguscamerasrc sensor_id=1 num-buffers=150 ! 'video/x-raw(memory:NVMM), width=1920, height=1080, format=(string)NV12, framerate=(fraction)30/1' ! nvvidconv flip-method=2 ! nvoverlaysink overlay=2  

The last command may only show the second camera overlaying the first one, but this is just for checking that both nvarguscamerasrc instances work before using nvcompositor.

The first and second commands work. Launched one at a time, each camera starts and gives a picture.

The third command does not work and outputs this log:

jetson@jetson:~$ gst-launch-1.0 -v nvarguscamerasrc sensor_id=0 num-buffers=150 ! "video/x-raw(memory:NVMM), width=1920, height=1080, format=(string)NV12, framerate=(fraction)30/1" ! nvvidconv flip-method=2 ! nvoverlaysink overlay=1 nvarguscamerasrc sensor_id=1 num-buffers=150 ! "video/x-raw(memory:NVMM), width=1920, height=1080, format=(string)NV12, framerate=(fraction)30/1" ! nvvidconv flip-method=2 ! nvoverlaysink overlay=2
nvbuf_utils: Could not get EGL display connection
Setting pipeline to PAUSED …
Pipeline is live and does not need PREROLL …
Setting pipeline to PLAYING …
New clock: GstSystemClock
/GstPipeline:pipeline0/GstNvArgusCameraSrc:nvarguscamerasrc0.GstPad:src: caps = video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1
/GstPipeline:pipeline0/GstCapsFilter:capsfilter0.GstPad:src: caps = video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1
/GstPipeline:pipeline0/Gstnvvconv:nvvconv0.GstPad:src: caps = video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1
/GstPipeline:pipeline0/GstNvArgusCameraSrc:nvarguscamerasrc1.GstPad:src: caps = video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1
/GstPipeline:pipeline0/GstCapsFilter:capsfilter1.GstPad:src: caps = video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1
/GstPipeline:pipeline0/GstNvOverlaySink-nvoverlaysink:nvoverlaysink-nvoverlaysink0.GstPad:sink: caps = video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1
/GstPipeline:pipeline0/Gstnvvconv:nvvconv0.GstPad:sink: caps = video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1
/GstPipeline:pipeline0/GstCapsFilter:capsfilter0.GstPad:sink: caps = video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1
/GstPipeline:pipeline0/GstNvOverlaySink-nvoverlaysink:nvoverlaysink-nvoverlaysink1.GstPad:sink: caps = video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1
/GstPipeline:pipeline0/Gstnvvconv:nvvconv1.GstPad:sink: caps = video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1
/GstPipeline:pipeline0/GstCapsFilter:capsfilter1.GstPad:sink: caps = video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1
GST_ARGUS: Creating output stream
GST_ARGUS: Creating output stream
CONSUMER: Waiting until producer is connected…
GST_ARGUS: Available Sensor modes :
GST_ARGUS: 1920 x 1080 FR = 29,999999 fps Duration = 33333334 ; Analog Gain range min 1,000000, max 31,622776; Exposure Range min 59000, max 33333000;

GST_ARGUS: Running with following settings:
Camera index = 1
Camera mode = 0
Output Stream W = 1920 H = 1080
seconds to Run = 0
Frame Rate = 29,999999
GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
CONSUMER: Waiting until producer is connected…
GST_ARGUS: Available Sensor modes :
GST_ARGUS: 1920 x 1080 FR = 29,999999 fps Duration = 33333334 ; Analog Gain range min 1,000000, max 31,622776; Exposure Range min 59000, max 33333000;

GST_ARGUS: Running with following settings:
Camera index = 0
Camera mode = 0
Output Stream W = 1920 H = 1080
seconds to Run = 0
Frame Rate = 29,999999
GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
CONSUMER: Producer has connected; continuing.
CONSUMER: Producer has connected; continuing.
nvbuf_utils: dmabuf_fd -1 mapped entry NOT found
nvbuf_utils: Can not get HW buffer from FD… Exiting…
ERROR: from element /GstPipeline:pipeline0/GstNvArgusCameraSrc:nvarguscamerasrc1: TIMEOUT
Additional debug info:
Argus Error Status
Execution ended after 0:00:02.777882912
Setting pipeline to PAUSED …
Setting pipeline to READY …

(gst-launch-1.0:9817): GStreamer-CRITICAL **: 08:44:41.916: gst_mini_object_set_qdata: assertion ‘object != NULL’ failed
GST_ARGUS: Cleaning up
CONSUMER: Done Success
GST_ARGUS: Cleaning up
GST_ARGUS: Done Success
Setting pipeline to NULL …
Freeing pipeline …

The monitor supports a resolution of 1920x1080.

What’s your BSP version?

JetPack 4.5
L4T 32.5

R32 (release), REVISION: 5.0 (/etc/nv_tegra_release)
CTI version AGX-32.5-V004 (/etc/cti/CTI-L4T.version)

Please try this patch.

Thank you, it helped. Both cameras are now displayed on the screen, but the images are superimposed on one another.

@ShaneCCC @kayccc @Honey_Patouceul Could you help some more?

This command works from the terminal and displays both cameras on the screen, although it overlays one image on the other.

gst-launch-1.0 nvarguscamerasrc sensor-id=0 ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080' ! nvvidconv ! 'video/x-raw(memory:NVMM),format=RGBA' ! comp.sink_0 nvarguscamerasrc sensor-id=1 ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080' ! nvvidconv ! 'video/x-raw(memory:NVMM),format=RGBA' ! comp.sink_1 nvcompositor name=comp sink_0::alpha=0.5 sink_1::alpha=0.5 ! nvoverlaysink sync=false -e

But when I insert this command into the code, it does not work. Here is a code snippet.

#!/usr/bin/env python

import numpy as np
import rospy
from std_msgs.msg import String
from sensor_msgs.msg import Image
from cv_bridge import CvBridge, CvBridgeError
import gi
import sys
import argparse
import subprocess
sys.path.insert(1, '/usr/local/lib/python2.7/site-packages')
import cv2
print(cv2.__version__)
gi.require_version('Gst', '1.0')
from gi.repository import GObject, Gst

g1_str = "nvarguscamerasrc sensor-id=0 ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080' ! nvvidconv ! 'video/x-raw(memory:NVMM),format=RGBA' ! comp.sink_0"
g2_str = "nvarguscamerasrc sensor-id=1 ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080' ! nvvidconv ! 'video/x-raw(memory:NVMM),format=RGBA' ! comp.sink_1"
nv_str = "nvcompositor name=comp sink_0::alpha=0.5 sink_1::alpha=0.5 ! nvoverlaysink sync=false -e"

def open_cam_onboard(width, height):
    gst_elements = str(subprocess.check_output('gst-inspect-1.0'))

    gst_str = g1_str + " " + g2_str + " " + nv_str  # + " ! videoconvert ! appsink"

    print("______")
    print(gst_str)
    print("______")

    # gst_str = ('nvarguscamerasrc sensor-id=1 ! '
    #            'video/x-raw(memory:NVMM), '
    #            'width=(int)1920, height=(int)1080, '
    #            'format=(string)NV12, framerate=(fraction)30/1 ! '
    #            'nvvidconv flip-method=0 ! '
    #            'video/x-raw, width=(int){}, height=(int){}, '
    #            'format=(string)BGRx ! '
    #            'videoconvert ! appsink').format(width, height)

    return cv2.VideoCapture(gst_str, cv2.CAP_GSTREAMER)

def gCamera():
    bridge = CvBridge()
    cap = open_cam_onboard(1920, 1080)

    if not cap.isOpened():
        sys.exit('Failed to open camera!')

    pub = rospy.Publisher('li_camera_raw', Image, queue_size=30)
    rospy.init_node('camera_driver', anonymous=True)

    while not rospy.is_shutdown():
        t, img = cap.read()  # grab the next image frame from camera
        try:
            pub.publish(bridge.cv2_to_imgmsg(img, "bgr8"))
        except CvBridgeError as e:
            print(e)

    cap.release()
    cv2.destroyAllWindows()

if __name__ == '__main__':
    try:
        gCamera()
    except rospy.ROSInterruptException:
        pass

Log:

jetson@jetson:~$ roslaunch camera_driver debug_camera.launch
… logging to /home/jetson/.ros/log/94b24eda-0411-11ec-948f-8f35fecbba5a/roslaunch-jetson-19450.log
Checking log directory for disk usage. This may take a while.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.

started roslaunch server http://jetson:42429/

SUMMARY

PARAMETERS

  • /rosdistro: melodic
  • /rosversion: 1.14.11

NODES
/
camera_driver (camera_driver/camera_driver.py)

ROS_MASTER_URI=http://10.211.1.101:11311/

process[camera_driver-1]: started with pid [19459]
4.1.1


nvarguscamerasrc sensor-id=0 ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080' ! nvvidconv ! 'video/x-raw(memory:NVMM),format=RGBA' ! comp.sink_0 nvarguscamerasrc sensor-id=1 ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080' ! nvvidconv ! 'video/x-raw(memory:NVMM),format=RGBA' ! comp.sink_1 nvcompositor name=comp sink_0::alpha=0.5 sink_1::alpha=0.5 ! nvoverlaysink sync=false -e


nvbuf_utils: Could not get EGL display connection

(python:19459): GStreamer-CRITICAL **: 18:18:06.710: gst_element_make_from_uri: assertion ‘gst_uri_is_valid (uri)’ failed

(python:19459): GStreamer-CRITICAL **: 18:18:06.713: gst_element_make_from_uri: assertion ‘gst_uri_is_valid (uri)’ failed

(python:19459): GStreamer-CRITICAL **: 18:18:06.713: gst_element_make_from_uri: assertion ‘gst_uri_is_valid (uri)’ failed

(python:19459): GStreamer-CRITICAL **: 18:18:06.713: gst_element_make_from_uri: assertion ‘gst_uri_is_valid (uri)’ failed
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (711) open OpenCV | GStreamer warning: Error opening bin: syntax error
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (480) isPipelinePlaying OpenCV | GStreamer warning: GStreamer: pipeline have not been created
Failed to open camera!
[camera_driver-1] process has died [pid 19459, exit code 1, cmd /home/jetson/catkin_ws/src/camera_driver/scripts/camera_driver.py __name:=camera_driver __log:=/home/jetson/.ros/log/94b24eda-0411-11ec-948f-8f35fecbba5a/camera_driver-1.log].
log file: /home/jetson/.ros/log/94b24eda-0411-11ec-948f-8f35fecbba5a/camera_driver-1.log
all processes on machine have died, roslaunch will exit
shutting down processing monitor…
… shutting down processing monitor complete
done

I tried to enable appsink - it does not help.

Now that you have applied the patch and both cameras can be used simultaneously, does the command in post#2 work?

If yes, then to use the composed image from OpenCV, you would have to convert the compositor output into BGR:

gst_str= "nvcompositor name=nvcomp sink_0::xpos=0 sink_0::ypos=0 sink_0::width=1920 sink_0::height=1080 sink_1::xpos=1920 sink_1::ypos=0 sink_1::width=1920 ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! appsink drop=1     nvarguscamerasrc sensor_id=0 ! video/x-raw(memory:NVMM), width=1920, height=1080, format=NV12, framerate=30/1 ! nvvidconv flip-method=2 ! video/x-raw(memory:NVMM),format=RGBA ! nvcomp.sink_1      nvarguscamerasrc sensor_id=1 ! video/x-raw(memory:NVMM), width=1920, height=1080, format=NV12, framerate=30/1 ! nvvidconv flip-method=2 ! video/x-raw(memory:NVMM),format=RGBA ! nvcomp.sink_0"
cap = cv2.VideoCapture(gst_str, cv2.CAP_GSTREAMER)
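
Continuing from that capture, a minimal read loop could look like the sketch below (illustrative only; it assumes the pipeline opened successfully and simply displays the composed BGR frame):

import cv2

# cap is the VideoCapture created above from gst_str.
if not cap.isOpened():
    raise SystemExit('Failed to open the compositor pipeline')

while True:
    ret, frame = cap.read()      # composed side-by-side BGR image
    if not ret:
        break                    # pipeline stopped or timed out
    cv2.imshow('composite', frame)
    if cv2.waitKey(1) == 27:     # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()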

@Honey_Patouceul The command from the second post opens a black screen and gives this output in the console

jetson@jetson:~$ gst-launch-1.0 nvcompositor name=nvcomp sink_0::xpos=0 sink_0::ypos=0 sink_0::width=1920 sink_0::height=1080 sink_1::xpos=1920 sink_1::ypos=0 sink_1::width=1920 ! nvvidconv ! 'video/x-raw(memory:NVMM), format=I420, width=3840, height=1080' ! nv3dsink nvarguscamerasrc sensor_id=0 ! 'video/x-raw(memory:NVMM), width=1920, height=1080, format=NV12, framerate=30/1' ! nvvidconv flip-method=2 ! 'video/x-raw(memory:NVMM),format=RGBA' ! nvcomp.sink_1 nvarguscamerasrc sensor_id=1 ! 'video/x-raw(memory:NVMM), width=1920, height=1080, format=NV12, framerate=30/1' ! nvvidconv flip-method=2 ! 'video/x-raw(memory:NVMM),format=RGBA' ! nvcomp.sink_0
nvbuf_utils: Could not get EGL display connection
Setting pipeline to PAUSED …
Pipeline is live and does not need PREROLL …
Setting pipeline to PLAYING …
New clock: GstSystemClock
Error generated. /media/snchen/project/project/32.5/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, execute:649 Invalid camera device specified 1 specified, 0 max index

(gst-launch-1.0:9641): GStreamer-CRITICAL **: 18:45:14.684: gst_mini_object_set_qdata: assertion ‘object != NULL’ failed
GST_ARGUS: Creating output stream
CONSUMER: Waiting until producer is connected…
GST_ARGUS: Available Sensor modes :
GST_ARGUS: 1920 x 1080 FR = 29,999999 fps Duration = 33333334 ; Analog Gain range min 1,000000, max 31,622776; Exposure Range min 59000, max 33333000;

GST_ARGUS: Running with following settings:
Camera index = 0
Camera mode = 0
Output Stream W = 1920 H = 1080
seconds to Run = 0
Frame Rate = 29,999999
GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
CONSUMER: Producer has connected; continuing.
Redistribute latency…
Caught SIGSEGV
#0 0x0000007f93bfce28 in __GI___poll (fds=0x55874c9620, nfds=547940770696, timeout=) at …/sysdeps/unix/sysv/linux/poll.c:41
#1 0x0000007f93d09f58 in () at /usr/lib/aarch64-linux-gnu/libglib-2.0.so.0
#2 0x00000055873582e0 in ()
Spinning. Please run ‘gdb gst-launch-1.0 9641’ to continue debugging, Ctrl-C to quit, or Ctrl-\ to dump core.

nvbuf_utils: Could not get EGL display connection
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (711) open OpenCV | GStreamer warning: Error opening bin: could not link nvvconv1 to nvcomp, nvcomp can’t handle caps video/x-raw(memory:NVMM), format=(string)RGBA
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (480) isPipelinePlaying OpenCV | GStreamer warning: GStreamer: pipeline have not been created
Failed to open camera!
[camera_driver-1] process has died [pid 9057, exit code 1, cmd /home/jetson/catkin_ws/src/camera_driver/scripts/camera_driver.py __name:=camera_driver __log:=/home/jetson/.ros/log/94b24eda-0411-11ec-948f-8f35fecbba5a/camera_driver-1.log].
log file: /home/jetson/.ros/log/94b24eda-0411-11ec-948f-8f35fecbba5a/camera_driver-1*.log
all processes on machine have died, roslaunch will exit
shutting down processing monitor…
… shutting down processing monitor complete
done

It seems argus has only found one camera. Does it still work with nvoverlaysink?

[EDIT: Some wrong commands can crash argus. You may use the following to restart it before a new trial, until you get a working command:

sudo systemctl restart nvargus-daemon.service

]
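
If you want to do that restart from a script before retrying the capture, a small helper along these lines may work (a sketch only, assuming the process is allowed to run systemctl through sudo without a password prompt):

import subprocess
import time

def restart_argus_daemon(settle_seconds=2):
    # Restart the Argus daemon and give it a moment to re-enumerate the cameras.
    subprocess.check_call(['sudo', 'systemctl', 'restart', 'nvargus-daemon.service'])
    time.sleep(settle_seconds)

# Usage (hypothetical): restart the daemon, then retry opening the pipeline.
# restart_argus_daemon()
# cap = cv2.VideoCapture(gst_str, cv2.CAP_GSTREAMER)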

@Honey_Patouceul I wrote a simple program based on your post, but the program still gives an error.

Code:

#!/usr/bin/env python

import sys
import argparse
import subprocess

import cv2

WINDOW_NAME = 'CameraDemo'

def open_cam_onboard():
    gst_str = "nvcompositor name=nvcomp sink_0::xpos=0 sink_0::ypos=0 sink_0::width=1920 sink_0::height=1080 sink_1::xpos=1920 sink_1::ypos=0 sink_1::width=1920 ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! appsink drop=1 nvarguscamerasrc sensor_id=0 ! video/x-raw(memory:NVMM), width=1920, height=1080, format=NV12, framerate=30/1 ! nvvidconv flip-method=2 ! video/x-raw(memory:NVMM),format=RGBA ! nvcomp.sink_1 nvarguscamerasrc sensor_id=1 ! video/x-raw(memory:NVMM), width=1920, height=1080, format=NV12, framerate=30/1 ! nvvidconv flip-method=2 ! video/x-raw(memory:NVMM),format=RGBA ! nvcomp.sink_0"
    return cv2.VideoCapture(gst_str, cv2.CAP_GSTREAMER)

def open_window(width, height):
    cv2.namedWindow(WINDOW_NAME, cv2.WINDOW_NORMAL)
    cv2.resizeWindow(WINDOW_NAME, width, height)
    cv2.moveWindow(WINDOW_NAME, 0, 0)
    cv2.setWindowTitle(WINDOW_NAME, 'Camera Demo for Jetson TX2/TX1')

def read_cam(cap):
    show_help = True
    full_scrn = False
    help_text = '"Esc" to Quit, "H" for Help, "F" to Toggle Fullscreen'
    font = cv2.FONT_HERSHEY_PLAIN
    while True:
        if cv2.getWindowProperty(WINDOW_NAME, 0) < 0:
            # Check to see if the user has closed the window
            # If yes, terminate the program
            break
        _, img = cap.read()  # grab the next image frame from camera
        if show_help:
            cv2.putText(img, help_text, (11, 20), font,
                        1.0, (32, 32, 32), 4, cv2.LINE_AA)
            cv2.putText(img, help_text, (10, 20), font,
                        1.0, (240, 240, 240), 1, cv2.LINE_AA)
        cv2.imshow(WINDOW_NAME, img)
        key = cv2.waitKey(10)
        if key == 27:  # ESC key: quit program
            break
        elif key == ord('H') or key == ord('h'):  # toggle help message
            show_help = not show_help
        elif key == ord('F') or key == ord('f'):  # toggle fullscreen
            full_scrn = not full_scrn
            if full_scrn:
                cv2.setWindowProperty(WINDOW_NAME, cv2.WND_PROP_FULLSCREEN,
                                      cv2.WINDOW_FULLSCREEN)
            else:
                cv2.setWindowProperty(WINDOW_NAME, cv2.WND_PROP_FULLSCREEN,
                                      cv2.WINDOW_NORMAL)

def main():
    print('OpenCV version: {}'.format(cv2.__version__))

    cap = open_cam_onboard()

    if not cap.isOpened():
        sys.exit('Failed to open camera!')

    open_window(1920, 1080)
    read_cam(cap)

    cap.release()
    cv2.destroyAllWindows()

if __name__ == '__main__':
    main()

Error:

OpenCV version: 4.1.1
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (711) open OpenCV | GStreamer warning: Error opening bin: could not link nvvconv1 to nvcomp, nvcomp can’t handle caps video/x-raw(memory:NVMM), format=(string)RGBA
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (480) isPipelinePlaying OpenCV | GStreamer warning: GStreamer: pipeline have not been created
Failed to open camera!