I’m trying to make a UDP stream in GStreamer from two cameras using nvcamerasrc as the source, but I don't know what's wrong.
gstreamer pipeline:
gst-launch-1.0 -e \
videomixer name=mix sink_0::xpos=0 sink_1::xpos=1920 \
! jpegenc ! rtpjpegpay ! udpsink host=172.16.1.1 port=5000 \
nvcamerasrc sensor-id=0 fpsRange="30 30" \
! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1' \
! mix.sink_1 \
nvcamerasrc sensor-id=2 fpsRange="30 30" \
! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1' \
! mix.sink_0
The error I get is:
WARNING: erroneous pipeline: could not link nvcamerasrc0 to mix
My setup is a Jetson TX1 with a J20 module and two IMX219 sensors. I know each camera works on its own, but not both together.
I would be grateful if someone could help me.
It may be a memory space issue: nvcamerasrc outputs into NVMM memory, but videomixer only accepts buffers in CPU (system) memory on its sink pads.
You may check these with:
gst-inspect-1.0 videomixer
and look for its sink capabilities: it only accepts video/x-raw,
while:
gst-inspect-1.0 nvcamerasrc
will show that nvcamerasrc's src pad only produces video/x-raw(memory:NVMM).
You can insert nvvidconv after each nvcamerasrc in your pipeline to copy the frames from NVMM into system memory. Alternatively, use a different source such as v4l2src, which outputs into standard memory, though you may then have to convert from your sensor's native format into one that videomixer accepts.
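For example, your two-camera pipeline with an nvvidconv (plus a system-memory caps filter) inserted before each mixer sink might look like this; I can't test it myself without the J20 module, and the host, port, and sensor-ids are just taken from your command:

```shell
gst-launch-1.0 -e \
videomixer name=mix sink_0::xpos=0 sink_1::xpos=1920 \
! jpegenc ! rtpjpegpay ! udpsink host=172.16.1.1 port=5000 \
nvcamerasrc sensor-id=0 fpsRange="30 30" \
! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1' \
! nvvidconv \
! 'video/x-raw, format=(string)I420' \
! mix.sink_1 \
nvcamerasrc sensor-id=2 fpsRange="30 30" \
! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1' \
! nvvidconv \
! 'video/x-raw, format=(string)I420' \
! mix.sink_0
```

Note that mixing two 1080p streams on the CPU and JPEG-encoding the 3840x1080 composite may be heavy; if performance is an issue, lowering the resolution before the mixer is worth trying.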
You may also use -v to get more verbose output from gst-launch.
I only have the onboard camera with me right now, but this works on my TX2:
gst-launch-1.0 -ev \
videomixer name=mix sink_0::xpos=0 sink_1::xpos=640 ! xvimagesink \
nvcamerasrc sensor-id=0 \
! 'video/x-raw(memory:NVMM), width=(int)640, height=(int)480, format=(string)I420, framerate=(fraction)30/1' \
! nvvidconv ! 'video/x-raw, format=I420' ! mix.sink_0 \
videotestsrc \
! 'video/x-raw, width=(int)640, height=(int)480, format=(string)I420, framerate=(fraction)30/1' \
! mix.sink_1