Multiple streams from one camera via gstreamer

Hi.

I want to stream two streams from one camera (on a Jetson Nano). The first stream should be small, e.g. 320 x 240 px, and the second 1920 x 1080 px.
I successfully stream one video track using gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM), ...'.
When I try to add another stream with different parameters but from the same source, I get an error:
Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, execute:568 Failed to create CaptureSession

How can one stream multiple streams from one camera using gstreamer?

The camera cannot be opened twice, but you can duplicate the stream with tee:

gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM), width=1920, height=1080' ! tee name=t ! queue ! nvv4l2h264enc insert-sps-pps=1 ! rtph264pay ! udpsink host=<tgtIP> port=<tgtPort>      t. ! queue ! nvvidconv ! 'video/x-raw(memory:NVMM), width=320, height=240' ! nvv4l2h264enc insert-sps-pps=1 ! rtph264pay ! udpsink host=<tgtIP2> port=<tgtPort2>

@Honey_Patouceul, thank you very much for your help.
Do you have an idea how I can dynamically change the parameters of the gstreamer pipeline? For example, I have two streams implemented in the way that you suggested. I want one stream with constant parameters (e.g. full HD resolution), and the second stream switching between 320 x 240 px and 500 x 300 px.

I don’t think that the NVENC or NVDEC plugins support dynamic resolution change (CPU codecs may be able to manage it, but with poor performance on Jetson). You would have to program your gstreamer pipelines and manage their states. If you are not familiar with gstreamer programming, you may try the RidgeRun gstreamer daemon (gstd).

Build and install gstd and its interpipe plugins.

Then you may try something like the following script. It duplicates the nvarguscamerasrc stream into 2 sources, then creates one pipeline reading the first source, encoding and RTP-streaming it to UDP/5000, and 2 pipelines reading the second source, rescaling it respectively to 320x240 and 640x480, encoding and RTP-streaming to UDP/5001.
After 10 s, the script switches from 320x240 to 640x480.

#!/bin/bash
set -x

# Duplicate nvarguscamerasrc into 2 sources
gstd-client pipeline_create camera2multisrc \
    nvarguscamerasrc ! "video/x-raw(memory:NVMM),format=NV12,width=1920,height=1080,framerate=30/1" ! tee name=t \
    t. ! queue ! interpipesink name=src_1 caps="video/x-raw(memory:NVMM),format=NV12,width=1920,height=1080,framerate=30/1" \
    t. ! queue ! interpipesink name=src_2 caps="video/x-raw(memory:NVMM),format=NV12,width=1920,height=1080,framerate=30/1"

# Read src1, encode into H264 and stream to localhost UDP/5000
gstd-client pipeline_create udpstream1 \
    interpipesrc name=sink_1 listen-to=src_1 is-live=true allow-renegotiation=true stream-sync=compensate-ts ! queue ! nvv4l2h264enc insert-sps-pps=1 insert-vui=1 ! rtph264pay ! udpsink host=127.0.0.1 port=5000 

# Read src2, rescale to 320x240, encode into H264 and stream to localhost UDP/5001
gstd-client pipeline_create udpstream2 \
    interpipesrc name=sink_2 listen-to=src_2 is-live=true allow-renegotiation=true stream-sync=compensate-ts ! queue ! nvvidconv ! "video/x-raw(memory:NVMM),format=NV12,width=320,height=240,framerate=30/1" ! nvv4l2h264enc insert-sps-pps=1 insert-vui=1 ! rtph264pay ! udpsink host=127.0.0.1 port=5001 

# Read src2, rescale to 640x480, encode into H264 and stream to localhost UDP/5001
gstd-client pipeline_create udpstream3 \
    interpipesrc name=sink_3 listen-to=src_2 is-live=true allow-renegotiation=true stream-sync=compensate-ts ! queue ! nvvidconv ! "video/x-raw(memory:NVMM),format=NV12,width=640,height=480,framerate=30/1" ! nvv4l2h264enc insert-sps-pps=1 insert-vui=1 ! rtph264pay ! udpsink host=127.0.0.1 port=5001 


gstd-client pipeline_play camera2multisrc
gstd-client pipeline_play udpstream1
gstd-client pipeline_play udpstream2
sleep 10

gstd-client pipeline_stop udpstream2
gstd-client pipeline_play udpstream3
sleep 10

gstd-client pipeline_stop udpstream1
gstd-client pipeline_stop udpstream3
gstd-client pipeline_stop camera2multisrc

gstd-client pipeline_delete udpstream1
gstd-client pipeline_delete udpstream2
gstd-client pipeline_delete udpstream3
gstd-client pipeline_delete camera2multisrc
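
If you would rather drive gstd from Python than from bash, RidgeRun also provides a Python client, pygstc. Here is a minimal sketch of the same play/switch/stop sequence, assuming pygstc is installed, gstd is running, and the pipelines above have already been created:

#!/usr/bin/env python

import time
from pygstc.gstc import GstdClient

# Assumes the camera2multisrc/udpstream* pipelines from the
# bash script above have already been created in gstd
c = GstdClient()

c.pipeline_play('camera2multisrc')
c.pipeline_play('udpstream1')
c.pipeline_play('udpstream2')
time.sleep(10)

# Switch the second RTP stream from 320x240 to 640x480
c.pipeline_stop('udpstream2')
c.pipeline_play('udpstream3')
time.sleep(10)

c.pipeline_stop('udpstream1')
c.pipeline_stop('udpstream3')
c.pipeline_stop('camera2multisrc')
for p in ('udpstream1', 'udpstream2', 'udpstream3', 'camera2multisrc'):
    c.pipeline_delete(p)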

How is OpenCV related to RidgeRun's gstd? Are they comparable?

Is it possible to read one stream in OpenCV (e.g. full HD), then scale it and stream two streams over RTP (e.g. 320 x 240 to UDP/5000 and 1920 x 1080 to UDP/5001)?

gstd is just high-level management of gstreamer pipelines.
OpenCV is a more general computer vision library.
If you program with opencv, you wouldn't use gstd.
You can use gstreamer pipelines for VideoCapture and VideoWriter.
So for your case you would use a VideoCapture reading from the camera with a gstreamer pipeline converting to BGR, and several VideoWriters rescaling, encoding and RTP-streaming. Your opencv application would decide which frames to write into each writer.

You may try this from python:

#!/usr/bin/env python

import cv2

# Capture from the camera in NVMM memory and convert to BGR for OpenCV
cap = cv2.VideoCapture("nvarguscamerasrc ! video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1 ! nvvidconv ! video/x-raw, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! queue ! appsink drop=1", cv2.CAP_GSTREAMER)
if not cap.isOpened():
    print('Failed to open camera')
    exit(1)

# Full-resolution writer: encodes 1920x1080 BGR frames and streams RTP/H264 to UDP/5000
rtpudp1080 = cv2.VideoWriter("appsrc ! queue ! videoconvert ! video/x-raw,format=BGRx ! nvvidconv ! nvv4l2h264enc insert-sps-pps=1 insert-vui=1 ! rtph264pay ! udpsink host=127.0.0.1 port=5000", cv2.CAP_GSTREAMER, 0, 30.0, (1920, 1080))
if not rtpudp1080.isOpened():
    print('Failed to open rtpudp1080')
    cap.release()
    exit(1)

# This writer still receives 1920x1080 frames; nvvidconv rescales to 320x240 before encoding
rtpudp320 = cv2.VideoWriter("appsrc ! queue ! videoconvert ! video/x-raw,format=BGRx ! nvvidconv ! video/x-raw(memory:NVMM),width=320,height=240 ! nvv4l2h264enc insert-sps-pps=1 insert-vui=1 ! rtph264pay ! udpsink host=127.0.0.1 port=5001", cv2.CAP_GSTREAMER, 0, 30.0, (1920, 1080))
if not rtpudp320.isOpened():
    print('Failed to open rtpudp320')
    rtpudp1080.release()
    cap.release()
    exit(1)

# Same source resolution, rescaled to 640x480, also streamed to UDP/5001
rtpudp640 = cv2.VideoWriter("appsrc ! queue ! videoconvert ! video/x-raw,format=BGRx ! nvvidconv ! video/x-raw(memory:NVMM),width=640,height=480 ! nvv4l2h264enc insert-sps-pps=1 insert-vui=1 ! rtph264pay ! udpsink host=127.0.0.1 port=5001", cv2.CAP_GSTREAMER, 0, 30.0, (1920, 1080))
if not rtpudp640.isOpened():
    print('Failed to open rtpudp640')
    cap.release()
    rtpudp1080.release()
    rtpudp320.release()
    exit(1)

# First 300 frames: stream full HD and 320x240
for i in range(300):
    ret_val, img = cap.read()
    if not ret_val:
        break
    rtpudp1080.write(img)
    rtpudp320.write(img)
    cv2.waitKey(1)

# Next 300 frames: switch the second stream to 640x480
for i in range(300):
    ret_val, img = cap.read()
    if not ret_val:
        break
    rtpudp1080.write(img)
    rtpudp640.write(img)
    cv2.waitKey(1)

rtpudp320.release()
rtpudp640.release()
rtpudp1080.release()
cap.release()

For receiving on a Jetson, you can use NVDEC for reading port 5000 with fixed resolution:

gst-launch-1.0 udpsrc port=5000 ! application/x-rtp,encoding-name=H264 ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv ! xvimagesink

but for port 5001 with changing resolution, you have to use the CPU decoder avdec_h264:

gst-launch-1.0 -ev udpsrc port=5001 ! application/x-rtp,encoding-name=H264 ! rtph264depay ! h264parse ! avdec_h264 ! xvimagesink

Thank you @Honey_Patouceul for the example; unfortunately, it doesn't work in my environment.
I use python3 and cv2 version 3.2.0.
I removed the 3rd argument (0) from cv2.VideoWriter, because with it I got the error TypeError: an integer is required (got type tuple).
When I execute the above code without the 3rd parameter I get:

Failed to open camera
Failed to open rtpudp1080
Failed to open rtpudp320
Failed to open rtpudp640

How can I debug it?

In the example, one can see that video conversion/image manipulation (e.g. resolution change) is done by GStreamer. What would be the difference in performance (or other aspects) if I did the conversion in OpenCV (e.g. using the cv2.resize() function) and then sent it over UDP?

Does your opencv build have support for gstreamer?
If not, rebuild opencv with gstreamer support. You can check as shown below.
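
A quick way to check from Python (cv2.getBuildInformation() is standard OpenCV API):

import cv2

# Prints the build report; look for "GStreamer: YES" in the Video I/O section
print(cv2.getBuildInformation())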

If yes, it may be that opencv 3.2 did not yet have the backend argument in its API, so rather try removing cv2.CAP_GSTREAMER. The 0 is the FOURCC code, but it isn't used by the gstreamer backend, so 0 should be ok.
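
A minimal sketch of what the calls would then look like (assuming the opencv 3.2 gstreamer backend accepts pipeline strings the same way; pipelines shortened compared to the example above):

import cv2

# OpenCV 3.2: no apiPreference argument; the gstreamer backend is selected
# from the pipeline string itself. The 0 is the FOURCC, unused by gstreamer.
cap = cv2.VideoCapture("nvarguscamerasrc ! video/x-raw(memory:NVMM), width=1920, height=1080 ! nvvidconv ! video/x-raw, format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink drop=1")
writer = cv2.VideoWriter("appsrc ! videoconvert ! video/x-raw,format=BGRx ! nvvidconv ! nvv4l2h264enc insert-sps-pps=1 ! rtph264pay ! udpsink host=127.0.0.1 port=5000", 0, 30.0, (1920, 1080))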

Also, note that with python the indentation is important for blocks such as if, while, etc. If the camera could not be opened, I would expect the script to abort and exit, so the next error messages should not appear. Be sure to reproduce the code with the same spacing.

Resizing with opencv is possible, but it would be done on the CPU, which would be slow for high resolutions.
Why prefer a slow solution using the CPU when there is a HW block that can do it very fast with almost no CPU overhead?
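
For reference, the CPU path asked about would look like this sketch (capture pipeline as in the earlier example; since the frames are already resized with cv2.resize before writing, the writer is declared at 320x240 and needs no rescale caps):

#!/usr/bin/env python

import cv2

cap = cv2.VideoCapture("nvarguscamerasrc ! video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1 ! nvvidconv ! video/x-raw, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! queue ! appsink drop=1", cv2.CAP_GSTREAMER)

out = cv2.VideoWriter("appsrc ! queue ! videoconvert ! video/x-raw,format=BGRx ! nvvidconv ! nvv4l2h264enc insert-sps-pps=1 insert-vui=1 ! rtph264pay ! udpsink host=127.0.0.1 port=5001", cv2.CAP_GSTREAMER, 0, 30.0, (320, 240))

while True:
    ret_val, img = cap.read()
    if not ret_val:
        break
    # cv2.resize runs on the CPU: this is the per-frame cost the HW block avoids
    out.write(cv2.resize(img, (320, 240), interpolation=cv2.INTER_LINEAR))

out.release()
cap.release()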

I built OpenCV with gstreamer support according to the link that you provided. I don't get any errors now.

Resizing is only one of the operations that I want to achieve. I thought that it would be easier and more logical to do all of the operations in OpenCV instead of mixing gstreamer with OpenCV.

Can gstreamer recognize objects in video?
How is DeepStream related to OpenCV? Does DeepStream support NVIDIA hardware?

This depends on your expectations (or cost function). I understand the appeal of local opencv code doing everything; however, with different L4T versions you may have varying NV plugins and varying gstreamer versions, so any opencv build supporting these may have to be rebuilt and reinstalled, and the pipelines may have to be adapted…

Gstreamer is a general media framework. My understanding is that DeepStream provides an easy way to define a processing graph, easy ways to specify source or sink nodes of various kinds, and runs these through gstreamer.
In DS, there is a plugin nvinfer that can do object detection and more.

We can assume that right now I don't need any operations on images beyond resize and crop, both of which I can do in gstreamer. I also need OpenCV to dynamically stream a cropped video from the camera (it will be a digital-zoom feature for part of the image).

To sum up, my main goal is to:
a) Capture a 4K-resolution stream
b) Resize it to e.g. 1600 x 1200
c) Resize it to 320 x 240
d) Dynamically crop (from 4K) a 640x480 part of the video (see the sketch after this list).
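
For d), I imagine something like this sketch (capture and writer pipelines as in the earlier examples; the crop origin and UDP port 5006 are hypothetical, and the same idea applies to 4K input):

#!/usr/bin/env python

import cv2

# Full-HD capture as in the earlier examples
cap = cv2.VideoCapture("nvarguscamerasrc ! video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1 ! nvvidconv ! video/x-raw, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! queue ! appsink drop=1", cv2.CAP_GSTREAMER)

# Writer expects 640x480 frames; port 5006 is a hypothetical choice
zoom = cv2.VideoWriter("appsrc ! queue ! videoconvert ! video/x-raw,format=BGRx ! nvvidconv ! nvv4l2h264enc insert-sps-pps=1 insert-vui=1 ! rtph264pay ! udpsink host=127.0.0.1 port=5006", cv2.CAP_GSTREAMER, 0, 30.0, (640, 480))

x, y = 640, 300  # top-left corner of the zoom window; update at runtime for panning
while True:
    ret_val, img = cap.read()
    if not ret_val:
        break
    # Crop with a numpy slice; copy() makes the buffer contiguous for the writer
    zoom.write(img[y:y+480, x:x+640].copy())
    cv2.waitKey(1)

zoom.release()
cap.release()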

I have a 4K camera (Arducam, based on the IMX477 sensor). When I use the following command in bash:

gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM), format=NV12, width=4032, height=3040' ! tee name=t ! queue ! nvv4l2h264enc insert-sps-pps=true ! h264parse !  rtph264pay pt=96 ! udpsink host=127.0.0.1 port=5004 sync=false t. ! queue ! nvvidconv ! 'video/x-raw(memory:NVMM), width=640, height=480' ! nvv4l2h264enc insert-sps-pps=true ! h264parse !  rtph264pay pt=96 ! udpsink host=127.0.0.1 port=5005 sync=false

I get two streams; the 4K stream has a latency of about 3-4 seconds (!), while the smaller one has minimal latency (<150 ms).

  1. Do you have an idea why the latency of the 4K stream is so big?

When I use the following code from Python:

import cv2

cap = cv2.VideoCapture("nvarguscamerasrc ! video/x-raw(memory:NVMM), width=(int)4032, height=(int)3040, format=(string)NV12, framerate=(fraction)30/1 ! nvvidconv ! video/x-raw, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! queue ! appsink drop=1", cv2.CAP_GSTREAMER)
print(cap)
if not cap.isOpened():
    print('Failed to open camera')
    exit(1)

rtpudp1013 = cv2.VideoWriter("appsrc ! queue ! videoconvert ! video/x-raw,format=BGRx ! nvvidconv ! nvv4l2h264enc insert-sps-pps=1 insert-vui=1 ! rtph264pay ! udpsink host=127.0.0.1 port=5004", cv2.CAP_GSTREAMER, 0, 30.0, (1440, 1080))
if not rtpudp1013.isOpened():
    print('Failed to open rtpudp1013')
    cap.release()
    exit(1)

rtpudp82 = cv2.VideoWriter("appsrc ! queue ! videoconvert ! video/x-raw,format=BGRx ! nvvidconv ! video/x-raw(memory:NVMM),width=320,height=240 ! nvv4l2h264enc insert-sps-pps=1 insert-vui=1 ! rtph264pay ! udpsink host=127.0.0.1 port=5005", cv2.CAP_GSTREAMER, 0, 30.0, (109, 82))
if not rtpudp82.isOpened():
    print('Failed to open rtpudp82')
    rtpudp1013.release()
    cap.release()
    exit(1)


while True:
    ret_val, img = cap.read()
    if not ret_val:
        break

    rtpudp1013.write(img)
    rtpudp82.write(img)
    cv2.waitKey(1)


rtpudp1013.release()
rtpudp82.release()
cap.release()

Both streams look like “color horizontal stripes”. When I:
a) change the video resolution in the resized pipeline (rtpudp1013) to 1920x1080, there is no change in the stream (still color horizontal stripes);
b) change the video resolution in both the capture and resized (rtpudp1013) pipelines to 1920x1080, it works well.

  1. Why can't I capture 4K resolution?
  2. Are there any rules about what I can capture and how I can resize it? If yes, how can I find them?

There is also one important thing to mention: each time I use gstreamer from python I see:

[ WARN:0] global /home/stageeye/Projects/stream/workspace/opencv-4.5.0/modules/videoio/src/cap_gstreamer.cpp (935) open OpenCV | GStreamer warning: Cannot query video position: status=0, value=-1, duration=-1

It is only a warning, but I’m not sure if it is an issue or not.