GStreamer Python script to send and receive UDP stream

Hi!

My Jetson and Python skills are very limited, so I could use a little guidance.
If possible, how could I change this script to use another GStreamer pipeline that sends the live stream of these four cameras as an H.264-encoded UDP stream?

https://github.com/IDCrypticore/x4IMX290/blob/main/x4IMX290.py

What this script does now is open four IMX290 sensors and stack the streams horizontally.

Hi,
Please refer to this sample:
Stream processed video with OpenCV on Jetson TX2 - #5 by DaneLLL

You may further explain if you intend to do more processing with OpenCV. From the code above, it just concatenates the four cameras.
You can do that without OpenCV or Python, with a single GStreamer command in a terminal:

gst-launch-1.0 -e nvcompositor name=mix background-w=1920 background-h=270 \
    sink_0::xpos=0    sink_0::ypos=0   sink_0::width=480 sink_0::height=270 \
    sink_1::xpos=480  sink_1::ypos=0   sink_1::width=480 sink_1::height=270 \
    sink_2::xpos=960  sink_2::ypos=0   sink_2::width=480 sink_2::height=270 \
    sink_3::xpos=1440 sink_3::ypos=0   sink_3::width=480 sink_3::height=270 \
    ! 'video/x-raw(memory:NVMM),format=RGBA,width=1920,height=270' ! nvvidconv ! 'video/x-raw(memory:NVMM),format=I420' ! nvv4l2h264enc insert-vui=1 insert-sps-pps=1 ! h264parse ! rtph264pay ! udpsink host=127.0.0.1 port=5000  \
    nvarguscamerasrc sensor-id=0 ! 'video/x-raw(memory:NVMM),format=NV12,width=480,height=270,framerate=30/1' ! queue ! mix.sink_0    \
    nvarguscamerasrc sensor-id=1 ! 'video/x-raw(memory:NVMM),format=NV12,width=480,height=270,framerate=30/1' ! queue ! mix.sink_1    \
    nvarguscamerasrc sensor-id=2 ! 'video/x-raw(memory:NVMM),format=NV12,width=480,height=270,framerate=30/1' ! queue ! mix.sink_2    \
    nvarguscamerasrc sensor-id=3 ! 'video/x-raw(memory:NVMM),format=NV12,width=480,height=270,framerate=30/1' ! queue ! mix.sink_3

@Honey_Patouceul

Neat!

Well, there are other Python scripts that will be run in parallel.
For instance, there is a script that will control a servo motor for cleaning sight glasses on the same structure these cameras will be mounted on.
I just think it would be easier to start the entire “system” as a whole with OpenCV/Python, rather than having to fetch the live stream separately through a terminal.

I’m sorry if this sounds strange, since there are probably a thousand more interesting and effective ways to utilize a Jetson.

You can launch the pipeline from Python without OpenCV. The thing is that OpenCV’s VideoCapture may generate some useless CPU load, whereas nvcompositor can do the compositing in hardware. Try:

#!/usr/bin/env python

import signal
import sys
import time
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GObject

# Signal handler: stop the pipeline before destruction, so that Argus is left in a clean state.
def signal_handler(sig, frame):
    p.set_state(Gst.State.NULL)
    sys.exit(0)

# Initialize gstreamer
GObject.threads_init()   # only needed on old PyGObject versions (deprecated since 3.10)
Gst.init(None)

# Define pipeline 
#gst_str = "nvarguscamerasrc ! video/x-raw(memory:NVMM),format=(string)NV12,width=(int)640,height=(int)480, framerate=30/1 ! nvvidconv ! xvimagesink "
gst_str = "nvcompositor name=mix background-w=1920 background-h=270 \
    sink_0::xpos=0    sink_0::ypos=0   sink_0::width=480 sink_0::height=270 \
    sink_1::xpos=480  sink_1::ypos=0   sink_1::width=480 sink_1::height=270 \
    sink_2::xpos=960  sink_2::ypos=0   sink_2::width=480 sink_2::height=270 \
    sink_3::xpos=1440 sink_3::ypos=0   sink_3::width=480 sink_3::height=270 \
    ! video/x-raw(memory:NVMM),format=RGBA,width=1920,height=270 ! nvvidconv ! video/x-raw(memory:NVMM),format=I420 ! nvv4l2h264enc insert-vui=1 insert-sps-pps=1 ! h264parse ! rtph264pay ! udpsink host=127.0.0.1 port=5000  \
    nvarguscamerasrc sensor-id=0 ! video/x-raw(memory:NVMM),format=NV12,width=480,height=270,framerate=30/1 ! queue ! mix.sink_0    \
    nvarguscamerasrc sensor-id=1 ! video/x-raw(memory:NVMM),format=NV12,width=480,height=270,framerate=30/1 ! queue ! mix.sink_1    \
    nvarguscamerasrc sensor-id=2 ! video/x-raw(memory:NVMM),format=NV12,width=480,height=270,framerate=30/1 ! queue ! mix.sink_2    \
    nvarguscamerasrc sensor-id=3 ! video/x-raw(memory:NVMM),format=NV12,width=480,height=270,framerate=30/1 ! queue ! mix.sink_3 "

# Create the pipeline
p = Gst.parse_launch(gst_str)

# Register signal handler for proper termination if receiving SIGINT such as Ctrl-C
signal.signal(signal.SIGINT, signal_handler)

# Start the pipeline
p.set_state(Gst.State.READY)
p.set_state(Gst.State.PAUSED)
p.set_state(Gst.State.PLAYING)

# Run for 10s 
time.sleep(10)

# Done. Stop the pipeline before clean up on exit.
p.set_state(Gst.State.NULL)
sys.exit(0)
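
If the pipeline fails to start (for example because a camera is busy), the script above won’t report why. A minimal sketch of an optional bus check you could add after setting the pipeline to PLAYING, reusing the same p as above (the five-second timeout is an arbitrary choice):

# Optional: poll the pipeline bus so errors are printed instead of failing silently
bus = p.get_bus()
msg = bus.timed_pop_filtered(5 * Gst.SECOND, Gst.MessageType.ERROR | Gst.MessageType.EOS)
if msg is not None and msg.type == Gst.MessageType.ERROR:
    err, dbg = msg.parse_error()
    print("Pipeline error:", err.message)
    print("Debug info:", dbg)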

Thank you @Honey_Patouceul

I’ve tried to establish a pipeline, but have not been able to receive a stream yet.

Opening in BLOCKING MODE
Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, execute:532 Failed to create CaptureSession

I’ll take a closer look at it tomorrow when I’m back at campus. Since I’ve accessed the Xavier from home over SSH and VPN, that might cause some issues in itself.

From the Jetson (as the previous script streams to localhost), you would use:

gst-launch-1.0 udpsrc port=5000 ! application/x-rtp,encoding-name=H264 ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv ! xvimagesink

If you connect remotely, you would use ssh -X or -Y from a host with an X server (such as a Linux host), so that the xvimagesink output is displayed on your host’s X server.
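
Regarding the “Failed to create CaptureSession” error: this often happens when a previous Argus client exited without releasing the cameras. Restarting the Argus daemon (sudo systemctl restart nvargus-daemon on a standard L4T setup) usually recovers it.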

Noob alert, again!

Launching the pipeline using the script you provided yields:

GLib.Error: gst_parse_error: no element "nvcompositor" (1)

Next attempt:

ssh -X teamten@teamten-desktop
sudo nvpmodel -m 0
sudo jetson_clocks
gst-launch-1.0  udpsrc port=5000 ! application/x-rtp,encoding-name=H264 ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv ! xvimagesink

Then all four cameras open on the screen connected to the Xavier; however, if I wave my hand over them, it only shows up on screen half a minute or so later.

Setting pipeline to PAUSED ...
Opening in BLOCKING MODE 
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261 

(gst-launch-1.0:24103): GStreamer-CRITICAL **: 09:32:37.033: gst_mini_object_unref: assertion 'mini_object != NULL' failed
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261 
WARNING: from element /GstPipeline:pipeline0/GstXvImageSink:xvimagesink0: A lot of buffers are being dropped.
Additional debug info:
gstbasesink.c(2902): gst_base_sink_is_too_late (): /GstPipeline:pipeline0/GstXvImageSink:xvimagesink0:
There may be a timestamping problem, or this computer is too slow.

I’ve still not been able to fetch a live stream on the receiving end.
As mentioned, I have little to no experience with Linux, Jetson and Python. My head has never really been able to understand computing and programming at this depth, which, combined with a stress level bursting through the ceiling, makes this objective near impossible for me to achieve.

Sweat no more, and thank you for your patience, @Honey_Patouceul!

I don’t see that here. However, since it worked later, I suppose you do have this GStreamer plugin.

You would only try that from a remote host. I’d suggest first trying locally, with a GUI and monitor.

You may try adding the rtpjitterbuffer plugin to your receiver, and/or adding sync=false to your video sink (so frames are displayed as soon as they are decoded, without clock synchronization), or trying another video sink:

gst-launch-1.0 udpsrc port=5000 ! application/x-rtp,encoding-name=H264 ! rtpjitterbuffer latency=500 ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv ! xvimagesink sync=false

gst-launch-1.0 udpsrc port=5000 ! application/x-rtp,encoding-name=H264 ! rtpjitterbuffer latency=500 ! rtph264depay ! h264parse ! nvv4l2decoder ! nvoverlaysink sync=false
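
If you prefer launching the receiver from Python as well, the same Gst.parse_launch approach as the sender script works. A minimal sketch wrapping the jitterbuffer variant above:

#!/usr/bin/env python

import signal
import sys
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)

# Receiver pipeline: depayload RTP/H264, decode in HW, display without clock sync
gst_str = "udpsrc port=5000 ! application/x-rtp,encoding-name=H264 \
    ! rtpjitterbuffer latency=500 ! rtph264depay ! h264parse \
    ! nvv4l2decoder ! nvvidconv ! xvimagesink sync=false"

p = Gst.parse_launch(gst_str)

# Stop the pipeline cleanly on Ctrl-C
def signal_handler(sig, frame):
    p.set_state(Gst.State.NULL)
    sys.exit(0)
signal.signal(signal.SIGINT, signal_handler)

p.set_state(Gst.State.PLAYING)

# Block until error or end of stream, then clean up
bus = p.get_bus()
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE, Gst.MessageType.ERROR | Gst.MessageType.EOS)
p.set_state(Gst.State.NULL)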

Thank you very, very much @Honey_Patouceul !
My head got around it eventually, and now we’ve achieved the desired result.

You will be mentioned and referred to in our report :D