RTSP stream delay / Pipeline optimization

Hello,

I have managed to get my hands on a Jetson Nano, which I am using to learn about vision AI applications.
Unfortunately, because I travel a lot I cannot keep an external display attached, so I only operate the Jetson Nano in headless mode. To work around this, after a long period of trial and error I managed to create a capture with OpenCV and set up an RTSP stream, which I view from my computer. However, the stream has a 2-second delay. I am looking for ways to improve my pipeline, and I also have some general questions about GStreamer, as I am still confused about how it works.

RTSP server:

import gi
gi.require_version('Gst', '1.0')
gi.require_version('GstRtspServer', '1.0')
# GLib.MainLoop replaces the deprecated GObject.MainLoop
from gi.repository import GLib, Gst, GstRtspServer

Gst.init(None)

mainloop = GLib.MainLoop()
server = GstRtspServer.RTSPServer()
mounts = server.get_mount_points()
factory = GstRtspServer.RTSPMediaFactory()
gst_rtsp = ("( udpsrc auto-multicast=0 name=pay0 port=5400 buffer-size=52428 "
            "caps=\"application/x-rtp, media=video, clock-rate=90000, "
            "encoding-name=(string)H264, payload=96\" )")

factory.set_launch(gst_rtsp)

mounts.add_factory("/test", factory)
server.attach(None)

mainloop.run()
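One thing worth checking about the 2-second figure: the `rtspsrc` element that most GStreamer-based viewers use buffers 2000 ms by default, so the viewer pipeline can matter as much as the server. Below is a hedged sketch of a lower-latency viewer pipeline builder; the element names and the 8554 default port are assumptions to verify with `gst-inspect-1.0` on your setup:

```python
# Client-side sketch: rtspsrc defaults to latency=2000 (ms), which by itself
# can explain a ~2 s delay. Building the viewer pipeline with an explicit
# latency value avoids that. Host, port, and decoder are placeholders.

def viewer_pipeline(host, latency_ms=0, mount="/test"):
    """Return a gst-launch-style string for a low-latency RTSP viewer."""
    return (
        f"rtspsrc location=rtsp://{host}:8554{mount} latency={latency_ms} ! "
        "rtph264depay ! h264parse ! avdec_h264 ! "
        "videoconvert ! autovideosink sync=false"
    )
```

On the server side, `GstRtspServer.RTSPMediaFactory` also exposes `set_latency()` and `set_shared()`, which may be worth experimenting with.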

Video capture:

import time
import cv2

gst_in = ("v4l2src device=/dev/video0 ! video/x-raw,format=YUY2,width=640,height=480,framerate=30/1 ! videoconvert ! "
          "video/x-raw,format=BGR ! queue ! appsink")

gst_in2 = ("v4l2src device=/dev/video0 ! video/x-raw,format=YUY2,width=640,height=480,framerate=30/1 ! nvvidconv ! "
           "video/x-raw(memory:NVMM) ! nvvidconv ! video/x-raw,format=BGRx ! "
           "videoconvert ! video/x-raw,format=BGR ! queue ! appsink")

camera = cv2.VideoCapture(gst_in2, cv2.CAP_GSTREAMER)

if not camera.isOpened():
    print("Unable to open cam")
    exit()

w = camera.get(cv2.CAP_PROP_FRAME_WIDTH)
h = camera.get(cv2.CAP_PROP_FRAME_HEIGHT)
fps = camera.get(cv2.CAP_PROP_FPS)
print('Src opened, %dx%d @ %d fps' % (w, h, fps))


gst_out = ("appsrc ! videoconvert ! "
           "omxh264enc bitrate=12000000 ! video/x-h264,stream-format=byte-stream ! "
           "rtph264pay pt=96 ! queue ! "
           "udpsink host=127.0.0.1 port=5400 auto-multicast=0")


out = cv2.VideoWriter(gst_out, cv2.CAP_GSTREAMER, 0, float(fps), (int(w), int(h)))

while True:
    res, frame = camera.read()
    if not res:
        print("Read error")
        time.sleep(10)
        continue  # skip the write; frame is stale or empty here
    # Do some stuff here
    out.write(frame)

camera.release()
out.release()
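For reference, here is a hedged lower-latency variant of the capture and output pipelines (a sketch, not tested on this camera). The `appsink drop=true max-buffers=1` combination always hands OpenCV the newest frame instead of queueing stale ones, a common source of creeping delay; `nvv4l2h264enc` is the replacement for the deprecated `omxh264enc` on recent JetPack releases, and its property names should be checked with `gst-inspect-1.0 nvv4l2h264enc`:

```python
# Sketch of low-latency capture/output pipeline strings; device, bitrate,
# and host/port are placeholders taken from the script above.

def low_latency_capture(device="/dev/video0", w=640, h=480, fps=30):
    # drop=true + max-buffers=1 keeps only the freshest frame in the sink.
    return (
        f"v4l2src device={device} ! "
        f"video/x-raw,format=YUY2,width={w},height={h},framerate={fps}/1 ! "
        "nvvidconv ! video/x-raw(memory:NVMM) ! nvvidconv ! "
        "video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! "
        "appsink drop=true max-buffers=1 sync=false"
    )

def low_latency_output():
    # insert-sps-pps and config-interval let a viewer join mid-stream
    # without waiting for the next parameter set.
    return (
        "appsrc ! videoconvert ! nvvidconv ! "
        "nvv4l2h264enc bitrate=12000000 insert-sps-pps=true maxperf-enable=true ! "
        "h264parse ! rtph264pay pt=96 config-interval=1 ! "
        "udpsink host=127.0.0.1 port=5400 auto-multicast=0 sync=false"
    )
```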

My camera:

ioctl: VIDIOC_ENUM_FMT
	Index       : 0
	Type        : Video Capture
	Pixel Format: 'YUYV'
	Name        : YUYV 4:2:2

Some of the questions I have:

How can I improve my pipeline? Is it possible to get rid of the 2-second delay when viewing the RTSP stream?

What is the difference between the “gst_in” and “gst_in2” pipelines in terms of efficiency? Is there a way to use DeepStream for better performance?

Are there any other good resources where I can learn about GStreamer concepts and practices?

Are there any other obvious mistakes I am making related to GStreamer?

I appreciate your help and time,
Best wishes.

Hi,
The latency may come from the source. We suggest trying the setup from this FAQ:
Jetson Nano FAQ

Q: Is there an example for running UDP streaming?

First check how much latency is seen with videotestsrc over UDP, then try your own source over UDP. We suggest trying different combinations to identify where the latency comes from and how large it is.
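The suggested experiment can be sketched as two pipeline strings for `gst-launch-1.0` or `Gst.parse_launch` (host and port are placeholders; `timeoverlay` stamps each frame with the running clock, so comparing the sender and receiver displays shows the end-to-end latency; on the Jetson you would swap `x264enc` for the hardware encoder):

```python
# Sketch of the videotestsrc-over-UDP latency test the FAQ describes.

def udp_test_sender(host="127.0.0.1", port=5400):
    # is-live=true makes videotestsrc pace itself like a camera.
    return (
        "videotestsrc is-live=true ! "
        "video/x-raw,width=640,height=480,framerate=30/1 ! timeoverlay ! "
        "videoconvert ! x264enc tune=zerolatency ! "
        f"rtph264pay pt=96 ! udpsink host={host} port={port}"
    )

def udp_test_receiver(port=5400):
    # sync=false renders frames as soon as they arrive.
    return (
        f"udpsrc port={port} caps=\"application/x-rtp, media=video, "
        "clock-rate=90000, encoding-name=H264, payload=96\" ! "
        "rtph264depay ! avdec_h264 ! videoconvert ! autovideosink sync=false"
    )
```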

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.