RTSP only starts when I start a connection to it

• Hardware Platform (Jetson / GPU)
Jetson Orin
• DeepStream Version
6.1.1
• JetPack Version (valid for Jetson only)
5.0.2
• TensorRT Version
8.4.1-1+cuda11.4

I have a GStreamer pipeline:

appsrc name=source is-live=true block=true format=GST_FORMAT_TIME  
caps=video/x-raw,format=BGR,width={},height={},framerate={}/1
! videoconvert ! video/x-raw,format=I420 
! x264enc speed-preset=ultrafast tune=zerolatency 
! rtph264pay config-interval=1 name=pay0 pt=96 
and I start the server like this:

Gst.init(None)
server = GstServer()  # GstRtspServer.RTSPServer
loop = GLib.MainLoop()
loop.run()

The stream only starts when I call it from VLC or within another application.
How can I make loop.run() run in the background automatically?

Could you describe your needs in detail?

I think that is how it is designed to work.
If you need a pipeline that runs continuously, so that when you connect over RTSP you receive the frames currently flowing through it, you may need to change your structure.
You can have your main loop run a pipeline whose sink is udpsink, tcpserversink, etc.

Then start an RTSP server with a pipeline that uses udpsrc to consume the frames being sent to the udpsink.

You can refer to /opt/nvidia/deepstream/deepstream-6.1/sources/apps/apps-common/src/deepstream_sink_bin.c (the function name is start_rtsp_streaming).
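For example, the split could look like this (a minimal sketch; port 5400 and the H264 caps below are illustrative values, not taken from your pipeline):

# producer pipeline, kept running by your own main loop:
appsrc ... ! videoconvert ! x264enc tune=zerolatency ! rtph264pay ! udpsink host=127.0.0.1 port=5400

# RTSP factory launch string, started on demand when a client connects:
( udpsrc port=5400 name=pay0 caps="application/x-rtp, media=video, clock-rate=90000, encoding-name=(string)H264, payload=96" )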


Deepstream-app is a very complex app, and some of its implementations are correspondingly involved. The Python demo app just shows how to use a simple customized pipeline in Python. We have no plan to implement this in Python at present. You can try to implement it yourself if you want to use it in Python now.

@marmikshah Thanks for the explanation!

@yuweiw
I am using the repo below to create an RTSP server to stream video frames that I process using OpenCV.
https://github.com/prabhakar-sivanesan/OpenCV-rtsp-server.git

I noticed it is quite different from Deepstream because the stream only starts when I call it from VLC or within another application.

@marmikshah I added udpsink to my pipeline and also added:

"( udpsrc name=pay0 port=%d caps=\"application/x-rtp, media=video, clock-rate=90000, encoding-name=(string)%s )" % (int(5400), "H264")

But I get a segmentation fault.
I appreciate your help; I am just getting started with GStreamer pipelines.
My code below:

class SensorFactory(GstRtspServer.RTSPMediaFactory):
    def __init__(self, **properties):
        super(SensorFactory, self).__init__(**properties)
        self.cap = cv2.VideoCapture("/path/to/mp4/video.mp4")
        self.number_frames = 0
        self.fps = 2
        self.duration = 1 / self.fps * Gst.SECOND  # duration of a frame in nanoseconds
        self.launch_string = 'appsrc name=source is-live=true block=true format=GST_FORMAT_TIME ' \
                             'caps=video/x-raw,format=BGR,width={},height={},framerate={}/1 ' \
                             '! videoconvert ! video/x-raw,format=I420 ' \
                             '! x264enc speed-preset=ultrafast tune=zerolatency ' \
                             '! rtph264pay config-interval=1 name=pay0 pt=96 ! udpsink host=localhost port=5000' \
                             .format(opt.image_width, opt.image_height, self.fps)
    # method to capture the video feed from the camera and push it to the
    # streaming buffer.
    def on_need_data(self, src, length):
        if self.cap.isOpened():
            ret, frame = self.cap.read()
            if ret:
                # It is better to change the resolution of the camera
                # instead of changing the image shape as it affects the image quality.
                frame = cv2.resize(frame, (opt.image_width, opt.image_height), interpolation=cv2.INTER_LINEAR)
                data = frame.tobytes()
                buf = Gst.Buffer.new_allocate(None, len(data), None)
                buf.fill(0, data)
                buf.duration = self.duration
                timestamp = self.number_frames * self.duration
                buf.pts = buf.dts = int(timestamp)
                buf.offset = timestamp
                self.number_frames += 1
                retval = src.emit('push-buffer', buf)
                print('pushed buffer, frame {}, duration {} ns, duration {} s'.format(self.number_frames,
                                                                                      self.duration,
                                                                                      self.duration / Gst.SECOND))
                if retval != Gst.FlowReturn.OK:
                    print(retval)
    # attach the launch string to the override method
    def do_create_element(self, url):
        return Gst.parse_launch(self.launch_string)
    
    # attaching the source element to the rtsp media
    def do_configure(self, rtsp_media):
        self.number_frames = 0
        appsrc = rtsp_media.get_element().get_child_by_name('source')
        appsrc.connect('need-data', self.on_need_data)

# Rtsp server implementation where we attach the factory sensor with the stream uri
class GstServer(GstRtspServer.RTSPServer):
    def __init__(self, **properties):
        super(GstServer, self).__init__(**properties)

        self.factory = SensorFactory()

        self.factory.set_launch("( udpsrc name=pay0 port=%d caps=\"application/x-rtp, media=video, clock-rate=90000, encoding-name=(string)%s )" % (int(5400), "H264"))

        self.factory.set_shared(True)
        self.set_service(str(opt.port))
        self.get_mount_points().add_factory(opt.stream_uri, self.factory)
        self.attach(None)

About this topic, you can ask the author on GitHub directly.
We suggest you refer to the following link to use DeepStream. https://github.com/NVIDIA-AI-IOT/deepstream_python_apps/tree/master/apps/deepstream-test1-rtsp-out

I don’t think you need to subclass RTSPMediaFactory.
You can refer to this simple example (note it uses hardware-accelerated plugins; you can switch to CPU plugins if needed).

import gi
gi.require_version('Gst', '1.0')
gi.require_version('GstRtspServer', '1.0')
from gi.repository import Gst, GstRtspServer, GLib
Gst.init([])


def create_rtsp_server():
    # Serve the RTP packets arriving on UDP port 5400 as an RTSP stream on port 8554.
    server = GstRtspServer.RTSPServer.new()
    server.set_property("service", "8554")
    server.attach(None)

    factory = GstRtspServer.RTSPMediaFactory.new()

    # This factory pipeline only starts when a client connects; it simply
    # picks up whatever the producer pipeline is already sending.
    pipeline = "( udpsrc port=5400 name=pay0 caps=\"application/x-rtp, media=video, clock-rate=90000, encoding-name=(string)H264, payload=96 \" )"
    factory.set_launch(pipeline)
    factory.set_shared(True)
    server.get_mount_points().add_factory("/stream", factory)


def process_video(path):
    # Producer pipeline: decode the file, re-encode to H264, and keep pushing
    # RTP packets to UDP port 5400 whether or not any RTSP client is connected.
    pipeline = Gst.parse_launch(f" filesrc location={path} \
                                ! decodebin \
                                ! nvvideoconvert \
                                ! nvv4l2h264enc \
                                ! h264parse \
                                ! rtph264pay \
                                ! udpsink host=127.0.0.1 port=5400 sync=true async=false ")

    create_rtsp_server()
    pipeline.set_state(Gst.State.PLAYING)
    loop = GLib.MainLoop.new(None, False)
    loop.run()


process_video("path/to/mp4/video.mp4")

Based on this, you can add your OpenCV injector code.
Instead of loop.run(), you can run your cap.read() loop as the main loop.

(You can view your RTSP stream at rtsp://localhost:8554/stream)
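As a rough sketch of such an injector (assumptions: a 640x480 BGR source at 30 fps, a software x264 encoder instead of the hardware plugins above, and the create_rtsp_server helper defined earlier):

import cv2
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init([])

# Producer: push OpenCV frames into appsrc instead of reading a file.
pipeline = Gst.parse_launch(
    "appsrc name=source is-live=true format=time "
    "caps=video/x-raw,format=BGR,width=640,height=480,framerate=30/1 "
    "! videoconvert ! x264enc tune=zerolatency "
    "! rtph264pay ! udpsink host=127.0.0.1 port=5400")
appsrc = pipeline.get_by_name("source")

create_rtsp_server()  # same helper as above
pipeline.set_state(Gst.State.PLAYING)

cap = cv2.VideoCapture(0)
duration = Gst.SECOND // 30  # one frame at 30 fps, in nanoseconds
frame_id = 0
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    # ... your OpenCV processing on `frame` goes here ...
    frame = cv2.resize(frame, (640, 480))  # must match the appsrc caps
    buf = Gst.Buffer.new_wrapped(frame.tobytes())
    buf.pts = buf.dts = frame_id * duration
    buf.duration = duration
    frame_id += 1
    if appsrc.emit("push-buffer", buf) != Gst.FlowReturn.OK:
        break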


Can you please show me how to achieve this?

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks

Have you tried using the current rtsp-out demo code of DeepStream to see if it works?

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.