Jetson Nano CSI Camera to RTSP Stream, MP4 Recording and JPEG Snapshot

Hi, I have some questions about converting a CSI camera feed into an RTSP stream, and also about taking an MP4 recording and a JPEG snapshot on command.

I have followed the DeepStream SDK Development Guide and successfully compiled and tested some of the DeepStream Python examples.
From the examples I can:

  1. Use a USB camera and render to a display.
  2. Stream to RTSP from a file.
  3. Use a gst-launch command line to display the camera and save the video to MP4.

However, I am quite confused about how to proceed from where I am right now. The Python apps have no example of using a CSI camera or of saving the output to MP4.
Are there any guidelines or tutorials I can follow to understand the choice of sources and sinks for the pipeline?
I can understand the examples, but I cannot find the available options or an explanation of the code (the API) in the DeepStream SDK guides.
Any links or pointers toward solving this will be much appreciated.

Hi,
You may customize deepstream-test1-rtsp-out to construct the use case. Please check the default sample:
deepstream_python_apps/apps/deepstream-test1-rtsp-out at master · NVIDIA-AI-IOT/deepstream_python_apps · GitHub

The source implementation is

filesrc ! h264parse ! nvv4l2decoder ! nvstreammux ! ...

Need to change to

nvarguscamerasrc bufapi-version=1 ! nvstreammux ! ...

The sink implementation is

... ! nvv4l2h264enc ! rtph264pay ! udpsink

along with the function calls that set up the RTSP server.

Need to change to

... ! nvv4l2h264enc ! h264parse ! qtmux ! filesink

and remove the RTSP server setup.
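A rough sketch of what those two changes look like in the Python app (variable names are illustrative, not the exact names from the sample; everything between the source and the encoder stays as in deepstream-test1-rtsp-out):

# Source side: replace filesrc ! h264parse ! nvv4l2decoder with the CSI camera
source = Gst.ElementFactory.make("nvarguscamerasrc", "csi-camera")
source.set_property("bufapi-version", True)  # emit DeepStream (NvBufSurface) buffers

# Sink side: replace rtph264pay ! udpsink (and the RTSP server code) with an MP4 file sink
parser = Gst.ElementFactory.make("h264parse", "parser")
muxer = Gst.ElementFactory.make("qtmux", "muxer")
filesink = Gst.ElementFactory.make("filesink", "filesink")
filesink.set_property("location", "out.mp4")

for elem in (source, parser, muxer, filesink):
    pipeline.add(elem)
# ... link the source into nvstreammux and the rest of the sample as before, then:
encoder.link(parser)
parser.link(muxer)
muxer.link(filesink)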


Hey @DaneLLL,

Thanks for your help!
I have now managed to get data from the CSI camera and stream it through RTSP.

However, the RTSP stream from the modified example has about 1000 ms of delay, whereas if I use the following gst-launch pipeline:
“nvarguscamerasrc ! video/x-raw(memory:NVMM), width=1080, height=720, framerate=120/1, format=NV12 ! nvv4l2h264enc maxperf-enable=1 insert-sps-pps=true bitrate=10000000 ! h264parse ! rtph264pay name=pay0 pt=96”
I only get about 200 ms of delay, which is what I want.
How do I add all the other parameters and bufapi-version=1 to my nvarguscamerasrc source in the Python code?
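In Python I assume each of these options maps to a set_property() call, with the caps string going into a capsfilter, roughly like the sketch below (values copied from the command line above), but I am not sure this is the intended way:

source = Gst.ElementFactory.make("nvarguscamerasrc", "csi-camera")
source.set_property("bufapi-version", True)

caps = Gst.ElementFactory.make("capsfilter", "src-caps")
caps.set_property("caps", Gst.Caps.from_string(
    "video/x-raw(memory:NVMM), width=1080, height=720, framerate=120/1, format=NV12"))

encoder = Gst.ElementFactory.make("nvv4l2h264enc", "encoder")
encoder.set_property("maxperf-enable", 1)
encoder.set_property("insert-sps-pps", True)
encoder.set_property("bitrate", 10000000)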

Another question: I want to stream through RTSP, but at certain times I also want to be able to record MP4 and take a JPEG picture. I want the MP4 and JPEG to be in 4K resolution while the RTSP stream stays at HD 1080p.
How can I achieve this? I assume the stream mux can do this: I would split the camera source, sending one branch to the RTSP stream while reducing its resolution through nvcap, and using another mux to save the other branch to MP4.
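Roughly, I imagine splitting the source into two branches, something like this sketch (untested; the caps, bitrates and addresses are placeholders):

pipeline = Gst.parse_launch(
    "nvarguscamerasrc ! video/x-raw(memory:NVMM), width=3840, height=2160, framerate=30/1, format=NV12 ! "
    "tee name=t "
    "t. ! queue ! nvvidconv ! video/x-raw(memory:NVMM), width=1920, height=1080 ! "
    "nvv4l2h264enc insert-sps-pps=true ! rtph264pay name=pay0 pt=96 ! udpsink host=127.0.0.1 port=5400 "
    "t. ! queue ! nvv4l2h264enc ! h264parse ! qtmux ! filesink location=record_4k.mp4"
)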

Hey DaneLLL,

I have attached my code here for reference:

import sys
import gi
gi.require_version('Gst', '1.0')
gi.require_version('GstRtspServer', '1.0')
from gi.repository import GObject, Gst, GstRtspServer
from common.is_aarch_64 import is_aarch64
from common.bus_call import bus_call

import pyds

bitrate=10000000

def main(args):

    # Standard GStreamer initialization
    GObject.threads_init()
    Gst.init(None)

    # Create gstreamer elements
    # Create Pipeline element that will form a connection of other elements
    print("Creating Pipeline \n ")
    pipeline = Gst.Pipeline()

    if not pipeline:
        sys.stderr.write(" Unable to create Pipeline \n")

    # Source element for reading from the file
    print("Creating Source \n ")
    source = Gst.ElementFactory.make("nvarguscamerasrc", "src-elem")
    if not source:
        sys.stderr.write(" Unable to create Source \n")

    nvvidconv_postosd = Gst.ElementFactory.make("nvvideoconvert", "convertor_postosd")
    if not nvvidconv_postosd:
        sys.stderr.write(" Unable to create nvvidconv_postosd \n")

    # Create a caps filter
    caps = Gst.ElementFactory.make("capsfilter", "filter")
    caps.set_property("caps", Gst.Caps.from_string("video/x-raw(memory:NVMM), format=NV12"))

    # Make the encoder
    encoder = Gst.ElementFactory.make("nvv4l2h264enc", "encoder")
    print("Creating H264 Encoder")
    if not encoder:
        sys.stderr.write(" Unable to create encoder")
    encoder.set_property('maxperf-enable',1)
    encoder.set_property('bitrate', bitrate)
    if is_aarch64():
        encoder.set_property('insert-sps-pps', 1)
    # Make the payload-encode video into RTP packets
    rtppay = Gst.ElementFactory.make("rtph264pay", "rtppay")
    print("Creating H264 rtppay")
    if not rtppay:
        sys.stderr.write(" Unable to create rtppay")
    # Make the UDP sink
    updsink_port_num = 5400
    sink = Gst.ElementFactory.make("udpsink", "udpsink")
    if not sink:
        sys.stderr.write(" Unable to create udpsink")

    sink.set_property('host', '224.224.255.255')
    sink.set_property('port', updsink_port_num)
    sink.set_property('async', False)
    sink.set_property('sync', 1)



    print("Creating EGLSink \n")
    source.set_property('bufapi-version', True)

    #sink.set_property('sync', False)

    print("Adding elements to Pipeline \n")

    pipeline.add(source)
    pipeline.add(nvvidconv_postosd)
    pipeline.add(caps)
    pipeline.add(encoder)
    pipeline.add(rtppay)
    pipeline.add(sink)


    # we link the elements together
    print("Linking elements in the Pipeline \n")
    source.link(nvvidconv_postosd)
    nvvidconv_postosd.link(caps)
    caps.link(encoder)
    encoder.link(rtppay)
    rtppay.link(sink)


    # create an event loop and feed gstreamer bus mesages to it
    loop = GObject.MainLoop()
    bus = pipeline.get_bus()
    bus.add_signal_watch()
    bus.connect ("message", bus_call, loop)

    # Start streaming
    rtsp_port_num = 8554

    server = GstRtspServer.RTSPServer.new()
    server.props.service = "%d" % rtsp_port_num
    server.attach(None)

    factory = GstRtspServer.RTSPMediaFactory.new()
    factory.set_launch( "( udpsrc name=pay0 port=%d buffer-size=524288 caps=\"application/x-rtp, media=video, clock-rate=90000, encoding-name=(string)%s, payload=96 \" )" % (updsink_port_num, "H264"))
    factory.set_shared(True)
    server.get_mount_points().add_factory("/test", factory)
 print("\n *** DeepStream: Launched RTSP Streaming at rtsp://localhost:%d/test ***\n\n" % rtsp_port_num)


    # start play back and listen to events
    print("Starting pipeline \n")
    pipeline.set_state(Gst.State.PLAYING)
    try:
        loop.run()
    except:
        pass
    # cleanup
    pipeline.set_state(Gst.State.NULL)
if __name__ == '__main__':
    sys.exit(main(sys.argv))
      

This code works almost fine, but it has a 1000 ms delay, while the gst-launch command line has only about 200 ms. What modification should I make to remove the delay, or is this a limitation of the pipeline?
@DaneLLL @kayccc any comments?

Hi,
We suggest running deepstream-app as a comparison. There is a reference config file:

/opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/source1_csi_dec_infer_resnet_int8.txt

You may enable a type=4 (RTSP) sink and check the latency.
https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_ref_app_deepstream.html#sink-group
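
The RTSP output is configured through a sink group in the config file; a sketch of the relevant section (values here are illustrative, see the sink-group documentation above):

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming
type=4
#1=h264 2=h265
codec=1
sync=0
bitrate=4000000
rtsp-port=8554
udp-port=5400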

Hi,

I saw the .txt file; how do I apply it to the actual Python example?
Also, what about the other questions?
The DeepStream Python example gives a 1 s delay, but the gst-launch command line works without noticeable delay.

Hey @DaneLLL,

I have now managed to stream from the CSI camera while recording it.
I use a tee to branch out the source, with matroskamux followed by filesink on the recording branch.
The MP4 is playable, but I cannot pause it or seek in it. On the command line this is solved by adding -e; how can I do the equivalent in a Python program?
In addition, how do I start and stop the MP4 recording from an external command, so that I don't record the whole stream but only a specific part of it?

Best Regards,
Widy

Hey @DaneLLL,

I checked deepstream-app in C with the config file, and it shows the RTSP stream without delay, which is what I want.
Does that mean the RTSP implementation in Python is problematic, since it adds latency?
I also have a few unanswered questions:

  1. How do I send a command to start and stop recording? Are the smart recording examples the right direction to go?
  2. I can now split the stream into an RTSP stream and an MP4 file, but as said before, the recording has no duration information and cannot be fast-forwarded or seeked.
  3. How do I process the images with OpenCV from a running pipeline?

Best Regards,
Widy

Hi,
When running the Python code, it goes through one more software stack:

/opt/nvidia/deepstream/deepstream-5.0/lib/pyds.so

This may introduce some latency. If it does not meet your requirement, we suggest using deepstream-app in C.

Smart recording is for saving frames with detected objects instead of saving all frames. If that is your use case, you may check the documentation.

If it is a complete MP4 file, you should be able to seek in it during playback. Seeking and fast-forward are usually not supported for live streaming.
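
To get a complete MP4 from a Python pipeline, one common approach (the rough equivalent of gst-launch-1.0 -e) is to send EOS and wait for it to reach the sink before stopping, so that qtmux/matroskamux can finalize the file. A sketch, assuming the pipeline object from your code above:

# Finalize the MP4 before shutting down (rough equivalent of gst-launch -e)
pipeline.send_event(Gst.Event.new_eos())
# Wait (up to 5 s) for the EOS to travel through the muxer and sink
bus = pipeline.get_bus()
bus.timed_pop_filtered(5 * Gst.SECOND, Gst.MessageType.EOS | Gst.MessageType.ERROR)
pipeline.set_state(Gst.State.NULL)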

For accessing the buffer via OpenCV, you can take a look at

/opt/nvidia/deepstream/deepstream-5.0/sources/gst-plugins/gst-dsexample/README
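
In the Python bindings, a pad probe can map the frame into a NumPy array for OpenCV. A rough sketch (untested here; it assumes a batched, RGBA-converted buffer, as in the deepstream-imagedata-multistream sample):

import cv2
import numpy as np
import pyds
from gi.repository import Gst  # assumes gi.require_version('Gst', '1.0') was called earlier

def buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        # Map the RGBA frame into a NumPy array and hand a copy to OpenCV
        n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
        frame_bgr = cv2.cvtColor(np.array(n_frame, copy=True), cv2.COLOR_RGBA2BGR)
        cv2.imwrite("frame_%d.jpg" % frame_meta.frame_num, frame_bgr)
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK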

Hey @DaneLLL,

Yeah, I am planning to use deepstream-app in C.

That is not my use case. My use case is, let's say, that I stream the video through RTSP and the stream has already been running for 10 minutes. I want to be able to get two MP4 files that are parts of that 10-minute stream. So, in the middle of the stream, I can send a command to start and stop the recording, so that only part of the stream is recorded. I don't know where to go from here other than running the examples and the pipelines.
Is there any documentation on how to modify deepstream-app in C?

Best Regards,
Widy

Hi,
The multifilesink element may be used in this use case:
multifilesink: GStreamer Good Plugins 1.0 Plugins Reference Manual

We usually set it up to split at fixed conditions such as reaching a file size or getting a key frame, but you would like to start/stop it arbitrarily. We are not sure whether it can be configured for such a case; other users may need to share their experience.
We usually set up to split at fixed conditions such as reaching file size, getting key frame. But you would like to arbitrarily stop/start it. Not sure if it is possible to configure it in such case. May need other users to share experience.