OpenCV & GStreamer capture and record

I am developing an application which captures video from an RTSP stream, processes it, and records it.
I need:

  1. Minimal latency in capture
  2. Maximum efficiency in using Nvidia hardware

For capture I use:

cv2.VideoCapture("rtspsrc location=rtsp://192.168.12.100:554/stream1 latency=0 ! rtph265depay ! h265parse ! nvv4l2decoder ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! appsink", cv2.CAP_GSTREAMER)

This is working well, but I want to make sure: is this the best way to do it?
Is there a lower latency way of doing this?
Does using cv2 add latency?
Should I try using gstreamer python binding directly without opencv?
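For reference, here is a minimal sketch of wrapping that capture pipeline in a helper. The host, port, and stream path are the ones from the pipeline above; the `drop=true max-buffers=1` properties on appsink are an addition (a common low-latency tweak, not part of the original pipeline), and the cv2 read loop is shown in comments since it needs a live camera:

```python
def build_capture_pipeline(host="192.168.12.100", port=554, path="stream1", latency=0):
    """Build the GStreamer capture pipeline string for cv2.VideoCapture.

    latency=0 on rtspsrc minimizes jitter-buffer delay at the cost of
    robustness to network jitter. drop=true max-buffers=1 on appsink makes
    OpenCV always read the newest frame instead of queuing stale ones.
    """
    return (
        f"rtspsrc location=rtsp://{host}:{port}/{path} latency={latency} ! "
        "rtph265depay ! h265parse ! nvv4l2decoder ! "
        "nvvidconv ! video/x-raw,format=BGRx ! "
        "videoconvert ! video/x-raw,format=BGR ! "
        "appsink drop=true max-buffers=1"
    )

# Usage with OpenCV (requires a reachable RTSP camera, so shown as comments):
# import cv2
# cap = cv2.VideoCapture(build_capture_pipeline(), cv2.CAP_GSTREAMER)
# while cap.isOpened():
#     ok, frame = cap.read()
#     if not ok:
#         break
#     ...  # process frame here
# cap.release()

print(build_capture_pipeline())
```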

For recording I use:

cv2.VideoWriter('appsrc ! video/x-raw,format=BGR ! queue ! videoconvert ! video/x-raw,format=RGBA ! nvvidconv ! nvv4l2h264enc ! h264parse ! qtmux ! filesink location=video.mov', cv2.CAP_GSTREAMER, 30, (width, height))

I am getting a file, but I can't play it with VLC or any other player I have tried. What am I doing wrong?

Hi,

This is the optimal way of hooking GStreamer up with OpenCV. The hardware engines do not support BGR, so an additional buffer copy is required when running a GStreamer pipeline in cv2.VideoCapture(). To eliminate the buffer copy, you can try to get the NvBufSurface in appsink. It will be similar to this sample:
How to run RTP Camera in deepstream on Nano - #29 by DaneLLL
For Orin we use NvBufSurface instead of NvBuffer, so the way of getting the NvBufSurface is the same as in this patch:
Jetson Nano CSI Raspberry Pi Camera V2 upside down video when run an example with deepstream-app - #7 by DaneLLL
The function calls are

    /* Map the GstBuffer delivered to the callback and reinterpret its
     * mapped data pointer as an NvBufSurface (the Jetson hardware surface). */
    GstBuffer *buf = (GstBuffer *) info->data;
    GstMapInfo outmap = GST_MAP_INFO_INIT;
    gst_buffer_map (buf, &outmap, GST_MAP_WRITE);
    NvBufSurface *surface = (NvBufSurface *) outmap.data;

You may try matroskamux plugin. Here is a sample for reference:
Displaying to the screen with OpenCV and GStreamer - #9 by DaneLLL
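A hedged sketch of that matroskamux suggestion, keeping the other elements from the original recording pipeline. One likely reason the original .mov was unplayable: qtmux needs a clean EOS to write its index, and cv2.VideoWriter only sends EOS when release() is called, so skipping release() typically leaves the file broken:

```python
def build_record_pipeline(filename="video.mkv"):
    """Recording pipeline string for cv2.VideoWriter, using matroskamux.

    Matroska is more tolerant of an abrupt stop than MOV/qtmux, which must
    receive EOS to finalize the file before players can open it.
    """
    return (
        "appsrc ! video/x-raw,format=BGR ! queue ! "
        "videoconvert ! video/x-raw,format=RGBA ! "
        "nvvidconv ! nvv4l2h264enc ! h264parse ! "
        f"matroskamux ! filesink location={filename}"
    )

# Usage (needs OpenCV built with GStreamer, so shown as comments):
# import cv2
# writer = cv2.VideoWriter(build_record_pipeline(), cv2.CAP_GSTREAMER,
#                          30, (width, height))
# for frame in frames:
#     writer.write(frame)
# writer.release()  # sends EOS so the muxer can finalize the file

print(build_record_pipeline())
```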

Hi DaneLLL,
Thank you very much for the quick reply!
I am not sure I understand how to implement your recommendation regarding applying the patch (I am not experienced with this yet):

Where are the files which need to be patched located in the Orin filesystem?

sources/apps/apps-common/src/deepstream_source_bin.c

What do I need to do after updating the files?
After applying the patch - is there any change necessary in the python code?

Could you supply some more information regarding this?
Thanks again,
Eli

Hi,
It may not be easy to implement a full solution based on the suggestion. Please use OpenCV and execute sudo jetson_clocks to lock the CPU cores at their maximum clock. This should bring maximum throughput when getting buffers in OpenCV.

If your use case is to run deep learning inference, you may consider using the DeepStream SDK:
NVIDIA Metropolis Documentation

Hi DaneLLL,
Thank you - I will try this as a workaround.
Do you have an idea of what latency I should expect?
I am currently seeing between 200 and 300 milliseconds; does this make sense?
Eli

Hi,
That looks expected. The RTSP source is an h264/h265 stream, and buffering the stream is required.

For comparison, you may try this setup:
Gstreamer TCPserversink 2-3 seconds latency - #5 by DaneLLL

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.