I am developing an application that captures video from an RTSP stream, processes it, and records it. My two main requirements are:
Minimal latency in capture
Maximum efficiency in using the Nvidia hardware
For capture I use:
cv2.VideoCapture("rtspsrc location=rtsp://192.168.12.100:554/stream1 latency=0 ! rtph265depay ! h265parse ! nvv4l2decoder ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! appsink", cv2.CAP_GSTREAMER)
This is working well, but I want to make sure: is this the best way to do it?
Is there a lower latency way of doing this?
Does using cv2 add latency?
Should I try using the GStreamer Python bindings directly, without OpenCV?
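To make that last question concrete, this is roughly what I have in mind: pulling frames through the GStreamer Python bindings and appsink instead of cv2.VideoCapture (a rough sketch I have not tested; the callback and variable names are mine, the pipeline is the same as above):

import numpy as np
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)
pipeline = Gst.parse_launch(
    "rtspsrc location=rtsp://192.168.12.100:554/stream1 latency=0 ! "
    "rtph265depay ! h265parse ! nvv4l2decoder ! nvvidconv ! "
    "video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! "
    "appsink name=sink emit-signals=true max-buffers=1 drop=true sync=false")
sink = pipeline.get_by_name("sink")

def on_new_sample(appsink):
    # Pull the decoded BGR frame out of appsink and wrap it in a numpy array
    sample = appsink.emit("pull-sample")
    caps = sample.get_caps().get_structure(0)
    height, width = caps.get_value("height"), caps.get_value("width")
    buf = sample.get_buffer()
    ok, mapinfo = buf.map(Gst.MapFlags.READ)
    if ok:
        frame = np.ndarray((height, width, 3), dtype=np.uint8, buffer=mapinfo.data)
        # ... process / record the frame here ...
        buf.unmap(mapinfo)
    return Gst.FlowReturn.OK

sink.connect("new-sample", on_new_sample)
pipeline.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()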
For recording I use:
cv2.VideoWriter('appsrc ! video/x-raw, format=BGR ! queue ! videoconvert ! video/x-raw,format=RGBA ! nvvidconv ! nvv4l2h264enc ! h264parse ! qtmux ! filesink location=video.mov', cv2.CAP_GSTREAMER, 30, (width, height))
I am getting a file, but I can't play it with VLC or any other player I have tried. What am I doing wrong?
This is the optimal way of hooking GStreamer up with OpenCV. The hardware engines do not support BGR, so an additional buffer copy is required when running a GStreamer pipeline in
cv2.VideoCapture(). To eliminate the buffer copy, you can try to get the NvBufSurface in appsink. It will be similar to this sample:
How to run RTP Camera in deepstream on Nano - #29 by DaneLLL
For Orin we use NvBufSurface instead of NvBuffer, so the way of getting the NvBufSurface is the same as in this patch:
Jetson Nano CSI Raspberry Pi Camera V2 upside down video when run an example with deepstream-app - #7 by DaneLLL
The function calls are:
/* Inside the buffer callback: map the GstBuffer and access it as an NvBufSurface */
GstBuffer *buf = (GstBuffer *) info->data;
GstMapInfo outmap = GST_MAP_INFO_INIT;
gst_buffer_map (buf, &outmap, GST_MAP_WRITE);
NvBufSurface *surface = (NvBufSurface *) outmap.data;
/* ... use surface ..., then call gst_buffer_unmap (buf, &outmap) when done */
You may try the matroskamux plugin. Here is a sample for reference:
Displaying to the screen with OpenCV and GStreamer - #9 by DaneLLL
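For example, based on the writer pipeline in your first post, it could look like this (a sketch, not verified on my side; width, height and the 30 fps value are placeholders for your stream, and fourcc is set to 0 for a GStreamer pipeline):

import cv2

width, height = 1920, 1080   # placeholders, set to your actual frame size
writer = cv2.VideoWriter(
    "appsrc ! video/x-raw,format=BGR ! queue ! videoconvert ! "
    "video/x-raw,format=RGBA ! nvvidconv ! nvv4l2h264enc ! h264parse ! "
    "matroskamux ! filesink location=video.mkv",
    cv2.CAP_GSTREAMER, 0, 30.0, (width, height))

# ... call writer.write(frame) for each BGR frame ...
writer.release()   # sends EOS so the muxer can finalize the file

One difference between the muxers is that qtmux writes its index (the moov atom) only when it receives EOS, so a .mov recording that is not closed cleanly generally cannot be played back, while a Matroska file usually stays readable.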
Thank you very much for the quick reply!
I am not sure I understand how to implement your recommendation regarding applying the patch (I am not experienced with this yet):
When I update the code with
g_object_set (G_OBJECT (bin->src_elem), "bufapi-version", FALSE, NULL);
the message is
ERROR from src_bin_muxer: Input buffer number of surfaces (0) must be equal to mux->num_surfaces_per_frame (1)
For reference, I have attached the output file and deepstream_source_bin.c.
Note: I tried different code updates to deepstream_source_bin.c without success. Finally, I went back to the original code and updated it only with the previous recommendation, and the error me…
Where are the files which need to be patched located in the Orin filesystem?
What do I need to do after updating the files?
After applying the patch, is there any change necessary in the Python code?
Could you supply some more information regarding this?
It may not be easy to implement a full solution based on the suggestion. Please use OpenCV and execute sudo jetson_clocks to run the CPU cores at their maximum clock. This should bring maximum throughput when getting buffers in OpenCV.
If your use case is to run deep learning inference, you may consider using the DeepStream SDK:
NVIDIA Metropolis Documentation
Thank you - I will try this as a work-around.
Do you have an idea of what latency I should expect?
I am currently seeing between 200 and 300 milliseconds; does this make sense?
It looks expected. The RTSP source is an h264/h265 stream, and buffering the stream is required.
For comparison, you may try this setup:
Gstreamer TCPserversink 2-3 seconds latency - #5 by DaneLLL
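As a rough comparison, you can also render the decoded stream directly, bypassing appsink and OpenCV, and see how the observed delay differs (a sketch; nv3dsink is assumed to be available on Orin, autovideosink can be substituted):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)
# Decode and display directly; no appsink, no BGR conversion, no OpenCV
pipeline = Gst.parse_launch(
    "rtspsrc location=rtsp://192.168.12.100:554/stream1 latency=0 ! "
    "rtph265depay ! h265parse ! nvv4l2decoder ! nv3dsink sync=false")
pipeline.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()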