I have 3 IP cameras that I read over RTSP, but when receiving video from all 3 cameras at the same time the video has about a 5 s delay (receiving from 1 camera is real-time).
The receiving code is just cv2.VideoCapture(url), where url is the RTSP address of the IP camera.
Maybe receiving through GStreamer will be real-time, so I ran the following code:
Terminal:
ace@ace-desktop:~/acecombat$ python3 ipc.py
[ WARN:0] global /home/ace/opencv/modules/videoio/src/cap_gstreamer.cpp (2076) handleMessage OpenCV | GStreamer warning: Embedded video playback halted; module rtspsrc0 reported: Internal data stream error.
[ WARN:0] global /home/ace/opencv/modules/videoio/src/cap_gstreamer.cpp (1053) open OpenCV | GStreamer warning: unable to start pipeline
[ WARN:0] global /home/ace/opencv/modules/videoio/src/cap_gstreamer.cpp (616) isPipelinePlaying OpenCV | GStreamer warning: GStreamer: pipeline have not been created
Also: cv2.getBuildInformation() shows GStreamer: YES.
I have run sudo apt install nvidia-l4t-* but it does not help.
Is my pipeline wrong, or is there an error in the GStreamer configuration?
Thanks.
If you must run a GStreamer pipeline in cv2.VideoCapture(), please execute sudo nvpmodel -m 0 and sudo jetson_clocks first. This locks the CPU cores at their maximum clock rate, which is the best setting for this use case.
I wonder whether GStreamer can solve the video-stream delay when a Jetson connects to three IP cameras.
I'm not sure the pipeline I wrote is correct; can you show me a correct pipeline so that Python OpenCV can read RTSP video through GStreamer?
Thank you very much.
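A minimal sketch of such a pipeline (untested on your exact setup; the helper name is mine, not from the thread). The latency=0 property on rtspsrc and the one-frame appsink queue are what cut the delay; on a Jetson you could try swapping avdec_h264 for the hardware decoder nvv4l2decoder if your OpenCV build supports it:

```python
# Hypothetical helper (my naming): build a GStreamer pipeline string for
# cv2.VideoCapture. latency=0 shrinks rtspsrc's jitter buffer, and
# appsink drop=true max-buffers=1 keeps only the newest frame, which is
# what removes the multi-second delay when reading several cameras.
def make_capture_pipeline(rtsp_url, latency_ms=0):
    return (
        "rtspsrc location={url} latency={latency} ! "
        "rtph264depay ! h264parse ! avdec_h264 ! "
        "videoconvert ! video/x-raw,format=BGR ! "
        "appsink drop=true max-buffers=1"
    ).format(url=rtsp_url, latency=latency_ms)

if __name__ == "__main__":
    import cv2
    url = "rtsp://169.254.224.11/user=admin&password=&channel=1&stream=0.sdp?"
    cap = cv2.VideoCapture(make_capture_pipeline(url), cv2.CAP_GSTREAMER)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imshow("camera", frame)
        if cv2.waitKey(1) == ord("q"):
            break
    cap.release()
```

This assumes the cameras send H.264; for H.265 streams the depay/parse/decode elements would need to be their H.265 counterparts.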
Thank you for your help; my problem has been solved.
I have another question to ask you.
I want to use YOLO to detect objects in the acquired images, and then convert the OpenCV images to an RTSP stream and push it to another site. How do I use GStreamer to do that?
To get the RTSP video I use cv2.VideoCapture() with a GStreamer pipeline string, but I don't know how to push the output. Which OpenCV function do I need?
Thank you for your generous answer.
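On the OpenCV side, the function for pushing frames out is cv2.VideoWriter, given a GStreamer pipeline that starts with appsrc. A hedged sketch (host, port, fps and frame size are placeholders for your setup; the udpsink tail sends RTP over UDP, and you would still need an RTSP server, e.g. one built on gst-rtsp-server, to re-serve it as an rtsp:// URL):

```python
# Sketch only: appsrc receives the BGR frames passed to
# cv2.VideoWriter.write(), x264enc encodes them, rtph264pay packetizes
# them, and udpsink sends the RTP stream to host:port (placeholders).
def make_push_pipeline(host, port):
    return (
        "appsrc ! videoconvert ! x264enc tune=zerolatency ! "
        "rtph264pay config-interval=1 ! udpsink host={h} port={p}"
    ).format(h=host, p=port)

if __name__ == "__main__":
    import cv2
    fps, size = 25.0, (1280, 720)  # must match the frames you write
    out = cv2.VideoWriter(make_push_pipeline("192.168.0.100", 5000),
                          cv2.CAP_GSTREAMER, 0, fps, size, True)
    # inside your capture + YOLO loop:
    #     out.write(frame)   # frame: BGR, exactly size[0] x size[1]
    # out.release()
```

The fourcc argument is 0 because the pipeline, not OpenCV, chooses the encoder; on a Jetson you could try the hardware encoder nvv4l2h264enc in place of x264enc.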
Hi, I use the command gst-launch-1.0 uridecodebin uri='rtsp://169.254.224.11/user=admin&password=&channel=1&stream=0.sdp?' ! nvvidconv ! nvegltransform ! nveglglessink
The terminal shows:
Setting pipeline to PAUSED …
Using winsys: x11
Pipeline is live and does not need PREROLL …
Got context from element 'eglglessink0': gst.egl.EGLDisplay=context, display=(GstEGLDisplay)NULL;
Progress: (open) Opening Stream
Progress: (connect) Connecting to rtsp://169.254.224.11/user=admin&password=&channel=1&stream=0.sdp?
ERROR: from element /GstPipeline:pipeline0/GstURIDecodeBin:uridecodebin0/GstRTSPSrc:source: Could not open resource for reading and writing.
Additional debug info:
gstrtspsrc.c(7469): gst_rtspsrc_retrieve_sdp (): /GstPipeline:pipeline0/GstURIDecodeBin:uridecodebin0/GstRTSPSrc:source:
Failed to connect. (Generic error)
ERROR: pipeline doesn't want to preroll.
Setting pipeline to PAUSED …
Setting pipeline to READY …
Setting pipeline to NULL …
Freeing pipeline …
Thanks.
gst-launch-1.0 rtspsrc location="rtsp://169.254.224.11/user=admin&password=&channel=1&stream=0.sdp?" ! fakesink
Setting pipeline to PAUSED …
Pipeline is live and does not need PREROLL …
Progress: (open) Opening Stream
Progress: (connect) Connecting to rtsp://169.254.224.11/user=admin&password=&channel=1&stream=0.sdp?
Progress: (open) Retrieving server options
Progress: (open) Retrieving media info
Progress: (request) SETUP stream 0
Progress: (request) SETUP stream 1
Progress: (open) Opened Stream
Setting pipeline to PLAYING …
New clock: GstSystemClock
Progress: (request) Sending PLAY request
Progress: (request) Sending PLAY request
Progress: (request) Sent PLAY request
WARNING: from element /GstPipeline:pipeline0/GstRTSPSrc:rtspsrc0: Could not read from resource.
Additional debug info:
gstrtspsrc.c(5427): gst_rtspsrc_reconnect (): /GstPipeline:pipeline0/GstRTSPSrc:rtspsrc0:
Could not receive any UDP packets for 5.0000 seconds, maybe your firewall is blocking it. Retrying using a tcp connection.
However, when I use python cv2.VideoCapture("rtsp://169.254.224.11/user=admin&password=&channel=1&stream=0.sdp?") and cv2.imshow(), I can get the video.
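The warning above says no UDP packets arrived for 5 seconds and rtspsrc retried over TCP; OpenCV's default FFmpeg backend probably works because it negotiates TCP itself. You can tell rtspsrc to use TCP from the start with its protocols property. A minimal sketch (the helper name is mine, not from the thread; URL as posted above):

```python
# Build a capture pipeline that forces RTSP over TCP instead of UDP,
# for networks where a firewall drops the RTP/UDP packets.
def make_tcp_pipeline(rtsp_url):
    return (
        "rtspsrc location={u} protocols=tcp latency=0 ! "
        "rtph264depay ! h264parse ! avdec_h264 ! "
        "videoconvert ! video/x-raw,format=BGR ! appsink"
    ).format(u=rtsp_url)

if __name__ == "__main__":
    import cv2
    url = "rtsp://169.254.224.11/user=admin&password=&channel=1&stream=0.sdp?"
    cap = cv2.VideoCapture(make_tcp_pipeline(url), cv2.CAP_GSTREAMER)
    print(cap.isOpened())
```

The same protocols=tcp property can be appended to the gst-launch-1.0 rtspsrc command above to verify the TCP path from the command line first.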
Hi,
You would need to figure out a valid URI that rtspsrc can read correctly, so that nvv4l2decoder can be used for hardware decoding. Please check whether the camera vendor can help.
Not sure if it works, but you may try adding real_stream at the tail, like
Hi, thank you for the suggestion. I added real_stream, and it still can't work…
The RTSP link I wrote follows the format given by the supplier. I added the port to the link and it can't receive the source, but the same port works in the vendor's computer software…
Terminal:
fwav@fwav-desktop:~/qwe$ python3 ipc.py
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (1757) handleMessage OpenCV | GStreamer warning: Embedded video playback halted; module rtspsrc0 reported: Internal data stream error.
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (886) open OpenCV | GStreamer warning: unable to start pipeline
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (480) isPipelinePlaying OpenCV | GStreamer warning: GStreamer: pipeline have not been created
Gtk-Message: 16:04:30.149: Failed to load module “canberra-gtk-module”
False
Traceback (most recent call last):
File "ipc.py", line 18, in
cv2.imshow("1", f1)
cv2.error: OpenCV(4.1.1) /home/nvidia/host/build_opencv/nv_opencv/modules/highgui/src/window.cpp:352: error: (-215:Assertion failed) size.width>0 && size.height>0 in function 'imshow'