View rtsp stream with opencv on jetson

This is my first post on this forum, and I’m not very familiar with GStreamer.

I want to watch an RTSP stream with OpenCV on a Jetson.
I could do the same thing on an x86-64 laptop.
First, I typed this command in a terminal:

gst-launch-1.0 rtspsrc location=rtsp://192.168.53.1/live latency=0 dulation=-1 ! decodebin ! videoconvert ! autovideosink

It worked fine, so I wrote this Python code:

import cv2

# GStreamer pipeline ending in appsink so OpenCV can pull frames
cap = cv2.VideoCapture("rtspsrc location=rtsp://192.168.53.1/live latency=0 dulation=-1 ! decodebin ! videoconvert ! appsink", cv2.CAP_GSTREAMER)

while True:
    ret, frame = cap.read()
    if not ret:
        break
    cv2.imshow("cap", frame)
    cv2.waitKey(1)

Yes, it works.
I want to do the same thing on the Jetson, but the above command does not work there.
I think “rtspsrc” is not available on the Jetson. I tried many things, but nothing worked.
So I researched alternatives, and this is what I found:

gst-launch-1.0 playbin uri=rtsp://192.168.53.1/live uridecodebin0::source::latency=0

It works on both laptop and jetson.
I had a problem getting this to work in Python:

cap = cv2.VideoCapture("playbin uri=rtsp://192.168.53.1/live uridecodebin0::source::latency=0")

and got this error:

open OpenCV | GStreamer warning: cannot find appsink in manual pipeline

How can I add “appsink” when using “playbin”? Or can I use “rtspsrc” and append “! appsink” at the end?

My Jetson is an AGX Xavier running JetPack 5.0.2.
Thank you in advance.

Try:

cap = cv2.VideoCapture("uridecodebin uri=rtsp://192.168.53.1/live source::latency=0 ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! appsink drop=1", cv2.CAP_GSTREAMER)

# or
cap = cv2.VideoCapture("rtspsrc location=rtsp://192.168.53.1/live latency=0 ! decodebin ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! appsink drop=1", cv2.CAP_GSTREAMER)
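As a side note, the two suggested pipelines only differ in their source part, so they can be kept as one parameterized string in Python. This is a hypothetical helper, not something from the thread; the URI and latency values are the ones used above:

```python
def rtsp_capture_pipeline(uri, latency=0, use_uridecodebin=True):
    """Build a GStreamer pipeline string for cv2.VideoCapture(..., cv2.CAP_GSTREAMER)."""
    # Convert decoded frames to BGR in system memory and hand them to appsink
    convert = ("nvvidconv ! video/x-raw,format=BGRx ! "
               "videoconvert ! video/x-raw,format=BGR ! appsink drop=1")
    if use_uridecodebin:
        return f"uridecodebin uri={uri} source::latency={latency} ! {convert}"
    return f"rtspsrc location={uri} latency={latency} ! decodebin ! {convert}"

pipeline = rtsp_capture_pipeline("rtsp://192.168.53.1/live", latency=0)
print(pipeline)
# cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
```

This makes it easy to try different latency values later without editing the pipeline by hand.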

That code doesn’t work; it fails with this message:

Opening in BLOCKING MODE 
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261 

** (python3:4753): CRITICAL **: 09:26:37.137: gst_buffer_add_video_time_code_meta_full: assertion 'GST_IS_BUFFER (buffer)' failed

(4753 is the PID and 09:26:37.137 is the time.)
I got the same error when using gst-launch-1.0.
What does this message mean, and what can I do?

You may add the verbose flag (-v) to gst-launch in the working playbin case; it will show which plugins and caps are used in the working pipeline.
Post them here if they are not clear to you, along with some details about your stream from:

gst-discoverer-1.0 rtsp://192.168.53.1/live

This is the result of gst-discoverer-1.0 rtsp://192.168.53.1/live:

Analyzing rtsp://192.168.53.1/live
Opening in BLOCKING MODE 
Done discovering rtsp://192.168.53.1/live
Analyzing URI timed out

Topology:
  unknown: application/x-rtp
    video: H.264 (Main Profile)

Properties:
  Duration: 99:99:99.999999999
  Seekable: no
  Live: yes

This gives a lot of output, and I don’t know what to look at.

And I realized one thing.
The RTSP stream I want to view is actually a drone’s camera.
I used VLC on another computer to serve an mp4 video over RTSP, and your code (uridecodebin) worked fine.
Why can’t I see the drone’s camera…

I’m concerned about this message:
** CRITICAL ** : gst_buffer_add_video_time_code_meta_full: assertion 'GST_IS_BUFFER (buffer)' failed
It does not appear when using the VLC stream.

Hmm… is it a Wi-Fi network?

You may post that piece of information so I can try to provide a working pipeline. You can stop it after the video displays; mostly the pipeline setup is relevant here. I cannot help further without it.

DRONE —wireless→ CONTROLLER —USB Ethernet→ COMPUTER
The laptop does not produce that output. Here is the laptop’s output:

Analyzing rtsp://192.168.53.1/live
Done discovering rtsp://192.168.53.1/live

Topology:
  unknown: application/x-rtp
    video: H.264 (Main Profile)

Properties:
  Duration: 99:99:99.999999999
  Seekable: no
  Live: yes
  Tags: 
      video codec: H.264 (Main Profile)

OK, command:
gst-launch-1.0 -v uridecodebin uri=rtsp://192.168.53.1/live source::latency=0 ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! appsink drop=1 > gst-v.txt
and output:
gst-v.txt (50.1 KB)
This command got stuck, so I hit Ctrl-C after a while.
(I tried it with “appsink”. Is that OK?)

From the attached log, it seems to be running fine from gst-launch.

Using appsink with gst-launch is not a good idea: no application frees the buffers on the sink, so memory usage grows continuously until failure unless you interrupt it first. Better to use fakesink instead for testing the pipeline.

You may confirm it receives/decodes fine with (assuming you have a X display):

gst-launch-1.0 -v uridecodebin uri=rtsp://192.168.53.1/live source::latency=0 ! nvvidconv ! xvimagesink

You should be able to see the video feed.

If that is OK, you can try this Python code:

import cv2
print(cv2.getBuildInformation())

# RTSP capture pipeline
gst_cap = "uridecodebin uri=rtsp://192.168.53.1/live source::latency=0 ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! appsink drop=1"
cap = cv2.VideoCapture(gst_cap, cv2.CAP_GSTREAMER)
if not cap.isOpened():
    print("capture open failed")
    exit(-1)
w = cap.get(cv2.CAP_PROP_FRAME_WIDTH)
h = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)
fps = cap.get(cv2.CAP_PROP_FPS)
print('Src opened, %dx%d @ %d fps' % (w, h, fps))

# Video writer to fpsdisplaysink using xvimagesink
gst_out = "appsrc ! video/x-raw, format=BGR ! queue ! videoconvert ! fpsdisplaysink video-sink=xvimagesink "
out= cv2.VideoWriter(gst_out, cv2.CAP_GSTREAMER, 0, float(fps), (int(w), int(h)))
if not out.isOpened():
    print("Failed to open writer")
    exit(-2)

# Main loop
while True:
    ret, frame = cap.read()
    if not ret:
        break

    # Some process here on frame...

    # imshow may not be that efficient on some Jetsons such as Nano, better use videowriter:
    out.write(frame)
    #cv2.imshow('Test', frame)
    #cv2.waitKey(1)

cap.release()
out.release()

If not working, you may post the generated output.

It outputs a black window named “gst-launch-1.0”, but nothing else happens.
Here is the log:
gst-v_out2.txt (38.8 KB)

And I tried your Python code.
It doesn’t work: “gst_buffer_add_video_time_code_meta_full” again.
There is no extra window and no extra message.
python_out.txt (6.1 KB)

Maybe the latency is too low for your network. Increase it to 500 (or remove it; the default is 2000 ms).

In case it doesn’t work out, someone more skilled may have to answer.
I have spent almost an hour trying to help with your case. I’m helping for free, but only for polite users.

I tried latency=500, 1000, and 2000, and also removing it.
The result did not change…

I owe you my apologies and my gratitude.
I’m not a native English speaker, so I apologize if I was rude.
(Most of my messages are based on Google Translate.)
Thank you very much for taking the time to help me.

It’s OK… sorry, I was a bit upset yesterday.

It may be an issue with the Parrot RTSP server’s encoder not sending IDR frames.

Can you try starting the receiver before the server?

Or try a CPU decoder (you may have to install ffmpeg if you haven’t already):

gst-launch-1.0 -v rtspsrc location=rtsp://192.168.53.1/live latency=500 is-live=1 ! application/x-rtp,encoding-name=H264 ! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! xvimagesink
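If that works, the same pipeline can be adapted for OpenCV by replacing the display sink (xvimagesink) with BGR conversion and appsink. This is only a sketch, not tested against the actual drone stream:

```python
# The CPU-decode gst-launch pipeline above, with xvimagesink replaced by
# BGR conversion + appsink so cv2.VideoCapture can pull frames.
uri = "rtsp://192.168.53.1/live"
pipeline = (
    f"rtspsrc location={uri} latency=500 is-live=1 ! "
    "application/x-rtp,encoding-name=H264 ! rtph264depay ! h264parse ! "
    "avdec_h264 ! videoconvert ! video/x-raw,format=BGR ! appsink drop=1"
)
print(pipeline)
# cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
```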

YES, it worked! PERFECT!!
Thank you very very much!
You are my hero. Thank you so much for taking the time to help me!

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.