Decode the RTSP stream from an IP camera

I am trying to decode the RTSP stream from an IP camera with FFmpeg and the CUVID SDK.
Is this applicable on the Jetson TX2?

The SDK (Video Codec SDK) seems to be restricted to certain GPUs.
Can FFmpeg achieve this goal alone (without using the SDK)?

The Video Codec SDK is not supported on the TX2.
You can use GStreamer instead:
https://devtalk.nvidia.com/default/topic/1014789/jetson-tx1/-the-cpu-usage-cannot-down-use-cuda-decode-/post/5188538/#5188538
https://developer.nvidia.com/embedded/dlc/l4t-accelerated-gstreamer-guide-28-2-ga

Thank you, DaneLLL.
I have received your suggestion, but I ran into some problems with GStreamer.

OpenCV version: 3.4.1-dev
Camera daemon stopped functioning…
OpenCV Error: Unspecified error (GStreamer: unable to start pipeline) in cvCaptureFromCAM_GStreamer, file /home/opencv/modules/videoio/src/cap_gstreamer.cpp, line 890.

Please describe your issue clearly and in detail.

python tegra_cam.py --uri rtsp://admin:scu508123@192.168.128.103
Called with args:
Namespace(image_height=1080, image_width=1920, rtsp_latency=200, rtsp_uri=‘rtsp://admin:scu508123@192.168.128.103’, use_rtsp=False, use_usb=False, video_dev=1)
OpenCV version: 3.4.1-dev
Socket read error. Camera Daemon stopped functioning…
gst_nvcamera_open() failed ret=0
OpenCV(3.4.1-dev) Error: Unspecified error (GStreamer: unable to start pipeline
) in cvCaptureFromCAM_GStreamer, file /home/nvidia/opencv/modules/videoio/src/cap_gstreamer.cpp, line 890
VIDEOIO(cvCreateCapture_GStreamer (CV_CAP_GSTREAMER_FILE, filename)): raised OpenCV exception:

OpenCV(3.4.1-dev) /home/nvidia/opencv/modules/videoio/src/cap_gstreamer.cpp:890: error: (-2) GStreamer: unable to start pipeline
in function cvCaptureFromCAM_GStreamer

Failed to open camera!

Hi,
The script is for onboard camera input, not RTSP input. Please check the links in comment #2.

Thanks a lot.
The script is from this link, which is different from the links above:
https://blog.csdn.net/zong596568821xp/article/details/80306987

I am not sure why this error occurs:
GStreamer: unable to start pipeline

If you are using the ‘tegra_cam.py’ script I wrote (https://jkjung-avt.github.io/tx2-camera-with-python/), you’ll need to add ‘--rtsp’ to the command.

python tegra_cam.py --rtsp --uri rtsp://admin:scu508123@192.168.128.103
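For reference, the RTSP branch of that script builds a GStreamer pipeline roughly like the sketch below. This is my simplified paraphrase, not the exact gist code, and the element names assume a Jetson with the OMX decoder plugins installed:

```python
def build_rtsp_pipeline(uri, width, height, latency):
    # rtspsrc pulls the stream, omxh264dec decodes on the hardware decoder,
    # nvvidconv scales/converts, and appsink hands frames to OpenCV.
    return ('rtspsrc location={} latency={} ! '
            'rtph264depay ! h264parse ! omxh264dec ! '
            'nvvidconv ! '
            'video/x-raw, width=(int){}, height=(int){}, '
            'format=(string)BGRx ! '
            'videoconvert ! appsink').format(uri, latency, width, height)

def open_cam_rtsp(uri, width, height, latency):
    import cv2  # requires an OpenCV build with GStreamer support
    return cv2.VideoCapture(build_rtsp_pipeline(uri, width, height, latency),
                            cv2.CAP_GSTREAMER)
```

Without ‘--rtsp’, the script falls through to the onboard-camera pipeline, which is why the nvcamerasrc error appears.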

Hello, today I started up the TX2 to try your solution and, strangely, it cannot boot:
direct firmware load for tegra18x_xusb_firmware failed with error 2
falling back to user helper.

Hello jkjung13,
I have tried your suggestion, but it did not fix the error above; I still get:

gst_nvcamera_open() failed ret=0
OpenCV(3.4.1-dev) Error: Unspecified error (GStreamer: unable to start pipeline
) in cvCaptureFromCAM_GStreamer, file /home/nvidia/opencv/modules/videoio/src/cap_gstreamer.cpp, line 890
VIDEOIO(cvCreateCapture_GStreamer (CV_CAP_GSTREAMER_FILE, filename)): raised OpenCV exception:

OpenCV(3.4.1-dev) /home/nvidia/opencv/modules/videoio/src/cap_gstreamer.cpp:890: error: (-2) GStreamer: unable to start pipeline
in function cvCaptureFromCAM_GStreamer

  1. As stated in my blog post, you need to install the following GStreamer components to be able to demux/decode RTSP streams from IP cameras.
$ sudo apt-get install gstreamer1.0-plugins-bad-faad \
                       gstreamer1.0-plugins-bad-videoparsers
  2. Otherwise, try the following from the command line to make sure you have all the necessary GStreamer components installed on your JTX2 system first.
$ gst-launch-1.0 rtspsrc location=rtsp://admin:scu508123@192.168.128.103 ! \
                 rtph264depay ! h264parse ! omxh264dec ! nveglglessink

Thanks a lot for answering my questions.
I have figured out the cause of the above error.
Two more questions:
1. How do I save the decoded RTSP stream into a video file?
2. The current frame rate is about 35 FPS. How do I decode multiple camera streams at the same time? For example, can the TX2 decode four RTSP streams simultaneously?
Best wishes.

  1. You could check out my “Tegra Camera Recorder” blog post: https://jkjung-avt.github.io/tx2-camera-recorder/

  2. Reference my tegra_cam.py code and try to initiate multiple cv2.VideoCapture()'s.
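For question 1, the idea behind the recorder post can be sketched as re-encoding the decoded frames through a GStreamer writer pipeline with cv2.VideoWriter. The element names and parameters here are my assumptions, not the blog post's exact pipeline:

```python
def build_writer_pipeline(filename):
    # appsrc receives BGR frames from OpenCV, omxh264enc re-encodes them on
    # the Jetson's hardware encoder, and qtmux/filesink write an .mp4 file.
    return ('appsrc ! videoconvert ! omxh264enc ! h264parse ! '
            'qtmux ! filesink location={}').format(filename)

def open_video_writer(filename, width, height, fps=30.0):
    import cv2  # requires an OpenCV build with GStreamer support
    # fourcc is 0 because the pipeline string already fixes the encoder
    return cv2.VideoWriter(build_writer_pipeline(filename),
                           cv2.CAP_GSTREAMER, 0, fps, (width, height))
```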

Hello jkjung13,
Thanks for your reply. I have run into some problems with decoding RTSP streams on the TX2.

  1. Can we decode the RTSP stream and lower the resolution? For example, the original resolution of the RTSP stream from an IP camera is 1920×1080, and we want to decode it and save it at a lower resolution (320×240).
  2. Can the resources of the TX2 platform satisfy the needs of multi-camera decoding?
    Best wishes.
  1. In my tegra-cam.py (https://gist.github.com/jkjung-avt/86b60a7723b97da19f7bfa3cb7d2690e) code, I used ‘nvvidconv’ to scale down images captured from TX2 onboard camera. Please refer to the code snippet below.
def open_cam_onboard(width, height):
    # On versions of L4T prior to 28.1, add 'flip-method=2' into gst_str
    gst_str = ('nvcamerasrc ! '
               'video/x-raw(memory:NVMM), '
               'width=(int)2592, height=(int)1458, '
               'format=(string)I420, framerate=(fraction)30/1 ! '
               'nvvidconv ! '
               'video/x-raw, width=(int){}, height=(int){}, '
               'format=(string)BGRx ! '
               'videoconvert ! appsink').format(width, height)
    return cv2.VideoCapture(gst_str, cv2.CAP_GSTREAMER)
  2. Short answer: yes. Please refer to the official NVIDIA documentation.
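Decoding several streams at once can be sketched by opening one cv2.VideoCapture per URI and polling them round-robin. This is my own illustration (URIs are placeholders), not code from tegra_cam.py:

```python
def build_pipelines(uris, latency=200):
    # One hardware-decode pipeline per camera.
    return [('rtspsrc location={} latency={} ! rtph264depay ! h264parse ! '
             'omxh264dec ! nvvidconv ! '
             'video/x-raw, format=(string)BGRx ! '
             'videoconvert ! appsink').format(uri, latency) for uri in uris]

def capture_loop(uris):
    import cv2  # requires an OpenCV build with GStreamer support
    caps = [cv2.VideoCapture(p, cv2.CAP_GSTREAMER)
            for p in build_pipelines(uris)]
    while True:
        frames = [cap.read()[1] for cap in caps]  # round-robin grab
        if any(frame is None for frame in frames):
            break  # a stream dropped; stop the loop
        # ... run detection / display on `frames` here ...
    for cap in caps:
        cap.release()
```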

Hi jkjung13,
Thanks for sharing your tegra-cam.py code. I am working on a project that performs object detection on a video source. I tried an IP camera first, and your code works perfectly. After that I got video from an HDMI camera, which I connected to an encoder board (IP address 192.168.1.168) that converts the video stream from HDMI to H.264 and sends it over the LAN to my JTX2. When I try to display the video with:

cv2.VideoCapture("rtsp://admin:admin@192.168.1.168")

it works, but with unexpectedly high latency, around 8 to 10 seconds. So I’m planning to decode the video with GStreamer to get lower latency, and I tried the tegra-cam.py code on the H.264 stream from the encoder board. Then I got the following error:

OpenCV Error: Unspecified error (GStreamer: unable to start pipeline) in cvCaptureFromCAM_GStreamer, file /home/nvidia/src/opencv-3.4.0/modules/videoio/src/cap_gstreamer.cpp, line 890
VIDEOIO(cvCreateCapture_GStreamer (CV_CAP_GSTREAMER_FILE, filename)): raised OpenCV exception:

/home/nvidia/src/opencv-3.4.0/modules/videoio/src/cap_gstreamer.cpp:890: error: (-2) GStreamer: unable to start pipeline in function cvCaptureFromCAM_GStreamer)

Do you have any idea why this is happening? Any thoughts help, thank you!

I’m guessing the RTSP stream generated by your HDMI encoder board requires a different demux or decoder.

Try the following from a terminal. Does it fail with a similar error message?

gst-launch-1.0 rtspsrc latency=200 location=rtsp://admin:admin@192.168.1.168 ! rtph264depay ! h264parse ! omxh264dec ! nveglglessink

Then try the gstreamer commands listed here: https://devtalk.nvidia.com/default/topic/1049103/jetson-tx2/gstreamer-decode-live-video-stream-with-latency/post/5324384/#5324384

Once you’ve found a working gstreamer pipeline, copy it to the gstreamer string in the ‘open_cam_rtsp’ function (https://gist.github.com/jkjung-avt/86b60a7723b97da19f7bfa3cb7d2690e#file-tegra-cam-py-L51). That should solve the problem.

Hello jkjung13,

Thanks for your reply! I tried the other pipelines listed in https://devtalk.nvidia.com/default/topic/1049103/jetson-tx2/gstreamer-decode-live-video-stream-with-latency/post/5324384/#5324384
and found that the last one finally works.

gst-launch-1.0 -v rtspsrc location=rtsp://192.168.1.111:80/live/0/mjpeg.sdp latency=0 ! application/x-rtp,media=video ! decodebin ! videoconvert ! xvimagesink

When I turn the overclock mode on, the video is displayed well. After that I changed my code to:

cap = cv2.VideoCapture("rtspsrc location=rtsp://admin:admin@192.168.1.168 latency=0 ! application/x-rtp,media=video ! decodebin ! videoconvert ! xvimagesink", cv2.CAP_GSTREAMER)

And the error becomes:

OpenCV Error: Unspecified error (GStreamer: cannot find appsink in manual pipeline) in cvCaptureFromCAM_GStreamer, file /home/nvidia/src/opencv-3.4.0/modules/videoio/src/cap_gstreamer.cpp, line 805

My question is: I used xvimagesink in my code, so why is appsink still indicated as the problem?

EDIT: I found out that I have to add format=(string)BGRx to the pipeline so that OpenCV can properly handle the frames, and after changing xvimagesink to appsink, the code just magically worked! Thanks for your great help!
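To summarize for later readers, the working capture described in the edit above would look roughly like this. This is my reconstruction from the comments (the URI and latency come from the poster's example), not code they posted verbatim:

```python
def build_decodebin_pipeline(uri, latency=0):
    # decodebin auto-selects a suitable decoder for streams the fixed
    # rtph264depay/omxh264dec chain cannot handle; the BGRx caps after
    # videoconvert plus the appsink let OpenCV pull the frames.
    return ('rtspsrc location={} latency={} ! '
            'application/x-rtp,media=video ! decodebin ! videoconvert ! '
            'video/x-raw, format=(string)BGRx ! appsink').format(uri, latency)

def open_stream(uri, latency=0):
    import cv2  # requires an OpenCV build with GStreamer support
    return cv2.VideoCapture(build_decodebin_pipeline(uri, latency),
                            cv2.CAP_GSTREAMER)
```

The original error arose because OpenCV's GStreamer backend must end the pipeline in an appsink it can pull frames from; xvimagesink renders to a window instead.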