I failed to create an RTSP streaming server using test-launch

Device: Jetson Orin Nano
JetPack version: 5.1.3

I want to use the test-launch command to create an RTSP streaming server for testing the camera stream on the Orin Nano, but after executing the following command, I cannot obtain stream data from rtsp://:8554/test:

./test-launch "nvarguscamerasrc ! video/x-raw(memory:NVMM), width=1920, height=1080, framerate=30/1, format=NV12 ! nvvidconv ! video/x-raw,format=I420 ! x264enc ! video/x-h264, profile=baseline, stream-format=byte-stream ! h264parse ! rtph264pay name=pay0 pt=96 config-interval=1"

When I use the VLC tool (rtsp://192.168.0.12:8554/test) on my computer to obtain the RTSP stream, the server prints the following:

Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, execute:770 Failed to create CaptureSession

I have been trying for a long time but have not been successful. If anyone could guide me, I would be very grateful!

Hi @jack.yan

The error looks to be in the nvarguscamerasrc element. Have you tried a simple capture-and-display pipeline, just to validate that you can get frames from the source?

gst-launch-1.0 nvarguscamerasrc ! nvvidconv ! xvimagesink

Also, have you tried replacing “nvarguscamerasrc + nvvidconv” with a videotestsrc, to check whether the problem is in the source?

Regards,

Enrique Ramirez
Embedded SW Engineer at RidgeRun
Contact us: support@ridgerun.com
Developers wiki: https://developer.ridgerun.com
Website: www.ridgerun.com

Hi @enrique.ramirez
When I use gst-launch-1.0 to preview, it works. I have now successfully implemented streaming over RTSP. The command is as follows:

./test-launch "nvarguscamerasrc ! video/x-raw(memory:NVMM), width=1920, height=1080, framerate=30/1, format=NV12 ! nvvidconv ! video/x-raw,width=1920,height=1080,framerate=30/1 ! x264enc bitrate=2048 tune=zerolatency preset=veryfast ! rtph264pay name=pay0 pt=96 config-interval=1"

Then I use this command to obtain the RTSP stream:

gst-launch-1.0 rtspsrc location=rtsp://192.168.0.12:8554/test protocols=udp latency=100 ! rtph264depay ! h264parse ! nvv4l2decoder ! videoconvert ! appsink

After executing the command there were no errors, but there was also no preview. After successful testing, I would like to port the command to Python so I can obtain frame data for target recognition.
It seems to be an issue with the data format sent to appsink, but I don't know which data formats appsink supports.

How can I use python/c++ to implement the function of pulling RTSP stream?

Hi,
Please try this command and see if preview is shown:

$ gst-launch-1.0 rtspsrc location=rtsp://192.168.0.12:8554/test protocols=udp latency=100 ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv ! video/x-raw,format=I420 ! xvimagesink

Hi @DaneLLL :
Thanks for your answer.
I only used the command to test that RTSP is working properly. I need to use Python/C++ to obtain the RTSP stream and decode it into the format required by the object-recognition algorithm. When I test with the command, both xvimagesink and nv3dsink work properly.
I am currently using Python to obtain the stream, but it always fails. I found that Python requires the sink to be appsink, but when I use the appsink program, it blocks in cv2.VideoCapture().
Here is my program:

import cv2
def gstreamer_pipeline():

    return (
        "rtspsrc location=rtsp://192.168.0.12:8554/test protocols=udp latency=100 ! "
        "rtph264depay ! h264parse ! nvv4l2decoder ! "
        "videoconvert ! appsink"
    )
#gst-launch-1.0 rtspsrc location=rtsp://192.168.0.12:8554/test protocols=udp latency=100 ! rtph264depay ! h264parse ! nvv4l2decoder ! autovideoconvert ! autovideosink
def show_camera():
    window_title = "gimbal camera"
    print(gstreamer_pipeline())
    cap = cv2.VideoCapture(gstreamer_pipeline(), cv2.CAP_GSTREAMER)
    if not cap.isOpened():
        print("Unable to open camera...")
        exit()
    
    try:
        cv2.namedWindow(window_title, cv2.WINDOW_AUTOSIZE)
        while True:
            ret, frame = cap.read()
            if not ret:
                print("Err: unable to read frame...")
                exit()

            if cv2.getWindowProperty(window_title,cv2.WND_PROP_AUTOSIZE) >= 0:
                cv2.imshow(window_title, frame)
            else:
                break
                
            keyCode = cv2.waitKey(10) & 0xFF
            if keyCode == 27 or keyCode == ord('q'):
                break

    finally:
        cap.release()
        cv2.destroyAllWindows()
        

if __name__ == "__main__":
    show_camera()

Hi,
Please try this sample with the URI:
Doesn't work nvv4l2decoder for decoding RTSP in gstreamer + opencv - #3 by DaneLLL

Hi, @DaneLLL
I used that command to test; it does not work:

gst-launch-1.0 rtspsrc location=rtsp://192.168.0.12:8554/test latency=100 ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv ! video/x-raw, format=(string)BGRx ! videoconvert ! video/x-raw,format=BGR ! appsink 

In Python, the program still blocks in cv2.VideoCapture. I found the answer at this link and tested it as well.

Hi,
From our experience, if the gstreamer pipeline works in a gst-launch-1.0 command with fakesink, it shall work the same in cv2.VideoCapture() with appsink. It’s strange that you can successfully run the gst-launch-1.0 command but it fails in OpenCV.
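For reference, a minimal sketch of that combination, assuming the server from earlier in the thread at rtsp://192.168.0.12:8554/test. OpenCV's appsink path expects raw BGR (or GRAY8) frames, so the pipeline converts to BGRx with nvvidconv and then to BGR with videoconvert before the sink:

```python
def rtsp_pipeline(uri, latency=100):
    """Build a GStreamer pipeline string for cv2.VideoCapture.

    OpenCV's appsink reads raw frames, so the pipeline must convert
    to BGR explicitly before the sink (nvvidconv can only reach BGRx;
    videoconvert does the final BGRx -> BGR step on the CPU).
    """
    return (
        f"rtspsrc location={uri} protocols=udp latency={latency} ! "
        "rtph264depay ! h264parse ! nvv4l2decoder ! "
        "nvvidconv ! video/x-raw,format=BGRx ! "
        "videoconvert ! video/x-raw,format=BGR ! "
        "appsink drop=1"
    )


def show_stream(uri="rtsp://192.168.0.12:8554/test"):
    import cv2  # imported here so rtsp_pipeline() stays dependency-free

    cap = cv2.VideoCapture(rtsp_pipeline(uri), cv2.CAP_GSTREAMER)
    if not cap.isOpened():
        raise RuntimeError("Unable to open RTSP stream")
    try:
        while True:
            ret, frame = cap.read()
            if not ret:
                break
            cv2.imshow("rtsp", frame)
            if cv2.waitKey(10) & 0xFF in (27, ord("q")):
                break
    finally:
        cap.release()
        cv2.destroyAllWindows()
```

drop=1 on the appsink keeps latency bounded by discarding frames the application does not pull in time.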

Hi:
The same Python code works normally on Windows, but there is an exception on the Orin Nano, and I cannot find the reason.