Live stream freezing problem when using GStreamer with OpenCV

Hi All,
I am working on a project that uses GStreamer with OpenCV: a simple program that captures video from a camera and streams it out over the Ethernet port of a Jetson Nano. I managed to get the image very clear, smooth and high quality, with very low latency and no freezing while streaming, but that was on a setup with an Ethernet cable between the Jetson Nano (which streams the video) and the platform that displays it. Up to that point I had no problem. However, in the real system we use wireless data links instead of an Ethernet cable, and that is where the problem appears: over the data link the live stream freezes a lot, so I cannot display the image the way I want. I am fairly sure the problem is caused by my pipelines, because when I use the exact same setup without the Nano (just the camera and the data links) it works fine, but when the Jetson Nano is inserted between the camera and the links the stream freezes. I am using:

pipeCam      = 'rtspsrc location=rtsp://<ip> latency=1 ! rtph264depay ! h264parse ! omxh264dec ! nvvidconv ! video/x-raw, format=(string)BGRx ! videoconvert ! videoscale ! video/x-raw,width=1280,height=720 ! appsink'

this pipeline to capture from the camera, and

pipeOut3 = 'appsrc ! video/x-raw, format=BGR ! videoconvert ! omxh264enc ! h264parse config-interval=1 ! rtph264pay pt=96 ! udpsink host=<ip> port=50001 sync=false'

this pipeline for streaming out.

 pipeOut2='appsrc ! video/x-raw, format=BGR ! videoconvert ! omxh264enc cabac-entropy-coding=true bitrate=2000000 peak-bitrate=2000000 ! h264parse config-interval=1 ! rtph264pay pt=96 ! udpsink host=10.224.10.255 port=50001 sync=false'

Also, when I use this pipeline, which is the one above with cabac-entropy-coding=true bitrate=2000000 peak-bitrate=2000000 added to the encoder, the image gets pixelated when the camera moves.

gst-launch-1.0 udpsrc port=50001 ! application/x-rtp,encoding-name=H264,payload=96 ! rtph264depay ! h264parse ! queue ! avdec_h264 ! d3dvideosink sync=false

I am using this pipeline to display the image streamed out from the Jetson Nano.
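
For reference, here is roughly how I wire the capture and output pipelines together in Python with OpenCV (simplified sketch of my code):

import cv2

# appsink at the end of pipeCam delivers decoded frames to OpenCV
cap = cv2.VideoCapture(pipeCam, cv2.CAP_GSTREAMER)
# appsrc at the start of pipeOut3 takes the BGR frames and streams them out
out = cv2.VideoWriter(pipeOut3, 0, 30/1, (1280, 720))

while True:
    stat, frame = cap.read()
    if not stat or frame is None:
        break
    out.write(frame)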

Lastly, I observed that when the camera is not moving the stream looks fine, but in our system the camera is on a platform that moves a lot, so I suspect the problem appears when the camera moves.
I am a newbie at all of this and I am trying to learn GStreamer, so I could not find a solution myself. I would be glad if someone could help me with this.
Thanks.

Not sure, but first I’d suggest disabling multicast on a WiFi network. Try streaming to a single host:

... ! udpsink host=<target_host_ip> port=50001 auto-multicast=false

Also, when streaming with RTP, you may add an rtpjitterbuffer on the receiver with a latency suited to your network performance; you can then try decreasing it and see:

gst-launch-1.0 udpsrc port=50001 auto-multicast=0 ! application/x-rtp,encoding-name=H264,payload=96 ! rtpjitterbuffer latency=1000 ! rtph264depay ! ...
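
Applied to your display pipeline, that would give something like (untested; adjust the latency to what your link can sustain):

gst-launch-1.0 udpsrc port=50001 auto-multicast=0 ! application/x-rtp,encoding-name=H264,payload=96 ! rtpjitterbuffer latency=1000 ! rtph264depay ! h264parse ! queue ! avdec_h264 ! d3dvideosink sync=false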

Thanks for the answer.
I have tried these, but they didn't make much of a difference. Today I discovered that when the data link that streams the video is close to the data link that receives it, everything is fine: there are no freezes and I can receive the live stream. But when I move the data links up to 100 meters apart, the stream freezes constantly. I am not sure, but my guess is that at that distance some packets are dropped and GStreamer cannot recover properly; it is just a guess, I do not know what the problem is. Using rtpjitterbuffer seems to help with the issue but did not completely solve it. Since I do not know the cause, I do not know what to do. The data link itself works fine; we tested it with the camera connected directly to the link, without the Jetson Nano inserted in between.

Any suggestions would be greatly appreciated. Thanks.

You may increase latency (1 ms may only work with localhost).
You may also try using TCP transport:

For IP cam source:

pipeCam = 'rtspsrc location=rtsp://<ip> latency=1000 protocols=tcp ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv ! video/x-raw, format=(string)BGRx ! videoconvert ! video/x-raw,width=1280,height=720 ! appsink'

When streaming, use MPEG-2 TS over TCP:

pipeOut3 = 'appsrc ! video/x-raw, format=BGR ! videoconvert ! video/x-raw,format=BGRx ! nvvidconv ! nvv4l2h264enc ! h264parse ! mpegtsmux ! tcpserversink'

and receive with:

gst-launch-1.0 tcpclientsink ! tsdemux ! h264parse ! ...
gst-launch-1.0 tcpclientsrc ! tsdemux ! h264parse ! ...

Hi, thanks again.
I used your pipeline that receives from the IP cam source, but it did not work, so I replaced nvv4l2decoder with omxh264dec and then it worked. However, a minute or two after I start the code the CPU suddenly hits 100% and receiving stops. Also, the receiving pipeline:

gst-launch-1.0 tcpclientsink ! tsdemux ! h264parse ! ...

is not working, and I notice this pipeline starts with a sink. I do not know if it should be like that, but if not I would be glad if you could write the correct version.
In addition, I would like to ask: is there any way to receive the compressed data from the IP camera and just forward it with GStreamer, without depaying and re-encoding?
Thanks a lot.

Sorry for the confusion, indeed it should rather be tcpclientsrc. I’ll correct that. I sometimes reply without being able to test, and this applies to this post as well, so don’t be surprised if there is something wrong.

Depaying is not expensive compared to decoding. If you don’t need to process the video but just relay it, you can do that with GStreamer alone:

gst-launch-1.0 -v rtspsrc location=rtsp://<camera_ip> latency=1000 protocols=tcp ! rtph264depay ! h264parse ! mpegtsmux ! tcpserversink port=4953

And receive with:

gst-launch-1.0 -v tcpclientsrc host=<Jetson_IP> port=4953 ! tsdemux ! h264parse ! avdec_h264 ! videoconvert ! autovideosink

# Or
gst-play-1.0 tcp://<Jetson_IP>:4953

# Or
ffplay tcp://<Jetson_IP>:4953

# Or
cvlc tcp://<Jetson_IP>:4953

Thanks.
I could not try this new receiver pipeline yet, but I did use the pipelines from your previous answer that use TCP instead of UDP. I have some issues and thoughts here:

  1. First, if I set latency=1000 there is a 1 s delay, as expected. But we need a live stream with low latency (400 ms max).

  2. Second, as I stated before, when I use the pipelines with TCP the system crashes after about a minute and I cannot receive any video after that unless I rerun the code; the same thing happens every time.
    Apart from that, I wondered whether the algorithm I am using might be the problem. I receive video from two different cameras, decode it, re-encode it, and stream out whichever image is to be displayed on the ground. The only processing I do along the way is scaling the images. So I thought the freezing I get when streaming over a distance might be caused by this processing, but I am not sure.

  • First, I would like to ask: is the way my code works (decode-encode-decode) healthy, and can the Nano handle it, or should I change it so that, as I asked before, I receive the video and forward it without decoding and re-encoding?

  • Or can this freezing over distance (it works great at close range, up to 100 m) be solved with minimal changes, such as using TCP instead of UDP or adding some caps to the pipelines I am already using?

The second option would suit me better, since I would prefer not to make big changes to the system.
And lastly:

gst-launch-1.0 -v rtspsrc location=rtsp://<camera_ip> latency=1000 protocols=tcp ! rtph264depay ! h264parse ! mpegtsmux ! tcpserversink port=4953

with this pipeline we receive video from the IP camera, depay it, parse it, and send it to tcpserversink. And on the receiver side:

gst-launch-1.0 -v tcpclientsrc host=<Jetson_IP> port=4953 ! tsdemux ! h264parse ! avdec_h264 ! videoconvert ! autovideosink

we use this to receive it.

  • If we do not want to decode the video in the first place, do we have to depay and parse it?

  • We depay when receiving, but we do not pay again when we stream out; I do not understand the logic here.

  • We also parse in both pipelines; is that necessary?

I am asking these because I am trying to understand how GStreamer and its caps work.
I know this topic has been going on for a while and I have asked many questions, but since I am new to all of this and it is hard to find resources for learning GStreamer, I have to ask. I am also working on an important project with a close due date, so I am stressed out.
I appreciate the answers so far. Thanks a lot.

Sorry, I’m unable to teach GStreamer in a few posts.
What you’re probably missing here is that mpegtsmux performs another kind of payloading.
You may try to omit h264parse and see… but in any case I don’t think that would be worse than decoding/encoding/decoding, which is basically useless, wasting power and maybe also some image quality.
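
That is, something like this (untested; if caps negotiation fails without the parser, keep h264parse in):

gst-launch-1.0 -v rtspsrc location=rtsp://<camera_ip> latency=1000 protocols=tcp ! rtph264depay ! mpegtsmux ! tcpserversink port=4953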

If your project is running late, I’d suggest creating a new topic in the Jetson Nano forum, explaining your exact use case in detail:

  • What your camera sources are; you may at least post what gstreamer reports for them:
gst-discoverer-1.0 -v rtsp://<cam1>
gst-discoverer-1.0 -v rtsp://<cam2>
  • Tell how these are connected to the Nano (wired Ethernet?)

  • Tell how the Nano is connected to the receiver host (WiFi?)

  • Tell what the expected processing result is from these 2 IP cameras to the receiver host. You mentioned rescaling; is it compositing, or something else?

The more detailed the use case you provide, the better the advice you will get.

Hi Again,
My aim is to capture video from two different cameras (one an IP camera with higher resolution, the other an analog USB camera with lower resolution and image quality) and switch between them according to commands received from the ground control station. So it is basically a camera-switch program that I wrote with OpenCV Python and GStreamer on the Jetson Nano. The IP camera is connected to the Jetson Nano over an Ethernet cable, the other camera over USB, and we use a wireless data link to connect the Jetson Nano to the receiver host over Ethernet. Even though this system works fine at short distances, up to about 50 meters, the image starts to freeze when the data-link distance increases beyond that. Low latency (200-300 ms), good image quality and a stable live stream are requirements for this system. So far the only problem is the freezing due to link distance.
Here is the code that receives video from the cameras:

import cv2


class CamType:
    # GStreamer pipeline for the IP camera (RTSP, hardware decode, scaled to 1280x720).
    pipeIP     = 'rtspsrc location=rtsp://192.168.1.160:554/live/track0 latency=1 ! rtph264depay ! h264parse ! omxh264dec ! nvvidconv ! video/x-raw, format=(string)BGRx ! videoconvert ! videoscale ! video/x-raw,width=1280,height=720 ! appsink'
    # GStreamer pipeline for the analog USB camera.
    pipeAnalog = 'v4l2src device=/dev/video0 ! videoconvert ! video/x-raw,format=I420,interlace-mode=interleaved ! videoconvert ! appsink sync=false'

    def __init__(self):
        # Start with the IP camera by default.
        self.currentCam = cv2.VideoCapture(self.pipeIP, cv2.CAP_GSTREAMER)

    def switchCamera(self, camType):
        # camType 0 selects the analog USB camera, 1 or 2 selects the IP camera.
        if camType == 0:
            self.currentCam.release()
            self.currentCam = cv2.VideoCapture(self.pipeAnalog, cv2.CAP_GSTREAMER)
            return self.currentCam
        elif camType == 1 or camType == 2:
            self.currentCam.release()
            self.currentCam = cv2.VideoCapture(self.pipeIP, cv2.CAP_GSTREAMER)
            return self.currentCam

And here is the main code that makes the stream:

import cv2
import numpy as np
import serial
import moduleParser
import sys
import crc16
import threading
import CAMSELECT as cam


def main():
    try:
        serial_port = serial.Serial(port="/dev/ttyTHS1", baudrate=115200, bytesize=serial.EIGHTBITS, parity=serial.PARITY_NONE, stopbits=serial.STOPBITS_ONE, rtscts=False, dsrdtr=False, xonxoff=False)
        print("port open")
    except:
        print("port is not yet open")
        serial_port = serial.Serial(baudrate=115200, bytesize=serial.EIGHTBITS, parity=serial.PARITY_NONE, stopbits=serial.STOPBITS_ONE)

    # Encode the selected frames and stream them out over RTP/UDP.
    pipeOut = 'appsrc ! video/x-raw, format=BGR ! videoconvert ! omxh264enc ! h264parse config-interval=1 ! rtph264pay pt=96 ! udpsink host=10.224.10.255 port=50001 sync=false'
    cameraSelector = cam.CamType()
    capMain = cameraSelector.currentCam
    camera  = 1  # CamType starts with the IP camera
    out     = cv2.VideoWriter(pipeOut, 0, 30/1, (1280, 720))

    while True:
        stat, frame = capMain.read()

        if not stat or frame is None:
            return
        if camera == 0:  # Here we need to crop and resize the analog image
            frame = frame[95:, :]
            frame = cv2.resize(frame, (1280, 720))

        if serial_port.inWaiting() >= 18:
            data = serial_port.read(18)
            serial_port.reset_input_buffer()
            message = moduleParser.Message(data)

            if message is None:
                continue

            if message.functionID == 0:  # Change camera
                if camera != message.params[0]:
                    camera = message.params[0]
                    print("cam:", camera)
                    print("message:", message.params)
                    capMain = cameraSelector.switchCamera(camera)
        cv2.imshow("frame", frame)
        out.write(frame)

        if cv2.waitKey(5) == ord('q'):
            print("Escaping stream")
            break


if __name__ == '__main__':
    try:
        main()
    except KeyboardInterrupt:
        print("Interrupt received Ctrl + C")
        sys.exit(0)

There is also code that parses the messages received from our ground control station, but I do not think it is necessary to share it.

And finally, the receiving pipeline on the ground control station:

gst-launch-1.0 udpsrc port=50001 ! application/x-rtp,encoding-name=H264,payload=96 ! rtph264depay ! h264parse ! queue ! avdec_h264 ! d3dvideosink sync=false

So I receive compressed video from the cameras and decode it, then select the image with OpenCV according to the cam-select command I receive, re-encode the selected image and stream it out with another pipeline. Finally, I decode that compressed stream on the ground station; this is how my system works.
Is this a reasonable way to achieve what I want? Should I change the whole code to a better algorithm, or can I solve my problem by optimizing the pipelines I am using?
Thanks

If the WiFi radio signal gets too weak to be received at a given distance, it will fail regardless.

I’d suggest trying:

  • Disabling multicast, which can behave oddly over WiFi. Add the property auto-multicast=0 to udpsink and stream to a specific host over WiFi:
... ! udpsink host=10.35.1.20 port=50001 auto-multicast=0
  • TCP transport, which requests retransmission of lost packets:
... ! h264parse config-interval=1 ! mpegtsmux ! tcpserversink host=<host_IP> port=50001

Note that omx plugins are deprecated, so you may use:

pipeOut = 'appsrc ! video/x-raw, format=BGR ! queue ! videoconvert ! video/x-raw,format=BGRx ! nvvidconv ! nvv4l2h264enc insert-vui=1 insert-sps-pps=1 idrinterval=15 ! h264parse ! mpegtsmux ! tcpserversink host=<host_IP> port=50001'

and receive with:

gst-launch-1.0 tcpclientsrc host=<host_ip> port=50001 ! tsdemux ! decodebin ! nvvidconv ! autovideosink

# Or if not NVIDIA
gst-launch-1.0 tcpclientsrc host=<host_ip> port=50001 ! tsdemux ! decodebin ! autovideosink
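
On the Jetson side, this pipeOut string should plug into cv2.VideoWriter the same way as your current one, e.g. (untested):

out = cv2.VideoWriter(pipeOut, 0, 30/1, (1280, 720))

And on your Windows ground station, the TCP equivalent of your current receiver would be something like (untested):

gst-launch-1.0 tcpclientsrc host=<Jetson_IP> port=50001 ! tsdemux ! h264parse ! avdec_h264 ! videoconvert ! d3dvideosink sync=false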

The biggest improvement, though, may come from analyzing the WiFi signal itself.