Python+OpenCV+RTMPSINK Gstreamer - Bug?

Hi all,
I’m facing a problem and I don’t know whether it is caused by OpenCV or GStreamer.

If my Python script tries to write to an offline RTMP server, Python gets stuck and there is no way to recover it.
I have to kill the process with “-9” to terminate it.

Code:

...
rtmpUrl = 'rtmp://127.0.0.1/live/camera1'
send_gst = "appsrc ! videoconvert ! video/x-raw,format=BGRx ! nvvidconv ! nvv4l2h264enc bitrate=4000000 ! video/x-h264,stream-format=(string)byte-stream,alignment=(string)au ! h264parse ! queue !  flvmux name=mux ! rtmpsink location=" + rtmpUrl
out_stream = cv2.VideoWriter(send_gst, 0, stream_fps, (stream_width, stream_height))

while True:
    ...
    out_stream.write(frame_output)
    ...
...

If the RTMP server is online everything works properly, but if it is offline the script remains stuck.

I also tried wrapping the write in a try/except:

while True:
    ...
    try:
        out_stream.write(frame_output)
    except:
        out_stream.release()
    ...
...

but it seems that it remains stuck inside the GStreamer pipeline, and I don’t know how to solve it…

I also tried this solution:
https://stackoverflow.com/questions/366682/how-to-limit-execution-time-of-a-function-call-in-python?answertab=active#tab-top
but with no luck…
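
What I tried, following that answer, was roughly a SIGALRM-based timeout around the blocking call (just a sketch; the 5-second timeout and the exception name are illustrative, and out_stream / frame_output come from the snippet above):

import signal

class WriteTimeout(Exception):
    pass

def _alarm_handler(signum, frame):
    raise WriteTimeout()

# SIGALRM-based timeout from the Stack Overflow answer (Unix only)
signal.signal(signal.SIGALRM, _alarm_handler)

try:
    signal.alarm(5)                   # deliver SIGALRM in 5 seconds
    out_stream.write(frame_output)    # the call that blocks
    signal.alarm(0)                   # cancel the alarm if the write returned
except WriteTimeout:
    out_stream.release()

My guess is that the Python-level handler can only run once control is back in the interpreter, so if the write is blocked inside native GStreamer code the alarm never gets a chance to interrupt it, which would explain why this did not help.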

Python Version: 2.7.17 (default installed in Jetson Nano Image)
OpenCV Version: 3.3.1 (default installed in Jetson Nano Image)
GStreamer Version: 1.14.5 (default installed in Jetson Nano Image)

Hi,
You may check if the pipeline works with gst-launch-1.0 and then apply it to the Python code. Some debugging tips are in
https://devtalk.nvidia.com/default/topic/1037884/

I solved it with a workaround: before pushing a frame to the RTMP server, I open a socket connection to RTMP port 1935 to check whether the port is open.

I also checked the GStreamer documentation:
https://gstreamer.freedesktop.org/documentation/rtmp/rtmpsink.html?gi-language=c#pad-templates

There is nothing like a “timeout” or connection-check property related to the RTMP server.
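
The check itself is nothing more than this (a sketch of my workaround; is_port_open is just a name I picked, and the host/port match the rtmpUrl above):

import socket

def is_port_open(host, port, timeout=1.0):
    # Return True if a TCP connection to host:port succeeds within `timeout` seconds
    try:
        sock = socket.create_connection((host, port), timeout)
        sock.close()
        return True
    except (socket.timeout, socket.error):
        return False

# Push the frame only if the RTMP server answers on port 1935
if is_port_open('127.0.0.1', 1935):
    out_stream.write(frame_output)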

I think I ran into the same issue with VideoCapture.

In my application, each time I want to grab a frame I first open a socket client to the IP camera’s RTSP server.
If the camera replies before the timeout I grab the frame, otherwise I close the capture using cap.release():

Capture GST Pipeline

gst_pipeline = "rtspsrc location=rtsp://admin:Password@192.168.0.204:554/ ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv ! videorate ! video/x-raw(memory:NVMM),format=BGRx,framerate=15/1, width=(int)1280, height=(int)720 ! nvvidconv ! video/x-raw ! videoconvert ! video/x-raw,format=BGR ! queue max-size-buffers=0 max-size-time=0 max-size-bytes=0 min-threshold-time=350000000 ! appsink"

Code

...
while True:
    # ----------------------------------------------------------------------------- #
    if videoClass.checkCamera(config['source'], 1) is False: # <<--- My class to check if RTSP server is OPEN
        if cap.isOpened():
            cap.release()
        continue
    else:
        if cap.isOpened() is False:
            cap = cv2.VideoCapture(gst_pipeline)
    # ----------------------------------------------------------------------------- #

    if cap.isOpened():
        ret, frame_read = cap.read()
...
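
checkCamera is not shown above; it is basically the same socket test as in the RTMP workaround, something along these lines (a sketch; the class structure and the default-port handling are my simplifications):

import socket
from urlparse import urlparse  # Python 2.7; urllib.parse on Python 3

class VideoClass(object):

    def checkCamera(self, rtsp_url, timeout):
        # Return True if the camera answers on its RTSP port within `timeout` seconds
        parsed = urlparse(rtsp_url)
        host = parsed.hostname
        port = parsed.port or 554      # default RTSP port
        try:
            sock = socket.create_connection((host, port), timeout)
            sock.close()
            return True
        except (socket.timeout, socket.error):
            return False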

Each time I initialize VideoCapture the previous resource should be released, but on my Jetson Nano the RAM usage keeps increasing until the process is killed because of out-of-memory.

Couple things.

if cap.isOpened() is False

You probably want “==” instead in this case, or more simply, “if not cap.isOpened()”, but there is a better way. You can just do:

cap = cv2.VideoCapture(gst_pipeline)
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    # do stuff with frame
cap.release()

No need for a try/finally block.

Some more usage examples:
http://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_gui/py_video_display/py_video_display.html

However, OpenCV is not very performant. Most of it does not even use the GPU, and the parts that do are buggy. If you simply wish to dump video to a file, or even do analysis on video, you may want to use Nvidia’s GStreamer elements exclusively, with no OpenCV at all.

If you like Python, the jetson-inference project is also easy to use and fast. Though not as fully featured as DeepStream, it’s a lot easier to work with.

https://github.com/dusty-nv/jetson-inference/blob/master/README.md

Thanks mdegans for your tips.

It was not shown in my previous post, but I already release the cap outside the while loop.

...
while True:
    ...

if cap.isOpened():
    cap.release()

Anyway, I use Python because I’m using YoloV3-Tiny (and I don’t know C or C++). I tested the jetson-inference models and the results are poor compared to Yolo.

Please check my post:
https://devtalk.nvidia.com/default/topic/1057006/jetson-nano/hello-ai-world-now-supports-python-and-onboard-training-with-pytorch-/post/5384517/#5384517

Consider that I’m able to reach 18-20 FPS using YoloV3-Tiny-prn at a resolution of 576x320, with much better detection than ssd-mobilenet-v1/v2 or ssd-inception-v2.

Hi,
We have a reference application for running YoloV3-Tiny in the DeepStream SDK. We suggest you check it and give it a try.

The latest release is DS4.0.2