I am reading an RTSP video stream on the Jetson Nano. It runs for about 50 frames, and then read() starts returning False.

The code is as follows. May I ask why?

" video = cv2.VideoCapture(rtsp_url)
success, frame = video.read()"

My Nano is also running a detection model + tracking model. Is it possible that some system resource is exhausted, which causes read() to return False? (I have tried three cameras and they all behave the same way, so the cause is probably something else.)

How do I solve this?

Hi,
cv2.VideoCapture(rtsp_url) does not use hardware decoding. We suggest using GStreamer. Please try:

$ gst-launch-1.0 uridecodebin uri=rtsp://freja.hiof.no:1935/rtplive/_definst_/hessdalen02.stream ! nvoverlaysink

And replace the URI with your own.

What should the Python code look like?

Hi,
There is a Python GStreamer tutorial. Please check it.
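For reference, a minimal sketch (not taken from the tutorial, and using the same public test URI as above) of driving that gst-launch pipeline from Python through the GStreamer gi bindings:

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)

# Same elements as the gst-launch-1.0 command above: uridecodebin feeding nvoverlaysink.
pipeline = Gst.parse_launch(
    'uridecodebin uri=rtsp://freja.hiof.no:1935/rtplive/_definst_/hessdalen02.stream '
    '! nvoverlaysink')
pipeline.set_state(Gst.State.PLAYING)

# Block until the bus reports an error or end-of-stream, then shut down.
bus = pipeline.get_bus()
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                       Gst.MessageType.ERROR | Gst.MessageType.EOS)
pipeline.set_state(Gst.State.NULL)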

OK, before I start moving to GStreamer, I want to ask why video capture isn't reliable on the Jetson Nano. It has always been very stable for me on x86 computers.

Hi,
We can run the following Python code on Jetson Nano / r32.3.1:

import cv2

# Open the RTSP stream (CPU decoding via the default OpenCV backend).
vcap = cv2.VideoCapture("rtsp://freja.hiof.no:1935/rtplive/definst/hessdalen02.stream")

while True:
    ret, frame = vcap.read()
    if not ret:  # stop instead of passing a None frame to imshow
        break
    cv2.imshow('VIDEO', frame)
    cv2.waitKey(1)

FYR. We would suggest using GStreamer or tegra_multimedia_api to leverage hardware decoding.

I am using the same code, but I also run a detection model + tracking model, and both GPU memory and system memory are full. Is that what causes success to become False? What should I do in this situation?

Hi,
For running deep learning inference we have the DeepStream SDK. We would suggest installing it through SDKManager and giving it a try. There are samples for running YOLO and ResNet models with optimization. The latest release is DS 4.0.2.

OpenCV is mostly a CPU-based software stack, and the Jetson Nano has limited capability compared to the TX1 and TX2. High-CPU-load applications may not run well on it. Your application should run better if you can port your detection model to the DeepStream SDK.

What I want now is very simple: I just want video capture to stop failing. Is there a way to get there without making too many changes? Thank you.

Or is there another way: how can TensorRT (.trt) inference be limited in the amount of GPU memory it uses?
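For what it's worth, on the TensorRT 6/7 releases that ship with JetPack 4.x the main knob is the builder workspace size, set when the engine is built. A rough sketch, assuming the engine is built from an ONNX model with the TensorRT Python API (the model path is a placeholder, and these attribute names changed in later TensorRT releases):

import tensorrt as trt

# Hedged sketch: cap the scratch GPU memory TensorRT may allocate for this engine.
logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open('model.onnx', 'rb') as f:  # hypothetical model file
    parser.parse(f.read())

builder.max_workspace_size = 256 << 20  # limit scratch space to 256 MB
builder.max_batch_size = 1
engine = builder.build_cuda_engine(network)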


Hi gzchenjiajun,
Try OpenCV together with GStreamer like this to avoid the failed capture:
gst = "rtspsrc location=rtsp://10.0.2.130:554/s1 name=r latency=0 ! rtph264depay ! h264parse ! omxh264dec ! appsink"
video_capture = cv2.VideoCapture(gst)
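If appsink ends up receiving buffers that OpenCV cannot interpret, a common variant on Jetson (a sketch only, not part of the reply above; the camera URL is a placeholder) converts the decoded frames to plain BGR before appsink and selects the GStreamer backend explicitly:

import cv2

# Hedged sketch: hardware decode with omxh264dec, then convert the NV12/NVMM output
# to BGR so cv2 can consume the frames from appsink.
gst = ("rtspsrc location=rtsp://10.0.2.130:554/s1 latency=0 ! "
       "rtph264depay ! h264parse ! omxh264dec ! "
       "nvvidconv ! video/x-raw, format=BGRx ! "
       "videoconvert ! video/x-raw, format=BGR ! appsink")
video_capture = cv2.VideoCapture(gst, cv2.CAP_GSTREAMER)

while True:
    success, frame = video_capture.read()
    if not success:  # stop instead of handing a None frame to the models
        break
    # run the detection / tracking models on `frame` here
video_capture.release()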

I would personally not recommend using OpenCV, as it's slow. If you have a model, you can use the nvinfer element in a GStreamer pipeline to do your inference, and various other DeepStream GStreamer elements to parse, display, and store the metadata (e.g., in various database backends). Please check out the DeepStream examples (sudo apt install deepstream-4.0). Samples are in /opt/nvidia/deepstream/deepstream-4.0/sources/apps/sample_apps

Various Python DeepStream examples are also available.
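As a rough illustration of that nvinfer approach (a sketch only; the RTSP URL is a placeholder and config_infer_primary.txt stands in for one of the sample nvinfer configuration files shipped with DeepStream 4.0):

$ gst-launch-1.0 rtspsrc location=rtsp://10.0.2.130:554/s1 latency=0 ! rtph264depay ! h264parse ! nvv4l2decoder ! m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! nvinfer config-file-path=config_infer_primary.txt ! nvvideoconvert ! nvdsosd ! nvoverlaysink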

Thank you very much. I solved the problem
