" video = cv2.VideoCapture(rtsp_url)
success, frame = video.read()"
My Nano is running a detection model + tracking model. Is it possible that some system resource is exhausted, which causes read() to return False? (I have tried three cameras and they all behave the same way, so the cameras themselves probably aren't the problem.)
I am using the same code, but with a detection model + tracking model, and both GPU memory and system memory are full. Is this what causes success to be False? What should I do in this situation?
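For reference, here is a minimal retry sketch (an assumption on my part, not a confirmed fix; the read_with_retry helper is just illustrative) that checks whether the capture actually opened and reopens it when read() starts returning False. Running tegrastats in another terminal while it loops will show whether RAM really is exhausted.

import time
import cv2

def read_with_retry(rtsp_url, max_retries=3):
    # Open the stream and retry a few times when read() fails,
    # releasing and reopening the capture between attempts.
    video = cv2.VideoCapture(rtsp_url)
    for _ in range(max_retries):
        if not video.isOpened():
            video.release()
            time.sleep(1.0)
            video = cv2.VideoCapture(rtsp_url)
            continue
        success, frame = video.read()
        if success:
            return video, frame
        time.sleep(1.0)
    video.release()
    raise RuntimeError("read() kept returning False; check memory and CPU load")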
Hi,
For running deep learning inference, we have the DeepStream SDK. I would suggest installing it through SDKManager and giving it a try. There are samples for running YOLO and ResNet models with optimization. The latest release is DS 4.0.2.
OpenCV is mostly a CPU-based software stack, and Jetson Nano has limited capability compared to TX1 and TX2. High-CPU-load applications may not run well on it. It should run better if you can port your detection model to run on the DeepStream SDK.
Hi gzchenjiajun,
Try OpenCV together with a GStreamer pipeline like this to avoid the capture failure:
gst = "rtspsrc location=rtsp://10.0.2.130:554/s1 name=r latency=0 ! rtph264depay ! h264parse ! omxh264dec ! appsink"
video_capture = cv2.VideoCapture(gst)
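If that still does not return frames, a slightly fuller pipeline can help: the sketch below (my assumption, with the same placeholder URL) converts the decoded frames to BGR before appsink, which is what OpenCV's GStreamer backend expects, and passes cv2.CAP_GSTREAMER so the backend is explicit (this needs an OpenCV build with GStreamer enabled).

import cv2

gst = ("rtspsrc location=rtsp://10.0.2.130:554/s1 latency=0 ! "
       "rtph264depay ! h264parse ! omxh264dec ! "
       "nvvidconv ! video/x-raw, format=BGRx ! "
       "videoconvert ! video/x-raw, format=BGR ! "
       "appsink drop=true max-buffers=1")

video_capture = cv2.VideoCapture(gst, cv2.CAP_GSTREAMER)
if not video_capture.isOpened():
    raise RuntimeError("Failed to open the GStreamer pipeline")

# drop=true / max-buffers=1 keep only the latest frame, so a slow consumer
# does not pile up buffers and exhaust memory.
success, frame = video_capture.read()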
I would personally not recommend using OpenCV, as it's slow. If you have a model, you can use the nvinfer element in a GStreamer pipeline to do your inference, and various other DeepStream GStreamer elements to parse, display, and store the metadata (e.g., in various database backends). Please check out the DeepStream examples (sudo apt install deepstream-4.0). Samples are in /opt/nvidia/deepstream/deepstream-4.0/sources/apps/sample_apps
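As a rough illustration (a sketch only, not one of the shipped samples; the RTSP URL, the 1280x720 mux resolution, and the config-file path are assumptions to replace with your own), the whole chain can be driven from Python with Gst.parse_launch:

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

# RTSP decode -> nvstreammux batching -> nvinfer (detection) -> on-screen display.
# Note: the first run builds a TensorRT engine, which can take a few minutes on Nano.
pipeline = Gst.parse_launch(
    "rtspsrc location=rtsp://10.0.2.130:554/s1 latency=0 ! "
    "rtph264depay ! h264parse ! nvv4l2decoder ! m.sink_0 "
    "nvstreammux name=m batch-size=1 width=1280 height=720 ! "
    "nvinfer config-file-path=/opt/nvidia/deepstream/deepstream-4.0/samples/"
    "configs/deepstream-app/config_infer_primary.txt ! "
    "nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink"
)

pipeline.set_state(Gst.State.PLAYING)

# Quit the loop on end-of-stream or error messages from the pipeline bus.
loop = GLib.MainLoop()
bus = pipeline.get_bus()
bus.add_signal_watch()
bus.connect("message::eos", lambda *args: loop.quit())
bus.connect("message::error", lambda *args: loop.quit())
loop.run()

pipeline.set_state(Gst.State.NULL)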