VideoCapture.read() hangs, nvargus error

Hi,

On my Jetson Nano I’m doing simple face detection in a loop: I read a frame (cap.read(), where cap is a cv2.VideoCapture opened with a GStreamer pipeline), pass it through a network, display the result, and loop again. After a while (sometimes 15 minutes, sometimes 5 seconds), the script just hangs at cap.read() (actually inside the underlying .grab() call).
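
The loop is essentially this (a simplified sketch, detection network omitted; gstreamer_pipeline() is the pipeline-builder function pasted further down):

import cv2

cap = cv2.VideoCapture(gstreamer_pipeline(), cv2.CAP_GSTREAMER)
while True:
    ret, frame = cap.read()   # <- hangs here (inside grab()) after a while
    if not ret:
        break
    # ... run the face detection network on `frame` ...
    cv2.imshow('result', frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
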
In the syslog, the error is:

nvargus-daemon[24528]: (Argus) Error InvalidState: (propagating from src/api/ScfCaptureThread.cpp, function run(), line 109)
nvargus-daemon[24528]: SCF: Error Invalidstate: Session has suffered from a critical failure (in src/api/Session.cpp, function capture(), line 667)

I did run a stress test and couldn’t reproduce the problem that way. I’m on the latest version of JetPack.
The camera is the Raspberry Pi camera module.

Thanks in advance!

Baptiste

What’s the pipeline?
Any kernel error messages?
Have you tried any other sensor modes?

Hi,

Here’s the pipeline:

def gstreamer_pipeline(capture_width=1260, capture_height=640, framerate=120, flip_method=0):
    return (
        f'nvarguscamerasrc ! '
        f'video/x-raw(memory:NVMM), width=(int){capture_width}, height=(int){capture_height}, '
        f'format=(string)NV12, framerate=(fraction){framerate}/1 ! '
        f'nvvidconv flip-method={flip_method} ! '
        f'video/x-raw, width=(int)840, height=(int)420, format=(string)BGRx ! '
        f'videoconvert ! video/x-raw, format=(string)BGR ! '
        f'appsink max-buffers=1 drop=true'
    )

I don’t see any kernel messages…

What do you mean by other sensor modes?

Check the sensor mode capabilities with v4l2-ctl --list-formats-ext and try replacing capture_width, capture_height, and framerate with other supported values.
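
For example (just a sketch; 1280x720 @ 60 fps is one of the modes the IMX219 usually reports, so substitute whatever your own v4l2-ctl output lists):

# Open the capture with a different sensor mode reported by v4l2-ctl
cap = cv2.VideoCapture(
    gstreamer_pipeline(capture_width=1280, capture_height=720, framerate=60),
    cv2.CAP_GSTREAMER,
)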

Oh OK, yes, I tried different modes; it doesn’t change anything…

OK, then did you try a gst-launch-1.0 pipeline like the one below?

gst-launch-1.0 nvarguscamerasrc ! "video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, framerate=(fraction)120/1" ! nvvidconv flip-method=0 ! "video/x-raw, width=(int)840, height=(int)420, format=(string)BGRx" ! videoconvert ! "video/x-raw, format=(string)BGR" ! fakesink

It doesn’t show the camera preview.
The last line in the terminal is:

CONSUMER: Producer has connected; continuing.

But no preview appears.

Yes, because it’s a fakesink. Use the sink below to get a preview.

gst-launch-1.0 nvarguscamerasrc ! "video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, framerate=(fraction)120/1" ! nvvidconv flip-method=0 ! "video/x-raw, width=(int)840, height=(int)420" ! videoconvert ! xvimagesink

Alright, this runs, but it doesn’t seem to crash so far.

Maybe the problem comes from asking too much of the CPU/GPU at every iteration of the loop?

You have to narrow down which part of your source causes the problem.

I already did; it’s the

cap.read()

that hangs.
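
To be precise, splitting read() into its two halves shows the block is inside grab() (a quick check, not the full script):

# cap.read() is grab() followed by retrieve(); it never returns from grab()
ok = cap.grab()
if ok:
    ok, frame = cap.retrieve()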

Have a check with the link below on your device.

https://devtalk.nvidia.com/default/topic/1025356/jetson-tx2/how-to-capture-and-display-camera-video-with-python-on-jetson-tx2/post/5215101/#5215101

I’m using a Jetson Nano, by the way.
I’ll try, but that example also uses cap.read(), so the error should be the same.

Same problem here when we use our framerate of 120. However, it seems not to hang with a lower framerate.

Alright, so we tested different configurations, but in the end it still crashes after some time… Any other ideas?

Which configurations crash?
Did you try boosting the system with the commands below?

sudo su
nvpmodel -m 0
jetson_clocks

I mean we tried different GStreamer pipelines, but it always crashes…

Yes, we tried that mode; still the same problem.

Hi, any news on this…?

I have run tegra_camera.py on my Nano with an IMX219 Pi camera v2 without any problem.