I’m developing a video recording application on the NVIDIA Jetson Nano. The application runs on more than 200 Jetson Nano devices, each recording 3-10 videos per day. A single device usually records for 40-45 minutes at a time, with a 5-minute interval in between. The problem: sometimes (< 5% of recordings) a video is corrupted and cannot be post-processed.
The pipeline (line breaks added for readability):

```
v4l2src name=video_source ! videorate
  ! video/x-raw, height=720, width=1280, framerate=30/1
  ! nvvidconv ! omxh264enc ! queue ! mux.
pulsesrc device=alsa_input.usb-046d_Logitech_BRIO_FC1248A5-03.analog-stereo name=audio_source
  ! audio/x-raw, rate=44100, channels=2, width=32, depth=32
  ! queue max-size-buffers=0 max-size-time=0 max-size-bytes=0
  ! lamemp3enc bitrate=256 ! queue ! mux.
qtmux name=mux ! filesink location=filename.mp4
```
I tried running this pipeline both from the Python binding and via `gst-launch-1.0`, and had problems with both:

- Using the Python binding, I set a clock event on the pipeline’s clock object. When it fires, I send an EOS event to the pipeline. My log indicates that the call to `pipeline.send_event` is always made, but it sometimes fails to return.
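As a mitigation, I guard the EOS send with a watchdog. This is a plain-stdlib sketch of the pattern; `fn` stands in for the potentially blocking call (my assumption is that `pipeline.send_event` is where the hang happens):

```python
import threading
import time

def call_with_timeout(fn, timeout_sec):
    """Run a potentially blocking call in a worker thread.

    Returns True if fn returned within timeout_sec, or False if it is
    still blocked (the daemon thread is then simply abandoned).
    """
    done = threading.Event()

    def worker():
        fn()          # e.g. lambda: pipeline.send_event(Gst.Event.new_eos())
        done.set()

    threading.Thread(target=worker, daemon=True).start()
    return done.wait(timeout_sec)
```

If this returns False, the only options I see are forcing the pipeline state down and restarting the process, since the muxer will not have finalized the file.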
- Using `gst-launch-1.0`, I added the `-e` flag (force EOS on shutdown) and used Python’s `subprocess` to start the process. The main Python process simply sleeps 40-45 minutes and then sends SIGINT to the subprocess. The `gst-launch-1.0` log sometimes stops at `EOS on shutdown enabled -- Forcing EOS on the pipeline`.
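In outline, the launcher does the following (a simplified sketch; the command list, durations, and the SIGKILL fallback are illustrative, not the exact production code):

```python
import signal
import subprocess
import time

def record(cmd, duration_sec, grace_sec):
    """Launch cmd, let it run for duration_sec, then send SIGINT.

    If the process does not exit within grace_sec (e.g. gst-launch-1.0
    hangs while forcing EOS), escalate to SIGKILL so the camera is
    released. Returns the process return code.
    """
    proc = subprocess.Popen(cmd)            # e.g. ["gst-launch-1.0", "-e", ...]
    time.sleep(duration_sec)
    proc.send_signal(signal.SIGINT)         # like Ctrl-C; with -e this triggers EOS
    try:
        proc.wait(timeout=grace_sec)
    except subprocess.TimeoutExpired:
        proc.kill()                         # last resort: the file likely lacks moov
        proc.wait()
    return proc.returncode
```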
In either case, the camera is not released and the process has to be killed by hand. The resulting video is corrupted (it is missing the moov atom) and can neither be played nor read with OpenCV for further processing.
Is this a problem with my pipeline, a device-specific problem, or a GStreamer bug?
How do I fix this?