Low FPS while saving video and detecting faces

Good evening!

My code grabs frames from the webcam, detects faces, and saves a video file at the same time. I use Python, OpenCV for image and video processing, and the face_recognition library for face detection and recognition.

The problem is that the recorded video is far too slow: the final FPS is about 1.3, although the camera provides 30 FPS. I tried multithreading, but the problem didn't go away.

To create a video stream:

src="nvarguscamerasrc ! video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080,format=(string)NV12, framerate=(fraction)30/1 ! nvvidconv ! video/x-raw, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink"

capture = cv2.VideoCapture(src, cv2.CAP_GSTREAMER)
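
For context, the processing loop has roughly this shape (a simplified sketch, not my exact code; the VideoWriter settings and the face_recognition call are just placeholders for what the script does):

import cv2
import face_recognition

capture = cv2.VideoCapture(src, cv2.CAP_GSTREAMER)
# A plain cv2.VideoWriter is shown here only to illustrate the structure
writer = cv2.VideoWriter("out.avi", cv2.VideoWriter_fourcc(*"XVID"), 30, (1920, 1080))

while True:
    ret, frame = capture.read()
    if not ret:
        break
    # Detection/recognition runs on every frame and can easily take much
    # longer than 1/30 s, which is what drags the effective FPS down
    boxes = face_recognition.face_locations(frame)
    writer.write(frame)

capture.release()
writer.release()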

Now I don't even know what to do… Do you have any ideas about what I should try so that I can process the video and save it to a file at the same time at normal speed?

Thank you!

Hi,
In OpenCV the main format is BGR, which is not supported by most hardware engines on Jetson, so the conversion has to be done on the CPU, and this limits performance. For running deep learning inference, we suggest trying the DeepStream SDK. You can start with the default deepstream-app and then apply your own model.

Thank you, I will give it a try! :)

If I understand correctly, I can modify the pipeline so that the captured video is processed in Python code and saved to a file at the same time. I read that I should use tee, queue and filesink, but I ran into problems: whenever I try to modify my pipeline, it crashes.

It’s my current pipeline:

src="nvarguscamerasrc ! video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080,format=(string)NV12, framerate=(fraction)30/1 ! nvvidconv ! video/x-raw, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink"

Not sure if I correctly understand your case, but you may try this:

src="nvarguscamerasrc ! video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080,format=(string)NV12, framerate=(fraction)30/1 ! tee name=t \
t. ! queue ! nvvidconv ! video/x-raw, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink drop=1 \
t. ! queue ! nvv4l2h264enc insert-vui=1 ! h264parse ! qtmux ! filesink location=test_1080p30_h264.mp4"
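
In case it helps, this is how the pipeline above would typically be opened from Python (a minimal sketch; drop=1 tells the appsink to drop stale buffers instead of blocking, so a slow Python loop should not stall the recording branch):

import cv2

# src is the tee pipeline string suggested above
capture = cv2.VideoCapture(src, cv2.CAP_GSTREAMER)
if not capture.isOpened():
    raise RuntimeError("Failed to open the GStreamer pipeline")

while True:
    ret, frame = capture.read()   # BGR frames from the appsink branch
    if not ret:
        break
    # ... face detection / recognition here ...

capture.release()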

Thank you, it works! But I can't play the recorded file on my local machine afterwards. What should I change in the pipeline?

Maybe you just need to change the player. On Jetson, you can play it with gst-play-1.0:

gst-play-1.0 test_1080p30_h264.mp4

But I'm trying to open it on a Windows machine… I tried two different players with no success; I receive an error every time. Maybe something is wrong with the codec?

Did you kill the pipeline while it was running? For the file to be saved properly before closing, sending an EOS may be required, but this is not obvious from OpenCV.
You may just read the frames in OpenCV with appsink and send them to a VideoWriter, for example as sketched below.
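
A minimal sketch of that idea, assuming OpenCV was built with GStreamer support (the writer pipeline, resolution, frame rate and file name here are only examples):

import cv2

cap_pipeline = src  # capture pipeline ending in appsink, as above
out_pipeline = ("appsrc ! video/x-raw, format=BGR ! queue ! videoconvert ! "
                "video/x-raw, format=BGRx ! nvvidconv ! nvv4l2h264enc ! "
                "h264parse ! qtmux ! filesink location=out.mp4")

capture = cv2.VideoCapture(cap_pipeline, cv2.CAP_GSTREAMER)
# fourcc=0 because the encoder is defined inside the GStreamer pipeline
writer = cv2.VideoWriter(out_pipeline, cv2.CAP_GSTREAMER, 0, 30.0, (1920, 1080))

while True:
    ret, frame = capture.read()
    if not ret:
        break
    writer.write(frame)

capture.release()
# Closing the writer lets the output pipeline finish writing the file
writer.release()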

Yes, you are right, I'm killing the pipeline while it's running. But using a cv2.VideoWriter is a bad idea: as I wrote at the beginning of this topic, I also perform face recognition on the frames, which distorts the speed of the saved video.

Do you have any ideas about how to save the video properly using your GStreamer pipeline? I've read something about the -e option…

You may try removing the muxer and saving a raw H264 file.
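
For example, keeping the tee pipeline from above but writing a raw H264 elementary stream instead of an MP4 (a sketch; the file name is arbitrary):

src="nvarguscamerasrc ! video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1 ! tee name=t \
t. ! queue ! nvvidconv ! video/x-raw, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink drop=1 \
t. ! queue ! nvv4l2h264enc insert-vui=1 ! h264parse ! filesink location=test_1080p30.h264"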

Otherwise, there are several ways to do this. I think the simplest would be to use a gst-launch-1.0 -e command with tee, one sub-pipeline encoding and saving the file, and a second sub-pipeline for your app, sinking to a v4l2loopback node (or shmsink, but it may be limited). From the OpenCV app, you would just open the v4l2loopback node (you may use the V4L2 API).
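
Roughly, that could look like this (an untested sketch; /dev/video1 is an assumed v4l2loopback node, which requires the v4l2loopback kernel module to be loaded, and the BGR output format is just one option your OpenCV build may accept):

gst-launch-1.0 -e nvarguscamerasrc ! 'video/x-raw(memory:NVMM), width=1920, height=1080, framerate=30/1, format=NV12' ! tee name=t \
t. ! queue ! nvv4l2h264enc ! h264parse ! qtmux ! filesink location=test.mp4 \
t. ! queue ! nvvidconv ! video/x-raw, format=BGRx ! videoconvert ! video/x-raw, format=BGR ! v4l2sink device=/dev/video1

and then the OpenCV app would simply open the loopback node:

capture = cv2.VideoCapture("/dev/video1", cv2.CAP_V4L2)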

Hello!

My attempts have been in vain. I cannot achieve the desired result: processing the video and saving it to a file at the same time. Can you help me, please? I am trying to find the answer myself, but with no results yet. How should I modify the pipeline above to reach this goal?

You may try removing the muxer and saving a raw H264 file.

If I delete qtmux, the resulting file is created OK, but I can't play it.

Even if I run a simple pipeline like gst-launch-1.0 nvarguscamerasrc num-buffers=2000 ! 'video/x-raw(memory:NVMM),width=1920, height=1080, framerate=30/1, format=NV12' ! nvvidconv ! x264enc ! qtmux ! filesink location=test.mp4 -e from the terminal, I can't play the recorded video afterwards on a Windows machine using various players :(

@Honey_Patouceul can you help me, please? I really want to solve this problem :D

On Jetson, you would rather try:

gst-launch-1.0 -ev nvarguscamerasrc num-buffers=2000 ! 'video/x-raw(memory:NVMM),width=1920, height=1080, framerate=30/1, format=NV12' ! nvv4l2h264enc ! h264parse ! qtmux ! filesink location=test.mp4 

If I use the following pipeline with OpenCV, it works OK and saves the video properly. But I still can't open it on Windows :(

src = "nvarguscamerasrc sensor-id=0 !         video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, framerate=(fraction)30/1 ! tee name=t         t. ! queue! omxh265enc ! matroskamux !         filesink -e location=test.mkv         t. ! queue! nvvidconv ! video/x-raw, format=(string)BGRx !         videoconvert ! video/x-raw, format=(string)BGR ! appsink"
capture = cv2.VideoCapture(src, cv2.CAP_GSTREAMER)

Maybe I should reinstall OpenCV as mentioned in this topic?

UPD: Now I can open it with the VLC player, but the video is still quite slow :( Still, the fact that it opens at all is already great progress! :D

I think it happens because face detection and face recognition take some time, and that I should use two different pipelines: one for processing and one for saving. But I have no idea how to do that.

Try adding h265parse between the H265 encoder and matroskamux, for example as below.
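
Applied to your pipeline above, that would look roughly like this (a sketch, everything else kept as is; the stray -e is dropped here because -e is a gst-launch option, not a filesink property):

src = "nvarguscamerasrc sensor-id=0 ! video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, framerate=(fraction)30/1 ! tee name=t t. ! queue ! omxh265enc ! h265parse ! matroskamux ! filesink location=test.mkv t. ! queue ! nvvidconv ! video/x-raw, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink"
capture = cv2.VideoCapture(src, cv2.CAP_GSTREAMER)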