Optimising GStreamer Pipeline for 1080p60 Recording

Hi,

I am trying to record 1920×1080 video at 60 fps using an IMX477 sensor on the Jetson Nano B01. This is the GStreamer pipeline I am using to display (sensor-mode=1 corresponds to the sensor's 1920×1080 60 fps mode):

gst-launch-1.0 nvarguscamerasrc sensor-id=0 sensor-mode=1 ! "video/x-raw(memory:NVMM), width=1920, height=1080, framerate=60/1" ! nvvidconv ! nvoverlaysink

This runs fine, and it seems to be displaying correctly. However, when I try to record I get an extremely low frame rate in the video file. If I scale the images down to 1280×720 I am able to record at 60 fps, but at the full 1920×1080 resolution I only get 1-3 fps. This is the pipeline I used for recording:

gst-launch-1.0 nvarguscamerasrc sensor-id=0 sensor-mode=1 ! 'video/x-raw(memory:NVMM), width=1920, height=1080, format=NV12, framerate=60/1' ! nvv4l2h264enc ! h264parse ! mp4mux ! filesink location=test.mp4

Is there a more efficient way to do this so that I can record videos at the full resolution?
Would it be better to save the raw data and encode at a later stage?

I am running in 10W mode with a 5V 5A barrel-jack power supply to maximise available power, and I have also run jetson_clocks.

Not sure, but it may just be a player issue.
Check with GStreamer on the Nano:

# No display
gst-launch-1.0 filesrc location=test.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! fpsdisplaysink text-overlay=0 video-sink=fakesink sync=1 -v

# Display to local HDMI monitor, be sure your current mode supports at least 60Hz
gst-launch-1.0 filesrc location=test.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! fpsdisplaysink text-overlay=0 video-sink=nvoverlaysink sync=1 -v
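
If you prefer to check from Python instead, here is a minimal sketch (it assumes your OpenCV build can open MP4 files, e.g. through the FFmpeg backend):

import cv2

# Open the recorded file and read the frame rate from the container metadata
cap = cv2.VideoCapture("test.mp4")
if not cap.isOpened():
    raise RuntimeError("Could not open test.mp4")

fps = cap.get(cv2.CAP_PROP_FPS)
frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
print(f"reported fps: {fps}, frames: {frames}, duration: {frames / fps:.2f} s")
cap.release()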

Thank you! Turns out it was actually recording correctly and the video player just couldn’t play it.

How would I use this pipeline in OpenCV (Python)? Do I need to use a different pipeline for cv2.VideoCapture and cv2.VideoWriter?

gst-launch-1.0 nvarguscamerasrc sensor-id=0 sensor-mode=1 ! 'video/x-raw(memory:NVMM), width=1920, height=1080, format=NV12, framerate=60/1' ! nvv4l2h264enc ! h264parse ! mp4mux ! filesink location=test.mp4

This depends on what processing you want to do.

  • If you want to use CPU algorithms, you would have to use a capture pipeline with the GStreamer backend that converts into BGR format for appsink. You can get BGRx at high speed with the Jetson HW converter, but removing the extra fourth byte to get BGR with videoconvert may be slow. Depending on your resolution, framerate and processing, OpenCV on the CPU may not be fast enough on your Jetson model, so be sure to use all cores and boost your clocks if power saving is not a big concern. You would also have to use videoconvert in the video writer pipeline to convert into BGRx or RGBA before reaching the HW converter and encoder. You may add a queue before appsink and after appsrc in the pipelines, or try the n-threads property of videoconvert (see the capture sketch after this list).

  • Alternatively, you may try the GStreamer plugin nvivafilter, which can run OpenCV CUDA operations on the GPU if you can translate your processing into that filter. That would overcome the above restrictions, but not all OpenCV algorithms are available with the CUDA backend inside such a filter. You may also write your own CUDA kernels. It is better to use RGBA as the output format of nvivafilter (NV12 has some stride to be managed if the width is not aligned).
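
As a concrete illustration of the CPU path from the first bullet, here is a minimal capture sketch (the element choices and caps are assumptions based on the pipelines earlier in this thread; it assumes OpenCV was built with GStreamer support):

import cv2

# Camera -> NVMM NV12 -> HW conversion to BGRx -> CPU conversion to BGR -> appsink
capture_pipeline = (
    "nvarguscamerasrc sensor-id=0 sensor-mode=1 ! "
    "video/x-raw(memory:NVMM), width=1920, height=1080, format=NV12, framerate=60/1 ! "
    "nvvidconv ! video/x-raw, format=BGRx ! "
    "videoconvert ! video/x-raw, format=BGR ! "
    "queue ! appsink drop=1"
)

cap = cv2.VideoCapture(capture_pipeline, cv2.CAP_GSTREAMER)
if not cap.isOpened():
    raise RuntimeError("Failed to open capture pipeline")

while True:
    ok, frame = cap.read()  # frame is a 1080x1920x3 BGR numpy array
    if not ok:
        break
    # ... run your CPU processing on frame here ...

cap.release()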

Thank you for the response!

For now, all I want to do is save the 1080p60 videos. They will later be copied from the Nano onto a desktop PC for processing (the cameras are used as a stereo pair so that I can extract depth information from the captured videos). I want to use OpenCV for the video capture because I am already running an object detection system on a third (USB) camera, and I want everything to run in the same program so that it's easy to run and monitor.
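
For that use case, a hedged sketch of a record-only loop, pairing the capture pipeline above with a cv2.VideoWriter that pushes frames back through the HW converter and encoder (the filename and property values are illustrative):

import cv2

W, H, FPS = 1920, 1080, 60

# Capture from the CSI sensor into BGR, as in the earlier sketch
cap = cv2.VideoCapture(
    "nvarguscamerasrc sensor-id=0 sensor-mode=1 ! "
    f"video/x-raw(memory:NVMM), width={W}, height={H}, format=NV12, framerate={FPS}/1 ! "
    "nvvidconv ! video/x-raw, format=BGRx ! videoconvert ! video/x-raw, format=BGR ! "
    "appsink drop=1",
    cv2.CAP_GSTREAMER,
)

# Push BGR frames from appsrc through videoconvert/nvvidconv into the HW H.264 encoder
writer = cv2.VideoWriter(
    "appsrc ! queue ! videoconvert ! video/x-raw, format=BGRx ! "
    "nvvidconv ! nvv4l2h264enc ! h264parse ! mp4mux ! filesink location=cam0.mp4",
    cv2.CAP_GSTREAMER,
    0,            # fourcc is ignored when a full GStreamer pipeline is given
    float(FPS),
    (W, H),
)

if not cap.isOpened() or not writer.isOpened():
    raise RuntimeError("Failed to open capture or writer pipeline")

while True:
    ok, frame = cap.read()
    if not ok:
        break
    writer.write(frame)

cap.release()
writer.release()  # sends EOS so mp4mux can finalise the file

Note that writer.release() must run before the program exits; otherwise mp4mux may never write the MP4 index and the file can end up unplayable. If the program might be killed mid-recording, matroskamux writing a .mkv file is more forgiving.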