Hi members,
I am currently using a Jetson NX (JetPack 4.5, L4T 32.5.0) and an SD card (Samsung 64 GB EVO Plus, UHS-I / U3 / Class 10).
I want to write HD images from the ZED2 camera (left image + right image) to the SD card at 30 fps per camera,
but I am currently getting only 6 fps per camera.
I am using OpenCV to write the images.
Any ideas on how to write faster? Can you share some scripts to achieve it?
Many thanks,
Run jetson_clocks to push the CPU clock to maximum and see if this improves the performance.
Hi WayneWWW,
Thanks for your help, but I have already enabled jetson_clocks.
The SD card's write-speed limit is 60 MB/s, and each image is 2.8 MB (.bmp) or 1.2 MB (.png).
Can I save at 30 fps per camera, or is this a hardware limit?
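A quick back-of-the-envelope check with these numbers (assuming both cameras are saved every frame):

```python
# Rough required write bandwidth, using the per-frame sizes measured above
BMP_MB = 2.8   # per-frame .bmp size in MB
PNG_MB = 1.2   # per-frame .png size in MB
CAMERAS = 2    # left + right
FPS = 30

bmp_rate = BMP_MB * CAMERAS * FPS   # MB/s needed for .bmp
png_rate = PNG_MB * CAMERAS * FPS   # MB/s needed for .png
print("bmp: %.0f MB/s, png: %.0f MB/s, card limit: 60 MB/s" % (bmp_rate, png_rate))
```

Both figures exceed the card's 60 MB/s limit, which is why I suspect a hardware limit.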
Just to clarify: are you doing this on the NX devkit, or on your own custom board with an extra SD card slot?
Thanks WayneWWW,
It is the Jetson NX devkit.
Hi,
Please share information about ZED2 camera:
$ v4l2-ctl -d /dev/videoX --list-formats-ext
A common format is YUV422, and it should run fine on Jetson platforms. For other special formats, there might be an additional memory copy, and we may see worse performance.
Hi all,
This is output of the command:
Index       : 0
Type        : Video Capture
Pixel Format: 'YUYV'
Name        : YUYV 4:2:2
    Size: Discrete 2560x720
        Interval: Discrete 0.017s (60.000 fps)
        Interval: Discrete 0.033s (30.000 fps)
        Interval: Discrete 0.067s (15.000 fps)
    Size: Discrete 1344x376
        Interval: Discrete 0.010s (100.000 fps)
        Interval: Discrete 0.017s (60.000 fps)
        Interval: Discrete 0.033s (30.000 fps)
        Interval: Discrete 0.067s (15.000 fps)
    Size: Discrete 3840x1080
        Interval: Discrete 0.033s (30.000 fps)
        Interval: Discrete 0.067s (15.000 fps)
    Size: Discrete 4416x1242
        Interval: Discrete 0.067s (15.000 fps)
To clarify: if I only read frames from the ZED2, I still get 30 fps per camera. However, when I both read and write images to the SD card, I only get ~6 fps.
Thanks for your help.
Hi,
Please try this gst-launch-1.0 command and check whether you can achieve the target frame rate:
$ gst-launch-1.0 v4l2src device=/dev/video1 num-buffers=300 ! video/x-raw,format=YUY2,width=1344,height=376,framerate=30/1 ! fpsdisplaysink text-overlay=0 video-sink='filesink location=a.yuv sync=0' -v
Please execute sudo nvpmodel -m 2 and sudo jetson_clocks.
Hi @DaneLLL ,
This is the output:
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0/GstFileSink:filesink0: sync = true
Setting pipeline to PLAYING ...
New clock: GstSystemClock
/GstPipeline:pipeline0/GstV4l2Src:v4l2src0.GstPad:src: caps = video/x-raw, format=(string)YUY2, width=(int)1344, height=(int)376, framerate=(fraction)30/1, pixel-aspect-ratio=(fraction)1/1, colorimetry=(string)2:4:5:1, interlace-mode=(string)progressive
/GstPipeline:pipeline0/GstCapsFilter:capsfilter0.GstPad:src: caps = video/x-raw, format=(string)YUY2, width=(int)1344, height=(int)376, framerate=(fraction)30/1, pixel-aspect-ratio=(fraction)1/1, colorimetry=(string)2:4:5:1, interlace-mode=(string)progressive
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0.GstGhostPad:sink.GstProxyPad:proxypad0: caps = video/x-raw, format=(string)YUY2, width=(int)1344, height=(int)376, framerate=(fraction)30/1, pixel-aspect-ratio=(fraction)1/1, colorimetry=(string)2:4:5:1, interlace-mode=(string)progressive
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0/GstFileSink:filesink0.GstPad:sink: caps = video/x-raw, format=(string)YUY2, width=(int)1344, height=(int)376, framerate=(fraction)30/1, pixel-aspect-ratio=(fraction)1/1, colorimetry=(string)2:4:5:1, interlace-mode=(string)progressive
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0.GstGhostPad:sink: caps = video/x-raw, format=(string)YUY2, width=(int)1344, height=(int)376, framerate=(fraction)30/1, pixel-aspect-ratio=(fraction)1/1, colorimetry=(string)2:4:5:1, interlace-mode=(string)progressive
/GstPipeline:pipeline0/GstCapsFilter:capsfilter0.GstPad:sink: caps = video/x-raw, format=(string)YUY2, width=(int)1344, height=(int)376, framerate=(fraction)30/1, pixel-aspect-ratio=(fraction)1/1, colorimetry=(string)2:4:5:1, interlace-mode=(string)progressive
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0/GstFileSink:filesink0: sync = true
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 16, dropped: 0, current: 31.96, average: 31.96
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 31, dropped: 0, current: 29.95, average: 30.96
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 46, dropped: 0, current: 29.95, average: 30.62
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 61, dropped: 0, current: 29.95, average: 30.45
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 76, dropped: 0, current: 29.94, average: 30.35
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 91, dropped: 0, current: 29.96, average: 30.29
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 106, dropped: 0, current: 29.95, average: 30.24
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 121, dropped: 0, current: 29.95, average: 30.20
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 136, dropped: 0, current: 29.95, average: 30.17
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 151, dropped: 0, current: 29.96, average: 30.15
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 166, dropped: 0, current: 29.95, average: 30.13
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 181, dropped: 0, current: 29.95, average: 30.12
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 196, dropped: 0, current: 29.96, average: 30.11
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 211, dropped: 0, current: 29.95, average: 30.09
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 226, dropped: 0, current: 29.95, average: 30.09
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 241, dropped: 0, current: 29.95, average: 30.08
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 256, dropped: 0, current: 29.95, average: 30.07
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 271, dropped: 0, current: 29.95, average: 30.06
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 286, dropped: 0, current: 29.95, average: 30.06
Got EOS from element "pipeline0".
Execution ended after 0:00:10.237994889
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...
I had already run sudo nvpmodel -m 2 and sudo jetson_clocks before.
Thanks.
Hi,
From the log, 30 fps is achieved in the 1344x376 mode. Do you know which sensor mode you set in OpenCV? You may try that mode in the gst-launch-1.0 command to clarify whether the low fps only happens in OpenCV.
Hi @DaneLLL,
I still get 30 fps with OpenCV when I read images without writing them to the SD card. The problem only occurs when I read and also write images to the SD card. Could you suggest how to improve the writing speed to the SD card?
Many thanks
Hi,
Please try the sample:
import sys
import cv2

def read_cam():
    # Capture through a GStreamer pipeline; passing cv2.CAP_GSTREAMER makes
    # the backend explicit so the string is not misread as a device name.
    cap = cv2.VideoCapture(
        "v4l2src ! video/x-raw,width=1344,height=376,format=YUY2,framerate=30/1 ! "
        "videoconvert ! video/x-raw,format=BGR ! appsink sync=0",
        cv2.CAP_GSTREAMER)
    w = cap.get(cv2.CAP_PROP_FRAME_WIDTH)
    h = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)
    fps = cap.get(cv2.CAP_PROP_FPS)
    print('Src opened, %dx%d @ %d fps' % (w, h, fps))
    # Write raw BGR frames straight to a file through GStreamer
    gst_out = "appsrc ! video/x-raw,format=BGR ! queue ! filesink location=/tmp/a.yuv sync=0"
    out = cv2.VideoWriter(gst_out, cv2.CAP_GSTREAMER, 0, float(fps), (int(w), int(h)))
    if not out.isOpened():
        print("Failed to open output")
        sys.exit()
    if cap.isOpened():
        for i in range(1, 300):
            ret_val, img = cap.read()
            if not ret_val:
                break
            out.write(img)
    else:
        print("pipeline open failed")
    print("successfully exit")
    cap.release()
    out.release()

if __name__ == '__main__':
    read_cam()
Not sure if it helps, but we generally run a GStreamer pipeline in cv2.VideoCapture() and cv2.VideoWriter(). If it still cannot achieve the target fps, we suggest encoding to an h264/h265 stream. Please refer to this sample:
Displaying to the screen with OpenCV and GStreamer - #9 by DaneLLL
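As a rough sketch of that suggestion (untested here; it assumes an OpenCV build with GStreamer support and JetPack's nvv4l2h264enc hardware encoder, and the output path is just an example), the writer pipeline could be built like this:

```python
def h264_writer_pipeline(path):
    # GStreamer pipeline string for cv2.VideoWriter: convert BGR frames,
    # encode with the Jetson H.264 hardware encoder, mux into Matroska.
    return ("appsrc ! video/x-raw,format=BGR ! queue ! videoconvert ! "
            "video/x-raw,format=BGRx ! nvvidconv ! nvv4l2h264enc ! "
            "h264parse ! matroskamux ! filesink location=%s" % path)

print(h264_writer_pipeline("/tmp/a.mkv"))
```

It would then be opened as cv2.VideoWriter(h264_writer_pipeline("/tmp/a.mkv"), cv2.CAP_GSTREAMER, 0, 30.0, (1344, 376)). H.264 output cuts the per-frame bytes dramatically compared with raw frames, so the SD card write bandwidth should no longer be the bottleneck.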
Hi @DaneLLL ,
Thank you for your support, I really appreciate it.
I am trying your approach. I don't read images with cv2.VideoCapture; I read them with the ZED API. So my question is: can I read images without VideoCapture but still write them with VideoWriter?
Sorry if my question is naive, but so far, when I read images with the ZED API, VideoWriter cannot write the output video.
Hi,
We are not sure how the ZED API works, as we have no experience using it. Since v4l2-ctl --list-formats-ext shows YUYV, we wanted to know whether running a GStreamer pipeline can achieve the target performance, and the gst-launch-1.0 run looks fine.
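That said, cv2.VideoWriter only needs a contiguous (height, width, 3) uint8 BGR array of the declared size, no matter how the frame was captured. A hypothetical helper (assuming the ZED API hands back a BGRA numpy array, which would explain the writer silently producing nothing) could normalize each frame before out.write():

```python
import numpy as np

def to_writer_frame(img, w, h):
    """Normalize a frame from any source (e.g. the ZED API) into the
    (h, w, 3) uint8 BGR layout that cv2.VideoWriter expects."""
    arr = np.asarray(img)
    if arr.ndim == 3 and arr.shape[2] == 4:
        arr = arr[:, :, :3]                # drop the alpha channel (BGRA -> BGR)
    if arr.dtype != np.uint8:
        arr = arr.astype(np.uint8)
    if arr.shape != (h, w, 3):
        raise ValueError("frame size must match the VideoWriter size")
    return np.ascontiguousarray(arr)
```

VideoWriter rejects frames that do not match the size and dtype declared when it was opened, and it does so silently, so checking the shape first is usually the quickest way to explain an empty output file.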