Cannot record 2 cameras at 30 fps on Jetson Nano

I am trying to record video from 2 cameras at 30 fps and a resolution of 640x480, but I am unable to reach 30 fps.

I am using GStreamer and OpenCV with the following pipelines:

v4l2src device=/dev/video0  ! video/x-raw,width=640,height=480,framerate=30/1 ! videoconvert ! video/x-raw,format=BGR ! queue ! appsink
v4l2src device=/dev/video1 ! video/x-raw,width=640,height=480,framerate=30/1 ! videoconvert ! video/x-raw,format=BGR ! queue ! appsink

# motion detection runs in cv2 on the appsink frames; recording happens only when there is motion
 
appsrc ! video/x-raw, format=BGR ! queue ! videoconvert ! video/x-raw,format=BGRx ! nvvidconv ! nvv4l2h264enc ! h264parse ! matroskamux ! filesink location={}.mkv
appsrc ! video/x-raw, format=BGR ! queue ! videoconvert ! video/x-raw,format=BGRx ! nvvidconv ! nvv4l2h264enc ! h264parse ! matroskamux ! filesink location={}.mkv
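For reference, the motion-detection step between the appsink and appsrc pipelines can be sketched as simple frame differencing. This is a minimal numpy-only stand-in, since the original cv2 code is not shown; the threshold values here are assumptions, not the poster's settings.

```python
import numpy as np

# Simplified stand-in for the cv2 motion check: compare consecutive
# grayscale frames and report motion when enough pixels changed.
# MOTION_THRESHOLD and MIN_CHANGED_FRACTION are assumed values,
# not taken from the original application.
MOTION_THRESHOLD = 25        # per-pixel intensity delta that counts as "changed"
MIN_CHANGED_FRACTION = 0.01  # fraction of changed pixels that counts as motion

def has_motion(prev_frame: np.ndarray, frame: np.ndarray) -> bool:
    """Return True when the two frames differ enough to call it motion."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    changed = np.count_nonzero(diff > MOTION_THRESHOLD)
    return changed / diff.size > MIN_CHANGED_FRACTION

if __name__ == "__main__":
    still = np.zeros((480, 640), dtype=np.uint8)
    moved = still.copy()
    moved[100:200, 100:200] = 255  # simulate an object entering the scene
    print(has_motion(still, still))  # False: identical frames
    print(has_motion(still, moved))  # True: a 100x100 region changed
```

The real application would feed frames from the capture pipeline in and, on motion, push them into the appsrc recording pipeline.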

When I look at jtop, the RAM, CPU, and GPU usage are all under their limits, jetson_clocks is enabled, and the cameras' frame rate and auto-exposure are set to give a constant 30 fps.

Questions:
1. Is it possible to record 2 USB cameras at 30 fps and 640x480 on the Jetson Nano 2GB dev kit?
2. Is there any error in the GStreamer pipelines? Can they be made more efficient to achieve 30 fps?

H/W: Jetson Nano 2GB dev kit
S/W: JetPack 4.6

We suspect the memory copies hurt performance.
Try nvv4l2camerasrc instead of v4l2src to verify it.

Hey Shane, I will try it out and let you know. Thanks.

Hey Shane, changing to nvv4l2camerasrc gives no improvement; the fps is still the same.

Are they USB cameras? USB 2.0 or USB 3.0?

Please verify by launching both cameras with v4l2-ctl:

v4l2-ctl --set-fmt-video=width=640,height=480 --stream-mmap -d /dev/video0 &
v4l2-ctl --set-fmt-video=width=640,height=480 --stream-mmap -d /dev/video1 &

They are USB cameras on USB 2.0 ports. I ran the two commands and they run at 29.75 fps on average.

Hi,
You can run this and check if it achieves 30 fps for both cameras:

gst-launch-1.0 -v v4l2src device=/dev/video0  ! video/x-raw,width=640,height=480,framerate=30/1 ! videoconvert ! video/x-raw,format=BGR ! queue ! fpsdisplaysink text-overlay=0 video-sink=fakesink sync=0 v4l2src device=/dev/video1  ! video/x-raw,width=640,height=480,framerate=30/1 ! videoconvert ! video/x-raw,format=BGR ! queue ! fpsdisplaysink text-overlay=0 video-sink=fakesink sync=0

There is an additional buffer copy when using OpenCV, so performance is very likely capped by CPU capability.

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 1099, dropped: 0, current: 29.72, average: 29.80
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink1: last-message = rendered: 1126, dropped: 0, current: 29.98, average: 29.21

Hi,
From the prints, it looks like both sources can achieve 30 fps. We would suggest creating an individual thread for each camera. A user has shared a sample; please take a look:
Nvidia-desktop kernel: [407343.357549] (NULL device *): nvhost_channelctl: invalid cmd 0x80685600 - #18 by DawnMaples

Sure, let me try; I will get back to you.

Hey Dane, I was already using a separate thread just for reading frames, in which I read both cameras. I have now made two separate threads, one per camera, but the resulting fps has not improved; it is still at 25 fps.

Hi,
You can execute sudo tegrastats to get the system load. From the description, it is very likely the performance is capped by CPU capability; OpenCV generally takes significant CPU usage.

Hey Dane, I found the problem in my application: it was caused by high file I/O. I have solved the issue now. cv2 does also add to the CPU usage, but it stays under my threshold.


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.