Speed up USB camera read in Python without OpenCV

I have posted before about getting consistent FPS from a USB camera in Python. Using GStreamer and OpenCV, with the read running in a thread, I can achieve consistent FPS limited by the exposure time. However, once the exposure time is low enough that the frame rate goes above about 50 fps, the FPS becomes unpredictable and appears to max out somewhere between 50 and 60 fps. Running "v4l2-ctl --set-fmt-video=width=1920,height=1080,pixelformat=MJPG --stream-mmap --stream-count=100 -d /dev/video0" reports the correct FPS on the command line, limited only by the exposure or the camera's maximum FPS. Right now in Python I am running the following in a thread:
gstr = ('v4l2src device=/dev/video0 num-buffers=500 ! '
        'video/x-raw, format=(string)UYVY, width=(int)1280, height=(int)720, '
        'pixel-aspect-ratio=(fraction)1/1 ! '
        'videoconvert ! video/x-raw,format=BGR ! appsink')

cap = cv2.VideoCapture(gstr, cv2.CAP_GSTREAMER)
while True:
    ret, frame = cap.read()
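The read-in-a-thread approach mentioned above can be sketched as follows. This is a minimal illustration, not the poster's actual code; the `FrameGrabber` name is hypothetical, and it works with any capture object exposing a `read()` method, such as `cv2.VideoCapture`:

```python
import threading

class FrameGrabber:
    """Hypothetical helper: read frames continuously on a background
    thread, keeping only the newest one, so the consumer never blocks
    on a stale buffer in the driver queue."""

    def __init__(self, cap):
        self.cap = cap          # any object with a read() -> (ret, frame) method
        self.frame = None
        self.lock = threading.Lock()
        self.running = True
        self.thread = threading.Thread(target=self._loop, daemon=True)
        self.thread.start()

    def _loop(self):
        while self.running:
            ret, frame = self.cap.read()
            if ret:
                with self.lock:
                    self.frame = frame

    def latest(self):
        with self.lock:
            return self.frame

    def stop(self):
        self.running = False
        self.thread.join()
```

With OpenCV this would be used as `grabber = FrameGrabber(cap)`, then `frame = grabber.latest()` inside the processing loop.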

Is there any other method besides OpenCV in Python to grab frames from the camera that might be faster? I think OpenCV is too slow and is what causes the unpredictability once I reach the 50-60 fps region. Alternatively, can the capture be set to grab at a specified FPS? I am only interested in sampling at 10 fps, but that rate is not available in the format I am reading.
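One way to sample at a fixed rate inside GStreamer itself is the `videorate` element with `drop-only=true`, which discards frames to match a requested caps framerate. Below is an untested sketch of such a pipeline string; the caps values are carried over from the pipeline above and may need adjusting for the actual camera:

```python
# Sketch: have GStreamer thin the stream to 10 fps before appsink,
# so the Python side only ever sees the frames it wants to sample.
gstr_10fps = (
    'v4l2src device=/dev/video0 ! '
    'video/x-raw, format=(string)UYVY, width=(int)1280, height=(int)720 ! '
    'videorate drop-only=true ! video/x-raw, framerate=10/1 ! '
    'videoconvert ! video/x-raw, format=BGR ! '
    'appsink drop=true max-buffers=1'
)
# Used exactly like the original pipeline:
# cap = cv2.VideoCapture(gstr_10fps, cv2.CAP_GSTREAMER)
```

`appsink drop=true max-buffers=1` additionally keeps only the newest buffer, so a slow consumer falls behind on frames rather than latency.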

Hi,
Please run sudo nvpmodel -m 2 and sudo jetson_clocks to get maximum throughput of CPU cores.

For dropping frames, you may try to get frame data in I420 in appsink:
[Gstreamer] nvvidconv, BGR as INPUT - #4 by DaneLLL

And if you don’t want to process the buffer, you can skip calling

cv2.cvtColor(img, cv2.COLOR_YUV2BGR_I420)

This should save some CPU by not converting I420 to BGR for each frame.
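The saving is easy to quantify: I420 is a planar format at 1.5 bytes per pixel, while BGR is 3 bytes per pixel, and the conversion touches every pixel of every frame. A quick sketch of the arithmetic:

```python
def i420_bytes(width, height):
    # Full-resolution Y plane plus two quarter-resolution chroma planes
    return width * height + 2 * (width // 2) * (height // 2)

def bgr_bytes(width, height):
    # 3 bytes (one per channel) for every pixel
    return 3 * width * height

# For the 1280x720 stream above:
# i420_bytes(1280, 720) -> 1382400 bytes per frame
# bgr_bytes(1280, 720)  -> 2764800 bytes per frame
```

At 50-60 fps, skipping the conversion avoids writing roughly 150 MB/s of BGR data, on top of the per-pixel conversion work itself.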

Those pipelines in that post did not work for me. However, I found a workaround that seems to do the trick. I call the device with the same GStreamer pipeline as in my OpenCV capture, but through a terminal command launched from Python. I store the average FPS results in a text file and then read out the last average FPS. This matches the FPS read from OpenCV perfectly. Cheers!

import os
import signal
import subprocess
import time
import cv2

gstr = ('v4l2src device=/dev/video0 num-buffers=500 ! '
        'video/x-raw, format=(string)UYVY, width=(int)1280, height=(int)720, '
        'pixel-aspect-ratio=(fraction)1/1 ! '
        'videoconvert ! video/x-raw,format=BGR ! appsink')

# Run the same pipeline through gst-launch-1.0 with fpsdisplaysink,
# logging its verbose output (which includes the average FPS) to a file
f = open("fps_data.txt", "w")
camera_process = subprocess.Popen(
    'gst-launch-1.0 v4l2src device=/dev/video0 ! video/x-raw, format=UYVY, '
    'width=1280, height=720, pixel-aspect-ratio=1/1 ! videoconvert ! '
    'video/x-raw,format=BGR ! fpsdisplaysink text-overlay=0 '
    'video-sink=fakesink sync=0 -v',
    shell=True, stdout=f, preexec_fn=os.setsid)
time.sleep(5)
os.killpg(os.getpgid(camera_process.pid), signal.SIGTERM)
f.close()

# Read back the last reported average FPS
with open('fps_data.txt') as f:
    lines = f.readlines()
fps_orig = float(lines[-1].split('average:')[-1].split('\n')[0])
os.remove("fps_data.txt")
time.sleep(1)
cap = cv2.VideoCapture(gstr, cv2.CAP_GSTREAMER)
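For reference, the parsing step pulls the number after `average:` out of fpsdisplaysink's verbose last-message line. The sample line below is illustrative of the shape of that output; the exact wording can vary between GStreamer versions:

```python
# Illustrative fpsdisplaysink -v output line (shape assumed, not captured
# from this camera):
sample = ('/GstPipeline:pipeline0/GstFpsDisplaySink:fpsdisplaysink0: '
          'last-message = rendered: 500, dropped: 0, '
          'current: 59.94, average: 59.97')

# Same split used in the workaround above: take everything after the
# last 'average:' and convert it to a float
fps_orig = float(sample.split('average:')[-1].strip())
# fps_orig -> 59.97
```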

