I am attempting to read from a USB-connected camera at a consistent FPS. I am doing frequency analysis on the frames, so even slightly inconsistent spacing between frames causes problems. Right now I have tried a threaded process with the following setup for cap.read(). I have ensured the exposure time is short enough to operate at 30 fps.
gstr = 'v4l2src device=/dev/video0 ! video/x-raw, format=(string)UYVY, width=(int)1280, height=(int)720, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction)30/1 ! videoconvert ! video/x-raw,format=BGR ! appsink'
cap = cv2.VideoCapture(gstr, cv2.CAP_GSTREAMER)
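For reference, here is a minimal sketch of my threaded reader. The names build_pipeline and capture_loop are just illustrative, and the appsink drop=true max-buffers=1 properties are something I added to keep latency down (not part of the original pipeline above):

```python
import queue
import threading
import time

def build_pipeline(device="/dev/video0", width=1280, height=720, fps=30):
    # Build the caps string programmatically so all settings stay in one place.
    return (
        f"v4l2src device={device} ! "
        f"video/x-raw,format=UYVY,width={width},height={height},"
        f"framerate={fps}/1 ! videoconvert ! "
        "video/x-raw,format=BGR ! appsink drop=true max-buffers=1"
    )

def capture_loop(cap, out_q):
    # Threaded reader: stamp each frame as soon as cap.read() returns.
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        out_q.put((time.time(), frame))

# Usage (needs OpenCV built with GStreamer support):
# import cv2
# cap = cv2.VideoCapture(build_pipeline(), cv2.CAP_GSTREAMER)
# q = queue.Queue()
# threading.Thread(target=capture_loop, args=(cap, q), daemon=True).start()
```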
I can also get away with inconsistent spacing if I know the exact time a frame was grabbed. Doing a time.time() in my cap.read() while loop produces inaccurate results.
It's pretty consistent in the terminal. Is there any way to overlay the current timestamp as text on each frame and grab it with OpenCV? Or to get a timestamp for each frame from the buffer timestamp and extract it in Python? When I try adding the text overlay with gstr = '…textoverlay commands' and cap = cv2.VideoCapture(gstr, cv2.CAP_GSTREAMER), it won't grab any frames.
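For the buffer-timestamp idea, this is what I was hoping would work. That CAP_PROP_POS_MSEC maps to the GStreamer buffer PTS is my assumption, not something I've verified; pts_to_deltas is just a helper I wrote to turn the per-frame values into inter-frame gaps:

```python
def pts_to_deltas(pts_ms):
    # Inter-frame gaps in seconds from per-frame PTS values in milliseconds.
    return [(b - a) / 1000.0 for a, b in zip(pts_ms, pts_ms[1:])]

# Hoped-for usage (needs the camera attached, so untested here):
# import cv2
# cap = cv2.VideoCapture(gstr, cv2.CAP_GSTREAMER)
# pts = []
# while len(pts) < 300:
#     ok, _ = cap.read()
#     if not ok:
#         break
#     pts.append(cap.get(cv2.CAP_PROP_POS_MSEC))  # buffer timestamp, if exposed
# print(pts_to_deltas(pts))
```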