Hi,
I am using a Raspberry Pi v2 CSI camera with OpenCV. As far as I know this only works through GStreamer pipelines.
Since my lighting situation is sometimes difficult, I was wondering if I could set the auto exposure or gain properties manually. After some tests I realized that cap.set(cv2.CAP_PROP_...) doesn't work with the GStreamer backend.
Do you know how to do it?
I found this in this forum and then added my pipeline:
import cv2
"""
gstreamer_pipeline returns a GStreamer pipeline for capturing from the CSI camera
Flip the image by setting the flip_method (most common values: 0 and 2)
display_width and display_height determine the size of each camera pane in the window on the screen
Default 1920x1080 displayd in a 1/4 size window
"""
def gstreamer_pipeline(
    sensor_id=0,
    capture_width=1640,
    capture_height=1232,
    display_width=640,
    display_height=480,
    framerate=30,
    flip_method=2,
):
    return (
        "nvarguscamerasrc sensor-id=%d ! "
        "video/x-raw(memory:NVMM), width=(int)%d, height=(int)%d, framerate=(fraction)%d/1 ! "
        "nvvidconv flip-method=%d ! "
        "video/x-raw, width=(int)%d, height=(int)%d, format=(string)BGRx ! "
        "videoconvert ! "
        "video/x-raw, format=(string)BGR ! appsink max-buffers=1 drop=True"
        % (
            sensor_id,
            capture_width,
            capture_height,
            framerate,
            flip_method,
            display_width,
            display_height,
        )
    )
# Open Pi Camera
cap = cv2.VideoCapture(gstreamer_pipeline(sensor_id=0), cv2.CAP_GSTREAMER)
# Set auto exposure to false
cap.set(cv2.CAP_PROP_AUTO_EXPOSURE, 0.75)
exposure = 0
while cap.isOpened():
    # Grab frame
    ret, frame = cap.read()
    # Display if there is a frame
    if ret:
        cv2.imshow('Frame', frame)
        if cv2.waitKey(25) & 0xFF == ord('q'):
            break
    # Set exposure manually
    cap.set(cv2.CAP_PROP_GAIN, exposure)
    # Increase exposure for every frame that is displayed
    exposure += 0.5
# Close everything
cap.release()
cv2.destroyAllWindows()
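A quick way to see that these properties are not accepted is to check the return value of set(), which reports whether the backend handled the property; with CAP_GSTREAMER both calls just return False on my setup:

print(cap.set(cv2.CAP_PROP_AUTO_EXPOSURE, 0.75))  # False: not handled by the GStreamer backend
print(cap.set(cv2.CAP_PROP_GAIN, 1.0))            # False: not handled by the GStreamer backend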
Note that the Argus camera (nvarguscamerasrc) is not a real camera device; it is an application that manages a camera.
So by default it tries to auto-tune gains, exposure and white balance.
The real camera, in the case of the RPi v2 cam, is a Bayer RG10 device.
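You can check that from the V4L2 side (assuming the sensor registered as /dev/video0; adjust the device node if yours differs):

v4l2-ctl -d /dev/video0 --list-formats-ext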
So if you turn off the auto-tuning and set the exposure on the real device, it may work:
import cv2
import subprocess
import time
def gstreamer_pipeline(
    sensor_id=0,
    capture_width=1280,
    capture_height=720,
    display_width=640,
    display_height=480,
    framerate=30,
    flip_method=2,
):
    return (
        "nvarguscamerasrc sensor-id=%d gainrange=\"16 16\" ispdigitalgainrange=\"1 1\" aelock=1 awblock=1 ! "
        "video/x-raw(memory:NVMM), width=(int)%d, height=(int)%d, framerate=(fraction)%d/1 ! "
        "nvvidconv flip-method=%d ! "
        "video/x-raw, width=(int)%d, height=(int)%d, format=(string)BGRx ! "
        "videoconvert ! "
        "video/x-raw, format=(string)BGR ! appsink max-buffers=1 drop=True"
        % (
            sensor_id,
            capture_width,
            capture_height,
            framerate,
            flip_method,
            display_width,
            display_height,
        )
    )
min_exposure = 13
max_exposure = 29999
exposure_step = 500
exposure = min_exposure
# Open Pi Camera
cap = cv2.VideoCapture(gstreamer_pipeline(sensor_id=0), cv2.CAP_GSTREAMER)
while cap.isOpened():
    # Grab frame
    ret, frame = cap.read()
    # Display if there is a frame
    if ret:
        cv2.imshow('Frame', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    start = time.time()
    subprocess.check_output(['v4l2-ctl', '--set-ctrl', 'exposure=' + str(exposure)]).decode("utf-8")
    end = time.time()
    print('time to set exposure %f seconds' % (end - start))
    print(subprocess.check_output(['v4l2-ctl', '--get-ctrl', 'exposure']).decode("utf-8"))
    print(subprocess.check_output(['v4l2-ctl', '--get-ctrl', 'gain']).decode("utf-8"))
    exposure += exposure_step
    if exposure > max_exposure:
        exposure = min_exposure
# Close everything
cap.release()
cv2.destroyAllWindows()
However, you'll see that setting the control takes more than one frame time (~40 ms on a Xavier NX without boosting clocks), so it may not be suitable for changing the exposure on each frame.
It may be possible to do that using the v4l2 interface from Python; I remember an earlier forum post about this.
You may also try to see what is happening under the hood with strace.
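For instance, to watch which ioctls v4l2-ctl issues when setting the control (the exposure value here is arbitrary):

strace -f -e trace=ioctl v4l2-ctl --set-ctrl exposure=10000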
I've tried your code and it worked as you described. The delay I get makes it unusable for me, as I am using two cameras. For now I am working with the gain range from the pipeline, and I get better results from my detection, although the lighting is not perfect in every situation. To be able to control things a little more, I have installed an LED whose brightness I can control. It's not making the view much brighter, but it influences the camera's auto exposure.
As this is just a workaround, I am still interested in learning how to do it directly. Hope someone can help!
Note that nvarguscamerasrc also has an exposuretimerange property, if that helps:
gst-inspect-1.0 nvarguscamerasrc
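The property takes a "low high" string in nanoseconds, so setting both ends of the range to the same value fixes the exposure time. For example, locking it to 5 ms (the value is only an illustration; check the gst-inspect-1.0 output for your sensor's valid range):

nvarguscamerasrc sensor-id=0 exposuretimerange="5000000 5000000" ! ...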
If that is not enough: the low performance of the code I posted above may be due to subprocess creation and launch. It may be much faster if you can issue the V4L2 ioctls directly, but I am unable to help further with this case, as I have very little experience with v4l2 from Python.
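As a very rough sketch of what that might look like (untested; the control id below is a placeholder, so take the real id for the exposure control from v4l2-ctl -l, and note that if your driver exposes it as a 64-bit extended control, VIDIOC_S_EXT_CTRLS would be needed instead of VIDIOC_S_CTRL):

import fcntl
import os
import struct

# VIDIOC_S_CTRL = _IOWR('V', 28, struct v4l2_control),
# where struct v4l2_control is { __u32 id; __s32 value; } (8 bytes, packed as "Ii")
VIDIOC_S_CTRL = 0xC008561C
CID_EXPOSURE = 0x009A200A  # placeholder id: read the real one from `v4l2-ctl -l`

fd = os.open('/dev/video0', os.O_RDWR)

def set_exposure(value):
    # One ioctl per update, with no per-frame subprocess spawn
    fcntl.ioctl(fd, VIDIOC_S_CTRL, struct.pack('Ii', CID_EXPOSURE, value))

set_exposure(10000)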