Get GStreamer camera settings after initialization

Is there any way to determine the current camera settings, such as the gain and exposure time chosen by the auto exposure, after initializing the camera through OpenCV?

Hi,

If you need to monitor or modify the parameters outside your application you can use guvcview:

sudo apt install guvcview
guvcview

If it is inside the code, OpenCV exposes properties that you can read with the get method of a VideoCapture object:

C++:

virtual double cv::VideoCapture::get(int propId) const

Python:

retval = cv.VideoCapture.get(propId)

All the available propId values are listed here: https://docs.opencv.org/3.4/d4/d15/group__videoio__flags__base.html#gaeb8dd9c89c10a5c63c139bf7c4f5704d
For exposure you can use CAP_PROP_EXPOSURE.

Sorry, I should have specified that I am using GStreamer to open the camera through OpenCV. This results in OpenCV being unable to read or change the camera parameters. Of course, if there is a different way to capture images, I am all ears.

import cv2

def gstreamer_pipeline(capture_width=3280, capture_height=2464, exposure_time=90,
                       framerate=8, flip_method=2):
    exposure_time = exposure_time * 1000000  # ms to ns
    exp_time_str = '"%d %d"' % (exposure_time, exposure_time)
    return ('nvarguscamerasrc '
            'wbmode=0 '
            'awblock=true '
            'gainrange="1 1" '
            'ispdigitalgainrange="1 1" '
            'exposuretimerange=%s '
            'aelock=true ! '
            'video/x-raw(memory:NVMM), '
            'width=%d, height=%d, '
            'format=NV12, '
            'framerate=%d/1 ! '
            'nvvidconv flip-method=%d ! '
            'video/x-raw, '
            'format=I420 ! '
            'appsink'
            % (exp_time_str, capture_width, capture_height, framerate, flip_method))

camera = cv2.VideoCapture(gstreamer_pipeline(), cv2.CAP_GSTREAMER)
_, image = camera.read()
image = cv2.cvtColor(image, cv2.COLOR_YUV2BGR_I420)

Looking into that, it seems that exposuretimerange and gains are readable properties from nvarguscamerasrc:

gst-inspect-1.0 nvarguscamerasrc

I have little experience with Python, but I tried this and failed to read any useful information. Someone more experienced with GStreamer and Python may correct it…

#!/usr/bin/env python3

import sys, os
import gi
gi.require_version('Gst', '1.0')
gi.require_version('Gtk', '3.0')
gi.require_version('GstVideo', '1.0')
from gi.repository import Gst, GObject, Gtk, GstVideo, GdkX11

def verbose_deep_notify_cb(object, orig, pspec, component):
    """
    A quick attempt to mimic gst-launch verbose mode in python.
    """
    if pspec.value_type == Gst.Caps.__gtype__:
        caps = orig.get_current_caps()
        if caps is not None:
            print("%s/%s/%s: caps = \"%s\"" % (object.get_name(), orig.parent.get_name(), orig.get_name(), caps.to_string()))

class GTK_Main:
    def __init__(self):
        window = Gtk.Window(Gtk.WindowType.TOPLEVEL)
        window.set_title("CamRecorder")
        window.set_default_size(100, 100)
        window.connect("destroy", Gtk.main_quit, "WM destroy")
        vbox = Gtk.VBox()
        window.add(vbox)
        self.movie_window = Gtk.DrawingArea()
        vbox.add(self.movie_window)
        hbox = Gtk.HBox()
        vbox.pack_start(hbox, False, False, 0)
        hbox.set_border_width(10)
        hbox.pack_start(Gtk.Label(), False, False, 0)
        self.button = Gtk.Button("Start")
        self.button.connect("clicked", self.start_stop)
        hbox.pack_start(self.button, False, False, 0)
        self.button2 = Gtk.Button("Quit")
        self.button2.connect("clicked", self.exit)
        hbox.pack_start(self.button2, False, False, 0)
        hbox.add(Gtk.Label())
        window.show_all()

        self.player = Gst.Pipeline.new("player")
        self.player.connect('deep-notify', verbose_deep_notify_cb, self)
        source = Gst.ElementFactory.make("nvarguscamerasrc", "camsrc")
        conv = Gst.ElementFactory.make("nvvidconv", "conv")
        sink = Gst.ElementFactory.make("xvimagesink", "imagesink")
        self.player.add(source)
        self.player.add(conv)
        self.player.add(sink)
        source.link(conv)
        conv.link(sink)
        bus = self.player.get_bus()
        bus.add_signal_watch()
        bus.enable_sync_message_emission()
        bus.connect("message", self.on_message)
        bus.connect("sync-message::element", self.on_sync_message)

    def start_stop(self, w):
        if self.button.get_label() == "Start":
            print("Setting pipeline to READY ...")
            self.player.set_state(Gst.State.READY)
            print("Setting pipeline to PAUSED ...")
            self.player.set_state(Gst.State.PAUSED)
            print("Setting pipeline to PLAYING ...")
            self.player.set_state(Gst.State.PLAYING)
            self.button.set_label("Pause")
        else:
            print("gainrange=", self.player.get_by_name("camsrc").get_property("gainrange"))
            print("ispdigitalgainrange=", self.player.get_by_name("camsrc").get_property("ispdigitalgainrange"))
            print("exposuretimerange=", self.player.get_by_name("camsrc").get_property("exposuretimerange"))
            print("Setting pipeline to PAUSED ...")
            self.player.set_state(Gst.State.PAUSED)
            self.button.set_label("Start")

    def exit(self, widget, data=None):
        self.player.send_event(Gst.Event.new_eos())
        print("Setting pipeline to PAUSED ...")
        self.player.set_state(Gst.State.PAUSED)
        print("Setting pipeline to READY ...")
        self.player.set_state(Gst.State.READY)
        print("Setting pipeline to NULL ...")
        self.player.set_state(Gst.State.NULL)
        Gtk.main_quit()

    def on_message(self, bus, message):
        t = message.type
        if t == Gst.MessageType.EOS:
            self.player.set_state(Gst.State.NULL)
            self.button.set_label("Start")
        elif t == Gst.MessageType.ERROR:
            err, debug = message.parse_error()
            print("Error: %s" % err, debug)
            self.player.set_state(Gst.State.NULL)
            self.button.set_label("Start")

    def on_sync_message(self, bus, message):
        struct = message.get_structure()
        if not struct:
            return
        message_name = struct.get_name()
        # GStreamer 1.0 uses "prepare-window-handle" (0.10 used "prepare-xwindow-id")
        if message_name == "prepare-window-handle":
            # Assign the viewport
            imagesink = message.src
            imagesink.set_property("force-aspect-ratio", True)
            imagesink.set_window_handle(self.movie_window.get_window().get_xid())

Gst.debug_set_active(True)
Gst.debug_set_default_threshold(0)
GObject.threads_init()
Gst.init(None)
GTK_Main()
Gtk.main()

If I click start, the pipeline seems to be working… Then I shrink the xvimagesink window and click pause. It doesn’t show the expected properties.

Setting GST_DEBUG=nvarguscamerasrc:6 (or a higher level) doesn’t show much more detail.

Hi,
On Python with your current setup, you can use v4l2_control and fcntl to manipulate the camera parameters. You will need the Python bindings for the v4l2 userspace API, which can be installed with pip:

pip install v4l2

An example usage:

>>> from v4l2 import *
>>> import fcntl
>>> vd = open('/dev/video0', 'rb+', buffering=0)
>>> cp = v4l2_capability()
>>> fcntl.ioctl(vd, VIDIOC_QUERYCAP, cp)
0

To change or read camera properties you need the kernel defined CID. We did that for TX1 and the code looks something like this:

from v4l2 import *
import fcntl

TEGRA_CAMERA_CID_BASE = 10100736
INITIAL_EXPOSURE_LEVEL = 1000
INITIAL_GAIN_LEVEL = 256

vd = open('/dev/video0', 'rb+', buffering=0)

# Set initial exposure level
exposure = v4l2_control()
exposure.id = TEGRA_CAMERA_CID_BASE + 1
exposure.value = INITIAL_EXPOSURE_LEVEL
fcntl.ioctl(vd, VIDIOC_S_CTRL, exposure)

# Set initial gain level
gain = v4l2_control()
gain.id = TEGRA_CAMERA_CID_BASE + 10
gain.value = INITIAL_GAIN_LEVEL
fcntl.ioctl(vd, VIDIOC_S_CTRL, gain)

There is another v4l2 ioctl, VIDIOC_G_CTRL, that you could use in a similar way to read the properties instead of setting them.
I hope this helps.
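A self-contained sketch of that read path, using ctypes instead of the v4l2 package (the struct mirrors the kernel's struct v4l2_control from <linux/videodev2.h>; the VIDIOC_G_CTRL request number and the TX1 CID offsets above are assumptions that may differ on other boards or kernels):

```python
import ctypes
import fcntl
import os

VIDIOC_G_CTRL = 0xC008561B          # _IOWR('V', 27, struct v4l2_control)
TEGRA_CAMERA_CID_BASE = 10100736    # TX1 value from the example above

class v4l2_control(ctypes.Structure):
    # mirrors struct v4l2_control: __u32 id; __s32 value;
    _fields_ = [('id', ctypes.c_uint32), ('value', ctypes.c_int32)]

def get_ctrl(fd, cid):
    """Read back one control value via VIDIOC_G_CTRL."""
    ctrl = v4l2_control(id=cid)
    fcntl.ioctl(fd, VIDIOC_G_CTRL, ctrl)
    return ctrl.value

if os.path.exists('/dev/video0'):
    with open('/dev/video0', 'rb+', buffering=0) as vd:
        print('exposure:', get_ctrl(vd, TEGRA_CAMERA_CID_BASE + 1))
        print('gain:', get_ctrl(vd, TEGRA_CAMERA_CID_BASE + 10))
```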

Honey_Patouceul, thanks for the code. Like you mentioned, it seemed to run fine but it didn’t return any useful information. Both gain and exposure returned 0.0.

miguel.taylor, thanks for the code as well. I was unable to get v4l2 to work properly; I kept getting errno 22 (EINVAL) on the arguments. This is probably because I have no idea where to find the kernel-defined CIDs on the Nano. However, this did point me in the right direction.

The v4l2-ctl command in the terminal is able to get camera settings in real time. This might be an inelegant solution, but in Python I can execute the command and capture its output:

import subprocess

def printCamSettings():
    print(subprocess.check_output(['v4l2-ctl', '--get-ctrl', 'exposure']).decode(), end='')
    print(subprocess.check_output(['v4l2-ctl', '--get-ctrl', 'gain']).decode(), end='')

>>> printCamSettings()
exposure: 90000
gain: 16

I manually set exposure time to 90 ms, so the output is in microseconds. I manually set both digital and analog gain to 1, so perhaps the gain output is scaled by 16. Putting this in a loop shows how the auto modes change the gain and exposure over time.
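If you want the values as numbers rather than raw text, the output is easy to parse. A sketch (the control names exposure and gain are assumed to match your driver's control list; check yours with v4l2-ctl --list-ctrls):

```python
import subprocess

def parse_ctrl(line):
    """Turn a v4l2-ctl output line like 'exposure: 90000' into a (name, int) pair."""
    name, _, value = line.partition(':')
    return name.strip(), int(value.strip())

def get_cam_settings(controls=('exposure', 'gain')):
    """Query each control via v4l2-ctl and return a dict of name -> value."""
    settings = {}
    for ctrl in controls:
        out = subprocess.check_output(['v4l2-ctl', '--get-ctrl', ctrl]).decode()
        settings.update([parse_ctrl(out)])
    return settings
```

Calling get_cam_settings() in a loop then gives you a ready-to-log dict instead of printed text.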

As an update in case anyone else is looking, the values can also be set using v4l2-ctl. For example, to set the gain to 25:

$ v4l2-ctl --set-ctrl gain=25

Unfortunately I haven’t found a way to enable/disable auto-exposure, auto-gain, or auto-whitebalance in real time. For me, this isn’t a huge issue since I’m not too worried about program startup time. I can just start the camera twice, once in auto mode, then again in manual after I’ve grabbed the exposure and gain settings.
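That two-pass startup could look roughly like this (a sketch with hypothetical helper names; read_ctrl wraps the v4l2-ctl call shown earlier, and manual_pipeline is a trimmed-down variant of the gstreamer_pipeline function from above; note the units, since v4l2-ctl reported exposure in microseconds here while exposuretimerange takes nanoseconds):

```python
import subprocess

def read_ctrl(name):
    """Query the running sensor via v4l2-ctl; output looks like 'gain: 16'."""
    out = subprocess.check_output(['v4l2-ctl', '--get-ctrl', name]).decode()
    return int(out.split(':')[1])

def manual_pipeline(exposure_ns, gain):
    """Build a pipeline string that locks in the measured exposure and gain."""
    return ('nvarguscamerasrc '
            'exposuretimerange="%d %d" '
            'gainrange="%d %d" '
            'aelock=true ! '
            'video/x-raw(memory:NVMM), width=3280, height=2464, '
            'format=NV12, framerate=8/1 ! '
            'nvvidconv ! video/x-raw, format=I420 ! appsink'
            % (exposure_ns, exposure_ns, gain, gain))

# Pass 1: open the camera in auto mode, let it settle, then read the values
# with read_ctrl('exposure') and read_ctrl('gain') and release the camera.
# Pass 2: reopen with the locked pipeline, e.g.
#   cv2.VideoCapture(manual_pipeline(90000 * 1000, 1), cv2.CAP_GSTREAMER)
```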

I’m having a similar issue: I can only use v4l2-ctl when the capture is not running. I believe this is because the GStreamer capture pipeline claims the device and prevents parameter changes. Has anyone been able to change properties like exposure on a live feed from an OpenCV VideoCapture using the GStreamer backend?