Device tree exposure values in microseconds. Documentation misleading or example file wrong?

Hello,
just a quick question, because I can't get the documentation for sensor driver development and the imx219 example file to line up. I understand why we need the gain/framerate/exposure factors to convert floating-point values to fixed-point integers. That makes total sense. But for exposure, the documentation says the value should be in microseconds:

Minimum exposure time limit for the mode, in microseconds. The value is rounded up to an integer.

This value is calculated from:

minimum exposure time in float = (minimum coarse integration time) * line_length / pix_clk_hz * 1000000

The value specified must be multiplied by exposure_factor.

min_exp_time = (minimum exposure time in float) * exposure_factor

But if the final value is supposed to be in microseconds, that would mean I have to take the exposure time in microseconds and multiply it by exposure_factor. And who wants to specify exposure time in fractions of a microsecond? :D
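Here is the documented formula applied literally, with the numbers from the imx219 mode entry below. (The minimum coarse integration time of 1 line is just an assumed placeholder for illustration; the real IMX219 minimum may differ.)

```python
# Sketch of the min_exp_time calculation as literally documented.
# min_coarse_lines = 1 is an assumed placeholder value.
min_coarse_lines = 1
line_length = 3448              # from the imx219 mode entry
pix_clk_hz = 182_400_000        # from the imx219 mode entry
exposure_factor = 1_000_000     # from the imx219 mode entry

# "minimum exposure time in float", documented to be in microseconds:
min_exp_us = min_coarse_lines * line_length / pix_clk_hz * 1_000_000
print(round(min_exp_us, 2))     # ~18.9

# "min_exp_time = (minimum exposure time in float) * exposure_factor":
min_exp_time = min_exp_us * exposure_factor
print(round(min_exp_time))      # ~18903509 -- nowhere near the "13" in the example
```

Taken at face value, the documented two-step calculation produces values in the millions, not the small numbers the example file actually contains.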

Since in the imx219 example the specified value is even lower than the exposure_factor, I assume we are actually talking about seconds, which become microseconds once we apply an exposure_factor of 1,000,000, right? So in that case I guess the documentation is kind of misleading.
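To make the mismatch concrete, here is the same example value under both readings (plain arithmetic, no driver code):

```python
exposure_factor = 1_000_000
min_exp_time = 13               # stored value from the imx219 example

# Documentation reading: stored value = microseconds * exposure_factor,
# so converting back gives an absurdly small exposure time:
as_documented_us = min_exp_time / exposure_factor
print(as_documented_us)         # 1.3e-05 us, i.e. 13 picoseconds

# Seconds-base reading: stored value = seconds * exposure_factor,
# which lands on a sensible minimum exposure:
as_seconds = min_exp_time / exposure_factor
print(round(as_seconds * 1_000_000, 6))   # 13.0 us
```

Only the seconds-base reading produces a physically plausible exposure time for this sensor.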

Correct me if I am wrong.

Best regards,
jb

mode0 { /* IMX219_MODE_3264x2464_21FPS */
	mclk_khz = "24000";
	num_lanes = "2";
	tegra_sinterface = "serial_a";
	phy_mode = "DPHY";
	discontinuous_clk = "yes";
	dpcm_enable = "false";
	cil_settletime = "0";

	active_w = "3264";
	active_h = "2464";
	pixel_t = "bayer_rggb";
	readout_orientation = "90";
	line_length = "3448";
	inherent_gain = "1";
	mclk_multiplier = "9.33";
	pix_clk_hz = "182400000";

	gain_factor = "16";
	framerate_factor = "1000000";
	exposure_factor = "1000000";
	min_gain_val = "16"; /* 1.00x */
	max_gain_val = "170"; /* 10.66x */
	step_gain_val = "1";
	default_gain = "16"; /* 1.00x */
	min_hdr_ratio = "1";
	max_hdr_ratio = "1";
	min_framerate = "2000000"; /* 2.0 fps */
	max_framerate = "21000000"; /* 21.0 fps */
	step_framerate = "1";
	default_framerate = "21000000"; /* 21.0 fps */
	min_exp_time = "13"; /* us */
	max_exp_time = "683709"; /* us */
	step_exp_time = "1";
	default_exp_time = "2495"; /* us */

	embedded_metadata_height = "2";
};

hello busch.johannes,

please refer to Sensor Software Driver Programming Guide.
you should note that there are both V4L2 Kernel Driver (Version 1.0) and (Version 2.0) documents.

I am referring to Version 2.0 from the link I posted. The example file uses exposure_factor, which was introduced in 2.0, so it should be compatible with Version 2.0. My guess is that the unit "microseconds" stated in the Version 2.0 documentation is a relic from Version 1.0, since in 2.0 the unit changes with the factor you set, with a base unit of seconds.
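Under that seconds-base reading, an exposure_factor of 1,000,000 makes the stored value numerically equal to microseconds, which would explain why the old wording went unnoticed. A small sketch of that coincidence (the conversion helper is illustrative, not actual driver code):

```python
def stored_to_seconds(value, exposure_factor):
    """Assumed Version-2.0 reading: stored value = seconds * exposure_factor."""
    return value / exposure_factor

# With exposure_factor = 1,000,000 the stored value equals microseconds,
# so the old "in microseconds" wording happens to stay numerically true:
max_exp = stored_to_seconds(683_709, 1_000_000)   # 0.683709 s
print(round(max_exp * 1_000_000))                 # 683709 -- same digits as stored

# With any other factor the wording breaks down, e.g. factor = 1000:
print(stored_to_seconds(683_709, 1000))           # 683.709 s -- clearly not us
```

So the "microseconds" claim only holds for the one factor value the example happens to use.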

That’s the whole point of my thread, to point out that the documentation is not perfect there.

Of course I could be wrong. That would mean that if I set exposure_factor to 1, I could still use microseconds as the unit for max_exp_time, etc.

Best regards,
jb

hello busch.johannes,

you’re correct, the documentation is not perfect there.
in Version-1.0, exposure time is expressed in microseconds by default.
please also refer to the [Exposure Control] section; in version-1.0 we used the sensor coarse integration time.
thanks