Auto Exposure with ISP jumping exposure values

Hey, I am developing a driver for two sensors.

With both sensors, the auto exposure jumps in very large steps once the exposure time gets high; instead of fine-tuning exposure, it uses gain to compensate.

These are the exposure values (in µs) I mostly see:
8333
16666
25000
33333
33333 µs is the maximum, since I am running at 30 fps.
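
For reference, 33333 µs is exactly the 30 fps frame period, so exposure cannot go above it:

1 / 30 fps = 33.333 ms ≈ 33333 µs (frame period = exposure ceiling)

Note that all four values are integer multiples of 8333 µs (1/120 s).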

What happens is that the AE comes down slowly from maximum exposure and then suddenly jumps all the way down to 8333. At that point the image is too dark, and it increases gain slightly instead of raising the exposure a bit.

When I reduce the light reaching the sensor, it first tries to compensate with gain and then suddenly jumps up to 16666.
This is not intended behavior, is it?

Both sensors are running at 30 fps.

I am monitoring what gets set with this command:
watch -n1 'v4l2-ctl -d0 --get-ctrl=exposure,gain'

Setting exposure and gain manually works fine.
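
For example, something like this behaves as expected (the device node and values here are just what I happen to test with):

v4l2-ctl -d /dev/video0 --set-ctrl=exposure=16666,gain=128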

I'll give you both device tree modes; everything else is pretty much the same.

tegra-camera-platform - both the same:

num_csi_lanes = <4>;
max_lane_speed = <1500000>;
min_bits_per_pixel = <10>;
vi_peak_byte_per_pixel = <2>;
vi_bw_margin_pct = <25>;
isp_peak_byte_per_pixel = <5>;
isp_bw_margin_pct = <25>;

mode0 - sensor2 (4k) - not working:

mode0 {
mclk_khz = "24000";
num_lanes = "4";
tegra_sinterface = "serial_a";
phy_mode = "DPHY";
discontinuous_clk = "no";
dpcm_enable = "false";
cil_settletime = "0";
dynamic_pixel_bit_depth = "10";
csi_pixel_bit_depth = "10";
mode_type = "bayer";
pixel_phase = "bggr";
active_w = "3872";
active_h = "2192";
readout_orientation = "0";
line_length = "4053";
inherent_gain = "1";
pix_clk_hz = "320000000";
gain_factor = "128";
min_gain_val = "128";
max_gain_val = "1984";
step_gain_val = "1";
default_gain = "128";
min_hdr_ratio = "1";
max_hdr_ratio = "1";
framerate_factor = "1000000";
min_framerate = "1000000";
max_framerate = "30000000";
step_framerate = "1";
default_framerate = "30000000";
exposure_factor = "1000000";
min_exp_time = "100";
max_exp_time = "1000000";
step_exp_time = "1";
default_exp_time = "9000";
embedded_metadata_height = "0";
};

mode0 - sensor1 (FULL HD) - working:

mode0 {
mclk_khz = "24000";
num_lanes = "4";
tegra_sinterface = "serial_a";
phy_mode = "DPHY";
discontinuous_clk = "no";
dpcm_enable = "false";
cil_settletime = "0";
dynamic_pixel_bit_depth = "10";
csi_pixel_bit_depth = "10";
mode_type = "bayer";
pixel_phase = "bggr";
active_w = "1920";
active_h = "1080";
readout_orientation = "0";
line_length = "2560";
inherent_gain = "1";
pix_clk_hz = "288000000";
gain_factor = "16";
min_gain_val = "16";
max_gain_val = "255";
step_gain_val = "1";
default_gain = "16";
min_hdr_ratio = "1";
max_hdr_ratio = "1";
framerate_factor = "1000000";
min_framerate = "1000000";
max_framerate = "30000000";
step_framerate = "1";
default_framerate = "30000000";
exposure_factor = "1000000";
min_exp_time = "100";
max_exp_time = "1000000";
step_exp_time = "1";
default_exp_time = "9000";
embedded_metadata_height = "0";
};
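
As far as I understand the mode properties, gain_factor is just a fixed-point scale (real gain = control value / gain_factor), so both modes should cover almost the same analog gain range:

sensor2: 128/128 = 1.00x min gain, 1984/128 = 15.50x max gain
sensor1: 16/16 = 1.00x min gain, 255/16 ≈ 15.94x max gain

Likewise, exposure_factor = "1000000" should mean the exposure values are effectively in microseconds.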

I test with:

gst-launch-1.0 nvarguscamerasrc ! nvoverlaysink
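
Since I have two sensors, I select which one to test with the sensor-id property (whether index 0 or 1 maps to which sensor depends on the port bindings, so treat the index here as an assumption):

gst-launch-1.0 nvarguscamerasrc sensor-id=0 ! nvoverlaysink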

Are those exposure jumps intended by the ISP?

Best regards,
jb

Have you checked whether the sensor exposure/gain response is linear?

You mean from the sensor I am developing? Yes, it is linear!

Testing with v4l2-ctl -c exposure= also behaves linearly.
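
Roughly, I verify it with a sketch like this, doubling the exposure and checking that the captured frames brighten proportionally (the device node is assumed to be /dev/video0):

for e in 4166 8333 16666 33333; do v4l2-ctl -d /dev/video0 -c exposure=$e; sleep 1; done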

I mean whether the output is linear, i.e. whether increasing the exposure or gain makes the data brighter linearly.

Yes!

Hey, I just wanted to let you know that this is still an issue. Any idea how to debug this?

The solution was to set

aeantibanding=0

in my nvarguscamerasrc element in my GStreamer pipeline. Maybe this happened because of some misconfiguration: we have a custom sensor, and the ISP config file is not made for it.
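
For anyone hitting the same thing, the full test pipeline becomes:

gst-launch-1.0 nvarguscamerasrc aeantibanding=0 ! nvoverlaysink

That would also explain the step size I saw: if antibanding is active for 60 Hz mains, exposure gets quantized to multiples of the 1/120 s flicker period, and 8333 / 16666 / 25000 / 33333 µs are exactly 1x / 2x / 3x / 4x of 8333 µs.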

Thank you anyway for your help!