I am using the libargus burst capture functionality (via ICaptureSession::captureBurst()) in my application, and it works as expected on 99.9%+ of invocations. However, on a small fraction of attempts (<0.1%) some of the output frames do not have the expected sensor exposure times and gains applied (set through the ISourceSettings functions setExposureTimeRange and setGainRange). I believe this happens because the I2C communications with the RGB sensor occasionally fall outside the required time window during the burst capture sequence.
Is there anything that can be adjusted in the system to improve the reliability of the I2C communication timing? Would increasing the process priority of nvargus-daemon improve it?
Based on my observations with an oscilloscope, the I2C communications with the RGB sensor appear to be triggered on the MIPI start-of-frame, which usually provides adequate time margin to apply the settings. On occasion, however, the I2C traffic lands on or around the sensor's vertical sync pulse (XVS), which I believe results in the per-frame sensor integration times/gains being applied incorrectly.
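For a sense of scale, here is a back-of-envelope estimate of how long the per-frame I2C register writes take compared to the frame period. All numbers (bus speed, write count, bytes per write, frame rate) are illustrative assumptions, not measurements from my setup:

```python
# Rough arithmetic for the per-frame I2C programming margin.
# All constants below are assumptions for illustration only.
I2C_CLOCK_HZ = 400_000   # assumed Fast-mode bus
BITS_PER_BYTE = 9        # 8 data bits + ACK per byte on the wire
BYTES_PER_WRITE = 4      # device addr + 16-bit register addr + 8-bit value
NUM_WRITES = 12          # assumed size of the exposure/gain register group
FRAME_RATE_HZ = 30.0

write_time_ms = NUM_WRITES * BYTES_PER_WRITE * BITS_PER_BYTE / I2C_CLOCK_HZ * 1000.0
frame_period_ms = 1000.0 / FRAME_RATE_HZ

print(f"I2C burst: {write_time_ms:.2f} ms of a {frame_period_ms:.2f} ms frame period")
```

Under these assumptions the writes occupy only a small slice of the frame period, which matches what I see on the scope: the margin is normally ample, and the failures look like occasional late scheduling of the writes rather than the transaction itself being too slow.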
Thank you for your help and any insights into this issue.
We have our system configured for nvpmodel MAXN already.
Can you confirm whether increasing the priority of any process (such as nvargus-daemon) can affect the sensor I2C communication timing? Or is the sensor I2C communication triggered through system interrupt handling?
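On the priority question: the scheduling class of a user-space process can be raised, e.g. with `chrt` or from Python's stdlib as sketched below. Note that this affects only user-space scheduling, not kernel interrupt dispatch, which is part of what I am asking about. The priority value is a placeholder:

```python
import os

def set_fifo_priority(pid: int, priority: int = 50) -> None:
    """Request SCHED_FIFO real-time scheduling for `pid` (Linux only).

    Requires CAP_SYS_NICE (typically root); equivalent to running
    `chrt -f -p <priority> <pid>` on the command line.  This raises
    the user-space scheduling priority only -- it does not change how
    kernel interrupt handlers are dispatched.
    """
    os.sched_setscheduler(pid, os.SCHED_FIFO, os.sched_param(priority))

# Inspect the current policy of this process (SCHED_OTHER by default).
print(os.sched_getscheduler(0))
```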
Also, I notice that if I run
jetson_clocks
it changes the CPU idle-states report. For example, before running jetson_clocks, the --show option reports:
Sorry, I don't understand what WFI and c7 mean here; are these the CPU idle states?
After running jetson_clocks, is the CPU kept from entering these idle states (which would otherwise reduce the CPU frequency), so that processes are serviced promptly?
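As I understand it, the idle-state names in the --show report come from the kernel's cpuidle sysfs interface. A small sketch for inspecting them directly, assuming the standard Linux cpuidle sysfs layout (paths may differ across kernels, and cpuidle may not be exposed at all):

```python
from pathlib import Path

def list_idle_states(cpu: int = 0) -> list:
    """Return (name, disabled) pairs for each idle state the kernel
    exposes for the given CPU.

    On a Jetson this typically includes WFI and a deeper state such as
    c7; jetson_clocks disables the deeper states so the cores stay at
    their fixed clock.  Returns an empty list where cpuidle is not
    exposed by the running kernel.
    """
    root = Path(f"/sys/devices/system/cpu/cpu{cpu}/cpuidle")
    states = []
    if root.is_dir():
        for state in sorted(root.glob("state*")):
            name = (state / "name").read_text().strip()
            disabled = (state / "disable").read_text().strip()
            states.append((name, disabled))
    return states

for name, disabled in list_idle_states(0):
    print(f"{name}: disabled={disabled}")
```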