Auto exposure implementation strategy for Jetson Nano

The application I’m developing requires a continuous stream of video images of a person’s face, and I’m running into difficulties implementing an auto exposure strategy. A simplified class diagram for the application is shown below:


Each class is a separate thread. The VideoCamera thread is based on the yuvJpeg sample code. It produces a stream of images that are consumed by the other threads. For the AutoExposure thread, I’ve implemented code that is based on the userAutoExposure sample code.

The problem I’m running into is two-fold:

  • The AutoExposure thread is effectively a control loop that interferes with the Argus ISP-based auto exposure control loop. This leads to performance issues, such as the priority with which the auto exposure control loop adjusts the camera settings (gain, frame rate, exposure time, etc.). For my application, the control loop needs to keep the frame rate as high as possible, then adjust the exposure time to yield an optimal video signal of a person’s face (digital gain should remain at unity).
  • After setting a camera setting (e.g., ISourceSettings->setExposureTimeRange()), the AutoExposure thread invokes ICaptureSession->capture() to signal the Argus code to begin using the new camera setting. This appears to cause a “hiccup” in the data stream produced by the VideoCamera thread, which causes problems with the FaceDetectionAndTracking and FaceAnalysis threads (which are expecting a steady stream of video data).
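
To make the priority scheme concrete (frame rate pinned, exposure time as the primary actuator, gain held at unity until exposure time saturates), here is one way a single control step could be sketched in plain C++. This is only an illustrative simulation; the function name, struct, and sensor limits below are my own assumptions and are not part of the Argus API.

```cpp
#include <algorithm>
#include <cstdint>

// Hypothetical sensor limits: assumed values, not taken from any driver.
constexpr int64_t kMinExposureNs = 100'000;      // 0.1 ms
constexpr int64_t kMaxExposureNs = 33'000'000;   // ~33 ms at 30 fps
constexpr float   kMinGain = 1.0f;               // unity
constexpr float   kMaxGain = 16.0f;

struct AeState {
    int64_t exposure_ns = 10'000'000;  // current exposure time
    float   gain = 1.0f;               // held at unity while exposure has headroom
};

// One control step. `measured` and `target` are scene brightness values
// (e.g., the histogram mode); the frame rate is deliberately untouched.
void aeStep(AeState& s, float measured, float target) {
    if (measured <= 0.0f) return;  // avoid division by zero on black frames
    double ratio = static_cast<double>(target) / measured;  // >1 means "brighten"
    auto wanted = static_cast<int64_t>(s.exposure_ns * ratio);
    // First actuator: exposure time.
    s.exposure_ns = std::clamp(wanted, kMinExposureNs, kMaxExposureNs);
    if (wanted > kMaxExposureNs) {
        // Exposure saturated: make up the remaining brightness with gain.
        double residual = static_cast<double>(wanted) / kMaxExposureNs;
        s.gain = std::clamp(static_cast<float>(residual) * s.gain, kMinGain, kMaxGain);
    } else {
        s.gain = kMinGain;  // exposure has headroom, so keep gain at unity
    }
}
```

The new `exposure_ns` and `gain` values would then be pushed to the camera through the corresponding range setters, which is where the second problem below comes in.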

The NVIDIA documentation on the topic of ISP is thorough, but I have found little guidance on how to implement the concepts described there.

Are there any guidance documents or source code examples that describe how to implement an ISP-based auto exposure algorithm? Based on the performance of my application so far, it would appear that I will need to write that code.

Many thanks!

Note that this document targets the automotive platform rather than L4T for the Jetson platform.
How do you detect the “hiccup”? Could you check whether you are able to reproduce it with the argus_camera sample code?


The “hiccup” I’m referring to does not appear to be an empty video frame, but it may be a video frame that is compromised in some manner I cannot explain. Referring to the class diagram earlier in this topic: every time the AutoExposure thread applies a new setting and invokes ICaptureSession->capture() (the call sequence described above), the OpenCV face tracker used in the FaceDetectionAndTracking thread fails. It’s not clear to me why this is happening, and I’m pretty sure I won’t be able to reproduce it in any of the Argus sample code, since none of the samples use OpenCV functions or objects.

Regardless of the “hiccup”, I would like to find some documentation about how to implement (in C++) an auto exposure image processing algorithm within the Argus framework. I’m definitely struggling to find that documentation.

If it helps, I’m using JetPack version 4.6.1.


There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.

What if you set the same exposure time range value again? Or set other ranges, such as gain or frame rate?


The exposure time range and frame duration range APIs seem to work as expected, but I’m having problems with the gain range API.

The control loop I’ve implemented adjusts the exposure time and the frame duration to darken or lighten an image as appropriate. However, when those settings reach their endpoints (in particular, when trying to brighten a dark image), the control loop resorts to changing the gain range.

What I’ve observed is that when changing the gain range (either by adding a constant delta, or using the technique employed by the userAutoExposure sample code), there is a very long lag time between setting the gain range, and when the video output actually changes (my control loop uses the mode of the video frame histogram as its feedback signal).

Is there some sort of filter in the ISP code that causes the gain range to be applied very slowly over time? That is, it seems that changing the gain range on a frame-by-frame basis does not yield the expected results.
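
For reference, the histogram-mode feedback signal mentioned above can be computed in a single pass over an 8-bit luma plane. This is a generic sketch (the function name is mine, and no Argus or OpenCV buffer types are assumed):

```cpp
#include <array>
#include <cstddef>
#include <cstdint>

// Returns the most frequent luma value (the histogram mode) of an 8-bit
// plane of `count` pixels. Ties resolve to the lower value.
uint8_t histogramMode(const uint8_t* luma, size_t count) {
    std::array<uint32_t, 256> hist{};            // zero-initialized bins
    for (size_t i = 0; i < count; ++i) ++hist[luma[i]];
    size_t best = 0;
    for (size_t v = 1; v < 256; ++v)
        if (hist[v] > hist[best]) best = v;
    return static_cast<uint8_t>(best);
}
```

In a pipeline like the one described here, this would run on the luma plane of each frame the AutoExposure thread consumes, and the returned mode would be compared against a target brightness.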


It is a known behavior that the settings need 4-5 frames to take effect.
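
Given that the settings take roughly 4-5 frames to take effect, one possible workaround (my own suggestion, not an Argus feature) is to freeze the control loop for a settle period after each settings change, so the loop never reacts to frames captured under stale settings. A minimal sketch, with an assumed class name and a 5-frame settle figure taken from the latency quoted above:

```cpp
// Minimal settle-period guard: after a settings change, ignore the next
// `settleFrames` frames before trusting the feedback signal again.
class SettleGuard {
public:
    explicit SettleGuard(int settleFrames = 5) : settle_(settleFrames) {}

    // Call immediately after pushing new settings to the camera.
    void onSettingsChanged() { remaining_ = settle_; }

    // Call once per received frame; returns true when the feedback
    // signal may be used for the next control step.
    bool onFrame() {
        if (remaining_ > 0) { --remaining_; return false; }
        return true;
    }

private:
    int settle_;
    int remaining_ = 0;
};
```

Usage would be to call onSettingsChanged() right after each gain/exposure range update, and to skip the AE control step whenever onFrame() returns false.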
