Linking white balance, exposure, etc. on arrays of cameras

My application stitches together the images from a number of Argus cameras. If I set the cameras up to use auto exposure and one of them sees a brighter scene than the others, for example, that camera darkens its own image, and the stitched result is noticeably uneven where the images join.

Is there a way to put one of the cameras in auto mode (white balance, exposure, etc.) and then have it control all of the other cameras in the system automatically?

As a workaround, I’m considering setting one camera to be “auto” while the others are “manual”. I’m going to investigate whether I can query the “auto” camera for its auto-determined settings so I can propagate them to the others, but if there’s an easier way, I’d be grateful to hear it. Furthermore, I’m concerned that there may be some settings I can’t control.

Hello BareMetalCoder,

I have several questions about your use case:

  1. Could you please share the JetPack release version you’re working with?
  2. How many cameras have you enabled?
  3. Are they facing in different orientations?
  4. Do you want a single stitched output frame?

Please check the Argus sample applications; the userAutoExposure and userAutoWhiteBalance samples show how to implement this.
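Roughly, those samples run the auto algorithm in user space: read each frame’s metadata, compute the next setting, and apply it manually to the next request. A minimal sketch of that pattern (not the samples’ exact code; the lux target and proportional step are illustrative, and error handling and range clamping are omitted):

```cpp
#include <Argus/Argus.h>

#include <cstdint>
#include <vector>

using namespace Argus;

// Illustrative target; the actual samples derive their target from the
// Bayer histogram rather than the scene-lux estimate used here.
static const float TARGET_LUX = 400.0f;

// Compute the next exposure time from the latest frame's metadata.
uint64_t computeNextExposureNs(const CaptureMetadata* metadata,
                               uint64_t currentExposureNs)
{
    const ICaptureMetadata* iMetadata =
        interface_cast<const ICaptureMetadata>(metadata);
    if (!iMetadata)
        return currentExposureNs;

    const float lux = iMetadata->getSceneLux();
    if (lux <= 0.0f)
        return currentExposureNs;

    // Simple proportional step toward the target brightness; clamping to
    // the sensor's supported exposure range is omitted for brevity.
    return static_cast<uint64_t>(currentExposureNs * (TARGET_LUX / lux));
}

// Apply the same exposure to every camera in the array so they stay
// matched; the requests are then re-submitted for repeating capture.
void applyExposure(std::vector<Request*>& requests, uint64_t exposureNs)
{
    for (size_t i = 0; i < requests.size(); i++)
    {
        IRequest* iRequest = interface_cast<IRequest>(requests[i]);
        ISourceSettings* iSrc = iRequest
            ? interface_cast<ISourceSettings>(iRequest->getSourceSettings())
            : NULL;
        if (iSrc)
            iSrc->setExposureTimeRange(Range<uint64_t>(exposureNs));
    }
}
```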
Thanks

  1. We’re using JetPack 4.3.
  2. We use three Leopard IMX577 cameras with fisheye lenses.
  3. They all face in the same direction, but if a light source is closer to one of them, they make different decisions regarding exposure, white balance, etc.
  4. We stitch the three images together to form a single output.

My current approach is to read the ICaptureMetadata of the “main” camera to get its AWB gains, exposure time, ISP digital gain, and analog gain. I then force the “dependent” cameras to follow those settings manually, on a frame-by-frame basis, via the IRequest interface (a sketch is below). It mostly works: the areas near the middle of the fisheye circles match, and if I shine a light at one of the cameras, that image no longer goes dark (we do see some internal reflection in the lens, but that is to be expected).
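For reference, here is a trimmed sketch of that propagation step, assuming the main camera’s CaptureMetadata and a dependent camera’s Request are already in hand from the capture loop (error handling omitted):

```cpp
#include <Argus/Argus.h>

#include <cstdint>

using namespace Argus;

// Copy the auto-decided values from the main camera's per-frame metadata
// onto a dependent camera's request, forcing that camera to manual.
void propagateSettings(const CaptureMetadata* mainMetadata,
                       Request* dependentRequest)
{
    const ICaptureMetadata* iMeta =
        interface_cast<const ICaptureMetadata>(mainMetadata);
    IRequest* iRequest = interface_cast<IRequest>(dependentRequest);
    if (!iMeta || !iRequest)
        return;

    // What the main camera's AE/AWB decided on for this frame.
    const BayerTuple<float> wbGains = iMeta->getAwbGains();
    const uint64_t exposureNs       = iMeta->getSensorExposureTime();
    const float analogGain          = iMeta->getSensorAnalogGain();
    const float ispDigitalGain      = iMeta->getIspDigitalGain();

    // White balance and ISP digital gain. Pinning a range to a single
    // value effectively makes the setting manual.
    IAutoControlSettings* iAc = interface_cast<IAutoControlSettings>(
        iRequest->getAutoControlSettings());
    if (iAc)
    {
        iAc->setAwbMode(AWB_MODE_MANUAL);
        iAc->setWbGains(wbGains);
        iAc->setIspDigitalGainRange(Range<float>(ispDigitalGain));
    }

    // Exposure time and sensor analog gain.
    ISourceSettings* iSrc = interface_cast<ISourceSettings>(
        iRequest->getSourceSettings());
    if (iSrc)
    {
        iSrc->setExposureTimeRange(Range<uint64_t>(exposureNs));
        iSrc->setGainRange(Range<float>(analogGain));
    }
}
```

The modified request then has to be re-submitted (e.g. via ICaptureSession::repeat) for the new settings to take effect on subsequent frames.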

However, the areas around the edges of the fisheye circles don’t match: the edge of one image is significantly darker than the corresponding area on another sensor.

I noticed that the Argus documentation mentions that parameters such as color saturation, optical black, color correction, and tone mapping can be determined automatically. Can these settings be queried? I only seem to be able to query the user-specified values, not the values that the auto algorithms actually decide on.

Alternatively, am I forgetting a setting? In the meantime, I’ll look at the sample code to see what I can find.

Finally, the discrepancy at the edges turned out to be attributable to the .isp file’s lensShading feature, which is not appropriate for our lens type. Disabling lens shading in camera_overrides.isp helped (details below).
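For anyone hitting the same problem: on our JetPack 4.x setup this came down to a single line in /var/nvidia/nvcam/settings/camera_overrides.isp. The key name is undocumented and appears to vary between ISP/BSP versions, so treat it as an example rather than a stable interface:

```
lensShading.v5.enable = FALSE;
```

After editing the file, restart the camera daemon (sudo systemctl restart nvargus-daemon) for the change to take effect.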

What is the path for customers who cannot use the stock lenses provided by a vendor?
Is there a way to request access to the toolset that creates the camera_overrides.isp file?

Alternatively, instructions on how to properly disable this and other lens-specific features in the ISP file would be appreciated.

Hello BareMetalCoder,

Image tuning (covering parameters such as optical black, color correction, tone mapping, and lens shading) is not something we can support via forum discussion threads.
You may contact the Leopard team for further support.
Thanks

I understand, and I suspected as much. We had to go through a similar process for the sharpness parameter in the ISP file. Leopard has been helpful in the past, and we will work with them.

That being said, I have seen many complaints on this forum from users who cannot fully control the image parameters via Argus alone. A friendly suggestion for NVIDIA: consider expanding access to these tuning tools in the future, so that power users and users with specific imaging needs can tune to their requirements. Otherwise, a user who swaps out a lens is stuck with the OEM’s mismatched lens shading and other parameters. From a support standpoint, it becomes a chicken-and-egg problem: to justify support from the sensor manufacturer, the user needs to commit to a certain volume, but they can’t commit to any volume if they can’t get their application working because of the mismatched tuning parameters.