Merging two cameras before ISP processing

Hi,

I am recording a wide scene using two IMX477 sensors on a Jetson NX devkit.

The reason I am using two cameras is that the scene is very wide and I need the resolution two cameras offer. I am running both cameras in 4K@30 mode.

The cameras output 10-bit Bayer data in RGRG/GBGB order.

In a straightforward GStreamer pipeline I initialize the two cameras and merge them into a single video, and I synchronize exposure and gain manually for both cameras. Schematically the pipeline looks something like the sketch below.
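This is not my exact pipeline; everything outside the two nvarguscamerasrc elements (the compositor, the pad offsets, the sink) is a placeholder for illustration, built with gst_parse_launch for brevity:

```cpp
// Schematic dual-camera pipeline: two nvarguscamerasrc instances composited
// side by side. Resolution/framerate match my 4K@30 setup; the compositing
// and sink elements are placeholders.
#include <gst/gst.h>

int main(int argc, char **argv)
{
    gst_init(&argc, &argv);

    GError *err = nullptr;
    GstElement *pipeline = gst_parse_launch(
        "nvarguscamerasrc sensor-id=0 ! "
        "video/x-raw(memory:NVMM),width=3840,height=2160,framerate=30/1 ! "
        "nvvidconv ! video/x-raw ! comp.sink_0  "
        "nvarguscamerasrc sensor-id=1 ! "
        "video/x-raw(memory:NVMM),width=3840,height=2160,framerate=30/1 ! "
        "nvvidconv ! video/x-raw ! comp.sink_1  "
        "compositor name=comp sink_1::xpos=3840 ! autovideosink",
        &err);
    if (pipeline == nullptr) {
        g_printerr("Failed to build pipeline: %s\n", err->message);
        return 1;
    }

    gst_element_set_state(pipeline, GST_STATE_PLAYING);
    g_main_loop_run(g_main_loop_new(nullptr, FALSE));  // run until interrupted
    return 0;
}
```

Each nvarguscamerasrc instance in a pipeline like this still runs through its own ISP and 3A, which is where my problem starts.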

Later in the process, when I am stitching the material, I often get very different color tones in the videos because my scene is unevenly lit. This makes stitching far from seamless.

My impression is that this happens because each camera has its own separate ISP process. The white balance (3A) algorithms use pixel values as part of their input, so under different light conditions they will obviously produce different results, making the images vary in tone.
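As a toy illustration (this is not how the Argus AWB is actually implemented): a gray-world style estimate derives per-channel gains purely from the frame's own pixel statistics, so two cameras looking at differently lit halves of the scene end up with different gains and therefore different color casts.

```cpp
// Toy gray-world white balance estimate on an already demosaicked RGB frame
// (8 bits per channel, interleaved). The gains depend entirely on the frame
// content, so each camera computes its own, different, correction.
#include <cstdint>
#include <cstddef>

struct WbGains { double r; double b; };

WbGains grayWorldGains(const uint8_t *rgb, size_t pixelCount)
{
    double sumR = 0.0, sumG = 0.0, sumB = 0.0;
    for (size_t i = 0; i < pixelCount; ++i) {
        sumR += rgb[3 * i + 0];
        sumG += rgb[3 * i + 1];
        sumB += rgb[3 * i + 2];
    }
    // Scale red and blue so their averages match the green average.
    return { sumG / sumR, sumG / sumB };
}
```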

To get around this I am convinced that I need to run the sensors with identical gain and exposure and merge their raw Bayer output into a single image before ISP processing.
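The merge step itself is conceptually simple. Below is a minimal sketch, assuming both sensors deliver the same RGGB phase and the 10-bit samples arrive unpacked as one 16-bit value per pixel (the actual memory layout depends on the driver, so check that first). Because each source line starts on the same Bayer phase and the width is even, plain horizontal concatenation keeps the mosaic consistent across the seam.

```cpp
// Side-by-side merge of two raw Bayer frames into one double-width frame.
// Assumes identical resolution and Bayer phase on both inputs, and 10-bit
// samples stored as one uint16_t per pixel.
#include <cstdint>
#include <cstring>
#include <vector>

std::vector<uint16_t> mergeBayerSideBySide(const uint16_t *left,
                                           const uint16_t *right,
                                           int width, int height)
{
    std::vector<uint16_t> merged(static_cast<size_t>(width) * 2 * height);
    for (int y = 0; y < height; ++y) {
        uint16_t *dst = merged.data() + static_cast<size_t>(y) * width * 2;
        std::memcpy(dst,         left  + static_cast<size_t>(y) * width,
                    width * sizeof(uint16_t));
        std::memcpy(dst + width, right + static_cast<size_t>(y) * width,
                    width * sizeof(uint16_t));
    }
    return merged;
}
```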

This has turned out to be easier said than done, and it has taken me on quite a ride over the past two weeks.

I have been on the following adventures.

  1. I have tried setting the gain and exposure and capturing raw Bayer frames using the V4L2 interface (ioctl and mmap). This is possible, and I can render raw Bayer images that look similar in their overlapping areas. (I render the Bayer data as grayscale, so I can't evaluate color tone at this stage.) But I can't find a way to get the merged image into nvarguscamerasrc, which is where the ISP processing happens. A minimal sketch of this capture path follows after this list.

  2. I have attempted writing a kernel module that exposes a fake /dev/videoX device acting as a proxy for the two cameras behind it, but I can't get nvarguscamerasrc to accept the input. nvarguscamerasrc needs more than just a file handle that responds to read, write, ioctl and mmap calls; I think it also uses information from elsewhere, perhaps the device tree, but I know little about that.

  3. I have looked at the v4l2loopback project, but I have not figured out how to use it in this context.

  4. I have looked at GStreamer itself, but it does not seem to support 10-bit Bayer data, and there is no way to feed such output into nvarguscamerasrc.

  5. I have been looking at libcamera, but it seems a bit too early for it to be useful at my skill level.
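Here is the V4L2 capture sketch referred to in step 1: manual exposure and gain, then one mmap'd raw Bayer frame. The control IDs (V4L2_CID_EXPOSURE, V4L2_CID_GAIN) and the values are placeholders; the Jetson sensor drivers expose their own controls, so check `v4l2-ctl -d /dev/video0 --list-ctrls`. Error handling is trimmed to keep it short.

```cpp
// Grab one raw 10-bit Bayer frame from /dev/video0 with manual exposure/gain.
#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/videodev2.h>
#include <cstdio>

int main()
{
    int fd = open("/dev/video0", O_RDWR);

    // Manual exposure and gain so both sensors can be driven identically.
    v4l2_control ctrl = {};
    ctrl.id = V4L2_CID_EXPOSURE; ctrl.value = 10000;   // placeholder value
    ioctl(fd, VIDIOC_S_CTRL, &ctrl);
    ctrl.id = V4L2_CID_GAIN;     ctrl.value = 16;      // placeholder value
    ioctl(fd, VIDIOC_S_CTRL, &ctrl);

    // 10-bit RGGB Bayer at 4K.
    v4l2_format fmt = {};
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    fmt.fmt.pix.width = 3840;
    fmt.fmt.pix.height = 2160;
    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_SRGGB10;
    ioctl(fd, VIDIOC_S_FMT, &fmt);

    // One mmap'd buffer is enough for a single test frame.
    v4l2_requestbuffers req = {};
    req.count = 1;
    req.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    req.memory = V4L2_MEMORY_MMAP;
    ioctl(fd, VIDIOC_REQBUFS, &req);

    v4l2_buffer buf = {};
    buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    buf.memory = V4L2_MEMORY_MMAP;
    buf.index = 0;
    ioctl(fd, VIDIOC_QUERYBUF, &buf);
    void *mem = mmap(nullptr, buf.length, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, buf.m.offset);

    ioctl(fd, VIDIOC_QBUF, &buf);
    int type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    ioctl(fd, VIDIOC_STREAMON, &type);
    ioctl(fd, VIDIOC_DQBUF, &buf);                  // blocks until a frame arrives

    printf("captured %u bytes of raw Bayer data\n", buf.bytesused);

    ioctl(fd, VIDIOC_STREAMOFF, &type);
    munmap(mem, buf.length);
    close(fd);
    return 0;
}
```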

My next step would be to implement an ISP from scratch, but I think it is unrealistic to reach the quality nvarguscamerasrc delivers.

Any advice on where I should go from here is highly welcome.

Kind regards

Jesper

It seems that nvarguscamerasrc is open-sourced.

But it relies on Libargus, which is not publicly available.

Try the syncSensor sample code in MMAPI; that shouldn't have the AWB problem.
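For context, the point of syncSensor is that both sensors go into a single Argus capture session, so they share one 3A (AE/AWB) context. A rough sketch of that idea, from memory rather than the actual sample (error handling and the whole output stream / consumer side are omitted; see argus/samples/syncSensor in the MMAPI package for the real thing):

```cpp
// Put both sensors in ONE Argus capture session so they share a single
// 3A (AE/AWB) context. Stream/consumer setup is omitted for brevity.
#include <Argus/Argus.h>
#include <vector>

using namespace Argus;

int main()
{
    UniqueObj<CameraProvider> provider(CameraProvider::create());
    ICameraProvider *iProvider = interface_cast<ICameraProvider>(provider);

    std::vector<CameraDevice*> devices;
    iProvider->getCameraDevices(&devices);          // expect both IMX477s here

    // One CaptureSession spanning both devices: shared auto-exposure/AWB.
    UniqueObj<CaptureSession> session(iProvider->createCaptureSession(devices));
    ICaptureSession *iSession = interface_cast<ICaptureSession>(session);

    // A single Request drives both sensors; one output stream per camera
    // would be enabled on it here (stream creation omitted).
    UniqueObj<Request> request(iSession->createRequest());

    iSession->repeat(request.get());                // start synchronized capture
    return 0;
}
```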
