Argus: virtual camera device joining two cameras into one buffer

Inside the jetson_multimedia_api there exists an Argus manual:

jetson_multimedia_api/argus/docs/Argus.0.98.pdf

On page 13 it is stated:

“A CameraDevice object can be a one-to-one mapping to a physical camera device, or it can be a virtual device that maps to multiple physical devices that produce a single image.”

Would anyone happen to have a sample of how one such virtual device can be created from two cameras?

Have a check of the sample app argus_camera, Multi Session mode.
Launch it with the command below:

argus_camera --module=3

Hi ShaneCCC. Thank you for your reply.

I have not grasped the architecture of the suggested project yet, but it seems to me that the 3A* algorithms are running separately on each sensor? If I cover one sensor with my hand, that sensor alone adjusts to the new conditions; the other is unchanged.

(*3A = autofocus, autoexposure and auto white balance, where autofocus is not used)

My use case is that I need to record a wide, unevenly lit scene using two sensors. The sensors have an overlap, and I need to be able to blend the images seamlessly, which I am doing in a later step where I know depth etc.

White balance differences are my largest problem when stitching.

As I understand it, my problem is caused by the AWB algorithm using the sensor pixels as input when estimating white balance. Different images will therefore produce different white balance results.

What I am after is a setup where both cameras operate using the same settings and the 3A algorithms find a compromise across both sensors.

One way to achieve this is to merge the two sensors' images before the Bayer data is fed into a single ISP for the 3A computation.
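
To illustrate what such a compromise could look like numerically (this is only a gray-world style sketch, not the actual Argus 3A, which is proprietary), one set of white balance gains can be derived from the combined statistics of both sensors. The RGGB layout, the BayerStats helper and the gain convention below are assumptions for the example:

#include <cstdint>
#include <cstddef>

// Hypothetical sketch: derive ONE set of gray-world WB gains from the
// statistics of TWO packed 16-bit RGGB frames, so both sensors receive
// identical gains instead of two independent AWB results.
struct BayerStats { double r = 0, g = 0, b = 0; std::size_t n = 0; };

static void accumulate(const uint16_t* bayer, int width, int height, BayerStats& s)
{
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            uint16_t v = bayer[y * width + x];
            if ((y & 1) == 0 && (x & 1) == 0)      s.r += v;        // R site
            else if ((y & 1) == 1 && (x & 1) == 1) s.b += v;        // B site
            else                                   s.g += v * 0.5;  // two G sites per 2x2 cell
        }
    }
    s.n += std::size_t(width) * height / 4;  // samples per channel (per 2x2 cell)
}

struct WbGains { float r, gEven, gOdd, b; };

// One gray-world estimate across BOTH sensors: gains that equalise the
// combined per-channel means.
static WbGains sharedGrayWorldGains(const BayerStats& a, const BayerStats& b)
{
    double n  = double(a.n + b.n);
    double r  = (a.r + b.r) / n;
    double g  = (a.g + b.g) / n;
    double bl = (a.b + b.b) / n;
    return { float(g / r), 1.0f, 1.0f, float(g / bl) };
}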

I have done a prototype where I merge Bayer data from two sensors, but I have been unable to find a way to get that data into Argus so that I can convert it to crisp RGB images.

Is there a way to achieve such a goal within the Argus framework?

Or is the Argus source code available under NDA, perhaps?

Kind regards

Jesper

I suppose the syncsensor sample code should have the two sensors run the same AWB, like master/slave.
Currently we don't support merging two sensor frames, either before the ISP or after the ISP.
If syncsensor doesn't match your case, you can also get the raw data from Argus (PIXEL_FMT_RAW16).
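
If you go the raw route, the pixel format is chosen when the output stream settings are created. A minimal sketch of that part, assuming the rest of the session is set up as in the jetson_multimedia_api samples; the createOutputStreamSettings(STREAM_TYPE_EGL) form matches recent L4T releases (older releases took no argument), and error handling is omitted:

#include <Argus/Argus.h>
using namespace Argus;

// Sketch: request Bayer (PIXEL_FMT_RAW16) output from an existing capture
// session instead of the usual YUV. RAW16 support can depend on the sensor
// mode, so treat this as a starting point rather than a verified recipe.
OutputStream* createRawStream(ICaptureSession* iSession, const Size2D<uint32_t>& res)
{
    UniqueObj<OutputStreamSettings> settings(
        iSession->createOutputStreamSettings(STREAM_TYPE_EGL));
    IEGLOutputStreamSettings* iSettings =
        interface_cast<IEGLOutputStreamSettings>(settings);
    if (!iSettings)
        return nullptr;

    iSettings->setPixelFormat(PIXEL_FMT_RAW16);   // Bayer data instead of YUV
    iSettings->setResolution(res);

    return iSession->createOutputStream(settings.get());
}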

Hi ShaneCCC,

Thank you for your reply.

I can already get the bayer pixel data directly.

I did a demo using the V4L2 interface, where I implemented simple demosaicing using CUDA to verify that the images end up close to each other white-balance-wise.
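
For reference, the kind of quick demosaic I mean can be as simple as the CPU sketch below (the real version runs in CUDA); it assumes a packed 16-bit RGGB buffer and is only good enough to eyeball white balance, not a substitute for a real ISP:

#include <cstdint>
#include <vector>

// Nearest-neighbour demosaic of a packed 16-bit RGGB frame. Output is
// interleaved RGB, one pixel per 2x2 Bayer cell (i.e. half resolution).
std::vector<uint16_t> demosaicRggbNearest(const uint16_t* bayer, int width, int height)
{
    std::vector<uint16_t> rgb;
    rgb.reserve(std::size_t(width / 2) * (height / 2) * 3);
    for (int y = 0; y + 1 < height; y += 2) {
        for (int x = 0; x + 1 < width; x += 2) {
            uint16_t r  = bayer[y * width + x];            // R
            uint16_t g1 = bayer[y * width + x + 1];        // G (even row)
            uint16_t g2 = bayer[(y + 1) * width + x];      // G (odd row)
            uint16_t b  = bayer[(y + 1) * width + x + 1];  // B
            rgb.push_back(r);
            rgb.push_back(uint16_t((g1 + g2) / 2));
            rgb.push_back(b);
        }
    }
    return rgb;
}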

The thing is that I would very much prefer to use the ISP inside Argus, as implementing a full ISP, and the tools needed to perform ISP tuning, is an enormous task.

I have tried writing a kernel module that exposes a V4L2 character device, but Argus won't read from it. I suspect Argus only uses the V4L2 interface when probing the device; something else is used when transferring data. It probably has something to do with the device tree.

If I had source code for a virtual camera that Argus could interface with, then I could perhaps approach it that way?

Kind regards

Jesper

Maybe you can try the Argus APIs below, like setAwb*, setColorCorrectionMatrix …

https://docs.nvidia.com/jetson/archives/l4t-multimedia-archived/l4t-multimedia-3261/group__ArgusAutoControlSettings.html
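
A rough sketch of what driving both sessions with identical settings could look like per request. The names are taken from the linked IAutoControlSettings page; the gains and the identity matrix are placeholders, and whether manual AWB and a manual CCM are honoured depends on the sensor tuning:

#include <Argus/Argus.h>
#include <vector>
using namespace Argus;

// Sketch: force the same white balance and colour correction on a request,
// so two capture sessions can be driven with identical colour settings.
bool applyFixedColor(IRequest* iRequest)
{
    IAutoControlSettings* iAc =
        interface_cast<IAutoControlSettings>(iRequest->getAutoControlSettings());
    if (!iAc)
        return false;

    iAc->setAwbMode(AWB_MODE_MANUAL);
    iAc->setAwbLock(true);
    iAc->setWbGains(BayerTuple<float>(1.8f, 1.0f, 1.0f, 1.5f));  // R, Geven, Godd, B placeholders

    // Identity CCM as a placeholder; a real matrix comes from tuning.
    const std::vector<float> ccm = { 1, 0, 0,
                                     0, 1, 0,
                                     0, 0, 1 };
    iAc->setColorCorrectionMatrixEnable(true);
    iAc->setColorCorrectionMatrix(ccm);
    return true;
}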

Hi ShaneCCC,

Thank you for your reply.

Following the “textbook” description of how an ISP is implemented:

Auto white balance is done before color correction is applied, so although I acknowledge that I can adjust the result, I am uncertain that this is the proper way to approach the problem. My problem is not really one of control, but rather of making sure that all pixels undergo the same ISP processing.

I need a way to feed both camera images through one ISP to achieve my goal.

Kind regards

Jesper

No matter what, the Xavier NX only has one ISP, which handles multiple sensors by time-sharing.

If only there were an API to access it through. :)
