Comparison of camera types supported by each MMAPI sample

Hello,


[Image: a table comparing camera types supported by each MMAPI sample]

It seems that v4l2 and libargus operate separately like this.

What does it mean that libargus runs on top of v4l2, as NVIDIA says?

Thank you.

Hi,
The code built into libnvscf.so is based on v4l2, so this applies when you are working on the sensor driver/device tree: if you can capture raw frame data (such as RG10 or RG12) with the v4l2-ctl command, ideally it should also work when using Argus.


Hello,

Is it possible to extract Bayer data using a CSI camera?

Thank you.

Hi,
If your sensor driver and device tree are ready, you should be able to capture raw frame data through v4l2. Please make sure you can capture frames with the v4l2-ctl command first.
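A minimal sketch of that v4l2-ctl sanity check. The device node, resolution, and RG10 pixel format below are assumptions for illustration; check what your sensor actually advertises with `v4l2-ctl -d /dev/video0 --list-formats-ext` and substitute accordingly:

```shell
# Sketch: try to capture one raw Bayer frame with v4l2-ctl.
# Assumed device node, resolution, and pixel format -- adjust for your sensor.
capture_one_raw_frame() {
    DEV=${1:-/dev/video0}
    if [ -c "$DEV" ] && command -v v4l2-ctl >/dev/null 2>&1; then
        # Set an assumed 1920x1080 RG10 format and grab a single frame to disk
        v4l2-ctl -d "$DEV" \
            --set-fmt-video=width=1920,height=1080,pixelformat=RG10 \
            --stream-mmap --stream-count=1 --stream-to=frame.raw
    else
        echo "no usable $DEV, skipping capture"
    fi
}

capture_one_raw_frame /dev/video0
```

If this produces a non-empty frame.raw, the driver and device tree are in good shape and Argus should have the same data available.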

And then you can refer to this sample:

/usr/src/jetson_multimedia_api/samples/v4l2cuda/

By default the sample demonstrates capturing YUYV frame data and converting it to RGB through CUDA code. For your use case, you can customize it to capture raw frame data and do de-bayering in CUDA. There is no existing CUDA code for de-bayering, so you would need to implement it yourself.
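As an illustration of what such de-bayering involves, here is a plain-Python CPU sketch (not part of the MMAPI sources) that collapses each 2x2 RGGB quad into one RGB pixel. A real CUDA kernel would do the same arithmetic per thread, and a production demosaic would interpolate at full resolution rather than downsample:

```python
# Minimal CPU reference for 2x2-block de-bayering of an RGGB pattern.
# This is a sketch of the algorithm you would port to a CUDA kernel.
# Pixel values are illustrative 8-bit ints; a real RG10/RG12 capture
# holds 10/12-bit samples packed into 16-bit words.

def debayer_rggb_nearest(raw, width, height):
    """Collapse each 2x2 RGGB quad into one RGB pixel (half resolution)."""
    rgb = []
    for y in range(0, height, 2):
        row = []
        for x in range(0, width, 2):
            r = raw[y][x]             # top-left: red
            g1 = raw[y][x + 1]        # top-right: green
            g2 = raw[y + 1][x]        # bottom-left: green
            b = raw[y + 1][x + 1]     # bottom-right: blue
            row.append((r, (g1 + g2) // 2, b))  # average the two greens
        rgb.append(row)
    return rgb

# 4x4 raw Bayer frame -> 2x2 RGB image
raw = [
    [10, 20, 30, 40],
    [21, 50, 41, 60],
    [11, 22, 31, 42],
    [23, 52, 43, 62],
]
print(debayer_rggb_nearest(raw, 4, 4))
# → [[(10, 20, 50), (30, 40, 60)], [(11, 22, 52), (31, 42, 62)]]
```

Each output pixel here maps to one independent 2x2 input block, which is exactly the kind of embarrassingly parallel work that moves naturally into a CUDA kernel.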

