Best way to develop a 4K MIPI camera application: Argus vs. V4L2?

Hi,
I’m developing a real-time multi-camera application using two MIPI cameras (4K inputs) → processing mainly on the GPU / TensorRT / hardware encoder (4-5K).

So far, I have used GStreamer’s v4l2src or the camera vendor’s API.
Now that I have a new camera that supports libargus, I’m a bit confused about which API is the standard / most commonly used one.
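
For reference, this is roughly the Argus-based GStreamer path I’m evaluating, next to my current v4l2src path, launched from C++ with gst_parse_launch(). The sensor-id, device node, caps, and bitrate are just placeholders for my setup, not a definitive configuration:

```cpp
// Sketch of the two capture paths on Jetson. Pipeline strings are
// illustrative; sensor-id, device node, caps, and bitrate are placeholders.
#include <gst/gst.h>

int main(int argc, char **argv) {
  gst_init(&argc, &argv);

  // Path A: Argus/ISP path. Debayering and 3A (AE/AWB) run in the ISP,
  // and frames arrive in NVMM (DMA) memory as NV12, ready for the encoder.
  const char *argus_path =
      "nvarguscamerasrc sensor-id=0 ! "
      "video/x-raw(memory:NVMM),width=3840,height=2160,"
      "framerate=30/1,format=NV12 ! "
      "nvv4l2h265enc bitrate=20000000 ! h265parse ! matroskamux ! "
      "filesink location=cam0.mkv";

  // Path B: plain V4L2 path. v4l2src reads the sensor output directly
  // (raw Bayer or YUV); no ISP, so this only works as-is for YUV sensors.
  // const char *v4l2_path =
  //     "v4l2src device=/dev/video0 ! "
  //     "video/x-raw,format=UYVY,width=3840,height=2160 ! ...";

  GError *err = nullptr;
  GstElement *pipeline = gst_parse_launch(argus_path, &err);
  if (!pipeline) {
    g_printerr("pipeline error: %s\n", err->message);
    g_error_free(err);
    return 1;
  }

  gst_element_set_state(pipeline, GST_STATE_PLAYING);

  // Block until EOS or error, then tear down.
  GstBus *bus = gst_element_get_bus(pipeline);
  GstMessage *msg = gst_bus_timed_pop_filtered(
      bus, GST_CLOCK_TIME_NONE,
      (GstMessageType)(GST_MESSAGE_ERROR | GST_MESSAGE_EOS));
  if (msg) gst_message_unref(msg);
  gst_object_unref(bus);
  gst_element_set_state(pipeline, GST_STATE_NULL);
  gst_object_unref(pipeline);
  return 0;
}
```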

These may be subjective questions, but:

  1. Which API gives me low-level control of the camera?
  • By “low level” I mean things like the colour format, auto-exposure settings, timestamps, etc. (see the Argus sketch after this list)

  2. I have mainly been referencing jetson_multimedia_api. Is it still the current, recommended way to develop MIPI camera applications on Jetson?

  3. If you have experience with both APIs, could you give me some sense of how they compare?

  • ChatGPT says:
    Argus supports DFS based on the load, which allows the clocks to be scaled at runtime; there is no DFS for VI in the V4L2 path.
    The producer generates output streams for streaming the cameras with the aforementioned configuration, while the consumer processes the frames from the Argus producer (this varies per user application).
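
To make question 1 concrete, this is roughly the level of per-frame control I’m hoping for. A minimal sketch pieced together from the Argus samples in jetson_multimedia_api (error handling omitted; the resolution, exposure, and gain values are placeholders):

```cpp
// Sketch: low-level control with libargus (pixel format, exposure, gain,
// AE lock). Based on the jetson_multimedia_api Argus samples; no error checks.
#include <Argus/Argus.h>
#include <vector>

using namespace Argus;

int main() {
  // Producer side: provider -> device -> capture session.
  UniqueObj<CameraProvider> provider(CameraProvider::create());
  ICameraProvider *iProvider = interface_cast<ICameraProvider>(provider);

  std::vector<CameraDevice*> devices;
  iProvider->getCameraDevices(&devices);
  if (devices.empty()) return 1;

  UniqueObj<CaptureSession> session(
      iProvider->createCaptureSession(devices[0]));
  ICaptureSession *iSession = interface_cast<ICaptureSession>(session);

  // Output stream: colour format and resolution are chosen here.
  UniqueObj<OutputStreamSettings> streamSettings(
      iSession->createOutputStreamSettings(STREAM_TYPE_EGL));
  IEGLOutputStreamSettings *iEglSettings =
      interface_cast<IEGLOutputStreamSettings>(streamSettings);
  iEglSettings->setPixelFormat(PIXEL_FMT_YCbCr_420_888);
  iEglSettings->setResolution(Size2D<uint32_t>(3840, 2160));
  UniqueObj<OutputStream> stream(
      iSession->createOutputStream(streamSettings.get()));

  // Per-request sensor controls: exposure, gain, AE lock.
  UniqueObj<Request> request(iSession->createRequest());
  IRequest *iRequest = interface_cast<IRequest>(request);
  iRequest->enableOutputStream(stream.get());

  ISourceSettings *iSource =
      interface_cast<ISourceSettings>(iRequest->getSourceSettings());
  iSource->setExposureTimeRange(
      Range<uint64_t>(16000000, 16000000));          // ns, placeholder
  iSource->setGainRange(Range<float>(1.0f, 4.0f));   // placeholder

  IAutoControlSettings *iAuto = interface_cast<IAutoControlSettings>(
      iRequest->getAutoControlSettings());
  iAuto->setAeLock(true);

  // The consumer side would attach an EGLStream::FrameConsumer to `stream`
  // and read per-frame timestamps via ICaptureMetadata::getSensorTimestamp().
  iSession->repeat(request.get());
  // ... acquire/process frames, then:
  iSession->stopRepeat();
  iSession->waitForIdle();
  return 0;
}
```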

Thank you!!

The document below should explain the differences between V4L2 and Argus:

https://docs.nvidia.com/jetson/archives/r35.3.1/DeveloperGuide/text/SD/CameraDevelopment/SensorSoftwareDriverProgramming.html#
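
In short: with the plain V4L2 path you configure the sensor node directly with ioctls and receive raw (non-ISP) frames, while Argus sits on top and runs the ISP/3A pipeline for you. A minimal V4L2 sketch follows; note that the pixel format and control IDs depend entirely on the sensor driver (many Jetson drivers expose custom Tegra control IDs), so check `v4l2-ctl -d /dev/video0 --list-formats-ext --list-ctrls` first:

```cpp
// Sketch of the plain-V4L2 control path (no ISP, no 3A): set the capture
// format and a manual exposure control directly on the sensor node.
// Format and control IDs are sensor-dependent placeholders.
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>
#include <cstdio>

int main() {
  int fd = open("/dev/video0", O_RDWR);
  if (fd < 0) { perror("open"); return 1; }

  v4l2_format fmt{};
  fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
  fmt.fmt.pix.width = 3840;
  fmt.fmt.pix.height = 2160;
  fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_SRGGB10;  // raw Bayer, sensor-dependent
  fmt.fmt.pix.field = V4L2_FIELD_NONE;
  if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0) perror("VIDIOC_S_FMT");

  v4l2_control ctrl{};
  ctrl.id = V4L2_CID_EXPOSURE;  // many Jetson drivers use custom IDs instead
  ctrl.value = 1000;            // units are driver-specific (often lines or us)
  if (ioctl(fd, VIDIOC_S_CTRL, &ctrl) < 0) perror("VIDIOC_S_CTRL");

  close(fd);
  return 0;
}
```

In this path, per-frame timestamps come from v4l2_buffer.timestamp when buffers are dequeued with VIDIOC_DQBUF, and there is no auto-exposure: controls stay wherever you set them.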
