Hi,
I’m developing a real-time multi-camera application using two MIPI cameras (4K inputs) → processing mainly on the GPU / TensorRT / ENC (4-5K).
So far, I have used GStreamer’s v4l2src or the camera vendor’s API.
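For reference, this is roughly the capture path I run today (a minimal sketch only; the device path, caps, and sink are placeholders for my actual pipeline):

```cpp
// Minimal sketch of my current v4l2src capture path.
// /dev/video0, the caps, and fakesink are placeholders for my real setup.
#include <gst/gst.h>

int main(int argc, char **argv) {
    gst_init(&argc, &argv);

    GError *err = nullptr;
    // v4l2src reads straight from the VI driver, so the caps must match a
    // format the sensor itself outputs (no ISP debayering on this path).
    GstElement *pipeline = gst_parse_launch(
        "v4l2src device=/dev/video0 ! "
        "video/x-raw,format=UYVY,width=3840,height=2160,framerate=30/1 ! "
        "fakesink",
        &err);
    if (!pipeline) {
        g_printerr("pipeline error: %s\n", err->message);
        return 1;
    }

    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    // Block until EOS or an error instead of spinning a full GMainLoop.
    GstBus *bus = gst_element_get_bus(pipeline);
    GstMessage *msg = gst_bus_timed_pop_filtered(
        bus, GST_CLOCK_TIME_NONE,
        (GstMessageType)(GST_MESSAGE_ERROR | GST_MESSAGE_EOS));
    if (msg)
        gst_message_unref(msg);

    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(bus);
    gst_object_unref(pipeline);
    return 0;
}
```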
Now that I have a new camera that supports libargus, I’m a bit confused about which API is the standard or most widely used one.
These might be objective questions:
- What is the API that lets me control the camera at a low level?
  - By “low level” I mean colour format, auto-exposure settings, frame timestamps, etc. (see the V4L2 sketch after this list for the kind of control I mean).
- I have mainly been referencing jetson_multimedia_api. Is it still the current, recommended method for developing MIPI camera applications on Jetson?
- If you have experience with both APIs, could you give me a sense of how they compare?
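To make the first question concrete, here is a minimal sketch of the kind of low-level control I do today through plain V4L2 ioctls (V4L2_CID_EXPOSURE and the value are placeholders; the real control IDs depend on the sensor driver and can be listed with `v4l2-ctl -l`):

```cpp
// Setting a single camera control directly through the V4L2 ioctl interface.
// The control ID and value below are placeholders for whatever the driver exposes.
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>
#include <cstdio>

int main() {
    int fd = open("/dev/video0", O_RDWR);
    if (fd < 0) {
        perror("open /dev/video0");
        return 1;
    }

    struct v4l2_control ctrl = {};
    ctrl.id = V4L2_CID_EXPOSURE;  // placeholder; actual ID is driver-specific
    ctrl.value = 100;
    if (ioctl(fd, VIDIOC_S_CTRL, &ctrl) < 0)
        perror("VIDIOC_S_CTRL");

    close(fd);
    return 0;
}
```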
- ChatGPT says…
  - Argus supports DFS (dynamic frequency scaling) based on load, so it can adjust the clocks at runtime; there is no DFS for the VI in V4L2.
  - The producer generates output streams for the cameras with the aforementioned configuration, while the consumer processes the frames from the Argus producer (this part varies per user application).
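For context, this is roughly how I understand that producer/consumer model, pieced together from the jetson_multimedia_api Argus samples (a sketch only: error handling is omitted, and the resolution and exposure values are placeholders):

```cpp
// Sketch of the Argus producer/consumer model, based on the
// jetson_multimedia_api Argus samples. Error checking omitted for brevity.
#include <Argus/Argus.h>
#include <EGLStream/EGLStream.h>
#include <vector>

using namespace Argus;

int main() {
    // Producer side: libargus owns the capture session and fills an EGLStream.
    UniqueObj<CameraProvider> provider(CameraProvider::create());
    ICameraProvider *iProvider = interface_cast<ICameraProvider>(provider);

    std::vector<CameraDevice*> devices;
    iProvider->getCameraDevices(&devices);

    UniqueObj<CaptureSession> session(
        iProvider->createCaptureSession(devices[0]));
    ICaptureSession *iSession = interface_cast<ICaptureSession>(session);

    UniqueObj<OutputStreamSettings> settings(
        iSession->createOutputStreamSettings(STREAM_TYPE_EGL));
    IEGLOutputStreamSettings *iSettings =
        interface_cast<IEGLOutputStreamSettings>(settings);
    iSettings->setPixelFormat(PIXEL_FMT_YCbCr_420_888);
    iSettings->setResolution(Size2D<uint32_t>(3840, 2160)); // placeholder
    UniqueObj<OutputStream> stream(iSession->createOutputStream(settings.get()));

    // Consumer side: the application acquires frames from the same EGLStream.
    UniqueObj<EGLStream::FrameConsumer> consumer(
        EGLStream::FrameConsumer::create(stream.get()));
    EGLStream::IFrameConsumer *iConsumer =
        interface_cast<EGLStream::IFrameConsumer>(consumer);

    UniqueObj<Request> request(iSession->createRequest());
    IRequest *iRequest = interface_cast<IRequest>(request);
    iRequest->enableOutputStream(stream.get());

    // Per-request low-level control, e.g. pinning the exposure time (ns).
    ISourceSettings *iSource =
        interface_cast<ISourceSettings>(iRequest->getSourceSettings());
    iSource->setExposureTimeRange(
        Range<uint64_t>(16666667, 16666667)); // placeholder value

    iSession->repeat(request.get());

    // Acquire one frame and read its per-frame timestamp.
    UniqueObj<EGLStream::Frame> frame(iConsumer->acquireFrame());
    EGLStream::IFrame *iFrame = interface_cast<EGLStream::IFrame>(frame);
    auto timestamp = iFrame->getTime(); // per-frame capture timestamp
    (void)timestamp;
    // ... map iFrame->getImage() to an NvBuffer and feed the GPU/TensorRT ...

    iSession->stopRepeat();
    return 0;
}
```

If this is the intended pattern, it looks like per-request source settings would cover the exposure/timestamp control I asked about, but I would appreciate confirmation from someone who has used both APIs.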
Thank you!!