We are using a MIPI camera with the TX1.
I understand from the Tegra Linux Driver Package development guide that some cameras can be used directly with V4L2. Is that right? The guide says:
"In applications support a direct V4L2 interface, use this interface to communicate to the NVIDIA V4L2 driver
without having to use the SCF library. Use this path for a YUV sensor since this sensor has a built-in ISP and frame does not need extra processing…
Read the following sections to learn how to develop these; our examples use OmniVision OV5693 sensor, and the
source code for OV5693 sensor is available to customers.
But the OV5693 does not output YUV, only RAW Bayer!
So how can it be used with the direct V4L2 path?
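For context, here is how a direct V4L2 capture from a raw sensor can be sketched with `v4l2-ctl`. The device node, resolution, and pixel format below are assumptions that depend on your device tree and driver; check `--list-formats-ext` for the actual values. The captured frames are unprocessed Bayer data, so any debayering would have to be done in software afterwards:

```shell
# List the formats the sensor driver actually exposes
# (assumes the sensor registered as /dev/video0)
v4l2-ctl -d /dev/video0 --list-formats-ext

# Capture 10 raw frames to a file; width/height/pixelformat are
# assumptions -- the OV5693 driver typically exposes 10-bit Bayer
v4l2-ctl -d /dev/video0 \
  --set-fmt-video=width=2592,height=1944,pixelformat=BG10 \
  --stream-mmap --stream-count=10 --stream-to=frames.raw
```

This requires a connected, probed sensor, so it will fail on a system without the camera present.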
One more thing: can I assume that the latency of the direct V4L2 approach is much lower than going through the camera subsystem (ISP)?