Recommended pixel format for V4L2 driver (for GStreamer integration)

Hello,
We are developing a custom camera module (and associated driver) that will be used with the Jetson Nano. We plan to use v4l2-ctl to verify the camera module, and for GStreamer integration we would prefer to use the v4l2src plugin.
We know that the NVIDIA GStreamer plugins typically prefer NV12 as the input pixel format, and I believe this is what we are planning to use. However, it is not clear whether the underlying pixel buffers used by the downstream elements (nvcompositor, nvvidconv and nvv4l2h264enc) are configured for pitch-linear or block-linear layout, or whether the choice of input format makes a difference to the efficiency of downstream processing.
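For concreteness, the verification we have in mind is roughly the following (the device node, resolution and NV12 fourcc are placeholders for whatever our driver ends up exposing):

  v4l2-ctl --device=/dev/video0 --list-formats-ext
  v4l2-ctl --device=/dev/video0 --set-fmt-video=width=1920,height=1080,pixelformat=NV12 --stream-mmap --stream-count=100 --stream-to=frames.raw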

Is it possible to provide any guidance on the preferred input pixel format for a camera sensor that is expected to pass buffers to downstream NVIDIA GStreamer plugins? (We plan to bypass the ISP and would prefer not to use nvarguscamerasrc.)

Thanks
Victor

If you would like to bypass the ISP, you can use a YUV sensor and refer to the sample code:

tegra_multimedia_api/samples/12_camera_v4l2_cuda/
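
A rough gst-launch analogue of that capture-and-convert flow (the sample itself uses the Multimedia API, not GStreamer), assuming a UYVY sensor on /dev/video0; adjust the format and resolution to your sensor:

  gst-launch-1.0 v4l2src device=/dev/video0 ! 'video/x-raw,format=UYVY,width=1920,height=1080' ! nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12' ! nvoverlaysink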

Shane,
Thanks for the response. I had a couple of follow-up questions:

  1. Is there a reason to prefer YUV over NV12? (It appears that the NVIDIA plugins work natively with NV12, and for output of raw video streams our application will require NV12.) If possible we would like to avoid the conversion from YUV to NV12 as a downstream step on the video stream coming out of the sensor.
  2. If we do have to go with YUV, and the downstream elements are NVIDIA GStreamer plugins, does it make sense to convert the sensor stream to NV12 using the nvvidconv plugin (see the sketch after this list)?
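
Sketch of what we mean in question 2, assuming a UYVY sensor on /dev/video0; the conversion to NV12 in NVMM memory would happen in nvvidconv before the encoder:

  gst-launch-1.0 v4l2src device=/dev/video0 ! 'video/x-raw,format=UYVY,width=1920,height=1080' ! nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12' ! nvv4l2h264enc ! h264parse ! filesink location=test.h264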

Thanks
Victor

  1. If you would like to use v4l2src, a YUV sensor is recommended.
  2. For a Bayer raw sensor, using Argus to get NV12 is recommended (see the sketch below).
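
A minimal sketch of the Argus path via nvarguscamerasrc, which delivers NV12 in NVMM memory directly to the downstream NVIDIA elements (resolution and framerate are placeholders):

  gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM),format=NV12,width=1920,height=1080,framerate=30/1' ! nvv4l2h264enc ! h264parse ! filesink location=test.h264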