I’m working on an image stitching pipeline that requires very low latency from camera to display.
There are various ways of getting images from a camera, as NVIDIA describes in its camera architecture stack documentation. In my current (admittedly shallow) understanding, the options are: V4L2, GStreamer with a libargus backend, and EGLStream with a libargus backend.
On the processing side there are VisionWorks and the upcoming VPI.
What are the recommended, efficient ways of connecting the camera to the VisionWorks and VPI processing libraries?
For instance, libargus is designed to output to EGLStreams. Do you have code samples showing how to connect VisionWorks or VPI to an EGLStream?