I'm trying to determine the absolute fastest way to get the image of a MIPI camera onto a monitor via HDMI on a Jetson Nano. Right now I am using libargus via the nvarguscamerasrc GStreamer plugin with the following command:
gst-launch-1.0 nvarguscamerasrc exposuretimerange="500000 500000" gainrange="1 1" ispdigitalgainrange="1 1" awblock=0 tnr-mode=0 ee-mode=0 ! 'video/x-raw(memory:NVMM), width=1280, height=720, framerate=60/1' ! nvoverlaysink
The camera in use is the Raspberry Pi V2 camera, and the stream is displayed on a 60 Hz monitor.
This GStreamer command results in a very low latency of around 50 ms (about 3 frames of the monitor).
However, I need additional control that only the low-level C++ API of libargus provides and that is not exposed in the nvarguscamerasrc GStreamer plugin (e.g. setting the color correction matrix).
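For reference, the kind of per-request control I mean looks roughly like this. This is only a sketch based on my reading of the Argus headers, where `IAutoControlSettings` appears to expose `setColorCorrectionMatrixEnable()` and `setColorCorrectionMatrix()` taking a row-major 3x3 matrix as 9 floats; the identity matrix and the wrapper function are just placeholders:

```cpp
#include <Argus/Argus.h>
#include <vector>

// Sketch: apply a custom 3x3 color correction matrix to an existing
// Argus::Request. Error handling omitted for brevity.
void setCustomCcm(Argus::Request *request)
{
    Argus::IRequest *iRequest =
        Argus::interface_cast<Argus::IRequest>(request);
    Argus::IAutoControlSettings *iAcSettings =
        Argus::interface_cast<Argus::IAutoControlSettings>(
            iRequest->getAutoControlSettings());

    // Placeholder identity CCM, row-major 3x3.
    std::vector<float> ccm = { 1.0f, 0.0f, 0.0f,
                               0.0f, 1.0f, 0.0f,
                               0.0f, 0.0f, 1.0f };
    iAcSettings->setColorCorrectionMatrixEnable(true);
    iAcSettings->setColorCorrectionMatrix(ccm);
}
```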
A basic example of how to use libargus with the C++ API is provided in the tegra_multimedia_api under samples/09_camera_jpeg_capture, which is the basis for my test program. However, even if I remove the JPEG thread from that example and let only the preview thread run, I get only a few FPS and a very high delay in the playback of the camera image on the monitor.
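For context, the stripped-down producer side of my test program follows the pattern below. This is a condensed sketch, not my exact code: names follow the public Argus headers as used in the samples, error checks and the consumer thread are omitted, and the fixed exposure/gain values simply mirror my GStreamer pipeline. One thing I am unsure about is whether anything other than `ICaptureSession::repeat()` (e.g. issuing single `capture()` calls per frame) would serialize the pipeline and explain the low frame rate:

```cpp
#include <Argus/Argus.h>
#include <vector>

using namespace Argus;

int main()
{
    // Minimal 720p60 preview producer, condensed from
    // samples/09_camera_jpeg_capture (error handling omitted).
    UniqueObj<CameraProvider> provider(CameraProvider::create());
    ICameraProvider *iProvider = interface_cast<ICameraProvider>(provider);

    std::vector<CameraDevice*> devices;
    iProvider->getCameraDevices(&devices);

    UniqueObj<CaptureSession> session(
        iProvider->createCaptureSession(devices[0]));
    ICaptureSession *iSession = interface_cast<ICaptureSession>(session);

    UniqueObj<OutputStreamSettings> settings(
        iSession->createOutputStreamSettings(STREAM_TYPE_EGL));
    IEGLOutputStreamSettings *iSettings =
        interface_cast<IEGLOutputStreamSettings>(settings);
    iSettings->setPixelFormat(PIXEL_FMT_YCbCr_420_888);
    iSettings->setResolution(Size2D<uint32_t>(1280, 720));

    UniqueObj<OutputStream> stream(
        iSession->createOutputStream(settings.get()));

    UniqueObj<Request> request(
        iSession->createRequest(CAPTURE_INTENT_PREVIEW));
    IRequest *iRequest = interface_cast<IRequest>(request);
    iRequest->enableOutputStream(stream.get());

    // Lock the sensor to 60 fps / fixed exposure and gain,
    // matching the nvarguscamerasrc pipeline above.
    ISourceSettings *iSource =
        interface_cast<ISourceSettings>(iRequest->getSourceSettings());
    iSource->setFrameDurationRange(Range<uint64_t>(1000000000u / 60u));
    iSource->setExposureTimeRange(Range<uint64_t>(500000u));
    iSource->setGainRange(Range<float>(1.0f));

    // repeat() keeps the capture pipeline full; per-frame capture()
    // calls would add round-trip latency between frames.
    iSession->repeat(request.get());

    // ... consumer thread acquires frames from the EGLStream and
    // renders them; omitted here ...
    return 0;
}
```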
My questions are: first, why is using the libargus C++ API so much slower than the GStreamer plugin, even though both should work similarly under the hood? And second, can I make the C++ program as fast as the GStreamer pipeline?
Furthermore, what is the fastest way to get the result drawn to the screen? Is it via NvEglRenderer? An X server is not necessary for me.
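From the sample helper classes, the rendering path I found looks like the sketch below (the wrapper function and the fd source are hypothetical; error handling omitted). One concern: NvEglRenderer appears to create an X11 window, so it may not fit my no-X requirement; the samples also ship an NvDrmRenderer class that targets DRM/KMS directly, which might be the better fit, but I have not tried it:

```cpp
// From tegra_multimedia_api/samples/common/classes.
#include "NvEglRenderer.h"

// Sketch: render a DMABUF fd (e.g. obtained from the EGLStream
// consumer via NvBuffer) with the sample renderer class.
// Note: NvEglRenderer opens an X11 window, so it needs a running
// X server, unlike the overlay path used by nvoverlaysink.
void renderFrame(int dmabuf_fd)
{
    static NvEglRenderer *renderer =
        NvEglRenderer::createEglRenderer("renderer0", 1280, 720, 0, 0);
    renderer->render(dmabuf_fd);
}
```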