In this pipeline nvargus-daemon uses 22.5% of the CPU. In other pipelines that convert the pixel format, limit the framerate, or perform other basic tasks, nvargus-daemon can use up to 40% of the CPU.
I don’t understand why the usage is so high. My understanding is that Libargus has access to two ISPs, which should offload some of the work from the CPU. Does anyone know why CPU usage is still so high?
Some of the more senior people on my team also believe this CPU usage is high for what amounts to pulling frames from the Xavier’s ISP and moving them into the DMA buffer and other locations. Could you explain why the CPU usage is relatively high, and whether it can be lowered/optimised?
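For context, here is a minimal sketch of the kind of pipeline I am measuring. The exact caps are assumptions (the real pipelines differ); nvargus-daemon's CPU usage is observed in a second terminal while this runs.

```shell
# Watch nvargus-daemon in another terminal:
#   top -p "$(pgrep nvargus-daemon)"

# Sketch of a basic capture pipeline: convert pixel format, cap the framerate,
# discard the frames. Caps values are assumptions, not the exact pipeline.
gst-launch-1.0 nvarguscamerasrc sensor-id=0 \
  ! 'video/x-raw(memory:NVMM),width=2592,height=1458,framerate=30/1' \
  ! nvvidconv \
  ! 'video/x-raw,format=I420' \
  ! fakesink
```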
GST_ARGUS: 2592 x 1944 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 16.000000; Exposure Range min 34000, max 550385000;
GST_ARGUS: 2592 x 1458 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 16.000000; Exposure Range min 34000, max 550385000;
GST_ARGUS: 1280 x 720 FR = 120.000005 fps Duration = 8333333 ; Analog Gain range min 1.000000, max 16.000000; Exposure Range min 22000, max 358733000;
GST_ARGUS: Running with following settings:
Camera index = 0
Camera mode = 1
Output Stream W = 2592 H = 1458
seconds to Run = 0
Frame Rate = 29.999999
Could you share all the available sensor modes and which mode is actually running? Also, is this the result of running a single camera, or multiple cameras?
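For reference, if v4l-utils is installed, the modes the sensor driver exposes can also be listed directly; the device node here is an assumption (it may not be /dev/video0 on your setup).

```shell
# List every pixel format and frame size/interval the driver advertises
# (assumes the camera registered as /dev/video0)
v4l2-ctl -d /dev/video0 --list-formats-ext
```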
GST_ARGUS: Available Sensor modes :
GST_ARGUS: 1440 x 1080 FR = 59.999999 fps Duration = 16666667 ; Analog Gain range min 0.000000, max 480.000000; Exposure Range min 29000, max 15110711000;
GST_ARGUS: 704 x 540 FR = 59.999999 fps Duration = 16666667 ; Analog Gain range min 0.000000, max 480.000000; Exposure Range min 29000, max 15110711000;
GST_ARGUS: Running with following settings:
Camera index = 0
Camera mode = 0
Output Stream W = 1440 H = 1080
seconds to Run = 0
Frame Rate = 59.999999
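To rule out mode-selection effects, a specific mode can be pinned with nvarguscamerasrc's sensor-id and sensor-mode properties. This is a sketch, not the exact pipeline in question; the mode index matches the list printed in the log above.

```shell
# Force camera 0 into sensor mode 0 (1440x1080 @ 60 fps per the GST_ARGUS log)
gst-launch-1.0 nvarguscamerasrc sensor-id=0 sensor-mode=0 \
  ! 'video/x-raw(memory:NVMM),width=1440,height=1080,framerate=60/1' \
  ! fakesink
```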