Expected load of around 20% for a single camera with gst nvarguscamerasrc?

I am currently seeing between 20% and 30% load on the whole system as soon as I start reading from the camera using GStreamer. I just wanted to confirm that this is expected (I had personally expected much less load).

My setup:

I ran different types of pipelines, for example:

The following pipeline results in around 15% load across all 4 CPU cores as well as on the GPU:

gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM), width=1920, height=1080, format=(string)NV12, framerate=(fraction)30/1' ! fakesink

On the other extreme, when sinking into a v4l2loopback device, I see around 25% load across the 4 CPUs and the GPU:

gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)60/1' ! nvvidconv ! 'video/x-raw, format=(string)BGRx, width=(int)640, height=(int)360' ! videoconvert ! 'video/x-raw, format=(string)BGR' ! v4l2sink device=/dev/video1

I measure the load as follows: I start tegrastats as a daemon, logging to a file every 250 ms. After waiting for 15 seconds, I run one of the GStreamer pipelines for at least 5 minutes. After stopping the pipeline, I keep logging for some more time. The reported load is the difference between the readings while the GStreamer pipeline is running and the idle readings. The CPU frequency also goes up at the same time.
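As a sketch, my evaluation of one tegrastats log looks roughly like this (assuming the JetPack 4.x line format with a field like `CPU [15%@1224,10%@1224,...]`; the file name is a placeholder, and the regex would need adjusting for other releases):

```shell
#!/bin/sh
# sketch: average total CPU load over a tegrastats log (JetPack 4.x format assumed)
# usage: ./avg_cpu.sh tegrastats.log
awk '
# each sample line contains a field like: CPU [15%@1224,10%@1224,5%@1224,8%@1224]
match($0, /CPU \[[^]]*\]/) {
    s = substr($0, RSTART + 5, RLENGTH - 6)   # strip the leading "CPU [" and trailing "]"
    n = split(s, cores, ",")                  # one entry per core
    sum = 0
    for (i = 1; i <= n; i++) {
        split(cores[i], p, "%")               # keep the number before "%"
        sum += p[1]
    }
    total += sum / n                          # mean load across cores for this sample
    samples++
}
END {
    if (samples)
        printf "avg CPU load: %.1f%% over %d samples\n", total / samples, samples
}
' "$1"
```

I run this once over the idle part of the log and once over the part while the pipeline was running, and report the difference.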

I was hoping for a much lower load for the CSI camera, since I want to run additional applications alongside it. Especially for nvarguscamerasrc into the fakesink, I expected very little CPU load.

Was my expectation wrong, or do I have a mistake in my setup?

hello loeschef,

there are several must-have buffer-processing stages; please refer to [Release 32.1 Development Guide] -> [Camera Development] -> Camera Architecture Stack for the details.
please note that your 2nd pipeline, which sinks into a v4l2loopback device, involves ‘nvvidconv’ and will take extra resources to perform the downscale.
may I know what your use-case is?
thanks

Thank you for the quick response. I will try tomorrow and see if adding a buffer to nvarguscamerasrc will reduce the load.

Yes, I expected the second pipeline to produce more load, but according to my measurements it adds no more than 10% load to the whole system, while the first pipeline alone already produces a load of 15%. This baseline load for just reading the camera into a fakesink is what I was mainly wondering about.

My use-case is a low-latency camera setup with up to 2 cameras (preferably more than 60 fps at low resolution, but that's another issue) that leaves enough resources to run existing legacy software. I am currently trying to find the most performant way of getting the images into the legacy application.

While testing I came across the high load for just reading the image, and was wondering whether that is generally the expected value for a single camera, particularly for the first pipeline I mentioned in my post?
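For reference, one variant I am currently experimenting with for handing frames to the legacy process replaces the v4l2loopback sink with shared memory. This is just a sketch, not a final solution; the socket path and the downscaled caps are placeholders:

```shell
# hypothetical sketch: export downscaled frames via shmsink instead of v4l2loopback,
# so the legacy process can read them from a shared-memory socket
gst-launch-1.0 nvarguscamerasrc \
    ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)60/1' \
    ! nvvidconv ! 'video/x-raw, format=(string)BGRx, width=(int)640, height=(int)360' \
    ! shmsink socket-path=/tmp/cam0 wait-for-connection=false
```

I have not yet compared the load of this against the v4l2sink variant.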

hello loeschef,

suggest you also contact a Jetson Preferred Partner for camera solutions.
thanks

I read through the Camera Architecture Stack again, but the only buffer-related option I found for nvarguscamerasrc is num-buffers?

I played with different options for nvarguscamerasrc, but all of them produced a similar load on the CPUs. For example, I tried to turn off as many features as possible. Interestingly, for the following pipeline the GPU load stayed at 0% while the CPU load was still around 20%:

gst-launch-1.0 nvarguscamerasrc wbmode=0 tnr-mode=0 ee-mode=0 aeantibanding=0 aelock=true awblock=true maxperf=true ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)60/1' ! fakesink

From how I read the diagram in the Camera Architecture Stack, the elements I am using (VI/ISP, Tegra Drivers, Camera Core, libargus, nvarguscamerasrc) are all green (NVIDIA). With what aspect could the Jetson Preferred Partners help me here, as you suggested?

hello loeschef,

num-buffers option won't affect CPU load; it's a user-space property that tells nvarguscamerasrc how many buffers to process.
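for example, in this pipeline (resolution and framerate are only placeholders for your sensor mode) num-buffers just makes nvarguscamerasrc stop after 300 buffers and send EOS; the per-frame processing, and therefore the CPU load during capture, stays the same:

```shell
# capture exactly 300 buffers (~5 s at 60 fps), then stop; load per frame is unchanged
gst-launch-1.0 nvarguscamerasrc num-buffers=300 \
    ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)60/1' \
    ! fakesink
```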

since you would like a dual-camera, low-latency solution, some of our Jetson Preferred Partners already have cameras and carrier boards available for usage.
for example, https://elinux.org/Jetson_AGX_Xavier#Ecosystem_Products_.26_Cameras
suggest you also contact them for the details.
thanks