I am working with the nvarguscamerasrc element, and I want to find out how to edit its queue-size, or at least whether that is possible. I’m looking for the same thing as this post, but for nvarguscamerasrc: reference
I want to modify it to test whether I can reduce the latency in the camera capture.
I am using a Jetson Xavier AGX with Jetpack 5.0.2.
I found the MIN_BUFFERS and MAX_BUFFERS parameters in the “gstnvarguscamerasrc.cpp” file, but those parameters are not what I’m looking for. Perhaps the relevant setting is in the Libargus Camera API?
My question is whether the queue-size property exists in nvarguscamerasrc. That property existed in the past for nvcamerasrc, so could you please tell me if it currently exists?
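One way to answer this empirically on the device is to list the element’s registered properties with gst-inspect-1.0 and look for queue-size. A minimal sketch (the parsing here is approximate, and gst-inspect-1.0’s output layout can vary across GStreamer versions):

```python
# Check whether a GStreamer element exposes a given property by parsing
# `gst-inspect-1.0` output. This is a generic check, not an NVIDIA-specific API.
import shutil
import subprocess

def has_property_in_listing(listing: str, prop: str) -> bool:
    # In the "Element Properties" section of gst-inspect output, each
    # property name starts a line (after indentation), followed by spaces.
    return any(line.strip().startswith(prop + " ")
               for line in listing.splitlines())

def element_has_property(element: str, prop: str) -> bool:
    out = subprocess.run(["gst-inspect-1.0", element],
                         capture_output=True, text=True).stdout
    return has_property_in_listing(out, prop)

if shutil.which("gst-inspect-1.0"):
    print("queue-size:", element_has_property("nvarguscamerasrc", "queue-size"))
else:
    print("gst-inspect-1.0 not found on this machine")
```

If queue-size is absent from the listing, the property simply is not registered by the plugin, regardless of what nvcamerasrc used to expose.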
To provide more background: we are seeing huge latencies in the camera capture, around 4 frames. In the past we fixed this with NVIDIA’s help, thanks to a custom nvcamerasrc binary they provided; however, for nvarguscamerasrc there is no way to control the camera latency.
So, our question is, is there a way to reduce the latency in Libargus? Can NVIDIA provide a new Libargus binary with latency improvement?
Hi,
Our current latency in the camera capture is on average 91 ms. We are already running jetson_clocks and using a framerate of 120 fps.
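As a sanity check on that number (simple arithmetic, nothing NVIDIA-specific): at 120 fps one frame time is about 8.3 ms, so 91 ms corresponds to roughly 11 frame times:

```python
# Express the measured average latency in units of the sensor frame time.
fps = 120                       # our capture framerate
latency_ms = 91.0               # measured average capture latency
frame_time_ms = 1000.0 / fps    # ~8.33 ms per frame
print(f"{latency_ms / frame_time_ms:.1f} frame times")  # ~10.9
```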
Hi @david.roman - curious if you made any progress on the queue-size question, or if you found argus_camera to produce the same results as nvarguscamerasrc?
@ShaneCCC is argus_camera supposed to reveal the lowest possible latency settings when going through Argus, or are other optimizations possible on the libargus side? For example, as @david.roman asked: a libargus binary with latency improvements? Any detail here would be helpful.
@ShaneCCC thank you. So in that overall glass-to-glass latency chain, the part allocated to Argus would be 4-5 frame times? In the case of grabbing 60 frames per second, Argus processing time would be somewhere within 66-83 ms?
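The arithmetic behind that range, just to make the conversion explicit:

```python
# Convert 4-5 frame times at 60 fps into milliseconds.
fps = 60
frame_time_ms = 1000.0 / fps                      # ~16.67 ms per frame
low, high = 4 * frame_time_ms, 5 * frame_time_ms
print(f"4-5 frames at {fps} fps = {low:.1f}-{high:.1f} ms")  # 66.7-83.3 ms
```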
A couple of questions:
Are there settings on the host (jetson_clocks, or clock boost) that are required or can help with making this as fast as possible?
Is this an Argus hardware limitation, or due to libargus implementation choices (e.g., queues or buffers for stability)?
Is there lower latency through Argus on the Orin series?
Thanks, @ShaneCCC. You mentioned that there’s currently no plan for faster Argus processing on Orin. Could latency be improved through software in principle (or is this again the same fundamental hardware limitation as you said for the AGX above)?
Suppose it’s a software implementation instead of a hardware limitation.
Do I understand correctly that on the Orin series, the Argus hardware could do lower-latency processing? On the Xavier, the Argus hardware requires 4-5 frames for 3A and many other calculations, but on the Orin it could work faster?
For the V4L2 path, you could get better performance because it doesn’t need to run any ALG.
Could you please explain? I did not understand what this means.
Is this a hardware limitation? I.e., even with different software (e.g., DriveOS), that 4-5 frame latency is what the AGX & Orin require for the Argus path? Or is that limitation not hardware, but in the Jetson SW?
I’m asking since you said “3. Current don’t have plan for it.” above, which implied to me that it’s a SW rather than a HW issue? Would appreciate a clarification.
Thanks, @ShaneCCC. Is there any more information about that software limitation you could share with those of us with a technical interest?
Achieving lower latency with Argus would open the AGX/Orin up for additional low-latency streaming use cases. Any way that us users can influence the roadmap and see if this can make it in? Thanks.