Lowest latency model and/or ways to reduce latency from camera to inference

Hello all,

I am very new to all of this. I am currently working on an object detection project and getting an average latency of around 300 ms. My setup uses a camera to capture real-time video, then draws bounding boxes on detected objects. To measure latency, I ran a timer on screen, photographed both the live timer and its rendering in the camera feed, and took the difference between the two readings. I am currently running YOLOv4-tiny on a Jetson Orin NX 16 GB. Are there any ways I can lower this time, ideally to under 140 ms?
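As an alternative to the photograph-the-timer method, the camera-to-inference portion of the latency can be timed in software. This is a minimal sketch, assuming the capture and detector calls are supplied by the caller; `capture_frame` and `run_inference` are hypothetical placeholders for your actual camera read and YOLO inference call:

```python
import time

def measure_latency_ms(capture_frame, run_inference, n=50):
    """Average end-to-end latency (frame grab through inference), in ms.

    capture_frame and run_inference are placeholders: substitute your
    camera read (e.g. an OpenCV VideoCapture.read) and detector call.
    """
    total = 0.0
    for _ in range(n):
        t0 = time.perf_counter()
        frame = capture_frame()          # grab one frame
        detections = run_inference(frame)  # run the detector on it
        total += time.perf_counter() - t0
    return total / n * 1000.0

# Stub callables so the sketch runs without a camera attached:
avg_ms = measure_latency_ms(lambda: b"frame", lambda f: [])
```

Note that this excludes display latency (rendering the boxes back to the screen), so it will read lower than the on-screen timer comparison; the gap between the two numbers tells you how much of the 300 ms is display overhead rather than capture plus inference.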

Dear @jfluent,
Are you using a TRT engine for inference? If so, what inference time do you see when using trtexec?
If you are not using a TRT engine, can you run trtexec to get the expected perf numbers?
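For reference, a typical trtexec workflow is to build an engine from the model once and then benchmark it; this is a sketch, and the model filename, input size, and precision flag are assumptions to adjust for your setup:

```shell
# Build a TensorRT engine from an ONNX export of YOLOv4-tiny
# (the filenames here are placeholders).
/usr/src/tensorrt/bin/trtexec \
    --onnx=yolov4_tiny.onnx \
    --saveEngine=yolov4_tiny.engine \
    --fp16

# Benchmark the saved engine; trtexec reports mean/median
# GPU compute time per inference over the run.
/usr/src/tensorrt/bin/trtexec \
    --loadEngine=yolov4_tiny.engine \
    --iterations=100
```

The reported GPU compute time isolates pure inference cost, which helps separate model latency from camera capture and display latency in the 300 ms total.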

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.