Hello everyone.
I’m working on a real-time computer vision project. I’m currently using a Jetson TX1 board with the latest JetPack and a USB OV10640 camera (UVC compliant) delivering 1280x1080@30. Recently, I’ve started measuring latency with different capture pipelines. My measurement procedure is the following:
- Print a timestamp in milliseconds since the epoch before capturing a frame.
- Call the frame-capture function.
- Draw that timestamp over the image.
Then I aim the camera at my monitor, which is displaying the printed timestamps, and afterwards I compare the timestamp visible in the captured frame with the one drawn over the image; the difference between the two is the latency I report.
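To make the procedure concrete, a simplified sketch of the OpenCV variant of this loop looks like the following (the device index, resolution properties, and overlay position are illustrative, not my exact code):

```python
import time
import cv2

# Open the USB camera (device index 0 is an assumption; adjust for your setup).
cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)
cap.set(cv2.CAP_PROP_FPS, 30)

while True:
    # Timestamp in milliseconds since the epoch, taken right before the capture call.
    ts = int(time.time() * 1000)

    ok, frame = cap.read()
    if not ok:
        break

    # Draw the pre-capture timestamp onto the frame; the monitor in view of the
    # camera is also showing these printed timestamps, so the difference between
    # the timestamp seen on the monitor and the one drawn here is the latency.
    print(ts)
    cv2.putText(frame, str(ts), (10, 50),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)

    cv2.imshow("latency", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```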
My results are:
PIPELINE                          FRAME TIME (ms)        LATENCY (ms)
                                  MIN   MAX   AVG        MIN   MAX   AVG
OpenCV                            17    44    25.3       130   198   168.4
gst-camera (jetson-inference)     19    27    21.7       99    114   105.2
v4l2cuda (tegra_multimedia_api)   27    37    30.1       84    116   106.5
I find it hard to believe that a latency of around 100 ms (roughly 3 frames at 30 fps) would be acceptable for most real-time applications.
Does this benchmark sound plausible to you?
Is there any way I could improve latency with this setup?
Thanks in advance.