We are experiencing relatively high latency for CSI processing on the TX2. The devkit CSI camera to nvoverlaysink path has a glass-to-glass latency of about 80 ms (5 frames @ 60 fps).
Typically we put a 60 fps video on the screen that loops and shows a frame counter, and then we film this video with the camera.
At the same time we take a photo of the screen, which then shows both the original video frame number and the camera frame number (which will be lagging X frames behind).
If the camera is e.g. 7 frames behind, this means 7/60 seconds of latency, which equals roughly 116 ms from the camera glass to the display glass.
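For reference, the frame-lag-to-latency conversion is just arithmetic; here is a minimal shell sketch (the variable names are ours, and integer division truncates the exact 116.7 ms slightly):

FPS=60          # rate of the looping frame-counter video
FRAMES_BEHIND=7 # camera counter lag read off the photo
echo "glass-to-glass latency: $(( FRAMES_BEHIND * 1000 / FPS )) ms"
# prints: glass-to-glass latency: 116 ms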
Hi Jimmy,
I guess you do not run the sensor mode at 720p120. Please rebuild tegra_multimedia_api/samples/09_camera_jpeg_capture with the attached main.cpp and execute it.
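For anyone following along, the rebuild step would look roughly like this (a sketch; the camera_jpeg_capture binary name is assumed from the sample's usual Makefile output, and the attached main.cpp is not reproduced here):

cd tegra_multimedia_api/samples/09_camera_jpeg_capture
# overwrite main.cpp with the attached file, then rebuild and run:
make
./camera_jpeg_capture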
This wiki page is intended to be used as a reference for the Tegra X1 (TX1) capture-to-display glass-to-glass latency using the simplest GStreamer pipeline. The tests were executed with the IMX274 camera sensor, for the 1080p and 4K 60 fps modes. The tests were done using a modified nvcamerasrc binary provided by Nvidia that reduces the minimum allowed value of the queue-size property from 10 to 2 buffers. This binary was built for Jetpack 3.0 L4T 24.2.1. Similar tests will be run on the TX2.
It also presents some reliable methods for measuring glass-to-glass latency.
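Based on that description, the low-latency test pipeline with the modified binary would look something like this (a sketch; the 1080p60 caps are an assumption taken from the modes the wiki mentions, while queue-size is the property the modified nvcamerasrc allows down to 2):

gst-launch-1.0 nvcamerasrc queue-size=2 ! 'video/x-raw(memory:NVMM),width=1920,height=1080,framerate=60/1' ! nvoverlaysink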
An error occurred during a connection to developer.ridgerun.com. Peer’s Certificate has been revoked. Error code: SEC_ERROR_REVOKED_CERTIFICATE
The page you are trying to view cannot be shown because the authenticity of the received data could not be verified.
Please contact the website owners to inform them of this problem.
Yes, we are dealing with that certificate problem. It mostly happens with the Mozilla Firefox and Safari web browsers. Please try accessing the page link with Google Chrome.
Sorry for the inconvenience. We are working on solving it.
DaneLLL, can you confirm exactly which command you used? And just to confirm: you used the onboard camera and accessed it via the command line from an Ubuntu terminal? Thanks!
nvidia@tegra-ubuntu:~$ gst-launch-1.0 nvcamerasrc num-buffers=600 ! 'video/x-raw(memory:NVMM),width=1280,height=720,framerate=60/1' ! nvoverlaysink
Setting pipeline to PAUSED ...
Available Sensor modes :
2592 x 1944 FR=30.000000 CF=0x1109208a10 SensorModeType=4 CSIPixelBitDepth=10 DynPixelBitDepth=10
2592 x 1458 FR=30.000000 CF=0x1109208a10 SensorModeType=4 CSIPixelBitDepth=10 DynPixelBitDepth=10
1280 x 720 FR=120.000000 CF=0x1109208a10 SensorModeType=4 CSIPixelBitDepth=10 DynPixelBitDepth=10
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
NvCameraSrc: Trying To Set Default Camera Resolution. Selected sensorModeIndex = 2 WxH = 1280x720 FrameRate = 120.000000 ...
Got EOS from element "pipeline0".
Execution ended after 0:00:10.158761040
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...
At 30 Hz, each frame takes 33 ms to capture before it can be processed. Then it needs to be displayed, which takes between 0 and 33 ms of queuing (depending on where scan-out is on the monitor when you're done) plus 33 ms to actually scan it out. Add the 42 ms latency of your display, and the best possible case is 33 + 0 + 33 + 42 ms, while the worst case is 33 + 33 + 33 + 42 ms. So the best achievable latency, assuming processing takes no time, would be between 108 and 141 ms. You seem to see one frame of additional latency, because your measurement is between 130 and 170 ms. That could be added by processing latency, or simply by a triple-buffered output pipeline instead of a double-buffered one.
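To make that budget concrete, here is a minimal shell sketch of the same arithmetic (the 42 ms display latency is the figure assumed above; everything else follows from the frame period):

FPS=30
DISPLAY_MS=42                # assumed fixed latency of the monitor
FRAME_MS=$(( 1000 / FPS ))   # 33 ms to capture, and 33 ms to scan out
BEST=$(( FRAME_MS + 0 + FRAME_MS + DISPLAY_MS ))          # no queuing wait
WORST=$(( FRAME_MS + FRAME_MS + FRAME_MS + DISPLAY_MS ))  # full-frame queuing wait
echo "best: ${BEST} ms, worst: ${WORST} ms"
# prints: best: 108 ms, worst: 141 ms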
The numbers you report don't seem concerning at all; they are spot on for what's expected at 30 Hz with the various subsystems involved.
To get lower latency, you need to up your hardware game significantly. You'd want to genlock your display to your camera, use the fastest buffering possible (direct-mapped or double-buffered presentation), pick a display with close to zero display latency, and run at a very high frame rate. Even if you can only get 60 Hz for the camera capture, you might be able to drive a display at 120 Hz to cut some of the latency down; ideally you'd want a 90 Hz or 120 Hz camera as well, as the sketch below illustrates.
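Plugging the higher rates into the same budget shows the payoff (same sketch as above, still assuming the 42 ms display latency, which a near-zero-latency panel would also remove):

FPS=120
DISPLAY_MS=42                # assumed display latency, as above
FRAME_MS=$(( 1000 / FPS ))   # 8 ms per frame at 120 Hz
echo "best: $(( 2 * FRAME_MS + DISPLAY_MS )) ms, worst: $(( 3 * FRAME_MS + DISPLAY_MS )) ms"
# prints: best: 58 ms, worst: 66 ms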
@honey_Patouceul I have tried it with and without that command (as well as many others before). It doesn't make a difference. :( But thanks!
@snarky, thanks a lot for your answer. Very insightful!
I had tried using sensorModeIndex=2 (i.e. 120 fps) before. Result: ~210 ms end-to-end latency. Weird, no?
Any tips or pointers on how to genlock the monitor and camera?
Any tips on the faster buffering?
Point taken on the physical monitor. I'll get a 120 Hz one.
Overall, it just seems weird to me, as previous comments in this thread reported much faster times with the same setup (except that they might have used a monitor with a higher refresh rate)?