CSI latency is over 80 milliseconds...?

Thanks Dane,

This improved things a bit!

With this approach I’m measuring around 50-70 ms glass to glass.

Hi @njal

You may find the following link interesting:

This wiki page is intended to be used as a reference for the Tegra X1 (TX1) capture-to-display glass-to-glass latency using the simplest GStreamer pipeline. The tests were executed with the IMX274 camera sensor, for the 1080p and 4K 60 fps modes. The tests were done using a modified nvcamerasrc binary provided by NVIDIA that reduces the minimum allowed value of the queue-size property from 10 to 2 buffers. This binary was built for JetPack 3.0 / L4T 24.2.1. Similar tests will be run on the TX2.

It also presents some reliable methods for measuring glass-to-glass latency.
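
For illustration, a capture pipeline using the reduced queue could look roughly like this (a sketch only; queue-size=2 assumes the modified nvcamerasrc binary described above, since the stock element enforces a minimum of 10):

# Sketch: requires the patched nvcamerasrc; the stock binary clamps queue-size to a minimum of 10
gst-launch-1.0 nvcamerasrc queue-size=2 ! 'video/x-raw(memory:NVMM),width=1920,height=1080,framerate=60/1' ! nvoverlaysink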

I hope this information helps you!

Best regards
-Daniel

Had trouble accessing the page:

An error occurred during a connection to developer.ridgerun.com. Peer’s Certificate has been revoked. Error code: SEC_ERROR_REVOKED_CERTIFICATE

    The page you are trying to view cannot be shown because the authenticity of the received data could not be verified.
    Please contact the website owners to inform them of this problem.

@Jimmy

Yes, we are dealing with that certificate problem. It mostly happens with Firefox or Safari; please try to access the page link with Google Chrome.

Sorry for that inconvenience. We are working on solving it.


DaneLLL, can you confirm which command you used exactly? And just to confirm, did you use the onboard camera and access it via the command line from an Ubuntu terminal? Thanks!

Please refer to the GStreamer command below:

nvidia@tegra-ubuntu:~$ gst-launch-1.0 nvcamerasrc num-buffers=600 ! 'video/x-raw(memory:NVMM),width=1280,height=720,framerate=60/1' ! nvoverlaysink
Setting pipeline to PAUSED ...

Available Sensor modes :
2592 x 1944 FR=30.000000 CF=0x1109208a10 SensorModeType=4 CSIPixelBitDepth=10 DynPixelBitDepth=10
2592 x 1458 FR=30.000000 CF=0x1109208a10 SensorModeType=4 CSIPixelBitDepth=10 DynPixelBitDepth=10
1280 x 720 FR=120.000000 CF=0x1109208a10 SensorModeType=4 CSIPixelBitDepth=10 DynPixelBitDepth=10
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock

NvCameraSrc: Trying To Set Default Camera Resolution. Selected sensorModeIndex = 2 WxH = 1280x720 FrameRate = 120.000000 ...

Got EOS from element "pipeline0".
Execution ended after 0:00:10.158761040
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...

@DaneLLL:
This is very weird. I still get ~130-170 ms glass to glass latency.
Any ideas what could be up? E.g. I haven’t installed any special nvcamerasrc binary from Nvidia (as mentioned in the Introduction paragraph here: NVIDIA Jetson TX1 TX2 Glass to Glass latency | Jetson TX1 TX2 Capture | RidgeRun), nor have I installed any JetPack SDK.

My setup:

  • HP Pavilion 32 display: monitor lag 42 ms; connected via HDMI to the TX2; HDMI cable ~1 m
  • Jetson TX2 with onboard CSI camera
  • GStreamer 1.6.0
  • I’m running jetson_clocks.sh

Command I got around 130 ms with:

$ gst-launch-1.0 nvcamerasrc ! 'video/x-raw(memory:NVMM), width=640, height=480, framerate=30/1' ! nvvidconv flip-method=0 ! nvegltransform ! nveglglessink -e

At 30 Hz, each frame takes 33 milliseconds to capture before it can be processed. Then it needs to be displayed, which will take between 0 and 33 milliseconds of queuing (depending on where scan-out is on the monitor when you’re done) plus 33 milliseconds to actually scan it out. Add the 42 ms of latency of your display, and the best possible case is 33 + 0 + 33 + 42 milliseconds, while the worst case is 33 + 33 + 33 + 42 milliseconds. So, the best achievable latency, assuming processing takes no time, would be between 109 and 141 milliseconds. You seem to see about one frame of additional latency, because your measurements are between 130 and 170 ms. That could be added by processing latency, or simply by using a triple-buffered output pipeline instead of a double-buffered one.
The numbers you report don’t seem concerning at all; they are spot on for what’s expected at 30 Hz with the various subsystems involved.

To get lower latency, you need to up your hardware game significantly. You’d want to genlock your display to your camera. Additionally, you’d want to make sure you use as fast buffering as possible (direct mapped or double-buffered presentation.) Additionally, you’d want a display with close to zero display latency. Additionally, you’d want a very high frame rate. Even if you can only get 60 Hz for the camera capture, you might be able to get a display driven at 120 Hz to cut some of the latency down; ideally you’d want a 90 Hz or 120 Hz camera as well.
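
On the pipeline side, one small thing that may help is to capture at the 120 fps sensor mode and let the sink render buffers as soon as they arrive instead of waiting on their timestamps (an untested sketch; sync is the standard GStreamer base-sink property, and fpsRange is an nvcamerasrc property):

# Untested sketch: 720p120 capture with the sink not waiting on buffer timestamps
gst-launch-1.0 nvcamerasrc fpsRange='120 120' ! 'video/x-raw(memory:NVMM),width=1280,height=720,framerate=120/1' ! nvoverlaysink sync=false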

Not sure it improves a lot, but it seems you are using nvvidconv for nothing… you may try to remove it from the pipeline.
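
Something like this, i.e. your pipeline with nvvidconv dropped (untested sketch):

# Untested sketch: the previous 640x480 pipeline without nvvidconv
gst-launch-1.0 nvcamerasrc ! 'video/x-raw(memory:NVMM), width=640, height=480, framerate=30/1' ! nvegltransform ! nveglglessink -e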

@honey_Patouceul I have tried it with and without that element (as well as many others before). It doesn’t make a difference. :( But thanks!

@snarky, thanks a lot for your answer. Very insightful!

  1. I had tried sensorModeIndex=2 (so with 120 fps) before. Result: ~210 ms end-to-end latency. Weird, no?

  2. Any tips or pointers on how to genlock the monitor and the camera?

  3. Any tips on the faster buffering?

  4. Point taken on the physical monitor. I’ll get a 120 Hz one.

  5. Overall, it just seems weird to me, as previous comments in this thread reported much faster times with the same setup (except that they might have used a monitor with a higher refresh rate)?

Could you try this (disclaimer: I’m not on a Tegra to test ATM):

gst-launch-1.0 nvcamerasrc ! 'video/x-raw(memory:NVMM), width=640, height=480, framerate=120/1' ! nvvidconv ! xvimagesink

Jimmy, thanks. That helped reduce the latency to ~86-129 ms.

Seems the nvegltransform + nveglglessink combination was responsible for this extra latency?
What if you use nvoverlaysink:

gst-launch-1.0 -e nvcamerasrc ! 'video/x-raw(memory:NVMM), width=640, height=480, framerate=120/1' ! nvoverlaysink

[EDIT: also note that 640x480 is not a natively supported mode of the sensor. You may also try capturing from a native mode and, if required, converting with nvvidconv.]
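
For example (untested sketch), capture the native 1280x720 120 fps mode and let nvvidconv do the downscale:

# Untested sketch: capture the native 1280x720@120 mode and downscale to 640x480 with nvvidconv
gst-launch-1.0 -e nvcamerasrc ! 'video/x-raw(memory:NVMM), width=1280, height=720, framerate=120/1' ! nvvidconv ! 'video/x-raw(memory:NVMM), width=640, height=480' ! nvoverlaysink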

The onboard camera has three sensor modes:

Available Sensor modes :
2592 x 1944 FR=30.000000 CF=0x1109208a10 SensorModeType=4 CSIPixelBitDepth=10 DynPixelBitDepth=10
2592 x 1458 FR=30.000000 CF=0x1109208a10 SensorModeType=4 CSIPixelBitDepth=10 DynPixelBitDepth=10
1280 x 720 FR=120.000000 CF=0x1109208a10 SensorModeType=4 CSIPixelBitDepth=10 DynPixelBitDepth=10

Please ensure 1280x720p120 is selected:

NvCameraSrc: Trying To Set Default Camera Resolution. Selected sensorModeIndex = 2 WxH = 1280x720 FrameRate = 120.000000 ...

Hi there, I posted a related question (what is the best camera setup for low latency) in this thread: https://devtalk.nvidia.com/default/topic/1029472/jetson-tx2/low-latency-camera-for-jetson-tx2/post/5266690/#5266690

Any help / answers are very much appreciated. Thanks so much!

I was using the jetson-utils code from NVIDIA (GitHub - dusty-nv/jetson-utils: C++/Python Linux utility wrappers for NVIDIA Jetson - camera, codecs, CUDA, GStreamer, HID, OpenGL/XGL) to display images captured with the TX2 dev kit onboard camera.

When the gst camera was initialized, it printed out some configuration, such as:

Available Sensor modes :
2592 x 1944 FR=30.000000 CF=0x1109208a10 SensorModeType=4 CSIPixelBitDepth=10 DynPixelBitDepth=10
2592 x 1458 FR=30.000000 CF=0x1109208a10 SensorModeType=4 CSIPixelBitDepth=10 DynPixelBitDepth=10
1280 x 720 FR=120.000000 CF=0x1109208a10 SensorModeType=4 CSIPixelBitDepth=10 DynPixelBitDepth=10

However, I want just 640x360, so I create my C++ camera object as

myCamera = Nvidia_gstCamera(640, 360);

I can see the image was definitely 640x360. However, I really do not know which sensor mode was initialized or what the frame rate was.

Could anyone help?

Hi,
On r32.2.1, the selected sensor mode is shown when launching nvarguscamerasrc:

GST_ARGUS: Running with following settings:
   Camera index = 0
   Camera mode  = 2
   Output Stream W = 1280 H = 720
   seconds to Run    = 0
   Frame Rate = 120.000005
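
(For reference, that GST_ARGUS output is printed when launching a plain nvarguscamerasrc pipeline, for example; a sketch only, the exact caps may differ:)

# Sketch: on r32, launching nvarguscamerasrc prints the GST_ARGUS settings shown above
gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=1280,height=720,framerate=120/1' ! nvoverlaysink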

Looks like you are using an r28 release. Please share the version by executing ‘head -1 /etc/nv_tegra_release’.

Yes, I am using r28:

# R28 (release), REVISION: 1.0, GCID: 9379712, BOARD: t186ref, EABI: aarch64, DATE: Thu Jul 20 07:59:31 UTC 2017

However, after changing the launch string to the following, I got 60 FPS:

nvcamerasrc ! video/x-raw(memory:NVMM), width=(int)640, height=(int)360, format=(string)NV12, framerate=(fraction)120/1 ! nvvidconv flip-method=0 !

However, my application requires r28, so I cannot upgrade. Can I ever achieve 120 FPS on r28?

Hi @AutoCar,

You could try the following GStreamer pipeline to specify the 120 fps framerate in the nvcamerasrc properties:

DISPLAY=:0 gst-launch-1.0 nvcamerasrc fpsRange='120 120' ! "video/x-raw(memory:NVMM),width=640,height=360,format=NV12,framerate=120/1" ! nvoverlaysink

Another option is to capture in the 1280 x 720 FR=120.000000 mode and use nvvidconv to downscale from 1280x720 to 640x360, as in the following pipeline:

DISPLAY=:0 gst-launch-1.0 nvcamerasrc fpsRange='120 120' ! "video/x-raw(memory:NVMM),width=1280,height=720,format=NV12,framerate=120/1" ! nvvidconv ! "video/x-raw(memory:NVMM),width=640,height=360,format=NV12,framerate=120/1" ! nvoverlaysink

You can use the “perf” element to measure the framerate of the pipeline:
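
For example (a sketch; “perf” here refers to RidgeRun’s gst-perf element, which needs to be installed separately and is simply inserted before the sink):

# Sketch: gst-perf's "perf" element prints the measured framerate of the buffers flowing through it
DISPLAY=:0 gst-launch-1.0 nvcamerasrc fpsRange='120 120' ! "video/x-raw(memory:NVMM),width=640,height=360,format=NV12,framerate=120/1" ! perf ! nvoverlaysink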

I hope the above information helps you.
