Jetson Inference Stereo Raspberry Pi Camera Lagging

Hi,

I want to record stereo video from my Raspberry Pi cameras using the jetson-inference library. This is my code:

import jetson.utils
import time

camera_sag = jetson.utils.videoSource("csi://0")      # right camera ('/dev/video0' for V4L2)
camera_sol = jetson.utils.videoSource("csi://1")      # left camera ('/dev/video1' for V4L2)
display_sol = jetson.utils.videoOutput("sol_kamera.mp4")  # left video file
display_sag = jetson.utils.videoOutput("sag_kamera.mp4")  # right video file

while display_sol.IsStreaming() and display_sag.IsStreaming():
    start = time.time()
    img_sag = camera_sag.Capture()
    img_sol = camera_sol.Capture()
    display_sol.Render(img_sol)
    display_sag.Render(img_sag)
    print(time.time() - start)   # per-iteration capture + render time

Then I watch the videos, but one of them is longer than the other (I do not know why), and they are not synchronized. Here is a sample video: jetson-forum-leftlagging - YouTube
I changed the code and started the left camera first, but nothing changed. The left camera is still lagging.

What causes this problem? Why is one of the videos longer than the other?

Hi @muhammedsezer12, I’d recommend using the cudaOverlay() function from jetson-utils to combine both camera frames into one image, and then save a single video instead of two. This is similar to how the ZED camera combines both frames into one stream. That may provide you with better synchronization, although camera-level sync would be needed for true synchronization.

When you are processing the video, you can then use cudaCrop() to extract the left/right frames from the combined video stream.
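
For illustration, here is a minimal sketch of that approach (untested; the stereo.mp4 filename and side-by-side layout are my own choices, and it assumes both cameras deliver frames with the same height and format):

import jetson.utils

camera_sag = jetson.utils.videoSource("csi://0")   # right camera
camera_sol = jetson.utils.videoSource("csi://1")   # left camera
output = jetson.utils.videoOutput("stereo.mp4")    # one combined stream

# grab one frame from each camera to learn the dimensions/format
img_sag = camera_sag.Capture()
img_sol = camera_sol.Capture()

# allocate a side-by-side composite image (left | right)
composite = jetson.utils.cudaAllocMapped(width=img_sol.width + img_sag.width,
                                         height=max(img_sol.height, img_sag.height),
                                         format=img_sol.format)

while output.IsStreaming():
    img_sag = camera_sag.Capture()
    img_sol = camera_sol.Capture()
    jetson.utils.cudaOverlay(img_sol, composite, 0, 0)              # left half
    jetson.utils.cudaOverlay(img_sag, composite, img_sol.width, 0)  # right half
    output.Render(composite)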

But I do not want to combine them; I need to process them and do template matching.
How can I get mostly synchronized images from the Jetson Nano?
I will try combining and then cropping, but I guess that can cause a bit of an FPS drop.

And what is the cause of this lag? As far as I know, capturing an image takes only 0.1 ms, so I think there should not be any lag.

True synchronization would need to be done at the camera level. However, if you were to use the libargus API from C++, that may give you lower latency. My Python APIs aren’t particularly intended for multi-camera synchronization. They do, however, return the latest image from the queue, so you shouldn’t see that big of a lag on the camera side.

To determine if the lag is from the encoding, can you try to just display the video on the screen? Combine both left/right images into one frame with cudaOverlay(), and then use one videoOutput object that isn’t set to a video file (i.e. display = jetson.utils.videoOutput()). Is there lag when it is rendered to the display?
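
For example, a quick self-contained diagnostic (the same composite as above, just rendered to the screen instead of encoded to a file):

import jetson.utils

camera_sag = jetson.utils.videoSource("csi://0")
camera_sol = jetson.utils.videoSource("csi://1")
display = jetson.utils.videoOutput()   # no file URI, so it renders to an OpenGL window

img_sag = camera_sag.Capture()
img_sol = camera_sol.Capture()
composite = jetson.utils.cudaAllocMapped(width=img_sol.width + img_sag.width,
                                         height=max(img_sol.height, img_sag.height),
                                         format=img_sol.format)

while display.IsStreaming():
    img_sag = camera_sag.Capture()
    img_sol = camera_sol.Capture()
    jetson.utils.cudaOverlay(img_sol, composite, 0, 0)
    jetson.utils.cudaOverlay(img_sag, composite, img_sol.width, 0)
    display.Render(composite)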

Do you use both images during your processing? If so, I recommend combining them to keep the synchronization as good as you can get it. Then during your processing, simply crop that big image into left/right and process them. Otherwise they may get unsynchronized during encoding/decoding of two independent compressed video files.
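
The crop step could look something like this (a sketch that assumes the side-by-side layout from the recording example above; ROIs in cudaCrop() are (left, top, right, bottom)):

import jetson.utils

# 'composite' is a side-by-side frame as produced in the recording sketch above
w = composite.width // 2
h = composite.height

img_left  = jetson.utils.cudaAllocMapped(width=w, height=h, format=composite.format)
img_right = jetson.utils.cudaAllocMapped(width=w, height=h, format=composite.format)

jetson.utils.cudaCrop(composite, img_left,  (0, 0, w, h))      # left half
jetson.utils.cudaCrop(composite, img_right, (w, 0, 2 * w, h))  # right half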


Hi,
you were right (as always).
When I overlaid the images, the problem was solved.
But I do not understand: why did overlaying solve it? What causes this problem? I am capturing the images the same way; the only difference is whether I overlay them or render them separately.

Is this a problem only for video recording, or is there a lag in the Python loop too?

Hi @muhammedsezer12, if you were displaying both images separately to the same OpenGL window, then I don’t think you would see the lag / de-synchronization. I believe the de-synchronization is introduced while encoding two separate H.264/H.265 streams. I’m not sure if that is because of how I am using the encoder through GStreamer and timestamping the images, or if it is simply an indeterminism from the encoding process (which uses motion vector fields and different types of temporal frames to perform the compression).