Same metadata in multi-camera / single session (in syncSensor)

Hi,
I’m trying to compare timestamps and frame numbers to check time synchronisation and detect frame drops.
However, in the single-session multi-camera case (one capture session created with a vector of camera devices as the input argument), the metadata is always the same from the left and right cameras’ EGLStreams.

I found the same case reported below.

Continuing the discussion from Argus: Getting timestamp for frames acquired by cuEGLStreamConsumerAcquireFrame:

  1. How can I get each camera’s individual metadata?

  2. Is there a better way to detect frame drops?

hello jahwan.oh,

it’s expected behaviour, as mentioned in the sample app,
i.e. // Create the capture session, AutoControl will be based on what the 1st device sees.

may I know which JetPack release you’re working with?
there’s a new sample app, syncStereo, in the latest r35.4.1 release.
It is a public sample that provides timestamps for both stereo streams, and it also demonstrates how to detect stereo mismatch based on the timestamp difference.


I’m using JetPack 5.1 / L4T 35.2.1.

I thought syncStereo was only for a specific camera model.

Anyway, I wanted to use CUeglFrame via CudaEGLStreamFrameAcquire; I believe it is a better approach for CUDA processing. I also need synchronised frames, so I used a single capture session.

Could you advise me on a way to detect frame drops or un-synced frames?

hello jahwan.oh,

please try getSensorTimestamp() to check the VI hardware (SoF/EoF) timestamp, which is based on the Tegra-wide timestamp system counter (TSC); it is the timestamp for the sensor, in nanoseconds.
you may also see this topic for reference: Argus::ICaptureMetadata::getSensorTimestamp clock domain in L4T 32.4.4 - #10 by JerryChang


JerryChang,

Thank you for the reply.
However, with a single capture session (for synchronisation), it still gives the same data (and timestamps) for the left and right cameras, which is expected per the comment in the syncSensor example: // Create the capture session, AutoControl will be based on what the 1st device sees.

The reasons I use these are:

  1. Single capture session: I want synchronised captures from the left and right cameras

  2. CUeglFrame: frame data in device memory

Q1) Do I understand these two points properly?

Q2) Then, under these conditions, I cannot use the metadata timestamps for frame-drop or frame-sync checks, right?

Thank you.

hello jahwan.oh,

  1. The syncSensor sample is a software-based synchronization approach.
    It uses the multiple-sensors-per-single-capture-session method, which means it duplicates a single capture request to those camera sensors.
    Hence, you should expect the two capture results to be close enough.

  2. CUeglFrame is the CUDA EGL frame structure; it is used when connecting CUDA to an EGLStream.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.