Hi,
I’m trying to compare the timestamp and frame number to check time synchronisation and frame drops.
However, in the single-session multi-camera case, i.e. one capture session created with a vector of camera devices as its input argument, the metadata is always the same for the left and right cameras’ EGLStreams.
This is expected, as mentioned in the sample app:
// Create the capture session, AutoControl will be based on what the 1st device sees.
May I know which JetPack release you’re working with?
There’s a new sample app, syncStereo, in the latest r35.4.1 release.
It is a public sample that provides timestamps for both stereo streams. It also demonstrates how to detect a stereo mismatch based on the timestamp difference.
I thought syncStereo was only for a specific camera model.
Anyway, I wanted to use CUeglFrame via CudaEGLStreamFrameAcquire, which I believe is the better approach for CUDA processing. I also need synchronised frames, so I used a single capture session.
Could you advise me on how to detect frame drops or un-synced frames?
Thank you for the reply.
However, in the single capture session (used for synchronisation), it still gives the same data (and timestamps) for the left and right cameras, which is expected per the comment in the syncSensor example: // Create the capture session, AutoControl will be based on what the 1st device sees.
The reasons why I use each are:
Single capture session: I want synchronised capture for the left and right cameras.
CUeglFrame: frame data in device memory.
Q1) Do I understand these two points correctly?
Q2) Then, under these conditions, I cannot use the metadata timestamp to check for frame drops or frame sync, right?
The syncSensor sample is a software-based synchronisation approach.
The example uses multiple sensors per single capture session, which means it duplicates a single capture request across those camera sensors.
Hence, you should expect the two capture results to be close enough.
CUeglFrame is the CUDA EGL frame structure, since it connects CUDA to the EGLStream.