If the src_dmabuf_fds passed to NvBufferComposite reuse some of the previous dmabuf_fds, a crack (tear) line appears on the screen display.
If I do not reuse any dmabuf_fd, it does not appear.
But because the 4 cameras are not synchronized, the camera whose frame comes first has to wait for the last one, which causes some delay.
To reduce the delay, whenever a new camera frame comes I update that camera's entry in src_dmabuf_fds, reuse the other cameras' dmabuf_fds, and immediately call NvBufferComposite and enqueBuffer to DrmRender. When I wave my hand up and down, one crack line appears on the screen. Why? This should only make the frame rate increase.
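For context, a minimal sketch of the reuse approach described above; everything except the overall call pattern (the mutex, latest_fd[], composite_and_render()) is a hypothetical name, not the actual code from this thread:

```cpp
#include <mutex>

void composite_and_render(const int src_fds[4]); // hypothetical: NvBufferComposite + enqueBuffer

static std::mutex g_fd_mutex;                    // guards latest_fd[]
static int latest_fd[4] = {-1, -1, -1, -1};      // newest dmabuf_fd per camera

// Called from a capture thread when camera `cam` dequeues a new frame.
void on_new_frame(int cam, int dmabuf_fd)
{
    int src_fds[4];
    {
        std::lock_guard<std::mutex> lock(g_fd_mutex);
        latest_fd[cam] = dmabuf_fd;              // update only this camera's slot
        for (int i = 0; i < 4; ++i)
            src_fds[i] = latest_fd[i];           // reuse the other cameras' fds
    }
    // Composite and enqueue to DrmRender immediately, without waiting for
    // the other three cameras to deliver a new frame.
    composite_and_render(src_fds);
}
```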
4 capture threads get the 4 unsynchronized camera frames via the DMA method and pass them to one DrmRender thread;
the DrmRender thread first composites the 4 camera DMA buffers into one output DMA buffer, and then renders it.
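A sketch of what that render-thread loop might look like, assuming an NvDrmRenderer instance, a pre-allocated destination dmabuf_fd, and composite parameters filled in as in the 13_multi_camera sample; keep_running() and wait_for_frames() are hypothetical helpers:

```cpp
#include "nvbuf_utils.h"
#include "NvDrmRenderer.h"

bool keep_running();                  // hypothetical stop flag
void wait_for_frames(int src_fds[4]); // hypothetical: fetch the 4 source fds

void render_loop(NvDrmRenderer *renderer, int dst_fd,
                 NvBufferCompositeParams *composite_params)
{
    while (keep_running())
    {
        int src_fds[4];
        wait_for_frames(src_fds);

        // Composite the 4 source DMA buffers into the single output buffer...
        if (NvBufferComposite(src_fds, dst_fd, composite_params) != 0)
            break;

        // ...then queue it for scanout (a real app would rotate >= 2 dst buffers).
        if (renderer->enqueBuffer(dst_fd) != 0)
            break;
        renderer->dequeBuffer();      // reclaim a previously queued buffer fd
    }
}
```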
It seems like the buffer is not synchronized before NvBufferComposite(). We would suggest allocating at least 4 NvBuffers for each source. And after frame data is captured, please call NvBufferMemSyncForDevice() to make sure the buffer is synchronized.
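A minimal sketch of that suggestion, assuming UYVY sources and the NvBufferCreateEx()/NvBufferMemMap() declarations from nvbuf_utils.h; the ring arrays and function names are hypothetical:

```cpp
#include <cstring>
#include "nvbuf_utils.h"

#define BUFS_PER_SOURCE 4

int   ring_fd[BUFS_PER_SOURCE];      // one ring of NvBuffers per source
void *ring_vaddr[BUFS_PER_SOURCE];   // plane-0 CPU mappings

void alloc_ring(int width, int height)
{
    NvBufferCreateParams params;
    memset(&params, 0, sizeof(params));
    params.width       = width;
    params.height      = height;
    params.layout      = NvBufferLayout_Pitch;
    params.payloadType = NvBufferPayload_SurfArray;
    params.colorFormat = NvBufferColorFormat_UYVY;
    params.nvbuf_tag   = NvBufferTag_NONE;

    for (int i = 0; i < BUFS_PER_SOURCE; ++i)
    {
        NvBufferCreateEx(&ring_fd[i], &params);
        NvBufferMemMap(ring_fd[i], 0, NvBufferMem_Read_Write, &ring_vaddr[i]);
    }
}

// After slot i has been filled by the capture path:
void sync_slot(int i)
{
    // Flush CPU caches so the hardware compositor sees the new frame data.
    NvBufferMemSyncForDevice(ring_fd[i], 0, &ring_vaddr[i]);
}
```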
1. Both NvBufferMemSyncForDevice and NvBufferMemSyncForCpu have been tested:
first VIDIOC_DQBUF (UYVY) -> then NvBufferMemSyncForDevice / NvBufferMemSyncForCpu -> then NvBufferTransform (to ABGR32).
If this were an NvBufferMemSyncForDevice or NvBufferMemSyncForCpu problem, shouldn't it happen all the time? But when there are no moving objects in front of the camera, no crack line appears on the screen.
Just like the video in the first post: the other three cameras were normal, and only the camera with the waving hand showed a crack line on screen.
2. NvBufferSession? How can I use it? Is there any example?
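For reference, a sketch of how the step-1 sequence and an NvBufferSession could fit together, assuming the NvBufferSession/session-field declarations present in recent nvbuf_utils.h releases; everything other than the V4L2 and nvbuf_utils calls is a hypothetical name:

```cpp
#include <cstring>
#include <sys/ioctl.h>
#include <linux/videodev2.h>
#include "nvbuf_utils.h"

// Capture-side sequence: DQBUF -> sync for device -> transform to ABGR32.
void capture_one_frame(int v4l2_fd, int cap_fd, void *cap_vaddr, int rgba_fd,
                       NvBufferSession session)
{
    struct v4l2_buffer v4l2_buf;
    memset(&v4l2_buf, 0, sizeof(v4l2_buf));
    v4l2_buf.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    v4l2_buf.memory = V4L2_MEMORY_DMABUF;
    ioctl(v4l2_fd, VIDIOC_DQBUF, &v4l2_buf);          // 1. dequeue a UYVY frame

    NvBufferMemSyncForDevice(cap_fd, 0, &cap_vaddr);  // 2. flush caches for hardware

    NvBufferTransformParams tp;
    memset(&tp, 0, sizeof(tp));
    tp.transform_flag   = NVBUFFER_TRANSFORM_FILTER;
    tp.transform_filter = NvBufferTransform_Filter_Smart;
    tp.session          = session;                    // per-thread session
    NvBufferTransform(cap_fd, rgba_fd, &tp);          // 3. UYVY -> ABGR32
}
```

Each capture thread would create its own session once with NvBufferSessionCreate() and release it with NvBufferSessionDestroy(), so that transforms and composites issued from different threads are scheduled independently.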
Are your sources USB cameras, or YUV sensors through the CSI ports? The symptom is similar to using multiple USB cameras and hitting a bandwidth constraint. Does it happen if you run 3 sources and do NvBufferComposite()?
For TX2 we have tried a 12-source case in the DeepStream SDK. It is implemented through NvBufferComposite(), so we would think the API should work fine.
The tegra_multimedia_api\samples\13_multi_camera sample code for NvBufferComposite processes the captured video frames serially,
while in my project the cameras are not synchronized: 4 capture threads get the 4 unsynchronized camera frames via the DMA method and pass them to one DrmRender thread.
If I wait for all cameras' dmabuf_fds to update, just like the serial way, there is also no crack line while things move.
The key point is calling NvBufferComposite while things move and part of the src_dmabuf_fds is reused.
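A sketch of the wait-for-all variant the post describes (the mode that shows no tear line); this is plain C++ with hypothetical names, not the thread's actual code:

```cpp
#include <condition_variable>
#include <mutex>

static std::mutex              g_mtx;
static std::condition_variable g_cv;
static bool fresh[4]     = {false, false, false, false};
static int  latest_fd[4] = {-1, -1, -1, -1};

void publish_frame(int cam, int dmabuf_fd)  // called by each capture thread
{
    std::lock_guard<std::mutex> lock(g_mtx);
    latest_fd[cam] = dmabuf_fd;
    fresh[cam] = true;
    g_cv.notify_one();
}

void wait_for_all(int src_fds[4])           // called by the render thread
{
    std::unique_lock<std::mutex> lock(g_mtx);
    g_cv.wait(lock, [] {
        return fresh[0] && fresh[1] && fresh[2] && fresh[3];
    });
    for (int i = 0; i < 4; ++i)
    {
        src_fds[i] = latest_fd[i];
        fresh[i]   = false;  // consume; the earliest camera now waits again
    }
}
```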
The steps are to run the hardware converter at max clock. If you still observe the issue, we would think it is some issue in the source capturing frame data, where the buffers are not well synchronized. But if you feel it is something wrong in our API, please make a patch to 13_multi_camera that demonstrates the case so that we can give it a try.
The max clock has already been set.
Why do you keep treating this as a synchronization issue? NvBufferMemSyncForDevice is already called!
All my tests point to a bug in the NvBufferComposite API.
I can accept the delay of waiting for all 4 cameras to update,
but the NvBufferComposite bug when compositing during motion while reusing part of src_dmabuf_fds does exist!
So can you explain why the static case looks normal?
Why, when there are no moving objects in front of the camera, does no crack line appear on the screen?
My code has already called NvBufferMemSyncForDevice; why do you still think it is a synchronization problem?
All my code is in post #7, and it is clear: my capture thread calls NvBufferMemSyncForDevice for synchronization. Is that wrong? Could the call have failed? The return value of NvBufferMemSyncForDevice is always zero (success).
Is it possible that pThis->n_CompositeDmabufs[j] is not correctly set? In the sample, each source allocates 3 NvBuffers. For two sources there are 6 NvBuffers: source0_0, source0_1, source0_2, source1_0, source1_1, source1_2. Ideally, if the two sources have identical frame rates, the buffers would rotate in lockstep, e.g. composite(source0_0, source1_0), composite(source0_1, source1_1), composite(source0_2, source1_2), and then back to (source0_0, source1_0).
Also, NvBufferComposite() is done in the rendering thread. Maybe you can try doing NvBufferComposite() right after each source has finished capturing and called NvBufferMemSyncForDevice(), and then send the composited NvBuffer to the rendering thread.
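A sketch of that restructure, with the composite moved into the capture path; every helper here (mapped_vaddr(), acquire_free_composite_fd(), collect_current_fds(), send_to_render_thread(), g_composite_params) is a hypothetical name standing in for the application's own plumbing:

```cpp
#include "nvbuf_utils.h"

// Hypothetical helpers, defined elsewhere in the application:
void *mapped_vaddr(int cam, int fd);
int   acquire_free_composite_fd();
void  collect_current_fds(int cam, int fd, int out[4]);
void  send_to_render_thread(int fd);
extern NvBufferCompositeParams g_composite_params;

void on_frame_captured(int cam, int cap_fd)
{
    // 1. Make the freshly written frame visible to the hardware compositor.
    void *vaddr = mapped_vaddr(cam, cap_fd);
    NvBufferMemSyncForDevice(cap_fd, 0, &vaddr);

    // 2. Composite immediately, while this thread still owns the source fds.
    int dst_fd = acquire_free_composite_fd();
    int src_fds[4];
    collect_current_fds(cam, cap_fd, src_fds);
    NvBufferComposite(src_fds, dst_fd, &g_composite_params);

    // 3. Only the composited fd crosses the thread boundary to the renderer.
    send_to_render_thread(dst_fd);
}
```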