We capture 8 PAL videos (UYVY) from the CSI interface, and in user space we receive them as a single merged frame.
Before sending these videos to the display we have to separate the merged frame into its individual channels,
deinterlace, and convert YUV to RGB. The CPU is about 96% busy on each core,
and the GPU is also about 90% busy with the render modules before display.
In our application we use the OpenMP libs and OpenCV for the RGB conversion and deinterlacing, and OpenGL for rendering.
We want to lower the CPU load. Are there any documents about this area?
How should we distribute all this work across the different cores for best performance?
Can DMA be used to copy data in user space on the TX1 board?