I want to record eight 720p (30 fps) video streams at the same time, one from each of 8 cameras.
My current data flow is:
i) capture YUYV frames from each camera via V4L2
ii) convert YUYV to YUV420 with the L4T Multimedia API NvVideoConverter (is this implemented on the GPU?)
iii) encode YUV420 to H.265 with the L4T Multimedia API NvVideoEncoder
iv) in the encoder callback, write the bitstream to the recording file
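For clarity about what step ii does to the data (NvVideoConverter does this in hardware; this is only a CPU reference sketch I wrote to show the layout change, with chroma taken from even rows only, no filtering):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Convert packed YUYV (4:2:2) to planar YUV420 (I420).
// YUYV stores Y0 U Y1 V for each pair of pixels (2 bytes/pixel);
// I420 stores a full-res Y plane followed by quarter-res U and V planes.
void yuyv_to_i420(const uint8_t* yuyv, int width, int height,
                  uint8_t* y, uint8_t* u, uint8_t* v)
{
    for (int row = 0; row < height; ++row) {
        const uint8_t* src = yuyv + row * width * 2;  // 2 bytes per pixel
        for (int col = 0; col < width; col += 2) {
            y[row * width + col]     = src[col * 2 + 0];  // Y0
            y[row * width + col + 1] = src[col * 2 + 2];  // Y1
            if ((row & 1) == 0) {  // keep chroma of even rows only
                int ci = (row / 2) * (width / 2) + col / 2;
                u[ci] = src[col * 2 + 1];
                v[ci] = src[col * 2 + 3];
            }
        }
    }
}
```

For 1280×720 this turns a 1,843,200-byte YUYV frame into a 1,382,400-byte I420 frame (921,600 B of Y plus 230,400 B each of U and V).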
With 4 or fewer cameras the TX1 handles this fine: the frame rate stays stable at 30 fps, and the encode time is about 10 ms per frame
(measured as dqBuffer + filling the YUV data into the NvBuffer + qBuffer).
But when the camera count increases to 6, the encode time becomes unstable and sometimes reaches 40 ms
(measured the same way: dqBuffer + fill + qBuffer).
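The per-frame times above are just a wall-clock window around the three calls; a minimal sketch of how I measure it with std::chrono, where `encode_one_frame` is a hypothetical stand-in for the real dqBuffer/memcpy/qBuffer sequence (it only sleeps here to simulate work):

```cpp
#include <chrono>
#include <thread>

// Hypothetical stand-in for one encoder feed step:
// dqBuffer an empty NvBuffer, memcpy the YUV planes in, qBuffer it back.
static void encode_one_frame()
{
    std::this_thread::sleep_for(std::chrono::milliseconds(5));  // simulated work
}

// Wall-clock time of one feed step, in milliseconds.
double measure_encode_ms()
{
    auto t0 = std::chrono::steady_clock::now();
    encode_one_frame();
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(t1 - t0).count();
}
```

At 30 fps the whole per-camera budget is 33.3 ms per frame, so a 40 ms feed step alone already forces dropped frames.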
It is even worse with all 8 cameras running at the same time;
as a result, the frame rate cannot hold 30 fps.
So I wonder if there is a better scheme for video recording —
ideally one that reduces the number of memcpys between the ARM CPU and the video processing hardware or GPU.
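To put rough numbers on why I suspect the memcpys (my own back-of-envelope arithmetic, not measured): YUYV is 2 bytes/pixel and YUV420 is 1.5 bytes/pixel, so every extra CPU copy per frame costs hundreds of MB/s of memory traffic in aggregate at 8 cameras:

```cpp
#include <cassert>
#include <cstdint>

// Back-of-envelope frame sizes (bytes).
constexpr int64_t yuyv_frame(int64_t w, int64_t h) { return w * h * 2; }     // 4:2:2 packed
constexpr int64_t i420_frame(int64_t w, int64_t h) { return w * h * 3 / 2; } // 4:2:0 planar

// Aggregate bytes/s for ONE CPU copy of every frame, across all cameras.
constexpr int64_t copy_rate(int64_t frame_bytes, int64_t fps, int64_t cams)
{
    return frame_bytes * fps * cams;
}

// 720p: YUYV = 1,843,200 B/frame, I420 = 1,382,400 B/frame.
// 8 cameras @ 30 fps: one YUYV copy is ~442 MB/s and one I420 copy
// ~332 MB/s, so a capture-side copy plus the "fill yuv data into
// NvBuffer" copy together approach 800 MB/s of pure memcpy traffic.
```

That is why I am hoping for a zero-copy or fewer-copy path rather than faster per-copy code.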