Sure, our use case is recording synchronized stereoscopic videos of traffic situations. The background is essentially static, so standard video compression already works quite well. However, the left and right cameras also see a very similar scene, so multi-view coding would ideally yield 2x compression. That would mean storing and transmitting only half of the data, ultimately reducing cost and improving performance.
Hi,
So you would expect the left and right frames to be cross-referenced during encoding to achieve a better compression ratio. Is this correct? We would like to confirm this so that we can check whether our MVC works this way.
Yes, exactly, that is the idea. There is an illustration in the German Wikipedia article on Multiview Video Coding (MVC):
Imagine we only have Cam 1 and Cam 2. As users we don't need to worry about the I/B/P-frame patterns; we just need the higher compression ratio. Even a more basic implementation would already be great for us.
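To make the idea concrete, here is a minimal toy sketch (not an actual MVC encoder) of inter-view prediction in Python with NumPy: the right view is predicted from a disparity-shifted copy of the left view, so only the residual needs to be stored. The frame size and disparity value are made-up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 8-bit grayscale frames: the right view is the left view
# shifted horizontally by a small disparity, as in a stereo rig.
left = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
disparity = 2  # assumed constant disparity, purely illustrative
right = np.roll(left, -disparity, axis=1)

# Inter-view prediction: use the disparity-compensated left frame as the
# predictor for the right frame and keep only the residual (difference).
predictor = np.roll(left, -disparity, axis=1)
residual = right.astype(np.int16) - predictor.astype(np.int16)

# With a perfect disparity estimate the residual is all zeros and
# compresses to almost nothing; real footage leaves a small residual.
print(np.abs(residual).max())  # → 0 for this synthetic example
```

This is only meant to show why similar left/right views allow roughly halving the data: real MVC additionally does block-wise disparity estimation and combines inter-view prediction with the usual temporal I/B/P prediction.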