We have not yet implemented the real-time audio decoding part; according to the design, it should use GStreamer with the ALSA interface.
To achieve audio/video synchronization, my design is as follows:
1. We need a reference timer. The audio timestamp can serve as this reference time, because audio packets arrive over the network relatively smoothly. When the audio is played through ALSA, we save its timestamp as the reference time.
2. Each subsequent time audio is played, we compare the reference timer with the timestamp of the audio packet. If the difference is small, we do not adjust the timer; if the difference is large, we re-adjust the timer so that it matches the audio timestamp exactly.
3. Because the reference timer is driven by the audio timestamps, audio packets are decoded and played immediately on arrival, without any delay or buffering.
4. When a video packet arrives, we use the reference timer to synchronize it. Each time we are about to call render(buffer->planes.fd), we compare the video frame's timestamp with the reference time:
   - If the frame is in the future, we wait for a while before rendering.
   - If the frame is in the past, we render it immediately.
   - If it is too far in the past, we discard it.
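The steps above can be sketched roughly like this. All the threshold values and helper names here are my own illustrative assumptions, not part of any real API; the actual numbers would need tuning for the stream:

```cpp
#include <cstdint>
#include <cstdlib>

// Illustrative thresholds (assumptions, tune for your stream).
constexpr int64_t kClockResyncUs = 50000;   // step 2: re-seat timer beyond this drift
constexpr int64_t kWaitUs        = 5000;    // step 4: frame is in the future, wait
constexpr int64_t kDiscardUs     = 100000;  // step 4: frame is too old, discard

// Steps 1-2: the reference timer follows the audio timestamps. It normally
// free-runs; it is only re-seated when it drifts too far from the audio PTS.
int64_t update_reference_time(int64_t reference_us, int64_t audio_pts_us)
{
    if (std::llabs(audio_pts_us - reference_us) > kClockResyncUs)
        return audio_pts_us;   // large difference: snap the timer to the audio PTS
    return reference_us;       // small difference: leave the timer alone
}

enum class Action { Render, Wait, Discard };

// Step 4: decide what to do with a decoded video frame before calling
// render(buffer->planes.fd).
Action classify_frame(int64_t video_pts_us, int64_t reference_us)
{
    const int64_t diff = video_pts_us - reference_us;
    if (diff > kWaitUs)     return Action::Wait;     // future frame: wait, then render
    if (diff < -kDiscardUs) return Action::Discard;  // long past: drop it
    return Action::Render;                           // on time or slightly late: render now
}
```

The audio thread would call update_reference_time on every played packet, and the video thread would call classify_frame just before rendering.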
If MMAPI provided a video/audio synchronization interface, I would like to set the reference timer directly through it, and then simply pass the video timestamp to the video decoder or renderer; MMAPI would then automatically sync each video frame to the reference time for me.
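Concretely, I imagine an interface along these lines. This is purely a hypothetical sketch of what I am asking for; setReferenceTime and queueFrame do not exist in MMAPI today:

```cpp
#include <cstdint>

// Hypothetical interface sketch only -- these names are NOT real MMAPI calls.
// The application feeds audio PTS in as the reference clock and simply queues
// video frames; the renderer waits, renders, or discards them internally.
class SyncedRenderer
{
public:
    virtual ~SyncedRenderer() = default;

    // Called from the audio path each time an audio packet is played.
    virtual void setReferenceTime(int64_t audio_pts_us) = 0;

    // Called from the video path with the decoded buffer's dmabuf fd and PTS;
    // the renderer schedules it against the reference clock on its own.
    virtual void queueFrame(int dmabuf_fd, int64_t video_pts_us) = 0;
};
```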
There may well be a better way to synchronize; I am just sharing my own approach.