I want to make sure I correctly pair a frame with its audio chunk when I pull elements from qAudioShared and qVideoShared. How can I achieve this? PTS seems to be useless here: it is the same for both audio and video even when they drift out of sync. If I run a CPU stress test during runtime, the video becomes desynchronized (if I write 30 seconds of content to disk, the video actually speeds up, probably because of dropped frames). I don't understand why PTS doesn't help me here. Any help is appreciated.
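For context, this is roughly how I pull from the two queues and try to match buffers by PTS (a minimal sketch; the callback and the tolerance value are simplified placeholders, not my exact code):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Hypothetical tolerance: treat buffers within half a 30 fps frame as a match.
SYNC_TOLERANCE_NS = Gst.SECOND // 60

def on_new_sample(sink, shared_queue):
    """appsink 'new-sample' callback: push (pts_ns, data) into a shared queue."""
    sample = sink.emit("pull-sample")
    buf = sample.get_buffer()
    ok, info = buf.map(Gst.MapFlags.READ)
    if ok:
        shared_queue.put((buf.pts, bytes(info.data)))
        buf.unmap(info)
    return Gst.FlowReturn.OK

def is_matching_pair(video_item, audio_item):
    """Pair a video frame with an audio chunk if their PTS are close enough."""
    return abs(video_item[0] - audio_item[0]) <= SYNC_TOLERANCE_NS
```

The problem is that is_matching_pair always succeeds, because the PTS values stay identical even while the output is visibly out of sync.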
Thank you for the response. I tried adding do-timestamp, but I get the same result: the PTS values are identical for video and audio, even though the video is dropping frames and becoming desynchronized.
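For reference, this is where I set it (a sketch; v4l2src is only an example source, and my real pipeline string is longer):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# do-timestamp is a GstBaseSrc property, so it applies to most live sources.
# v4l2src is just an example here; my real pipeline has more elements.
pipeline = Gst.parse_launch(
    "v4l2src do-timestamp=true ! videoconvert ! video/x-raw,format=RGB "
    "! appsink name=vsink emit-signals=true"
)
```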
The first one uses raw RGB, and it shows the same issue as my pipeline. The H.264 one runs perfectly, though. Maybe I need to write H.264 to the appsink and convert it to RGB in Python? I'm not sure how to proceed further; a rough sketch of what I mean is below.
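The direction I have in mind is to keep the pipeline compressed end-to-end and only hand encoded buffers to Python (a rough, unverified sketch using the Jetson encoder elements; decoding the H.264 buffers frame-by-frame in Python is the part I'm unsure about):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Rough sketch: encode on the hardware encoder (Jetson nvv4l2h264enc) so the
# appsink receives compressed H.264 instead of raw RGB, and no CPU-side
# videoconvert to RGB runs inside the pipeline.
pipeline = Gst.parse_launch(
    "v4l2src do-timestamp=true ! nvvidconv ! video/x-raw(memory:NVMM) "
    "! nvv4l2h264enc ! h264parse ! appsink name=vsink emit-signals=true"
)
```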
That is possible, since RGB data is processed on the CPU, and CPU capability may dominate performance. Please execute sudo jetson_clocks to run the CPU cores at their maximum clock, and see if that achieves the target performance.
Thank you so much for your help; your idea of writing to a file helped me arrive at the RGB conclusion. Now I use turbojpeg to decode to RGB in Python, and the desync on the GStreamer side seems to be gone. We can close this.
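In case it helps anyone else, the decode step is roughly this (a sketch assuming the PyTurboJPEG binding and a pipeline that delivers JPEG frames to the appsink, e.g. through jpegenc or nvjpegenc):

```python
from turbojpeg import TurboJPEG, TJPF_RGB

# One TurboJPEG instance can be reused for every frame.
jpeg = TurboJPEG()

def jpeg_to_rgb(jpeg_bytes):
    """Decode one JPEG frame pulled from the appsink into an RGB numpy array."""
    return jpeg.decode(jpeg_bytes, pixel_format=TJPF_RGB)
```

With the heavy RGB conversion moved out of the pipeline, the buffers keep up and the audio/video pairing works.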