Audio2Face audio sync issues with output data

I have some questions about validating audio sync using Audio2Face outputs prior to using the data in other DCCs.

Specifically, I’d like to be able to produce a movie file from Audio2Face results (even just for the default Mark character) with the audio stitched into the resulting mp4. Currently I am able to export only USD geometry caches or JSON blendShape animation keys at 24 FPS.

I’ve tried using the Machinima sequencer to combine the Audio2Face USD geometry cache with the audio file, then rendering with Movie Capture, but that also seems to produce a movie without stitched audio.
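
As a stopgap, I’ve been muxing the source audio back into the silent Movie Capture output with ffmpeg outside of Omniverse. A minimal sketch, assuming ffmpeg is on the PATH (the file names are placeholders):

```python
import subprocess

# Placeholder paths: the silent Movie Capture output and the original
# source audio that was fed to Audio2Face.
video_path = "a2f_capture.mp4"
audio_path = "source_dialogue.wav"
output_path = "a2f_capture_with_audio.mp4"

# Copy the video stream untouched, encode the WAV to AAC, and stop at
# the shorter of the two streams so a trailing stream doesn't pad the cut.
subprocess.run(
    [
        "ffmpeg", "-y",
        "-i", video_path,
        "-i", audio_path,
        "-c:v", "copy",
        "-c:a", "aac",
        "-shortest",
        output_path,
    ],
    check=True,
)
```

That gives me something to eyeball, but it assumes the capture starts exactly at the first audio sample, which is part of what I want to validate in the first place.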

To use Audio2Face workflows in production, I need to validate the audio sync of the results before exporting data for use on rigs in Maya.
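
A crude first-pass check I can already script is comparing the overall audio duration against the exported frame count at 24 FPS; it won’t catch per-viseme drift, which is why I still want a movie with stitched audio. A minimal standard-library sketch (the file name and frame count are placeholders):

```python
import wave

FPS = 24.0

def check_duration(audio_path: str, num_frames: int) -> None:
    """Compare the audio clip length against the exported animation length."""
    with wave.open(audio_path, "rb") as wav:
        audio_duration = wav.getnframes() / wav.getframerate()
    anim_duration = num_frames / FPS
    drift = anim_duration - audio_duration
    print(f"audio {audio_duration:.3f}s | anim {anim_duration:.3f}s | drift {drift:+.3f}s")
    if abs(drift) > 1.0 / FPS:  # flag anything worse than one frame
        print("WARNING: animation and audio lengths differ by more than one frame")

# num_frames would come from the exported JSON key count; 240 is a placeholder.
check_duration("source_dialogue.wav", 240)
```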

I have successfully driven custom character blendShape solves in Audio2Face, exported the animation curve data to a JSON file, and imported that JSON back into Maya on rigs. This works, but sync validation is the missing piece: the results do feel slightly off at times.
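
For reference, this is roughly how I apply the curves on the Maya side. It’s only a sketch, assuming the schema my exports contain (a "facsNames" list of pose names and a "weightMat" list of per-frame weight rows); the key names may differ by Audio2Face version:

```python
import json
import maya.cmds as cmds

def import_a2f_weights(json_path, blendshape_node):
    """Key exported Audio2Face weight curves onto a Maya blendShape node.

    Assumes the schema my exports contain: a "facsNames" list of pose
    names and a "weightMat" list of per-frame weight rows, and that the
    Maya scene is already set to 24 FPS so frame indices map 1:1.
    """
    with open(json_path) as f:
        data = json.load(f)

    pose_names = data["facsNames"]
    weight_rows = data["weightMat"]

    for frame_index, row in enumerate(weight_rows):
        for pose_name, weight in zip(pose_names, row):
            # Each pose name is expected to match a blendShape target alias.
            cmds.setKeyframe(f"{blendshape_node}.{pose_name}",
                             time=frame_index, value=weight)

# Example call with placeholder names:
# import_a2f_weights("shot010_a2f.json", "charFace_BS")
```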

Are there any feature updates coming that might address this, or do you have any workflow suggestions to achieve this?