Hi @sdrnvirtual , thank you for your interest in Audio2Face and for testing it.
For now, the output is a cache, so you can embed it alongside your rig, but there is currently no easy way to retarget the animation onto your own rig. There are many variables here: everyone uses a different face rig, and we can't guarantee the quality would match A2F's output, since that depends on whether your rig setup can reproduce the same motion and detail.
What you can do now is place the cache at the front of the deformation chain (like a front-of-chain blendshape), ahead of your face rig. You can then use your face rig on top to modify or augment the performance to a certain degree. There are obvious limits to this approach and to how much you can change or tweak on top, but it has been tested and can work.
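To make the "front of chain" idea concrete, here is a minimal sketch of the layering order, using NumPy on raw vertex deltas. This is purely illustrative and not the Audio2Face or any DCC API; the function and variable names are hypothetical. The point is just that the cache's deltas are applied to the neutral mesh first, and the rig's deformation is evaluated on top of that result.

```python
import numpy as np

def apply_front_of_chain(neutral, cache_delta, rig_delta, rig_weight=1.0):
    """Hypothetical sketch: layer an A2F cache under rig corrections.

    neutral     : (N, 3) neutral mesh vertex positions
    cache_delta : (N, 3) per-frame offsets baked in the A2F cache
    rig_delta   : (N, 3) offsets produced by the artist's face rig
    """
    # Cache drives the base performance; the rig adds/subtracts on top.
    return neutral + cache_delta + rig_weight * rig_delta

# Toy example: the cache opens the jaw, the rig dials it back slightly.
neutral = np.zeros((4, 3))
cache = np.tile([0.0, 0.10, 0.0], (4, 1))
rig = np.tile([0.0, -0.02, 0.0], (4, 1))

final = apply_front_of_chain(neutral, cache, rig)
print(final[0])  # cache motion preserved, softened by the rig tweak
```

Because the rig only adds offsets on top of an already-baked performance, it can soften or exaggerate the cached motion but cannot fundamentally re-pose it, which is the limitation described above.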
We have plans to support blendshapes and rig retargeting (if your rig controls use scalar attributes) in upcoming releases. Stay tuned.