Hi,
So I’m following this method as documented by Advanced Skeleton, on a rig originally built with that tool.
I am currently running into an issue: I can bring in the cache from the demo audio files supplied with Audio2Face just fine. However, when I bring in my own audio file, Audio2Face computes it and gives me a decent output within the app, but if I export it (using the exact same method as before), it has no animation when imported into Maya.
I wonder whether I’m missing something obvious, like needing to re-bake the animation within A2F before bringing it into Maya?
As I say, all the demo files seem to work just fine; it’s just new files that don’t. I’ve spoken to the Advanced Skeleton team and they reckon the issue is not on their end… I’d love to hear your take on what could be going wrong.
Hi @oskdan1, this issue is unusual and difficult to diagnose without the files. If the audio file was generating animation in A2F, then there is no reason that wouldn’t export. Have you confirmed that the cache you exported contains the data you expect? Can you load the cache back onto the character in Create or Audio2Face and see the result you expect?
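If it helps, one quick way to check what actually landed in the exported cache (a rough sketch, assuming the export is a USD file and you have the pxr Python bindings available; the path below is just a placeholder) is to count the time samples on each mesh’s points attribute:

```python
# Rough sketch: check whether a USD cache actually contains animated points.
# Assumes the pxr Python bindings (usd-core) are available; the path is a placeholder.
from pxr import Usd, UsdGeom

CACHE_PATH = "my_custom_audio_cache.usd"  # replace with your exported cache

stage = Usd.Stage.Open(CACHE_PATH)
print("Stage time range:", stage.GetStartTimeCode(), "-", stage.GetEndTimeCode())

for prim in stage.Traverse():
    if prim.IsA(UsdGeom.Mesh):
        points = UsdGeom.Mesh(prim).GetPointsAttr()
        # More than one time sample on 'points' means per-frame deformation was written.
        print(prim.GetPath(), "point time samples:", points.GetNumTimeSamples())
```

If the cache from your custom audio reports 0 or 1 time samples while the demo cache reports one per frame, the problem is on the export side rather than in Maya.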
If you exported using the Select Meshes option, was the correct mesh selected?
In Maya you are referencing demo.usda. Have you tried exporting the Mark mesh and applying the cache directly to that? Do you see the expected result that way?
So the method I’ve found to work is to export the USD geo cache of the base mesh as well as the Maya cache data. Then I’m importing into Maya directly off my USD geo cache file, not the demo file, as I never got this technique to work on the demo file it seems to reference as a base…
If I bring the cache into a clean Maya scene, the demo facial anims work just fine and the face pops up with the appropriate animation applied, but with my custom sound file it doesn’t seem to work.
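For reference, the import step I’m doing is roughly along these lines (just a sketch of my setup, assuming the mayaUsd plug-in; the file name is a placeholder):

```python
# Rough sketch of my import step in the Maya script editor.
# Assumes the mayaUsd plug-in is installed; the file name is a placeholder.
import maya.cmds as cmds

cmds.loadPlugin("mayaUsdPlugin", quiet=True)

# readAnimData=True asks the importer to read the per-frame samples from the cache,
# otherwise only the static mesh comes through.
cmds.mayaUSDImport(file="my_custom_audio_cache.usd", readAnimData=True)
```

The demo caches come through animated this way, so I don’t think the import settings themselves are the problem.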
I could try packaging up the data and sending it on to you for testing, if that helps?