Does Audio2Face need tags to set up multiple languages? Say a character switches from English to Japanese in the same sentence.
No. It is sound based, so there are no tags, phonemes, or anything like that.
Currently the trained model can handle different languages, but in our experience it does Latin-based languages better. We have plans to improve other languages in the near future.
OK, thank you. I tried swapping languages multiple times mid-sentence and it didn't break.
Which language is the least reproducible? I don't want a ticking time bomb.
I have only tried languages I can speak. I can't speak Chinese or Korean, so I am concerned about those.
p.s. You can get some hilarious results when the audience claps during speeches. Not useful and really easy to deal with, but seeing your character spaz out during a serious statement is just hilarious!
Please post video of hilarious results! :D
No worries, I kind of went all out on the stupid tests.
Here we have Bohemian Rhapsody.
Recorded from this.
And the results from Audio2Face.
Found an error in the lips that Mike and the SDF model have.
Note: I am actively trying to break Audio2Face. There is no malice in this; I just want to know how far it can be pushed.
If I could solo each of the YouTubers' a cappella tracks and compare them, that would be a better real-world test. But as a random "how can I break A2F" experiment, yeah, I think I broke it. It did not fail outright, though; it carried on like a champ!
What went wrong: compared to the ground truth, it cannot reproduce a similar performance. Does this matter? I do not know.
It does produce a performance that stays within bounds and does not break itself, so you can be certain it will not do something strange. However, its range is limited. This can probably be fixed with the emotion flags update.