Hi guys, I hope you’ve heard of the open-source 3D character creator called “Makehuman”.
For a long time I’ve been trying to figure out how to use Audio2Face facial animation on a Makehuman character in Unity, and after a lot of research I found nothing. This forum is my last hope: do you guys know how I can use Audio2Face facial animation on a Makehuman character in Unity?
Thanks a lot.
Hello @zanarevshatyan! I encourage you to join our Discord Community at discord.gg/nvidiaomniverse. We have thousands of people who can help answer your questions as you embark on your creative journey!
Omniverse does not have a Unity Connector yet, but we are working on one and it will be released very soon! (Sorry, I don’t have any dates yet)
It sounds like you have made a character in Makehuman and you want to use Audio2Face to make blendshapes that can be used inside Unity? I don’t know much about Makehuman, but I do know it works with Blender. We have an Alpha Blender Branch available on the Omniverse Launcher that you can install. You can try importing your character into Blender, then bringing that into Omniverse, which will create the USD files you need for our Audio2Face application.
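If you would rather script that Blender step, something along these lines should work from Blender’s Python console. This is just a rough sketch: the file paths are placeholders, and the exact USD export options vary between Blender versions.

```python
import bpy

# Import the character exported from Makehuman (FBX is one of its export formats).
bpy.ops.import_scene.fbx(filepath="/path/to/my_character.fbx")

# Write the scene out as USD so it can be opened in Omniverse / Audio2Face.
bpy.ops.wm.usd_export(filepath="/path/to/my_character.usd")
```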
You can check out our tutorials available on the Omniverse Launcher under the Learn tab for some assistance in creating your blendshapes.
Unity does import USD files, so hopefully you can use the files that Audio2Face creates inside of Unity. Here is a link to their documentation: Unity - Manual: USD
You may also be able to find help with this in the 8K member Virtual Beings Facebook group.
Thanks for the detailed response, still waiting for the Unity Connector to make my life easier :D
I’ve also read this topic about exporting a character from Audio2Face to Unity: Unity Integration: Omniverse Audio2Face
But it was about the older “no eyes” version; the new full-face version has quite different export tabs, and I’m confused about how to export the character’s facial animation from Audio2Face to Unity, or at least to Blender.
Also, I’m a little confused about those blendshapes. Do I understand correctly that I need to attach Audio2Face’s auto-generated blendshapes to my character first, and only then apply the facial animation data? I just exported a USD with my character from Audio2Face to Blender and it was about 400 MB, which is a really huge file, while another export tab gave me a USD of just a few MB. Did I get it right that the 400 MB USD most likely contains all the heavy-topology blendshapes that I need to apply to my full-body character in Blender, and the few-MB file is the animation data? I’m getting a bit scared about file sizes: my game will have a lot of dialogue (I’ve already written the first episode), and I hope every dialogue won’t weigh about 400 MB :)
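I guess I could check what actually lives inside each USD with the USD Python API (the usd-core package). Something like this is just my guess at how to do it, with a placeholder file path:

```python
from pxr import Usd, UsdSkel

# Open an exported file and list blendshape vs. animation prims,
# to see which file carries the heavy mesh data.
stage = Usd.Stage.Open("/path/to/a2f_export.usd")

for prim in stage.Traverse():
    if prim.IsA(UsdSkel.BlendShape):
        print("blendshape:", prim.GetPath())
    elif prim.IsA(UsdSkel.Animation):
        print("animation:", prim.GetPath())
```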
There must be an animation data file that I can apply to my blendshaped character, I believe in it :)
If so, how can I do all of this in Blender?
- How can I apply the blendshapes Audio2Face generated for my character’s head to my full-body character in Blender?
- How can I apply the animation data, which needs to be a small file, to my already-blendshaped full-body character in Blender? (I sketched below roughly what I imagine.)
- How do I deal with the eye and lower-mouth animation in Blender?
- Does Audio2Face generate blendshapes only from the static face mesh, or does it generate them based on the audio file?
- Can Audio2Face export the blendshapes and the animation data separately?
I really hope I won’t need a separate 400 MB USD file for every facial animation or audio file.
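Just to show my mental model of how a small per-dialogue file would apply in Blender, here is a rough sketch; the object name, shape key name, and weight values are all made up by me, not real Audio2Face output:

```python
import bpy

# The full-body character already has the Audio2Face blendshapes
# attached once as shape keys; "Body" and "jawOpen" are made-up names.
obj = bpy.data.objects["Body"]
key = obj.data.shape_keys.key_blocks["jawOpen"]

# A per-dialogue file would then only need (frame, weight) pairs
# like these, which is tiny compared to the mesh data itself.
for frame, value in [(1, 0.0), (5, 0.8), (10, 0.2)]:
    key.value = value
    key.keyframe_insert(data_path="value", frame=frame)
```

If that is how it works, each dialogue would just be a set of keyframed weights, right?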
Really sorry for this long post, but I’m struggling to understand this magic program your team created; it’s not as easy as I thought when I was starting out.