Hi, I’m using a MakeHuman (not MetaHuman) character and hoping to do facial animation for it. It’s hard for me to understand the right way to prepare my character in Blender for export to Audio2Face. I’ve managed to separate the objects that need to be separated, such as the eyes, eyebrows, and teeth, and set their origins at the center of each mesh. For export to Audio2Face I’m using the USD branch of Blender from NVIDIA Omniverse. The issue is that I can’t export the character’s head with the skin material, perhaps simply because I don’t understand how; maybe there are some checkboxes I need to enable in the export settings of the USD branch in Blender? Here is the solid-colored character compared with the skinned one in Blender.
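For reference, the material-related export options can also be set explicitly from Blender’s Python console. This is only a sketch assuming a build that exposes `bpy.ops.wm.usd_export` (as mainline Blender 3.x+ does); the NVIDIA Omniverse USD branch may expose different option names, so check its export dialog for the equivalents:

```python
# Hedged sketch: headless USD export with materials and textures enabled.
# Option names below are from mainline Blender's USD exporter and may
# differ in the Omniverse USD branch.
import bpy

bpy.ops.wm.usd_export(
    filepath="//character.usd",
    selected_objects_only=True,     # export only the selected head/eye/teeth meshes
    export_materials=True,          # write material bindings (the missing skin)
    export_textures=True,           # copy image textures alongside the .usd
    generate_preview_surface=True,  # convert Principled BSDF to UsdPreviewSurface
)
```

If the skin still comes in solid-colored, the usual culprit is a material that isn’t a plain Principled BSDF with image textures, since procedural node setups generally can’t be converted to UsdPreviewSurface.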
Hello @zanarevshatyan! I’ve shared your post with the dev team for further assistance.
I recommend that you join our Discord server at discord.gg/nvidiaomniverse, where we have thousands of members, industry professionals, and developers (including the developer of the MakeHuman Omniverse extension) ready to assist you with any questions you may have!
This guy from Brazil is my friend; he’s working on my game. I’ve watched this video, but it’s for the older, no-eyes version of Audio2Face. As you can see in the video, he also doesn’t have a material on his character when he imports it into Audio2Face. I also need a workflow for eye, mouth, and tongue animation when exporting to Unity. Any news on Unity export? Thanks.