VRoid Studio is free software from Pixiv for creating 3D characters. You can export a character as a VRM file (which is really a GLB file that follows particular conventions). I renamed the .vrm file to .glb, converted it to USD, and loaded it into Audio2Face to see how it would go.
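The rename step can be scripted if you have a batch of characters. A minimal Python sketch (the file name is just an example; VRM really is GLB under the hood, so a rename is all that's needed before conversion):

```python
from pathlib import Path

def vrm_to_glb(vrm_path: str) -> Path:
    """Rename a .vrm file to .glb (VRM is GLB plus extra conventions)."""
    src = Path(vrm_path)
    dst = src.with_suffix(".glb")
    src.rename(dst)
    return dst
```

After this, the .glb can be fed to whatever GLB-to-USD converter you are using.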
Here is the structure it imported. I had to pick a mesh for Audio2Face, so I used “Face_baked”.
(VRoid Studio has several meshes in the face for things like eye lashes, the eyeballs, the inside of the mouth, etc.)
But I got an Audio2Face error about needing to split the mesh.
So I selected the mesh and used Mesh / Mesh Separate from the menu, but that failed with a stack trace.
Any suggestions for how to get around this problem? Is it a bug that I should report somewhere?
Yes, Audio2Face expects one continuous mesh. As suggested in the red message, you can select the head and split it using the
Mesh -> Mesh Separate menu.
Here’s a tutorial on a relatively complex asset from NVIDIA:
Camila Asset Pt 1: Overview with Omniverse Audio2Face and Reallusion Character Creator 4 - YouTube
Thanks for the reply, I am watching the videos now, but the problem I was hitting is that the split operation from the menu crashes with a stack trace (I included a screenshot of the stack trace in the original post).
Oh, sorry I missed that part of your message.
Could you kindly share your geometry so we can test it?
In the meantime, as a workaround you can use a DCC application to separate the face.
Thank you! I uploaded one of the default VRoid Studio starter models here:
I exported it from VRoid Studio as a VRM file, then changed the file extension to .glb. I can then load it directly into Audio2Face (I had to adjust the scale factor to x100 for some reason), and it works: you can adjust joint rotations etc. by hand.
One of the reasons I use VRoid Studio is that it means I can create characters without learning DCC tools! ;-) You create characters with sliders and a 2D paint program - pretty easy. I might have a look at the USD APIs to see if I can split the mesh via code (which I would have to do later anyway). But I am still exploring, trying to work out if I can get NVIDIA Omniverse to do what I want, which includes things like hair physics and cloth physics for clothing.
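For what it's worth, the "separate" operation Audio2Face wants boils down to grouping faces into connected components. Here is a minimal plain-Python sketch of that idea (this is not the USD or Audio2Face API; the mesh here is just an assumed list of triangles as vertex-index tuples):

```python
from collections import defaultdict

def split_mesh(faces):
    """Group faces (tuples of vertex indices) into connected pieces.

    Two faces belong to the same piece if they share at least one
    vertex index. Returns a list of face lists, one per piece.
    """
    # Map each vertex index to the faces that use it.
    vert_to_faces = defaultdict(list)
    for i, face in enumerate(faces):
        for v in face:
            vert_to_faces[v].append(i)

    seen = set()
    pieces = []
    for start in range(len(faces)):
        if start in seen:
            continue
        # Flood-fill across faces that share a vertex.
        stack, piece = [start], []
        seen.add(start)
        while stack:
            f = stack.pop()
            piece.append(faces[f])
            for v in faces[f]:
                for g in vert_to_faces[v]:
                    if g not in seen:
                        seen.add(g)
                        stack.append(g)
        pieces.append(piece)
    return pieces
```

Note this only sees pieces as connected if they share vertex indices, which is exactly why unmerged duplicate vertices (as discussed below in the thread) break the separation.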
I don’t know much about the quality of the assets, only that they work in a range of other applications without problems. I do know that if you import one into Blender and export it immediately, you don’t get the same result (the mesh is modified), which means you have to add any libraries of blendshapes first. So I would not be surprised if they do some “creative” things in the mesh…
Thanks for sharing your file.
I just separated the head in Maya. Unfortunately, the mesh is made of a lot of smaller pieces with unmerged vertices, which is not suitable for Audio2Face.
It might be worth contacting VRoid Studio to let them know about this issue.
Example (Inside of the mouth):
Thanks, I will let them know (but will have to see if they do anything about it - sigh).
I loaded the VRM file into Blender (there is a Blender VRM extension), then immediately exported it back out as GLB. On that file I managed to do a “separate meshes”, but it created 27 meshes (!) - and most are not visible: the first 8 are okay, then the rest are all invisible. Five of the invisible meshes are the face skin (judging by the material names they reference). So no dice there either. Blender may have fixed some issues, but not enough. (There are separate meshes for the ears, for example.)
So then I tried deleting lots of stuff in the character (the eyes, the mouth, etc). I know enough Blender to achieve that! I then exported again and got it to load up in audio2face and be happy with the mesh! So I tried going through the fitting process.
The next problem was when I tried to play the audio file: I got an error of “waiting for Cuda semaphore” or similar. So I tried saving the workspace, exiting Audio2Face, and launching it again. I opened the saved project, it said “opening…”, and then the window disappeared (I assume it crashed). I retried, same again. I'm not sure how to bring up any logs.
So I started again (imported, re-associated, yada yada) and it worked! Well, without any eyes, eyebrows, or mouth parts, but there is light at the end of the tunnel.
I still have to work out how to merge it back into the original character. I want, say, two characters in a scene who then move and talk to each other using separate audio clips. Having /World/transfer_character and /World/a2fTemplates as global things does not feel like the right final end point (?).
I guess this can be fixed in Blender too. Google for how to merge vertices by distance. Once that step is done,
Mesh -> Mesh Separate should work.
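To illustrate what merge-by-distance does (this is a plain-Python sketch of the idea, not Blender's bpy API; the grid-snapping approach and the tolerance value are assumptions for the example):

```python
def merge_by_distance(verts, faces, tol=1e-4):
    """Weld vertices closer than roughly `tol` and remap face indices.

    verts: list of (x, y, z) tuples; faces: tuples of vertex indices.
    Snaps each position to a grid of cell size `tol`, so vertices that
    land in the same cell collapse into one. Returns (new_verts, new_faces).
    """
    key_to_new = {}   # snapped position -> new vertex index
    old_to_new = []   # old vertex index -> new vertex index
    new_verts = []
    for x, y, z in verts:
        key = (round(x / tol), round(y / tol), round(z / tol))
        if key not in key_to_new:
            key_to_new[key] = len(new_verts)
            new_verts.append((x, y, z))
        old_to_new.append(key_to_new[key])
    new_faces = [tuple(old_to_new[i] for i in face) for face in faces]
    return new_verts, new_faces
```

After welding, faces that previously referenced duplicated vertices share indices, so a connected-components split sees them as one piece.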