I’m wondering if Audio2Face could be made available as a REST-based API where one sends in a voice file (an mp3, for example) and receives the blend shapes back (asynchronously is fine)?
Hello @inac! Thank you for your question. I reached out to the Audio2Face team for an answer. I will post back here when I have more information.
Hi, I’m also interested in this. Has there been an update? I also saw the release of the REST API and Headless Mode, but after watching the video I am still not sure whether I can achieve what the original poster wanted. In my case, I would just like to send a request from Unity using C# and receive the JSON list using the API alone. Is this currently possible? Thank you.
Still awaiting as well… been a long time!
This video might help: Audio2Face Headless and RestAPI Overview - YouTube
You should consider having a Unity plugin for runtime support.
Do you mean something similar to the Unreal Engine LiveLink plugin, which was released in Audio2Face 2023.1.1?
No, for Unity there should be support for dynamic creation at runtime, not just in the editor. For example: https://assetstore.unity.com/packages/tools/animation/salsa-lipsync-suite-148442
Is your final goal feeding an audio file to Audio2Face and receiving BlendShape animation weights in Unity?
If so, then that’s what UE Livelink (Live Stream) plugin does. It’s an extension which can be used as a reference to build a similar one for Unity or any other app.
No, the difference is that it would run fully autonomously at runtime instead of sending blendshapes to the editor.
Sorry if I’m misunderstanding you and thanks for your patience.
Do you mean you need to feed an audio clip to Audio2Face and receive the blendShape weights animation in Unity at runtime (or in real time)? If so, this is exactly what the Audio2Face LiveLink plugin for UE does.
If that’s not what you mean, can you please provide more information about what the Unity plugin you have in mind would do?
Hi Ehsan, the headless mode and REST API are missing an endpoint that takes audio as input and outputs blendshape weights. Will Omniverse ACE contain that in the future?
No, the goal is to have it be self-contained without needing a separate app. Unity lets you build your app to publish anywhere from iOS to Android to consoles etc.
In short: Unity app feeds in the audio input, Audio2Face Unity plugin provides blendshapes in Unity.
Also, it would be preferable for the blendshapes to be more universal than anger or happiness (eyeUp, etc.), for example as defined in https://developer.apple.com/documentation/arkit/arfaceanchor/blendshapelocation
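To make that concrete, here is a purely hypothetical sketch of the kind of runtime plugin API I mean; none of these type or member names exist today, they just illustrate the audio-in / weights-out workflow:

```csharp
// Purely hypothetical sketch of the runtime plugin surface being requested here;
// nothing like this ships with Audio2Face today.
using System;
using System.Collections.Generic;

public interface IAudio2FaceRuntime
{
    // Feed raw audio captured or synthesized inside the Unity app.
    void PushAudio(float[] samples, int sampleRate);

    // Fired as weights become available: universal (ARKit-style) blendshape
    // names such as "eyeLookUpLeft" mapped to 0..1 values.
    event Action<Dictionary<string, float>> OnBlendshapeWeights;
}
```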
https://docs.omniverse.nvidia.com/audio2face/latest/user-manual/rest-api.html#headless-a2f
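If you want to drive that headless workflow from code, here is a minimal C# sketch. The base address, endpoint paths, and JSON field names below are assumptions based on a reading of that page and can differ between versions, so please check the documentation for the exact schema:

```csharp
// Minimal sketch of calling the Audio2Face headless REST API from plain C#.
// ASSUMPTIONS: the server address, endpoint paths, and JSON field names are
// taken from the linked docs and may not match your installed version.
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class A2FHeadlessSketch
{
    static readonly HttpClient Http = new HttpClient
    {
        BaseAddress = new Uri("http://localhost:8011") // assumed default headless port
    };

    static async Task Main()
    {
        // 1. Point the player at the folder containing the audio clip.
        await Post("/A2F/Player/SetRootPath",
            "{\"a2f_player\": \"/World/audio2face/Player\", \"dir_path\": \"C:/audio\"}");

        // 2. Select the track that should drive the face.
        await Post("/A2F/Player/SetTrack",
            "{\"a2f_player\": \"/World/audio2face/Player\", \"file_name\": \"voice.wav\"}");

        // 3. Export the resulting blendshape weights as JSON.
        await Post("/A2F/Exporter/ExportBlendshapes",
            "{\"solver_node\": \"/World/audio2face/BlendshapeSolve\", " +
            "\"export_directory\": \"C:/export\", \"file_name\": \"weights\", \"format\": \"json\"}");
    }

    static async Task Post(string path, string json)
    {
        using var content = new StringContent(json, Encoding.UTF8, "application/json");
        var response = await Http.PostAsync(path, content);
        Console.WriteLine($"{path} -> {await response.Content.ReadAsStringAsync()}");
    }
}
```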
I don’t think it’s possible to build a standalone app similar to what you can do with Unity or UE in the public Audio2Face. That said, this might be possible using NVIDIA ACE.
Audio2Face can use as many blendShape targets as you like. The default blendShape sets have 46 and 52 different face shape targets, including eyeUp, etc.
Anger, happiness and other emotions are added to those ~50 blendshape targets.
I have built several standalone Unity apps that can do audio to blendshapes, so I can attest that it is absolutely possible. You can try out one recent App Store app I made that does this here: https://apps.apple.com/us/app/shakespeare-vc/id6447998447. I used SALSA: https://assetstore.unity.com/packages/tools/animation/salsa-lipsync-suite-148442
Sorry for the confusion. That’s what I meant: Audio2Face cannot do what Unity and UE can.
Why not? It’s a REST-based API that Unity can post data to and get data back from, isn’t it?
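From inside Unity you can do this with a coroutine and UnityWebRequest. A minimal sketch, assuming a local headless instance and a placeholder endpoint (check the REST docs linked above for the real paths and payloads):

```csharp
using System.Collections;
using System.Text;
using UnityEngine;
using UnityEngine.Networking;

public class A2FRestClient : MonoBehaviour
{
    // ASSUMPTION: a headless Audio2Face instance listening on its default local port.
    const string BaseUrl = "http://localhost:8011";

    // POSTs a JSON body to an endpoint and hands the JSON response to the callback.
    public IEnumerator PostJson(string endpoint, string json, System.Action<string> onResponse)
    {
        using (var request = new UnityWebRequest(BaseUrl + endpoint, "POST"))
        {
            request.uploadHandler = new UploadHandlerRaw(Encoding.UTF8.GetBytes(json));
            request.downloadHandler = new DownloadHandlerBuffer();
            request.SetRequestHeader("Content-Type", "application/json");

            yield return request.SendWebRequest();

            if (request.result == UnityWebRequest.Result.Success)
                onResponse(request.downloadHandler.text); // JSON returned by Audio2Face
            else
                Debug.LogError($"{endpoint}: {request.error}");
        }
    }
}
```

You would then parse the returned JSON (with JsonUtility, Newtonsoft.Json, or similar) into whatever weight structure the export endpoint actually produces.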