Hi, it’s great to have REST APIs for updating the audio and getting new blendshapes, but these depend on an existing project with a prebuilt character-transfer and A2F pipeline for the custom character, and cannot transfer to a new character on their own. Is there any way to also automate the character transfer process through APIs? (e.g. detect keypoint correspondence, wrap face parts, and output blendshapes)
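To make the request concrete, here is a minimal sketch of what a request body for such an automated character-transfer endpoint could look like, mirroring the three steps above. Everything here is an assumption for illustration: the endpoint name, field names, and helper function are hypothetical and not part of any published Audio2Face API.

```python
import json

# Hypothetical payload builder for an imagined character-transfer endpoint
# (e.g. something like POST /A2F/CharacterTransfer/Run). None of these field
# names are confirmed by NVIDIA; they simply mirror the steps mentioned above:
# keypoint correspondence, face-part wrapping, and blendshape export.
def build_character_transfer_request(template_usd, target_mesh, output_dir):
    """Assemble a JSON request body for a hypothetical character-transfer call."""
    return json.dumps({
        "template_usd_path": template_usd,   # prebuilt A2F template head
        "target_mesh_path": target_mesh,     # new custom character mesh
        "detect_correspondence": True,       # step 1: auto keypoint matching
        "wrap_face_parts": True,             # step 2: wrap face parts to target
        "export_blendshapes": True,          # step 3: output blendshapes
        "output_dir": output_dir,
    })

# Example: the JSON body that would be POSTed to the hypothetical endpoint.
body = build_character_transfer_request(
    "/demo/mark_template.usd", "/work/new_char.usd", "/work/out")
print(body)
```

An API shaped like this would let a pipeline go from a bare target mesh to a ready A2F project without opening the UI, which is what the question is really asking for.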