How to implement Audio2Face control in headless mode

In a November 10th post on using Audio2Face in headless mode with the Unreal Engine MetaHuman and Live Link plugins, there was a reference to a piece of Python code that, when run, seemed to have no effect, returning errors such as:

[Detail: Method Not Allowed]
[Status: error, message: /world/audio2face/streamlivelink is not valid]
[Status: error, message: /world/audio2face/player - Regular player not found]

Please, is it possible to have a more complete set of tutorials showing how to properly use the API to drive an Unreal Engine packaged MetaHuman model with Audio2Face and Live Link in headless mode?
If not, could you share more information on how to properly use the API to control Audio2Face in headless mode? Thanks!

Translated with DeepL

From the errors it looks like the stage wasn't loaded. Can you share the code you're running and the errors by taking a screenshot of the terminal? The terminal should show a success message if the stage was loaded properly.
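For reference, here is a minimal sketch of loading a stage through the headless REST API and checking the response. The port (8011) and the /A2F/USD/Load endpoint are assumptions based on the headless server defaults, and the .usd path is a placeholder for your own scene:

```python
# Minimal sketch: load a USD stage on a headless Audio2Face server and
# verify the response. Assumes the default headless port (8011) and the
# /A2F/USD/Load endpoint; the .usd path is a placeholder for your scene.
import requests

A2F_URL = "http://localhost:8011"  # default headless server address (assumption)

# Load the stage that contains the audio2face player and StreamLivelink nodes
resp = requests.post(
    f"{A2F_URL}/A2F/USD/Load",
    json={"file_name": "C:/path/to/your_scene.usd"},  # placeholder path
)
print(resp.json())  # expect a success status once the stage is loaded

# Until the stage is loaded, prim paths such as /world/audio2face/streamlivelink
# do not exist on the server, which produces the "... is not valid" errors above.
```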

Btw, are you referring to this post with the sample Python code?

Following Audio2Face Headless and RestAPI Overview - YouTube will give you a good understanding of the REST API usage.
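As a companion to that video, here is a hedged sketch of a typical headless sequence (load the stage, activate the Live Link stream, then play a track). The endpoint names and the /World/audio2face/... prim paths are assumptions taken from the errors quoted above and from the headless docs, and may differ across Audio2Face versions:

```python
# Sketch of a typical headless sequence after the server is running.
# Endpoint names and the /World/audio2face/... prim paths are assumptions;
# check them against the REST API docs for your Audio2Face version.
import requests

A2F_URL = "http://localhost:8011"

def post(endpoint, payload):
    """POST a JSON payload to the headless server and print the reply."""
    r = requests.post(f"{A2F_URL}{endpoint}", json=payload)
    r.raise_for_status()
    print(endpoint, "->", r.json())
    return r.json()

# 1. Load the stage containing the player and StreamLivelink nodes
post("/A2F/USD/Load", {"file_name": "C:/path/to/your_scene.usd"})  # placeholder

# 2. Enable streaming to the Unreal Engine Live Link plugin
post("/A2F/Exporter/ActivateStreamLivelink",
     {"node_path": "/World/audio2face/StreamLivelink", "value": True})

# 3. Point the regular player at a wav file and play it
post("/A2F/Player/SetTrack",
     {"a2f_player": "/World/audio2face/Player",
      "file_name": "speech.wav", "time_range": [0, -1]})  # placeholder track
post("/A2F/Player/Play", {"a2f_player": "/World/audio2face/Player"})
```

The key point is ordering: the stage must finish loading before any StreamLivelink or Player call, otherwise the prim paths are not valid yet.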
