Best way to lip sync a character in a Kit application using A2F?

Hi,

I’m trying to drive a character’s lip-sync animation with Audio2Face, but I’m having some difficulties.

The main program is an ordinary Kit application, so my initial instinct was to reference A2F extensions (e.g. omni.audio2face.core) as dependencies. However, it looks like they are not available from the official public repositories.

I tried to add the local extension directory of my A2F installation (i.e. ~/.local/share/ov/pkg/audio2face-2023.2.0/exts) to the search path, but it still fails with missing dependencies like omni.deform.shared.
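For context, this is roughly how I’m adding the folder. A minimal sketch of a .kit app file fragment, assuming Kit’s standard `app.exts.folders` setting (the exact settings keys may differ between Kit versions, and the A2F extensions still have to resolve their transitive dependencies such as omni.deform.shared from these folders):

```toml
# Sketch only -- assumes the app/exts/folders settings key exists in
# this Kit version; paths follow my local A2F 2023.2.0 install.
[settings.app.exts]
folders = [
    "~/.local/share/ov/pkg/audio2face-2023.2.0/exts",
]

[dependencies]
# Not in the public extension registry, so it must resolve from the
# local folder above (including transitive deps like omni.deform.shared).
"omni.audio2face.core" = {}
```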

So now I’m trying to go with the Live Link approach instead, but I’m still stuck on several issues, described below:

  1. It looks like the node that receives Live Link data is only available in the A2F application, which circles back to my initial problem.
  2. I don’t see sockets for reading or writing jaw transforms on the Live Link node. How can I animate the jaw if the node doesn’t expose them?
  3. This is not directly related to the above, but aren’t the jaw transform values A2F reports expressed in absolute coordinates? The original Claire model doesn’t have a body, while my character does and has its own animations. How can I retarget the absolute, head-only jaw transform matrix of the original model to its counterpart in my full-body model? I managed it with a rather convoluted custom OmniGraph setup, but I feel there should be a better way. I see there’s a jaw retarget node, but I can’t find any documentation for it. More importantly, how should I do this in a Live Link setup?
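For point 3, my current OmniGraph setup essentially does the following matrix algebra, which I’ll sketch here in plain numpy: strip the source head’s world transform from the incoming jaw transform to get a head-local jaw pose, then re-apply the target head’s world transform. The function name and the 4x4-matrix convention are my own for illustration, not an A2F or OmniGraph API:

```python
import numpy as np

def retarget_jaw(jaw_world_src, head_world_src, head_world_tgt):
    """Re-express a jaw transform reported in the source rig's world
    space as a world transform on the target (full-body) rig.

    All arguments are 4x4 matrices with translation in the last
    column. Step 1: remove the source head's world transform, leaving
    the jaw pose local to the head. Step 2: re-apply the target
    head's world transform, which already includes the body's
    animation.
    """
    jaw_local = np.linalg.inv(head_world_src) @ jaw_world_src
    return head_world_tgt @ jaw_local
```

For example, if the source head sits at the origin and the jaw is 1 unit below it, and the target head is animated to height 5, the retargeted jaw lands at height 4, i.e. it follows the body.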

Any advice or tips would be appreciated. Thanks!

Audio2Face is no longer part of Omniverse or Omniverse forum support. You can ask your questions on our NVIDIA Developer Discord channel and on the NVIDIA Developer Forums.

Thanks, but now I’m confused. Is Audio2Face, or at least its locally installable version, deprecated somehow?

I only started working with A2F after reading about it on the Omniverse web page several days ago. Although most references to it seem to have been removed recently, I can still read its documentation from docs.omniverse.nvidia.com.

Is there any announcement regarding the change that I can read? Also, in what category should I post such questions in the NVIDIA Developer Forums? I don’t see anything relevant to non-cloud usage of A2F there.

Audio2Face has moved out of Omniverse and over to our Digital Human / ACE division. In light of this, the A2F forum page has been closed, and now:

  1. Developers can submit tickets through the NVIDIA AI Enterprise program (NVIDIA Enterprise Customer Support).
  2. Developers can discuss ACE on our NVIDIA Developer Discord server.

Thanks for the clarification. Unfortunately, it looks increasingly doubtful to me whether there’s any future for the free, locally installable version of A2F, now that nearly every trace of it has been removed from the internet.

I can understand the motivation for profit, but I don’t like how such a change can happen without any clear notice to existing users.

I would encourage you to share any feedback on the Discord server, and if you are an Enterprise customer, please file a ticket. The program is still available on the Launcher; we have not removed it. Just the support has moved.

I’m more or less fine without the forum support. But my biggest concern now is that I have no confidence that future versions of A2F will remain available as a locally installable package under the current license (i.e. free for non-commercial use).

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.