This may be by design, but shouldn’t Movie Capture (inside Audio2Face) be able to capture a sequence rather than just a single image of the character’s face? I do not see any setting, either in the render settings or in the Movie Capture dialog itself, that would allow one of the loaded audio files to drive the lip animation. Export works fine, but it would be useful to get an actual rendered sequence of the character talking for pre-viz directly within Audio2Face. Playing audio drives the lip movement, but that movement is completely lost when rendering through Movie Capture. The dialog allows selecting a frame range, but there is no way to populate those frames with animation, and NVIDIA’s documentation is sparse at best.
Doing some more digging here: it would appear Movie Capture only works for non-driven characters. If you have set up a driver mesh in the Character Transfer panel, you can play audio through that driving character to affect the target mesh, but none of the target mesh’s movements will be recorded by Movie Capture. This is why the still face/image occurs; Movie Capture apparently was not designed to work with source-driven targets. Knowing this, is there any way to work around the problem, or is the only option to export and run pre-viz in a 3D tool (Maya, C4D, UE, etc.)?
A little more testing demonstrates that even the driving mesh’s animation isn’t captured by Movie Capture. In fact, if you simply start Audio2Face and render the default sample mesh/audio with Movie Capture, it will be exported without movement or audio. For all intents and purposes, it would seem this feature is currently not functional, at least not in Audio2Face. I presume the only option is to export and render externally, which I have so far only gotten to work in UE. Simply exporting to USD and importing into Omniverse Create, for example, loses all of the mesh’s textures.
Please give this feature some attention. It would be useful to have this directly in Audio2Face, to share renders prior to exporting.
Hello @visionarymind111! I contacted the developer of our Movie Capture extension as well as the Audio2Face team about making the Movie Capture extension compatible with driven characters. (Internal Ticket #OM-42712)
Thank you for the quick response. Just to clarify: Movie Capture does not work for either driven or non-driven characters. I have confirmed that the default starter project is also unable to generate an image/video sequence.
Hi @visionarymind111, Audio2Face runs on audio input via the audio player. It currently renders only one frame because there is no frame data present in the stage. We are evaluating improvements that will allow Audio2Face to work with an interactive animation timeline. For now, exporting the cache and using it in Create or Machinima will allow you to render a frame or movie sequence. You can also bring the cache back into Audio2Face, and it will render the full sequence.
Yes, unfortunately Movie Capture doesn’t currently support audio capture. We have discussed this already and would like to support it, but we still have some problems to solve. For example, in some cases rendering a single frame can take minutes or hours to achieve the best visual results, so we need to find a way to play and capture the sound for each frame and then sync the frames and sounds in the final movie.
Thank you for the clarification. Regarding a cache render, I have had difficulty syncing the original audio with the cache, even when the FPS values are set to match. Would you be able to share some advice on best practices for post-process sync?
I’m afraid we don’t have practical recommendations for that at the moment. We have some ideas, but we haven’t tried them yet due to other higher-priority tasks, so we don’t know how well they will work. We may not have time to pick this feature up in the near future, but I will keep an eye on it.
Hi @visionarymind111 ,
Our usual workflow, when we need to do a high-quality path-traced render with an A2F-driven character, is to have the audio clip recorded as a wav file, then use Audio2Face to export our cache at a certain FPS from that audio file.
If the FPS matches, then the frames rendered from Movie Capture should match too.
We then take this to video editing software to comp the audio file and the rendered frames together, and they should sync.
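The FPS-matching step above can be sanity-checked before going into the editor. As a minimal sketch (the function names here are hypothetical helpers, not part of any Omniverse API), Python’s standard-library `wave` module can report the clip’s duration, and from that the number of frames the render should produce at a given FPS follows directly; a mismatch between that count and the number of frames Movie Capture actually wrote is the usual cause of drift when comping.

```python
import wave

def wav_duration_seconds(path):
    """Duration of a PCM .wav clip, using only the stdlib."""
    with wave.open(path, "rb") as w:
        return w.getnframes() / float(w.getframerate())

def expected_frame_count(duration_s, fps):
    """Frames an image sequence needs to cover the audio at a given FPS."""
    return round(duration_s * fps)

# Example: a 12.5 s clip rendered at 24 FPS should yield 300 frames.
# If Movie Capture produced a different count, the comp will drift.
assert expected_frame_count(12.5, 24) == 300
```

If the counts agree, muxing the rendered frames and the original wav at that same FPS in any editor should stay in sync with no further adjustment.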