Any plans to create a plugin Device for MotionBuilder so we can Live Stream Audio2Face results?
I do a lot of live, real-time performance capture and would love to have audio drive the lip sync on my characters live, just as it does in the viewport in Omniverse.
Faceware and Face Cap for ARKit both have devices that stream live blendshape values back into MotionBuilder, where they can be used to drive other characters, etc. I imagine you guys should be able to do this as well? A rough sketch of what I mean is below.
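For reference, even without an official device, the receiving side is scriptable. Here is a minimal sketch in MotionBuilder Python (pyfbsdk), assuming the blendshape weights arrive as JSON over UDP; the port, packet format, mesh name, and shape names are all placeholders I made up, since Audio2Face doesn't expose such a stream to MotionBuilder today:

```python
# Minimal sketch only. Assumes some source (hypothetically Audio2Face) is
# broadcasting blendshape weights as JSON over UDP, e.g.
#   {"jawOpen": 0.42, "mouthSmileLeft": 0.1, ...}
# The port, packet layout, and names below are assumptions for illustration.
import json
import socket

from pyfbsdk import FBFindModelByLabelName, FBSystem

UDP_PORT = 12030   # hypothetical port for the weight stream
HEAD_MESH = "Head" # label name of the target mesh in the scene

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", UDP_PORT))
sock.setblocking(False)

head = FBFindModelByLabelName(HEAD_MESH)

def on_idle(control, event):
    """Drain the socket each UI idle tick and apply the newest packet."""
    latest = None
    while True:
        try:
            latest, _ = sock.recvfrom(65535)
        except socket.error:  # no more packets queued
            break
    if latest is None or head is None:
        return
    weights = json.loads(latest)
    for name, value in weights.items():
        prop = head.PropertyList.Find(name)  # blendshape channels appear as
        if prop is not None:                 # properties on the mesh
            prop.Data = value * 100.0        # MotionBuilder shapes use 0-100

FBSystem().OnUIIdle.Add(on_idle)
```

A proper plugin device (FBDevice) would be the robust way to do this, with its own sample rate and recording support, but even an idle-callback hack like the above shows the data path I'm after.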
Cheers!
Thank you for your interest in Audio2Face. I'll pass your live-stream question on to the devs. Thanks again!
@dave.knosis, thank you for your interest in Audio2Face. Great idea. We currently don't have plans for this, but we'd like to do it at some point. Agreed, this is a good use case.
Thank you for the response!
I've been working in motion capture for over 16 years, and I'm pretty happy with the live results from Audio2Face; however, they currently don't really work for my pipeline. :(
Having a means to get that streaming data into MotionBuilder would be incredibly helpful, and it would likely also be a much easier approach to getting facial animation onto a custom character.
Having this Audio2Face live-stream device in MotionBuilder would also mean I could drive live, real-time facial animation from audio directly into Unreal, which would be great! :)
Cheers!