My setup is as follows:
- Add a Streaming Audio Player to the stage.
- Open the generic graph and connect the Streaming Audio Player's time output to the Audio2Face Core instance's time attribute.
- Also connect the Streaming Audio Player's time output to the StreamLiveLink instance's time attribute (a scripted sketch of these two connections is after this list).
- Set up Unreal Engine with the Omniverse LiveLink plugin, following the documented steps.
- Start Unreal Engine.
- Run test_client.py to stream a WAV file (the relevant part of the client is also sketched after this list).
- In the Audio2Face app, I can hear the audio and see the facial movement.
- In Unreal Engine, the MetaHuman shows the facial movement, but there is NO sound!
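For reference, I made the time connections through the generic graph UI. If those connections are ordinary USD attribute connections (I have not verified that they are), a scripted version run from the Script Editor would look roughly like the sketch below; all prim paths are placeholders, so please check them against your own stage tree.

```python
# Rough sketch only -- assumes the graph connections are plain USD attribute
# connections and that these prim paths exist in the stage (they are guesses).
import omni.usd
from pxr import Sdf

stage = omni.usd.get_context().get_stage()

# Placeholder paths -- replace with the actual prims from your stage tree.
PLAYER_TIME = Sdf.Path("/World/audio2face/Player.time")            # Streaming Audio Player time output
CORE_TIME = Sdf.Path("/World/audio2face/CoreFullface.time")        # Audio2Face Core instance time input
LIVELINK_TIME = Sdf.Path("/World/audio2face/StreamLivelink.time")  # StreamLiveLink instance time input

for target_path in (CORE_TIME, LIVELINK_TIME):
    attr = stage.GetAttributeAtPath(target_path)
    if attr:
        # Drive the target's time attribute from the player's time output.
        attr.AddConnection(PLAYER_TIME)
```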
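The streaming part of the client I run is roughly the sketch below, trimmed down from test_client.py. The gRPC stub and message names come from the audio2face proto that ships with the app (they may differ between versions), and the URL, instance name, and WAV path are placeholders for my local values.

```python
# Trimmed-down sketch of the streaming path in test_client.py.
# Assumes audio2face_pb2 / audio2face_pb2_grpc were generated from the proto
# shipped with the app; field names may differ between versions.
import grpc
import numpy as np
import soundfile

import audio2face_pb2
import audio2face_pb2_grpc

URL = "localhost:50051"                         # gRPC port of the Streaming Audio Player (placeholder)
INSTANCE = "/World/audio2face/PlayerStreaming"  # prim path of the player instance (placeholder)
WAV_PATH = "voice_sample.wav"                   # placeholder WAV file

audio_data, samplerate = soundfile.read(WAV_PATH, dtype="float32")
if audio_data.ndim > 1:
    audio_data = np.average(audio_data, axis=1)  # mix down to mono

def request_generator():
    # The first message carries only the stream header.
    start = audio2face_pb2.PushAudioRequestStart(
        samplerate=samplerate,
        instance_name=INSTANCE,
        block_until_playback_is_finished=True,
    )
    yield audio2face_pb2.PushAudioStreamRequest(start_marker=start)
    # Subsequent messages carry raw float32 PCM chunks.
    chunk_size = samplerate // 10
    for i in range(0, len(audio_data), chunk_size):
        chunk = audio_data[i : i + chunk_size]
        yield audio2face_pb2.PushAudioStreamRequest(audio_data=chunk.tobytes())

with grpc.insecure_channel(URL) as channel:
    stub = audio2face_pb2_grpc.Audio2FaceStub(channel)
    response = stub.PushAudioStream(request_generator())
    print(response)
```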
Could you please advise whether the current version supports Streaming Audio Player → Unreal Engine audio, or whether this is not supported yet?