Audio2Face + Live Link + Pixel Streaming

I do everything in Python. I'm making an example right now and will share all the files.

Great! Looking forward to your example.

I send a WebSocket message from Python to UE to tell it to download and play the file.
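On the Python side that can be as small as this (just a sketch: the ws:// address and the JSON message format here are assumptions, the real code is in the files shared below):

import asyncio
import json

import websockets  # pip install websockets

UE_WS_URL = "ws://127.0.0.1:8080"  # address/port of the UE WebSocket server (assumed)

async def tell_ue_to_play(filename: str) -> None:
    # Connect to the WebSocket server running inside UE and send a play command
    async with websockets.connect(UE_WS_URL) as ws:
        await ws.send(json.dumps({"cmd": "play", "file": filename}))

asyncio.run(tell_ue_to_play("reply_001.wav"))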

So you use UE to build a server that listens for the message from Python?


It's uploading now, but I use this:

WebSocket Server in Code Plugins - UE Marketplace

ue: [Blueprint screenshot]

python: [Python code screenshot]

full python code:
pythoncode.txt (12.2 KB)

You must move the StreamAtfWSQueGPT.py file to:
“destDir=%USERPROFILE%\AppData\Local\ov\pkg\audio2face-2023.1.1\exts\omni.audio2face.player\omni\audio2face\player\scripts\streaming_server”
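If you want to script that copy on Windows, something like this works (a sketch; adjust the source path to wherever you downloaded the file):

import os
import shutil

# Expand %USERPROFILE% and copy the script into the A2F streaming_server folder
dest_dir = os.path.expandvars(
    r"%USERPROFILE%\AppData\Local\ov\pkg\audio2face-2023.1.1"
    r"\exts\omni.audio2face.player\omni\audio2face\player\scripts\streaming_server"
)
shutil.copy("StreamAtfWSQueGPT.py", dest_dir)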

Then in UE I have the text input widget send the text through the WebSocket. The Python WebSocket handler takes that text and sends it to ChatGPT; I break the reply into sentences, send those sentences to OpenWhisper for text-to-speech, save the file, send the file to Audio2Face, and then send the filename to UE and tell it to play the file.
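Roughly, that loop looks like this (a compressed sketch only: ask_chatgpt, text_to_speech, send_to_audio2face, and notify_ue_play are placeholder names, and the real implementation is in pythoncode.txt above):

import re

def ask_chatgpt(prompt: str) -> str:
    raise NotImplementedError  # placeholder: the real version calls the OpenAI chat API

def text_to_speech(sentence: str, out_path: str) -> str:
    raise NotImplementedError  # placeholder: the real version writes a wav and returns its path

def send_to_audio2face(wav_path: str) -> None:
    raise NotImplementedError  # placeholder: the real version hands the wav to A2F

def notify_ue_play(wav_path: str) -> None:
    raise NotImplementedError  # placeholder: the real version sends a play command over the websocket

def handle_text_from_ue(text: str) -> None:
    reply = ask_chatgpt(text)  # widget text -> ChatGPT
    sentences = re.split(r"(?<=[.!?])\s+", reply.strip())  # split the reply into sentences
    for i, sentence in enumerate(sentences):
        wav_path = text_to_speech(sentence, f"reply_{i:03d}.wav")  # sentence -> wav on disk
        send_to_audio2face(wav_path)  # let A2F drive the face
        notify_ue_play(wav_path)  # tell UE to download and play the file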

You can put it in 2023.2.0 too, but I stopped using that version because loading USD files doesn't save/load correctly.

Here it is working: I have a paid ngrok domain and host the UE and Python sides on my PC (a basement laptop if I want to leave it running).

The full build and project files are in the description of the video.

When you run the project, you'll see what ngrok server it gave you:


Any updates on this?
One obvious thing is that SubmixListener does not use UE's standard audio streams or standard audio threads to process audio, so Pixel Streaming does not know that this part of the audio exists. How is the fix progressing?

Any updates on this? I am not expecting a fast release of A2F, but any clarification on how to solve this at all would be great. Meanwhile, I found this but was not able to implement it; can you comment on whether this approach is on the right track?

Hi Folks,

I was doing some playing around, and I found that, for me, if I make a change to the source code, things start working.

Open Plugins/ACEUnrealPlugin-5.3/ACE/Source/OmniverseLiveLink/Private/OmniverseSubmixListener.cpp from the plugins for your UE app.

Find the code (about line 51 in the UE5.3 version):

FAudioDeviceParams MainDeviceParams;
MainDeviceParams.Scope = EAudioDeviceScope::Shared;  // request a shared-scope device
MainDeviceParams.bIsNonRealtime = false;             // realtime device, as before
//MainDeviceParams.AssociatedWorld = GWorld;         // commented out, see note below
//MainDeviceParams.AudioModule = AudioModule;        // commented out, see note below
AudioDeviceHandle = AudioDeviceManager->RequestAudioDevice(MainDeviceParams);

I commented out those two lines. I believe that triggered RequestAudioDevice to find an existing device rather than creating a new device that was not mixed in the AudioMixer.

I rebuilt my project, and when running (at least on Windows) it worked for me. I will be trying on Linux soon.


Hi,
did you obtain a clear sound? I've tried a different solution and also yours, but in both cases I hear the audio and it's too fast (almost 3x speed). Did you introduce a resampling step yourself? Or did you use 48 kHz sampled audio (although Audio2Face expressly requires 16 kHz)?

Regards,
Willy

I am working through the sample rate. I ran across docs that say it must be a multiple of 16000, and I tried 16000, which works fine in A2F playback, but Unreal plays it way too fast. I thought I reset to 48000 (which also works in A2F playback), but it's still not quite right after it has passed through A2F, Unreal Engine, and Pixel Streaming. I assume I didn't get all my settings adjusted through the audio pipelines, but I have not fully tracked it down yet.
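In case it helps: 48000 / 16000 = 3, so a 16 kHz clip that gets interpreted as 48 kHz plays exactly 3x too fast, which matches the symptom described above. One workaround is to resample the clip to 48 kHz in Python before it enters the UE pipeline. A minimal sketch, assuming soundfile and scipy are installed (filenames are placeholders):

import soundfile as sf
from scipy.signal import resample_poly

data, sr = sf.read("a2f_output.wav")  # placeholder filename
if sr == 16000:
    data = resample_poly(data, 3, 1)  # 16 kHz -> 48 kHz is an exact 3:1 upsample
    sf.write("a2f_output_48k.wav", data, 48000)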

This approach helps me, thank you!

UPD: This tip of mine is wrong. (To use a 16 kHz audio sample rate, I had to set it in the UE project parameters: Project Settings -> Platforms -> [YourDesiredPlatform] -> Audio -> Audio Mixer Sample Rate.) It is not working.

Did you actually hear something at normal speed when you tested the game in standalone mode via Pixel Streaming after setting the Audio Mixer Sample Rate?

I did set the sample rate to 16000, then started the standalone game, started the STUN and TURN servers, then opened a browser page and sent a clip through the A2F microservice. No spoken-word audio is heard in the browser tab.

Please provide a detailed explanation of your full project settings and test environment, so that I can test it out.

You're right, my fault. Sorry, my tip for 16 kHz is NOT working; it was a single random result… Sorry.

Great Thread!

I have developed an STT + ChatGPT + TTS + Pixel Streaming pipeline inside UE5.3. Now, using Audio2Face, I want to add facial expressions and lip sync to my MetaHuman character.

How can I stream audio from UE5 to A2F (this is my main problem), and stream Live Link and blendshape data back from A2F to UE5?
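One approach, similar to what the example earlier in this thread does, is to push the audio to A2F from the Python side that runs the TTS rather than out of UE itself. A sketch, assuming the test_client.py / audio2face_streaming_utils sample that ships with the A2F streaming audio player (the URL and prim path are the sample defaults, so check yours):

import soundfile as sf

# audio2face_streaming_utils ships next to NVIDIA's test_client.py sample
from audio2face_streaming_utils import push_audio_track

A2F_URL = "localhost:50051"  # default gRPC port of the streaming audio player
A2F_PLAYER = "/World/audio2face/PlayerStreaming"  # prim path of the streaming player (check yours)

# Read a mono clip as float32 and push the whole track to A2F in one shot
data, samplerate = sf.read("clip_16k.wav", dtype="float32")
push_audio_track(A2F_URL, data, samplerate, A2F_PLAYER)

The facial animation then comes back into UE through the Omniverse Live Link plugin discussed earlier in the thread.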

Any update on this? Not looking for a fix; just some clarity on why this happens would help. A lot of people seem to be stuck on this.
