Unable to get A2X service to generate blendshapes or push results to UE 5.2/MetaHuman

I’m unable to get the A2X service to work on an Ubuntu 22.04 VM with an NVIDIA L4 GPU attached and the latest drivers.

I’m running UE 5.2 with a MetaHuman loaded. LiveLink works (set up with the help of a forum thread), as verified with the simple_socket_sender.py script.

On the same Linux workstation, I followed this tutorial to pull and build the model in the Audio2FaceWithEmotion container, and then to pull and run the Animation Controller container.

I’m running the Client App with everything set to localhost (127.0.0.1), as in the tutorial, but no matter what I do I can’t get A2X to drive the blendshapes in UE.
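As a basic sanity check (these are generic Linux/Docker diagnostics, not steps from the A2X tutorial), I confirmed that the containers are publishing ports on the host and that something is actually listening on them:

```shell
# List running containers and the host ports they publish.
docker ps --format '{{.Names}}\t{{.Ports}}'

# Show everything listening on TCP on the host; the ports the
# Client App targets should appear here (needs iproute2's `ss`).
ss -tln
```

If a service’s port does not show up in `ss -tln`, the Client App has nothing to connect to regardless of which IP is configured.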

I’ve tried every combination of IP addresses in the container’s ac_a2x_config.yaml as well as in the Client App, with no luck.
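Beyond swapping addresses, I also verified from the same machine that the ports the Client App targets are reachable at all. A minimal sketch (the port numbers below are placeholders, not the actual service ports; substitute whatever your containers expose per `docker ps`):

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder ports -- replace with the ports your containers
# actually publish (check the mappings shown by `docker ps`).
services = [("audio2face_with_emotion", 50000),
            ("animation_controller", 50001)]

for name, port in services:
    state = "open" if port_open("127.0.0.1", port) else "closed"
    print(f"{name} on 127.0.0.1:{port}: {state}")
```

If a port reports closed here, the problem is container/port configuration rather than anything inside UE or the Client App.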

Does anyone know of an end-to-end tutorial that shows what you need to do to get the service working?

One final question: is the audio2face_pipeline:0.5.8 container intended only for the non-MetaHuman Omniverse Avatar workflow?

Computer Specs (OS, GPU, GPU Driver):
Ubuntu 22.04 on a Google Cloud g2-standard-24 VM (NVIDIA L4 attached), using HP/Teradici for accelerated remote desktop visualization
GPU driver 535.154.05
Applications and Versions:
animation_controller_a2x_animgraph:0.1.16
audio2face_with_emotion:0.7.26
Reproducible Steps
I can provide a screen recording on demand.
Console Logs
Log File: C:\Users\[YOUR NAME]\.nvidia-omniverse\logs
There is nothing useful in this directory (I only see launcher logs).