A2F API Data doesn't include HeadPitch/Yaw/Roll

Hi all. I'm using this cloud API for quick prototyping of an app I'm building:
https://build.nvidia.com/nvidia/audio2face-3d/api
This is the script I'm currently running:
https://github.com/NVIDIA/Audio2Face-3D-Samples/tree/main/scripts/audio2face_3d_api_client
It runs successfully, and I get data like the frame below.


{
  "timeCode": 1.4,
  "blendShapes": {
    "EyeBlinkLeft": 0.0,
    "EyeLookDownLeft": 0.0,
    "EyeLookInLeft": 0.0,
    "EyeLookOutLeft": 0.0,
    "EyeLookUpLeft": 0.0,
    "EyeSquintLeft": 0.0,
    "EyeWideLeft": 0.021967057138681412,
    "EyeBlinkRight": 0.0,
    "EyeLookDownRight": 0.0,
    "EyeLookInRight": 0.0,
    "EyeLookOutRight": 0.0,
    "EyeLookUpRight": 0.0,
    "EyeSquintRight": 0.0,
    "EyeWideRight": 0.021478909999132156,
    "JawForward": 0.0,
    "JawLeft": 0.0,
    "JawRight": 0.016127485781908035,
    "JawOpen": 0.12611226737499237,
    "MouthClose": 0.0,
    "MouthFunnel": 0.0,
    "MouthPucker": 0.0,
    "MouthLeft": 0.0,
    "MouthRight": 0.0023712164256721735,
    "MouthSmileLeft": 0.14996175467967987,
    "MouthSmileRight": 0.15350854396820068,
    "MouthFrownLeft": 0.0,
    "MouthFrownRight": 0.0,
    "MouthDimpleLeft": 0.046330247074365616,
    "MouthDimpleRight": 0.044536903500556946,
    "MouthStretchLeft": 0.02274201065301895,
    "MouthStretchRight": 0.023362724110484123,
    "MouthRollLower": 0.15332220494747162,
    "MouthRollUpper": 0.0728776752948761,
    "MouthShrugLower": 0.0,
    "MouthShrugUpper": 0.0,
    "MouthPressLeft": 0.06760275363922119,
    "MouthPressRight": 0.07986801117658615,
    "MouthLowerDownLeft": 0.0,
    "MouthLowerDownRight": 0.0,
    "MouthUpperUpLeft": 0.0,
    "MouthUpperUpRight": 0.0,
    "BrowDownLeft": 0.11815637350082397,
    "BrowDownRight": 0.11482547968626022,
    "BrowInnerUp": 0.01118124183267355,
    "BrowOuterUpLeft": 0.0,
    "BrowOuterUpRight": 0.0,
    "CheekPuff": 0.0,
    "CheekSquintLeft": 0.0,
    "CheekSquintRight": 0.0,
    "NoseSneerLeft": 0.0,
    "NoseSneerRight": 0.0,
    "TongueOut": 0.0,
    "HeadRoll": 0.0,
    "HeadPitch": 0.0,
    "HeadYaw": 0.0,
    "TongueTipUp": 0.0,
    "TongueTipDown": 0.0018191784620285034,
    "TongueTipLeft": 0.005925044883042574,
    "TongueTipRight": 0.0,
    "TongueRollUp": 0.0,
    "TongueRollDown": 0.17305180430412292,
    "TongueRollLeft": 0.0,
    "TongueRollRight": 0.0,
    "TongueUp": 0.0,
    "TongueDown": 0.8406111001968384,
    "TongueLeft": 0.00554042961448431,
    "TongueRight": 0.0,
    "TongueIn": 0.06782355904579163,
    "TongueStretch": 0.0,
    "TongueWide": 0.0,
    "TongueNarrow": 0.6533129215240479
  }
},
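
To double-check, I dumped all the frames and scanned the head channels. A minimal sketch of what I ran (assuming the frames were saved as a JSON array to "frames.json" - the filename is just my example):

import json

# Load the saved animation frames (a list of objects shaped like the one above)
HEAD_KEYS = ("HeadPitch", "HeadYaw", "HeadRoll")

with open("frames.json") as f:
    frames = json.load(f)

# Print the largest absolute value each head channel ever reaches
for key in HEAD_KEYS:
    peak = max(abs(frame["blendShapes"][key]) for frame in frames)
    print(f"{key}: peak absolute value = {peak}")

All three head channels stay at 0.0 for the entire clip.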

However, it does not include head rotation (see the microservice docs):

https://docs.nvidia.com/ace/audio2face-3d-microservice/1.0/text/architecture/audio2face-ms.html

  • HeadRoll
  • HeadPitch
  • HeadYaw

When I use the A2F app directly in Omniverse, I can drive the head (even with the multipliers) by disabling a checkbox - I forget the name, but it was something like "enable face idle" or similar.

When I use A2F on NVIDIA's servers, however, it never gives me head data, no matter what config I use. The microservice page says "The following blend shape values will always be 0:". What exactly does that mean - and more importantly, what's the solution? I could build my own A2F deployment on my AWS account, but if it still won't give me head data, then it will be unsuitable for me.

I'm doing some Unreal stuff with MetaHuman magic, LLM responses, and all that, but without head movement, facial expressions alone look kind of bad. I thought it was something about using custom control values in the MetaHuman ARKit face, or some animation-related issue, but when I saw there is actually no head data, I had my "ooh" moment.

I need a way to get head (and ideally eye) movement data through the API as well.
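
In the meantime, the only stopgap I can think of is faking head motion procedurally on top of the API output. A rough sketch of the idea in Python (every constant below is made up by me, not anything A2F produces):

import math
import random

# Stopgap, NOT real A2F data: layer slow sine waves with random phases to get
# gentle head sway, then patch the zeroed head channels on each frame.
random.seed(42)
CHANNELS = ("HeadPitch", "HeadYaw", "HeadRoll")
PHASES = {key: random.uniform(0.0, 2 * math.pi) for key in CHANNELS}
SPEEDS = {"HeadPitch": 0.6, "HeadYaw": 0.4, "HeadRoll": 0.3}         # cycles per second
AMPLITUDES = {"HeadPitch": 0.05, "HeadYaw": 0.08, "HeadRoll": 0.03}  # arbitrary strengths

def fake_head_pose(time_code: float) -> dict:
    # Small, smooth rotation values for one frame's timeCode
    return {
        key: AMPLITUDES[key] * math.sin(2 * math.pi * SPEEDS[key] * time_code + PHASES[key])
        for key in CHANNELS
    }

# Usage per frame: frame["blendShapes"].update(fake_head_pose(frame["timeCode"]))

But that is canned sway, not speech-driven motion, so real head data from the service would be much better.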

Any direction or explanation (ideally with a solution) would be a gold mine for me.

Thanks, have a nice day :)