MetaHuman blendshapes: wrong representation

Hello,
I’m new to Audio2Face and could use some assistance. I’ve been trying to use the streaming Live Link plugin to connect my MetaHuman character to Audio2Face. However, I’ve noticed that the generated emotions don’t appear as expected on my character’s face, so I’m exploring how to improve the results in Audio2Face.
Here’s what I’ve done so far: I performed a character transfer and applied some random skin mesh fitting to create a unique set of blendshapes, just to experiment. I exported those blendshapes, added the usdSkel to my scene, and then used A2F Data Conversion to create my BlendshapeSolve.
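To sanity-check the export, I listed the blendshape targets bound to the exported usdSkel mesh with the USD Python API. (The file name and prim path below are placeholders for my scene; adjust them to yours.)

```python
# List the blendshapes bound to the exported usdSkel mesh,
# to confirm the targets the BlendshapeSolve will drive.
from pxr import Usd, UsdSkel

stage = Usd.Stage.Open("exported_head.usd")          # placeholder file name
mesh = stage.GetPrimAtPath("/World/character/head")  # placeholder prim path

binding = UsdSkel.BindingAPI(mesh)
names = binding.GetBlendShapesAttr().Get() or []
targets = binding.GetBlendShapeTargetsRel().GetTargets()

# Print each blendshape name and how many vertex deltas it carries
for name, target in zip(names, targets):
    shape = UsdSkel.BlendShape(stage.GetPrimAtPath(target))
    offsets = shape.GetOffsetsAttr().Get()
    print(name, len(offsets) if offsets else 0, "offsets")
```

The names and offset counts all look correct to me.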
The issue I’m facing: when I play the audio in Audio2Face, the face mesh looks weird, which is exactly the effect I wanted given my deliberately odd blendshapes. But when I connect it to my MetaHuman character, the facial movements don’t match what I see in Audio2Face; the mouth movement looks normal instead.
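For reference, here is my rough mental model of what the BlendshapeSolve does, as a small numpy/scipy sketch of a non-negative least-squares fit. This is my own illustration, not A2F’s actual code: each frame of the solved mesh motion gets approximated as a weighted sum of my custom blendshape deltas, and those per-frame weights are what gets streamed.

```python
import numpy as np
from scipy.optimize import nnls

def solve_frame_weights(B, d):
    """Fit non-negative blendshape weights for one frame.

    B: (num_vertices * 3, num_shapes) matrix of per-shape vertex deltas
    d: (num_vertices * 3,) displacement of the animated mesh from neutral
    Returns weights w with B @ w ~= d, clamped to the usual [0, 1] range.
    """
    w, _ = nnls(B, d)
    return np.minimum(w, 1.0)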
Could someone please guide me on how to get the same facial expressions on my MetaHuman character that I see in Audio2Face?

[Screenshots: Capture, Capture2]

I will check with our blendshapes expert and we will get back to you.