I have followed the steps in the linked guide exactly. However, I cannot find any instance with the GetInstances API.
Do I need to set up a server, or can I use localhost directly?
Can you share how you run the code? I just tested using this code and it works as expected:
python c:/.../test_client.py "c:/.../some_audio.wav" "/World/audio2face/PlayerStreaming"
I ran the exact same command as yours. I think there is a server connection issue. I called the GetInstances endpoint here: http://localhost:8011/docs#/Player/get_player_instances_A2F_Player_GetInstances_get,
but got no instances in the result. I have attached the return values below, as well as the A2F interface. Please let me know where I went wrong. Thank you.
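For anyone else checking this, here is a minimal sketch of querying that endpoint from Python instead of the interactive docs page. The response schema (a `result` dict with `regular` and `streaming` lists) is an assumption based on the endpoint name — verify it against the schema shown at http://localhost:8011/docs before relying on it:

```python
# Hypothetical sketch: query the A2F headless REST endpoint
# GET /A2F/Player/GetInstances and look for a streaming player.
# The response shape below is an assumption -- check the /docs page.
import json
from urllib.request import urlopen

A2F_BASE_URL = "http://localhost:8011"  # default A2F headless REST port


def list_player_instances(base_url: str = A2F_BASE_URL) -> dict:
    """Call GET /A2F/Player/GetInstances and return the parsed JSON."""
    with urlopen(f"{base_url}/A2F/Player/GetInstances") as resp:
        return json.load(resp)


def streaming_instances(payload: dict) -> list:
    """Extract streaming-player prim paths from a GetInstances payload.

    Assumes a payload like:
      {"status": "OK", "result": {"regular": [...], "streaming": [...]}}
    """
    return payload.get("result", {}).get("streaming", [])


if __name__ == "__main__":
    players = streaming_instances(list_player_instances())
    if not players:
        print("No streaming player found -- check the node connections.")
    else:
        print("Streaming players:", players)
```

If this prints no streaming players even though the StreamingPlayer node exists in the Stage, the node graph wiring is the first thing to check (which turned out to be the issue later in this thread).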
Just to make sure: are all the required libraries installed?
@xiuyi.qian Thank you for reporting.
I can confirm the issue: Player/GetInstances only returns regular audio player instances, not Streaming Player instances.
We will investigate the fix for this.
@esusantolim Thanks for the reply. However, besides the streaming node, I have also created a regular player. You can find it among the listed nodes in my screenshot; they are called player and corefullface_01. Would you be willing to share the whole node graph? I think there might be something wrong with my server connection or node linking. Thanks.
@xiuyi.qian In my case I just took the default scene and started adding the audio player nodes without connecting them anywhere.
But feel free if you want to share your scene for us to take a look.
I noticed you also have instanced copies of the nodes, which I don't think is supported (those that have a blue "I" on the node icon in the Stage view).
Thanks for the reply! I disabled the instanceable flag but still fail to get any instances from the Kit service.
Hi @xiuyi.qian ,
Can you record your screen and make a video to show your steps (let’s try with the default scene first) and also share your A2F log file for that session?
Does other functionality of the web UI work?
I think I have solved the issue. Thank you
That’s awesome! We would greatly appreciate it if you shared how you resolved this; it would be helpful for others who hit the same issue.
I had just messed up a link between the nodes; you need to make sure all of the nodes are correctly connected. In my case, I had misconnected the Import USD prim data node with the Set Points mesh.
@Ehsan.HM Sorry for one more question: is it possible to control the Play button of A2F from the backend? I am currently building a digital avatar and hope to make the whole procedure automatic. Thank you.
Unfortunately, not currently. But we’re discussing adding Play/Stop methods to the REST API.
I saw in the tutorial video that the “export geometry cache” option might be able to make it play?
Sorry, I don’t quite understand what you mean.
I think every time we export the cache, A2F automatically plays the audio file through the pipeline.
@Ehsan.HM When I try to push the audio to Audio2Face, an error occurs. I am not sure whether the cache error is related to it, but I checked the default A2F port, 50051, and it says it is listening, and the streaming instance is linked correctly. Is there any other potential mistake I might be making? Thank you so much for the help.
I just used test_client.py to import the audio file, and Nucleus is running properly. Thanks again.