Hi, I’m developing a web service that streams facial animations in real time. I followed the REST API implementation tutorial for Python from the official documentation. Instead of using the fullface model, I imported the mark_arkit_solved_default.usd model, because the fullface model generated an error.
Since I’m using CoreFullface, I’m getting an error with the following messages:
{'status': 'OK', 'message': 'Set track to english_voice_male_p3_anger.wav'}
{'status': 'OK', 'result': {'regular': ['/World/audio2face/Player'], 'streaming': []}, 'message': 'Suceeded to retrieve Player instances'}
{'status': 'ERROR', 'message': 'Not a Valid A2F Core Instance: /World/audio2face/CoreFullface'}
{'status': 'ERROR', 'message': 'None of the export meshes is connected to an Audio2Face Instance'}
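For context, here is a minimal sketch of the client call that produces the first log line above. It assumes the headless server is listening on its default localhost:8011 port and that the /A2F/Player/SetTrack endpoint and its field names match what the server's interactive /docs page shows; the player path is the one from my own scene, so treat both as placeholders:

```python
import json
from urllib import request

# Assumption: the headless Audio2Face REST server runs on this default port.
BASE_URL = "http://localhost:8011"


def set_track_payload(player_path, wav_name):
    """Build the JSON body for POST /A2F/Player/SetTrack
    (field names taken from the server's /docs page)."""
    return {"a2f_player": player_path, "file_name": wav_name}


def a2f_post(endpoint, payload):
    """POST a JSON payload to the headless server and return the parsed reply."""
    req = request.Request(
        BASE_URL + endpoint,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())


if __name__ == "__main__":
    # These values come from my scene; they will differ in other stages.
    reply = a2f_post(
        "/A2F/Player/SetTrack",
        set_track_payload("/World/audio2face/Player",
                          "english_voice_male_p3_anger.wav"),
    )
    print(reply)
```

The two ERROR lines appear when I pass /World/audio2face/CoreFullface as the a2f_instance to the later export calls, which is what makes me suspect the instance path (not the transport) is the problem.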
Could you clarify what type of Core instance I should be using with the ARKit model instead of CoreFullface? I don’t have an NVIDIA RTX graphics card, which is why I’m relying on the headless REST API. Could this be causing the error?
Additionally, is there a way to call the REST API without running the headless .bat application? Requiring additional software is a major limitation for the user experience. I considered WebRTC, gRPC, and the REST API, but none seems to fully meet my needs. Lastly, how can I prevent the API from resetting every day?
Thanks in advance!