I want to use the Isaac SDK as a “pose estimator” server for an augmented reality app that communicates with the server over HTTP.
The flow I have in mind:

1. The mobile phone sends an image of a pre-trained object to the server where Isaac is running.
2. The server runs Isaac Pose CNN Decoder inference on the trained model (which was trained using Unity Sim).
3. The pose output is read from some application (how exactly, I don’t know yet).
4. The pose data (rotation + position) is sent back to the client mobile augmented reality application.
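To make step 4 concrete, here is a rough sketch of the kind of HTTP wrapper I am imagining. Nothing here is Isaac-specific: Flask is just an assumption for the web layer, and estimate_pose() is a placeholder for wherever the actual Pose CNN Decoder inference would get hooked in.

```python
# Rough sketch only. Assumptions: Flask as the web layer; estimate_pose()
# is a placeholder standing in for the real Isaac Pose CNN Decoder inference.
import io

from flask import Flask, jsonify, request
from PIL import Image

app = Flask(__name__)

def estimate_pose(image):
    # Placeholder: this is where the trained model would be run via Isaac.
    # Returning a dummy identity pose so the sketch is self-contained.
    rotation = [1.0, 0.0, 0.0, 0.0]  # quaternion (w, x, y, z)
    translation = [0.0, 0.0, 0.0]    # position in meters
    return rotation, translation

@app.route("/pose", methods=["POST"])
def pose():
    # The phone POSTs a camera frame (e.g. JPEG bytes) as the request body.
    image = Image.open(io.BytesIO(request.get_data()))
    rotation, translation = estimate_pose(image)
    # Reply with rotation + position so the AR client can place content.
    return jsonify({"rotation": rotation, "translation": translation})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```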
My question is: would this be possible, i.e. integrating Isaac with a web API that receives an image and returns a pose?
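For completeness, this is roughly what the client side of the exchange would look like (shown in Python for brevity; the real client would be ARKit/ARCore code, and the server address is made up):

```python
# Client-side sketch of the request/response cycle (Python stand-in
# for the actual ARKit/ARCore client).
import requests

with open("frame.jpg", "rb") as f:
    resp = requests.post(
        "http://192.168.1.10:5000/pose",  # hypothetical server address
        data=f.read(),
        headers={"Content-Type": "image/jpeg"},
    )

pose = resp.json()
print(pose["rotation"], pose["translation"])  # quaternion + position
```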
Thanks in advance.
PS: I already trained a network to recognize a coffee machine, and it worked; now I want to do the same from an ARKit/ARCore app and detect the object.