Can I use Isaac SDK as a pose estimator app integrated with a web application?


I want to use Isaac SDK as a “pose estimator” server for an augmented reality app that would communicate with the server through HTTP requests.

Question: is it possible at all?

For example:

  1. The mobile phone sends an image of a pre-trained object to the server where Isaac is running.

  2. Isaac's Pose CNN Decoder runs inference with the trained model (which was trained using Unity Sim).

  3. The pose output is read by some application (how, I don't know yet).

  4. The pose data (rotation + position) is sent back to the client mobile augmented reality application.
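To make steps 1 and 4 concrete, here is a rough sketch of what the mobile-side round trip could look like over plain HTTP. Note that the `/estimate_pose` endpoint name and the JSON pose schema are my own assumptions for illustration, not anything defined by Isaac SDK:

```python
import json
import urllib.request

def build_pose_request(server_url, image_bytes):
    """Build the HTTP POST carrying one camera frame (step 1).
    The endpoint path is hypothetical."""
    return urllib.request.Request(
        url=server_url + "/estimate_pose",
        data=image_bytes,
        headers={"Content-Type": "image/jpeg"},
        method="POST",
    )

def parse_pose_response(body):
    """Decode the pose JSON the server is assumed to return (step 4).
    Assumed shape: {"position": [x, y, z], "rotation": [qw, qx, qy, qz]}"""
    pose = json.loads(body)
    return pose["position"], pose["rotation"]
```

On the Unity side the same request would be issued with `UnityWebRequest`, but the payload and response shape would be the same.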

My question: would it be possible to integrate Isaac with a web-based API that receives an image and returns a pose?

Thanks in advance.

PS: I already trained a network to recognize a coffee machine and it worked; now I want to detect the object from an ARKit/ARCore app.

Just to illustrate the above question, this is a pose estimation proof of concept that I have successfully run.

Now I want to integrate this Isaac SDK inference with a Unity AR Foundation app that would send an image of the object over the web and receive the estimated pose of the real object.


This is feasible, sure. One simple approach is to implement an Isaac SDK codelet that acts as a bridge: it starts an HTTP server endpoint to receive requests carrying images, relays each image on a message channel so other nodes can process it, waits for the resulting pose, and then answers the HTTP request with that result.
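A minimal sketch of the HTTP side of such a bridge, using only the Python standard library. The actual codelet wiring (publishing the image on an Isaac channel and waiting for the Pose CNN Decoder's output) is replaced by a placeholder function, since those calls depend on your app graph:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def run_inference(image_bytes):
    """Placeholder: in a real bridge codelet this would publish the image
    on an Isaac message channel, wait for the Pose CNN Decoder node to
    respond, and return its output. Here it returns a fixed dummy pose."""
    return {"position": [0.0, 0.0, 0.0], "rotation": [1.0, 0.0, 0.0, 0.0]}

class PoseBridgeHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        image_bytes = self.rfile.read(length)    # step 1: receive the image
        pose = run_inference(image_bytes)        # steps 2-3: inference + read pose
        body = json.dumps(pose).encode("utf-8")  # step 4: return rotation + position
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), PoseBridgeHandler).serve_forever()
```

The key design point is that the HTTP handler blocks only one request thread while waiting for the inference result, so the rest of the Isaac app graph keeps running independently.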


Thanks, it worked very well. I needed to make some adjustments because the mobile device has a different screen resolution, but in the end it worked.