Hi all,
We’re building Omnia, a spatial software and data platform that captures real-world human behavior through XR devices (Apple Vision Pro, Meta Quest, and wearables) for training the next generation of embodied AI agents.
We’re currently developing a dataset of multi-step physical manipulation tasks performed by real users — things like repairs, tool use, product setup, and guided spatial tutorials. This data is structured to plug into simulation environments like Isaac Sim to support motion generation, imitation learning, affordance training, and more.
🚀 What We’re Exploring:
• Best formats (JSON, USD, ROS bags, etc.) for importing captured episodes into simulation
• How to structure real-world behavior episodes for sim2real (a rough sketch follows this list)
• Labeling needs (object, affordance, gaze, error recovery)
• Use cases: fine-tuning agents, scene replay, benchmark creation
• Whether labs would benefit from on-demand real-world task recording with Vision Pro
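To make the episode-structure question concrete, here is a minimal, hypothetical sketch of how one captured episode might be serialized to JSON before import into a simulator. Every field name (frames, hand_pose, affordance, gaze_target, error_recovery) is an illustrative assumption, not a finalized Omnia or Isaac Sim schema; we’d love feedback on what your pipelines actually expect.

```python
# Hypothetical sketch of one captured XR episode serialized to JSON.
# Field names and structure are assumptions for discussion, not a final schema.
import json
from dataclasses import dataclass, field, asdict
from typing import List, Optional

@dataclass
class Frame:
    t: float                           # seconds since episode start
    hand_pose: List[float]             # wrist position + quaternion [x, y, z, qx, qy, qz, qw]
    object_id: str                     # label of the object being manipulated
    affordance: str                    # e.g. "grasp", "twist", "insert"
    gaze_target: Optional[str] = None  # object the user is looking at, if tracked
    error_recovery: bool = False       # True if this frame belongs to a correction

@dataclass
class Episode:
    episode_id: str
    device: str                        # "vision_pro", "quest_3", ...
    task: str                          # natural-language task description
    frames: List[Frame] = field(default_factory=list)

# Build one toy episode and dump it as JSON for import into a sim pipeline.
ep = Episode(
    episode_id="demo-0001",
    device="vision_pro",
    task="Replace the battery in a handheld drill",
    frames=[
        Frame(t=0.0, hand_pose=[0.1, 0.9, 0.3, 0, 0, 0, 1],
              object_id="drill_battery", affordance="grasp",
              gaze_target="drill_battery"),
        Frame(t=0.5, hand_pose=[0.1, 0.8, 0.35, 0, 0, 0, 1],
              object_id="drill_battery", affordance="insert"),
    ],
)

print(json.dumps(asdict(ep), indent=2))
```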
🔗 We’ve created a short survey to gather feedback from researchers, devs, and simulation teams:
👉 https://forms.gle/4LRrAg9JLdP9zUs49
If you’re building in Isaac Sim or working on robotic task training, we’d love your input. Feel free to drop a reply if you’re open to collaborating, or reach out directly at contact@theomnia.io.
Thanks,