We are doing research on simulating humans around robots in residential homes.
Our current workflow uses a Linux computer running Isaac Sim and ROS2 for the robotics side, since ROS2 runs natively on Linux. We started looking into digital humans in Isaac Sim and can use them, but we want to make them more dynamic, and we came across Machinima's pose tracker, which is perfect for this.
One limitation we noticed, though (unless something is wrong on our side), is that Machinima on Linux doesn't support pose tracking. Our workaround idea was to use Nucleus: run Machinima on a Windows computer, join a live session hosted from the Linux computer, and have the robot in Isaac Sim monitor the digital human driven by Machinima's pose tracker. However, the digital human's poses aren't showing up in Isaac Sim while Machinima is tracking on the Windows computer.
So my question is: is it possible to "livestream" human poses through a live session like this?
If not, is there another way to achieve this? For example, through a connector, or would we need to write our own extension, if that's even possible?
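To make the "own extension" idea concrete, here is a rough sketch of the kind of fallback we're picturing if the live session can't carry the pose data itself: a small script in Isaac Sim's Script Editor that samples the character's world transform every frame and pushes it out over UDP. The prim path, destination IP, and port below are placeholders from our test scene, not anything official, and a real version would pull per-joint transforms via UsdSkel rather than just the root transform.

```python
# Rough sketch only -- run inside Isaac Sim's Script Editor.
# SKELETON_ROOT, DEST host, and port are placeholders, not official names.
import json
import socket

import omni.kit.app
import omni.usd
from pxr import Usd, UsdGeom

SKELETON_ROOT = "/World/DigitalHuman"  # placeholder: root xform of the character
DEST = ("192.168.1.50", 9870)          # placeholder: machine running ROS2

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def _on_update(event):
    stage = omni.usd.get_context().get_stage()
    prim = stage.GetPrimAtPath(SKELETON_ROOT)
    if not prim or not prim.IsValid():
        return
    # Sample the character's current world transform and ship it out as JSON.
    xform = UsdGeom.Xformable(prim).ComputeLocalToWorldTransform(
        Usd.TimeCode.Default())
    t = xform.ExtractTranslation()
    sock.sendto(json.dumps({"pos": [t[0], t[1], t[2]]}).encode(), DEST)

# Poll once per app update; keep a reference so the subscription stays alive.
subscription = (
    omni.kit.app.get_app()
    .get_update_event_stream()
    .create_subscription_to_pop(_on_update, name="pose_stream_sketch")
)
```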
Essentially, we want to control a digital human by capturing real poses in real time so we can observe how our robot responds in simulation. We are trying to avoid a "capture, record, export" kind of workflow, since that eliminates the real-time dynamics we need.
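For completeness, on the ROS2 side we imagine a small bridge node on the Linux machine that turns those UDP packets into PoseStamped messages for the robot stack to consume. Again, the topic name, frame ID, and port are placeholders of ours:

```python
# Rough sketch only -- bridges the UDP packets from the sketch above into
# ROS2 PoseStamped messages. Topic, frame_id, and port are placeholders.
import json
import socket

import rclpy
from rclpy.node import Node
from geometry_msgs.msg import PoseStamped


class PoseBridge(Node):
    def __init__(self):
        super().__init__("digital_human_pose_bridge")
        self.pub = self.create_publisher(PoseStamped, "/digital_human/pose", 10)
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        self.sock.bind(("0.0.0.0", 9870))  # placeholder port
        self.sock.setblocking(False)
        # Poll the socket at ~60 Hz without blocking the executor.
        self.create_timer(1.0 / 60.0, self.poll)

    def poll(self):
        try:
            data, _ = self.sock.recvfrom(4096)
        except BlockingIOError:
            return
        pos = json.loads(data)["pos"]
        msg = PoseStamped()
        msg.header.stamp = self.get_clock().now().to_msg()
        msg.header.frame_id = "world"
        msg.pose.position.x, msg.pose.position.y, msg.pose.position.z = pos
        self.pub.publish(msg)


def main():
    rclpy.init()
    rclpy.spin(PoseBridge())
    rclpy.shutdown()


if __name__ == "__main__":
    main()
```

That said, we would much rather use a supported path (live session sync, a connector, or an official streaming mechanism) than maintain something like this ourselves.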
Thanks for any feedback!