Pretty mind-blowing stuff from the Omniverse crew. A little rough around the edges, but so much potential!
Testing the AI-powered pose tracker in #nvidia #MadeinMachinima, feeding it sample videos ranging from karate katas to K-pop dancers. The model/rig is the SOL sample provided by NVIDIA. For such new tech, this works amazingly well; I can't wait to see future iterations. It's running entirely in real time on my Dell workstation with an RTX 3070. You can also stream video directly from a webcam, which I've yet to test.
Hi @JC_sculpture! Thank you for sharing this! I have passed this along to the development team!
No worries!
I feel like with some of these clips, slowing down the footage might actually help the pose tracker sample better, especially with improvements in AI and optical flow. I know real-time solving is a goal, but having more options to clean up the video source for a better solve would be very handy for offline, non-real-time content creation.
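To illustrate the slowdown idea, here's a minimal offline sketch of retiming a clip by frame duplication. This is a hypothetical helper, not an Omniverse or pose-tracker API; a real pipeline would more likely use an optical-flow interpolator (e.g. FFmpeg's minterpolate filter) for smoother in-between frames.

```python
# Hypothetical offline pre-processing helper (not part of the Omniverse
# pose tracker): retime a clip by duplicating frames, giving the solver
# more samples per unit of motion.

def retime_indices(n_frames: int, factor: float) -> list[int]:
    """Map each output frame of a slowed clip back to a source frame.

    factor > 1 slows the clip down (e.g. 2.0 = half speed).
    Naive duplication only; optical-flow interpolation would
    synthesize smoother in-between frames instead of repeating them.
    """
    n_out = int(n_frames * factor)
    return [min(int(j / factor), n_frames - 1) for j in range(n_out)]

# A 4-frame clip slowed to half speed: each source frame appears twice.
print(retime_indices(4, 2.0))  # → [0, 0, 1, 1, 2, 2, 3, 3]
```

The duplicated frames don't add new information on their own, but they let a per-frame solver spend more iterations per unit of motion, which is the effect I'm hoping slowed footage would have.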
Also, my previous suggestions of a volume constraint to aid solving, plus foot collision, would be super nice. You can see that most of the video sources have very clear perspective markers (floor/walls), which I feel would help the pose tracker solve Z/depth movement.