Originally published at: https://developer.nvidia.com/blog/nvidia-ai-generating-motion-capture-animation-without-hardware-or-motion-data/
NVIDIA researchers developed a framework to build motion capture animation without the use of hardware or motion data by simply using video capture and AI.
NVDA Shareholder with an idea for development:
Holograms
Picture this and then imagine… A special occasion
is coming up (birthday, anniversary, wedding, graduation),
and you make a reservation for dinner at a personalized
venue with multiple choices for your entertainment.
An evening at this dinner event goes like this:
A preselected famous figure of your choosing (a hologram of
John Wayne, say) guides you to your parking space.
Upon entering the venue, a hostess greets you and asks for your
reservation name, which includes your list of options
for the evening. A hologram of Frankenstein appears and
escorts you to the dinner room of your choice.
A hologram of R2-D2 from Star Wars shows you to your table.
You have chosen the Storm Room venue. A hologram of an
approaching thunderstorm begins to appear, filling the
room with clouds, thunder, and lightning, complete with
sound effects.
A hologram of a larger-than-life flying hummingbird slowly
materializes above your table to take your order.
After dinner, you all decide to stop by the Wild West bar
for a drink. Holograms of Wild West characters having
gunfights appear in different areas of the room.
You sit down at the bar with its full mirrored back bar, and
in the mirror, various figures (chosen by you beforehand)
appear on the barstool next to you.
Imagine:
The ultimate 3D experience…Hologram fantasies of choice.
NVIDIA is one of very few companies with the technological
ability to make this happen.
Sincerely,
Bryan Mailliard
B &C Mailliard
480 694 1367
Looks great! Any plans to release the source code or an implementation of this motion capture research?
Hey @Sephiroth_FF, thanks for jumping into the forums! Currently, the researchers are not planning to release the source code; the method relies on an unreleased project that will be presented later this year. Keep an eye out for new research from the authors and a potential source code release afterwards!
I have been watching for updates for this project on github and on the research project page. Any idea when we will get updates and a source code release? Thanks!
Unfortunately, there are no further plans to release the code. I will update this thread when I have any news related to this project.
I was wondering if there is any news in this area. I was curious about the approach of using keyframe poses for characters and AI to fill in the in-betweens. Some of the robotics presentations at the recent GTC looked promising (e.g., the Google and Disney presentations). I was particularly interested in the Disney one, as it had more "personality" in the characters.
Any suggested reading or progress in this area?
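For context, here is what I mean by filling the in-betweens, sketched naively without any AI: spherically interpolating each joint's rotation between two keyframe poses. A learned in-betweener would replace this simple interpolant with a model that adds dynamics and personality. This is my own illustration, not code from the research above:

```python
import math

def slerp(q0, q1, t):
    """Spherical linear interpolation between two unit quaternions (w, x, y, z)."""
    dot = sum(a * b for a, b in zip(q0, q1))
    # Flip one quaternion if needed so we interpolate along the shorter arc.
    if dot < 0.0:
        q1 = tuple(-c for c in q1)
        dot = -dot
    if dot > 0.9995:
        # Nearly parallel: fall back to normalized linear interpolation.
        out = tuple(a + t * (b - a) for a, b in zip(q0, q1))
        n = math.sqrt(sum(c * c for c in out))
        return tuple(c / n for c in out)
    theta = math.acos(dot)
    s0 = math.sin((1.0 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return tuple(s0 * a + s1 * b for a, b in zip(q0, q1))

# Two keyframe rotations for one joint: identity, then a 90-degree turn about y.
key_a = (1.0, 0.0, 0.0, 0.0)
key_b = (math.cos(math.pi / 4), 0.0, math.sin(math.pi / 4), 0.0)

# Generate three evenly spaced in-between frames for that joint.
inbetweens = [slerp(key_a, key_b, t) for t in (0.25, 0.5, 0.75)]
```

In a full rig you would run this per joint across the skeleton; the appeal of the AI approach is that it learns motion styles instead of producing this uniform, robotic easing.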
Regarding camera pose estimation and more recent work, I suggest you check out some of the latest research from our Toronto AI Lab: Learning Physically Simulated Tennis Skills from Broadcast Videos | NVIDIA Toronto AI Lab