Digital Humans

Greetings.

I would like to ask you something.

I am very new to designing and applying this technology for people-facing services. Recently, we have been exploring the possibility of giving our A.I., which helps both young people and adults who suffer from bullying, an interface based on Digital Humans.

Given this, we consider it extremely important to build a personalized interface that does not trigger prejudice in the user and that gives them a way to express everything that troubles them.

The central issue is that although we have a lot of experience creating A.I. assistants, not only as chatbots but also, for the last 7 years, answering hotlines, this new stage has proven very complicated for us, and it is time to consult those who really know the subject.

Specifically, is it possible to create an interface with Omniverse that can interact in real time by voice (TTS/STT) with a model trained, for example, in IBM Watson Assistant?

I would really appreciate any guidance, help, or material that can give us a clear path to follow.

Greetings to all from Costa Rica.

Hello @vsolano! I have contacted the dev team to assist you! Thank you for reaching out to us!


Hi Wendy,

Thank you very much for attending to my query. I hope this is possible to do, since the social importance of this development is enormous.

Greetings.

Hello @vsolano. Some interesting goals you have.

The short answer is yes, you can do this in Omniverse. One additional piece that would make it easier is streaming TTS driving Audio2Face, so your digital avatar can be driven by TTS directly out of the box. It's a feature we are adding now and will release in a later version of the product.

We don't have an easy how-to for what you are asking, but we do have most of the parts for "interacting in real time by voice (TTS/STT) with a model trained, for example, in IBM Watson Assistant". We offer Riva for the conversational AI part, and you can use Audio2Face to animate the face from the voice that Riva generates.
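To make the flow concrete, here is a rough sketch of how the pieces could be glued together in Python. This is only a sketch, not an official sample: it assumes the `ibm-watson` and `nvidia-riva-client` packages, the credentials and voice name are placeholders, and the final hand-off to Audio2Face is left as a hypothetical helper (the streaming-audio gRPC sample that ships with Audio2Face is one way you could implement it).

```python
# Sketch: one conversational turn, STT -> Watson Assistant -> TTS -> Audio2Face.
# Assumes: pip install ibm-watson nvidia-riva-client, a Riva server at RIVA_URI,
# and your own IBM Watson Assistant credentials in the placeholders below.

import riva.client
from ibm_watson import AssistantV2
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

RIVA_URI = "localhost:50051"   # placeholder: your Riva server address
WATSON_APIKEY = "..."          # placeholder: IBM Cloud API key
WATSON_URL = "..."             # placeholder: Watson Assistant service URL
ASSISTANT_ID = "..."           # placeholder: Watson Assistant ID

riva_auth = riva.client.Auth(uri=RIVA_URI)
asr = riva.client.ASRService(riva_auth)
tts = riva.client.SpeechSynthesisService(riva_auth)

assistant = AssistantV2(version="2021-06-14",
                        authenticator=IAMAuthenticator(WATSON_APIKEY))
assistant.set_service_url(WATSON_URL)


def transcribe(utterance: bytes) -> str:
    """Recognize one recorded utterance with Riva ASR (non-streaming).
    Depending on your audio format you may also need to set encoding
    and sample_rate_hertz on the config."""
    config = riva.client.RecognitionConfig(language_code="en-US",
                                           max_alternatives=1)
    response = asr.offline_recognize(utterance, config)
    return response.results[0].alternatives[0].transcript


def ask_watson(text: str) -> str:
    """Send the transcript to Watson Assistant and return its text reply."""
    result = assistant.message_stateless(
        assistant_id=ASSISTANT_ID,
        input={"message_type": "text", "text": text},
    ).get_result()
    return result["output"]["generic"][0]["text"]


def speak(text: str) -> bytes:
    """Synthesize the reply with Riva TTS; returns raw PCM audio."""
    response = tts.synthesize(text,
                              voice_name="English-US.Female-1",  # placeholder
                              sample_rate_hz=44100)
    return response.audio


def push_to_audio2face(pcm_audio: bytes, sample_rate: int) -> None:
    """Hypothetical hand-off: feed the synthesized audio to Audio2Face.
    Adapting the streaming-audio gRPC sample shipped with Audio2Face is
    one option; the exact API depends on your A2F version."""
    raise NotImplementedError


def handle_turn(user_audio: bytes) -> None:
    transcript = transcribe(user_audio)
    reply = ask_watson(transcript)
    push_to_audio2face(speak(reply), sample_rate=44100)
```

The design point is simply that Watson Assistant only sees text, so it slots in between Riva's STT and TTS without caring how the avatar is rendered.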

Hope that helps, and share more details with us if you have further questions.


Hello Siyuen,

Thank you very much for the reply, and I really appreciate your comments.

Regarding my initial query, I understand that most of the pieces of the puzzle exist, but not yet 100% of them.

However, I am quite inexperienced in 3D design, and although I have made some "attempts", I would like to achieve this goal with a technology as incredible as Omniverse.

I hope it's not too much trouble, but could you point me to some additional guidance or documentation I could start with, so I can move forward while waiting for the new version of Audio2Face?

Again, thank you very much for everything, and greetings from Costa Rica.

Definitely. Here are all the tutorial videos for Audio2Face, which go into almost every aspect of the app.

And here is the A2F documentation. The new 2021.3 release is already out; you can see updated material there, as well as posts on the forum about the new release.
https://docs.omniverse.nvidia.com/app_audio2face/app_audio2face/overview.html


Nice article. It's hard to imagine our world without IT. We are especially interested in bringing RPA in the healthcare industry into people's lives. Such software can do all the basic things people do through a keyboard: it facilitates interaction with applications and processes essential data to optimize workflows. Robots are our future.


You are absolutely right. In our case, we started from the most basic (chatbots) and evolved into telephone support, both outbound and inbound, over 7 years, tremendously improving the user experience; and for the last couple of years we have had business relationships around the generation of digital humans.

So now we want to venture into working with NVIDIA's avatar platform and take a new leap in advances that can be applied in every area of everyday life.

Thank you very much, and I apologize for not writing sooner.

I have read all the documentation you sent, and it is awesome! Now I would like to know if there is any kind of tutorial, or a detailed, specialized guide, for the avatar generation that was presented this week.

Again, thanks for everything, and have a great weekend.

@vsolano, there is currently not much documentation for character generation. This is still a big topic, and we introduced a neural-network-based approach. You can take a look at this and even test out the demo with your own picture to see how the face-vid2vid avatar part may work.

(there is a demo link on the page)
https://nvlabs.github.io/face-vid2vid/