Hi All
Over the past few months I've once again been actively trying out the Omniverse platform.
I wanted to incorporate Omniverse tools into my workflow, since they help automate things and render faster.
For now, the format I can create is vlog videos.
Here are two videos that I made:
The first one is pretty rough, since I still had to figure out how to export the animation from Audio2Face and Audio2Gesture to Blender. But once I knew how, like in the second one, the process is pretty… I could say okay, not that smooth 😅😅. The voice is also synthetic, generated with Tortoise TTS, implemented as a Blender add-on by CEB Studios.
For the rendering I tried to explore more of the options and find the best quality possible using the Real-Time engine. I think I can say I kind of found it, but it still needs a lot of R&D. There aren't many tutorials using Real-Time; most use either Path Tracing or Machinima (I'm using Create, by the way, it's much easier).
But unfortunately, I'm still bothered by the texture settings. Yes, most of them export straight from Blender with no problem, but sometimes matching the Blender Eevee render takes some tinkering. Especially the opacity, since my character's dress is a bit transparent, but when I tried to set it up manually (since the exported material DOES NOT have an opacity value) it didn't go as expected: the dress just went poof, gone, not even transparent. I even used an opacity texture, but that didn't work as expected either. Maybe you guys have a tutorial or guide? (The documentation doesn't help at all.)
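In case it helps pinpoint the problem, this is roughly what my manual attempt looks like as a Script Editor sketch; the prim path and texture path are just placeholders for my dress material, and the input names are my best guess at the OmniPBR parameters:

```python
# Rough sketch from the Script Editor in Create. The prim path is a placeholder
# for my dress material, and the input names are my guess at the OmniPBR parameters.
import omni.usd
from pxr import Sdf, UsdShade

stage = omni.usd.get_context().get_stage()

# Placeholder path; the real shader lives under the character's Looks scope.
shader = UsdShade.Shader.Get(stage, "/World/Character/Looks/DressMaterial/Shader")

# As far as I understand, OmniPBR only respects opacity when enable_opacity is on.
shader.CreateInput("enable_opacity", Sdf.ValueTypeNames.Bool).Set(True)
shader.CreateInput("opacity_constant", Sdf.ValueTypeNames.Float).Set(0.5)

# I also tried pointing opacity_texture at a map instead of using the constant value.
shader.CreateInput("opacity_texture", Sdf.ValueTypeNames.Asset).Set(
    Sdf.AssetPath("./textures/dress_opacity.png")  # placeholder texture path
)
```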
But overall I still love the software and will use it more for future projects. I have one coming up, a cover video, where I want to use Omniverse mostly for the lighting and rendering.
Thank you!