Audio2Face RTX - Interactive (Path Tracing) is not working

Hi Team,
I am exploring A2F version 2022.1.2.
After creating an A2F pipeline with a 3D mesh, I ran into a rendering issue.
When I switch the renderer from Real-Time to Interactive (Path Tracing), the 3D object does not render; even after waiting a long time it is still buffering.
Please help me with a solution.
Could you also suggest whether any particular render settings are required?
Reference screenshot attached.


Hello @jebastin.raja! I’ve contacted the dev team to take a look at this. Can you attach a copy of your logs from here: C:\Users\<USERNAME>\.nvidia-omniverse\logs\Kit\Audio2Face, just in case it contains information that may be helpful to the team.

Hi, I have uploaded the log file. Please check and let me know.

kit_20221118_113628.log (921.3 KB)

Hello @jebastin.raja

A2F is not really intended for rendering output in path-traced mode.

The workflow is to export your talking face as a cache and then combine it in Machinima with a full-body animation.

See the very last video here, where I show the full process:

After you have watched the video above, here is a video about the full process.
Exporting from A2F to Machinima starts at 16:40.

@jebastin.raja thank you for reporting this issue.
I can confirm that this issue with the renderer set to RTX - Interactive (Path Tracing) will no longer occur in a future release of Audio2Face.
In the meantime, you can use RTX Real-Time mode to avoid it.


Hi @jebastin.raja ,
Thank you for reporting and sorry for the inconvenience.
This issue was reported before, and a fix will be available in our next release, as @PhilippeR replied.
That issue aside, as @pekka.varis pointed out, the current A2F workflow doesn’t work well with path tracing, especially because our audio player is not tied to Movie Capture, which uses the global timeline. However, you can export the time-sampled cache; at that point you are no longer using the live A2F solve, and you should be able to use PT to render those caches.


@jebastin.raja btw, when I export the USD cache and then connect the points to a full-body character, I should be able to render inside A2Face, right?

The only downside would be missing A2Gesture, since it’s in Machinima…