I am using Blender on a Linux platform with the Omniverse add-ons. Since I couldn't install Blender from the Omniverse Launcher, I installed the Linux build from Blender Builds on blender.org, then manually downloaded and added the Omniverse add-ons from the NVIDIA Omniverse GitHub repository.
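For reference, the manual add-on setup can also be scripted through Blender's built-in Python API. This is a minimal sketch, not the exact steps I followed; the zip path and module name are hypothetical placeholders for whichever add-on package was downloaded from GitHub:

```python
import bpy

# Install an add-on from a downloaded zip and enable it.
# The filepath and module name below are hypothetical placeholders.
bpy.ops.preferences.addon_install(filepath="/home/user/Downloads/omni_panel.zip")
bpy.ops.preferences.addon_enable(module="omni_panel")
bpy.ops.wm.save_userpref()  # keep the add-on enabled across Blender restarts
```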
However, when I provide a JSON file inside the animation cache path and import an animation clip, the keyframes do not appear.
I have another machine running Windows, where I downloaded Blender from the Omniverse Launcher. When I import the same JSON file there, I do get keyframes.
Thank you.
@vbrisebois @Richard3D
Is there any guide or documentation available that explains how to achieve the same functionality of Omniverse Blender in a Linux build? Specifically, I want to know how to properly install and configure Omniverse add-ons in Blender on Linux. Where can I find the necessary resources or instructions for this?
I also want to do the same thing: achieve the same functionality in my Linux build of Blender as the Blender that comes from the Omniverse Launcher.
Thanks in advance for any kind of help.
I have purchased an Omniverse Enterprise license, and I’ve created an API that uses Audio2Face and Blender to process animation data.
Now, I want to host this API on a remote server, so I can invoke it via a public URL.
For this, the server needs to have:
- Windows OS (since the Omniverse Launcher build of Blender is not available for Linux),
- Omniverse Launcher installed,
- Audio2Face and Blender installed via the Launcher.
My question is:
Does NVIDIA provide any Windows-based cloud/server hosting solution with GPU support where I can install the Omniverse Launcher, Audio2Face, and Blender, and deploy my API?
Yes, we support our own GDN hosting service for deploying web-based, GPU-accelerated content: Graphics Delivery Network (GDN) for Interactive 3D Experiences | NVIDIA
Thank you @Richard3D for your response.
I’ll look into the GDN documentation you provided.
I wanted to ask: can we use an NVIDIA Virtual Workstation to install the Omniverse Launcher, Audio2Face, and Blender (via the Launcher), and also run our Python project that invokes both Audio2Face and Blender, so that from a remote location I only need to call my API?
My Requirement:
I am working on a Python project that integrates Blender and Audio2Face. In this project, I have created a REST API. When this API is called, it triggers a sequence of background processes.
First, Audio2Face runs in headless mode (using the --no-window flag) to process a USD file and export a JSON file. Then, Blender is also launched in headless mode to read that JSON file and perform further processing.
The final output generated by Blender is then saved to NVIDIA Nucleus.
The entire workflow runs without any UI, and the only action needed is to call the REST API remotely to trigger the whole process.
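A minimal sketch of that sequence, assuming hypothetical paths and script names (only Audio2Face's --no-window flag and Blender's --background/--python flags come from the actual tools; the launch script, the two pipeline scripts, and the Flask framework here are stand-ins):

```python
import subprocess
from flask import Flask, jsonify

app = Flask(__name__)

# Hypothetical install paths -- adjust to the actual setup.
A2F_HEADLESS = "/opt/ov/audio2face/audio2face_headless.sh"  # placeholder launch script
BLENDER = "/usr/local/blender/blender"
A2F_EXPORT = "/srv/pipeline/a2f_export.py"          # exports the animation JSON
BLENDER_IMPORT = "/srv/pipeline/import_publish.py"  # reads the JSON, saves to Nucleus

@app.route("/process", methods=["POST"])
def process():
    # Step 1: Audio2Face in headless mode processes the USD file
    # and writes the JSON animation cache.
    subprocess.run([A2F_HEADLESS, "--no-window", "--exec", A2F_EXPORT], check=True)

    # Step 2: Blender in headless mode reads that JSON and performs
    # the remaining processing, publishing the result to Nucleus.
    subprocess.run([BLENDER, "--background", "--python", BLENDER_IMPORT], check=True)

    return jsonify({"status": "done"})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```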
I am sorry, but we really do not support Audio2Face here on this forum…
We’ve Moved NVIDIA A2F Support! We have new places to better help you get the support you need.
- Developers can submit tickets through the NVIDIA AI Enterprise program: NVIDIA Enterprise Customer Support
- Developers can discuss ACE through our NVIDIA Developer Discord Server: NVIDIA Developer