Set up cloud rendering using AWS/Farm/Queue


I am about to get started with setting up cloud instances on AWS to run Omniverse. As a first step, I would like to set up a single AWS GPU instance configured as a render node that I can use when running movie capture on my workstation.

Could someone give me some directions on how to approach this? Some specific questions are:

  • Recommended instance types and AMI (I would like to avoid using anything from the AWS Marketplace)
  • I generally like to administer cloud servers via Linux/terminal. Is a headless setup without any desktop environment possible/recommended, or do I need one?
  • Which components do I need on the instance: Drivers, Docker??, Farm Queue, Farm Agent, Nucleus??
  • Which component is responsible for the rendering itself, do I need any Kit application to do that?
  • Any other tips or suggestions?

Thanks a lot

I should add that part of the reason I want to delegate computation to the cloud is not only the rendering, but even more so the physics simulation. That leads to another question:

Does setting up Farm Queue and using Movie Capture help here, or will the physics simulation still be computed on my local workstation?

Hi Bruno. Checking with the team for you.

Hi Bruno,

I am very happy to see that you are planning on using Farm for more than just rendering as that is what we hoped people would start using it for.

  • For our AWS instances of OV Farm we use an Ubuntu 18.04 base image with a Packer build, using Ansible, to turn it into a scheduling instance (aka Queue). That AMI is currently not publicly available, but this should give you an idea of how you could approach it. I’ll see if we can share some of the playbooks.
    Instance-type-wise, we are currently using the compute-optimized instances. Their size will depend a bit on the number of agents you plan on hooking up, but the services are more CPU- than memory-bound.

  • Yes, that is possible. By default, the Launcher version ships with the UI so people know it has been launched, but for larger and automated deployments we recommend going headless. In the install directory of the Queue, under the apps folder, there is a file which removes the need for a GPU on the scheduling instances. In there you should also see that there are .kit files to run each of the services that form the scheduling portion individually. This is what we do internally to scale and distribute the load across multiple instances. (This was the intent of Farm; we ‘shoehorned’ it a bit into the Queue/Agent pattern we have in the Launcher, but for larger deployments and redundancy we’d recommend running them as individual services, fronted by a reverse proxy like NGINX.) We have some documentation in the works to cover this, but if you are comfortable with the terminal it should be fairly straightforward.

  • For the scheduling portion you’d only need Kit + the Farm Queue extensions; for the actual running of the jobs you’d need Kit, the Agent, drivers, as well as the application that you plan on running. At the moment we do not ship Docker containers publicly just yet, but those are in the works, along with Helm charts to run on Kubernetes.

  • By default the rendering is done by Create; the agent services launch an instance of Create and inject a few extensions to allow it to communicate back with the agent. (This is not a requirement; the agent can technically run any application.)

  • If you plan on going larger and distributed, we’d recommend spinning up a Postgres or MySQL/MariaDB instance for the task service, as well as a Redis instance to handle the agents and OpenSearch or Elasticsearch to handle the logs. These facilities should be shipped with OV Farm, and they can be configured via the settings in the .kit files.
    If this is of interest, let us know and we can share some additional details there.
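As a concrete starting point for the instance setup above, here is a rough sketch using the AWS CLI. The AMI ID, key name, and security group are placeholders you would substitute with your own, and the instance types are only examples (a compute-optimized type for the Queue, a GPU type such as the g4dn family for the render/simulation agent):

```shell
# Scheduling (Queue) node: compute-optimized, no GPU required.
# The AMI ID, key name, and security group below are placeholders.
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type c5.2xlarge \
  --key-name my-key \
  --security-group-ids sg-0123456789abcdef0

# Render/simulation agent node: needs an NVIDIA GPU (e.g. g4dn family).
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type g4dn.xlarge \
  --key-name my-key \
  --security-group-ids sg-0123456789abcdef0
```

You would then install the NVIDIA driver and the Farm components on the agent node, while the Queue node can stay GPU-less.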
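To illustrate the headless setup, a sketch of what launching the Queue and an Agent from the terminal might look like. The .kit file names and the settings path here are assumptions based on the description above, not the actual shipped names; check the apps folder of your install for the real ones:

```shell
# On the scheduling instance (no GPU): the apps folder of the Queue
# install contains .kit files for the individual services; the headless
# variant drops the UI/GPU requirement. File name is hypothetical.
./kit apps/omni.farm.queue.headless.kit --no-window

# On the GPU agent instance, point the agent at the Queue.
# The settings key below is hypothetical; the real key is in the
# agent's .kit file. Kit settings can be overridden with --/path=value.
./kit apps/omni.farm.agent.headless.kit --no-window \
  --/exts/omni.services.farm.agent/queue_url="http://<queue-ip>:8222"
```

Running each service from its own .kit file like this is also what allows spreading the scheduling portion across instances behind an NGINX reverse proxy.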
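For the larger, distributed setup, the backing services would be pointed at via settings in the .kit files, which can also be overridden on the command line. The setting keys below are assumptions purely for illustration; the actual keys are the ones defined in the shipped .kit files:

```shell
# Hypothetical setting keys for the backing services of a distributed
# Farm deployment; substitute the real keys from your .kit files.
./kit apps/omni.farm.queue.headless.kit --no-window \
  --/exts/omni.services.farm.tasks/db_url="postgresql://farm:secret@db-host:5432/farm" \
  --/exts/omni.services.farm.agents/redis_url="redis://redis-host:6379/0" \
  --/exts/omni.services.farm.logs/elasticsearch_url="http://es-host:9200"
```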

I hope that helps to get you going but let us know if we can help with anything else.

Thanks, I will try this after my holidays.