Simple render USD scene to PNG

In the spirit of the Kit quick start guide, it would be awesome to have a quick tutorial / example that introduces simply using Python to render a USD scene to a PNG. Even better: if I have a list of poses for a camera, how could I render them out?

Seems like this should be trivial. I’m 6 hours into watching videos, poking around, and reading forums, and I haven’t made anything yet. Having trouble getting those first 30 lines of relevant material together and working.

Hi,

Hope you are doing well and thank you for reaching out.

I have attached a quick example of how to render a USD scene and generate a PNG, which will hopefully help you tackle some of the other ideas you might have.

This uses the Rendering Service, which in turn uses the Capture Extension within Kit.
The Capture Extension is also what is used by the embedded, but not scriptable, Movie Maker (under the menu Rendering → Movie Maker).

You can get the Rendering Service extension from the Extension Manager (Window → Extension Manager, search for Rendering Service). Once you have it installed and enabled, you can run the following snippet from the Script Editor (Window → Script Editor):

import asyncio
import json

from omni.services.core import main as _main
from omni.services import client

settings = """{
  "usd_file": "omniverse://localhost/NVIDIA/Samples/Astronaut/Astronaut.usd",
  "render_start_delay": 10,
  "render_stage_load_timeout": 50,
  "render_settings": {
    "camera": "/Root/LongView",
    "range_type": 0,
    "capture_every_nth_frames": 1,
    "fps": "24",
    "start_frame": 1,
    "end_frame": 1,
    "start_time": 0,
    "end_time": 0,
    "res_width": 1920,
    "res_height": 1080,
    "render_preset": 0,
    "debug_material_type": 0,
    "spp_per_iteration": 0,
    "path_trace_spp": 64,
    "ptmb_subframes_per_frame": 5,
    "ptmb_fso": 0,
    "ptmb_fsc": 0.5,
    "output_folder": "C:/render_output",
    "file_name": "Capture",
    "file_name_num_pattern": ".####",
    "file_type": ".png",
    "save_alpha": false,
    "hdr_output": false
  },
  "load_async": true
}"""

def main():
  render_settings = json.loads(settings)
  _services = client.AsyncClient("local://", _main.get_app())

  asyncio.ensure_future(_services.render.run(**render_settings))

main()

The above script will try to render the default Astronaut scene using the /Root/LongView camera.
It will render frame 1 at 64 samples per pixel (path_trace_spp) in path-traced mode (render_preset) and write it out to a folder on the C: drive. The output folder can also be a path on Nucleus or on Linux.
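Regarding the original question about a list of camera poses: since each settings payload names a single camera, one approach (a sketch only; the camera prim paths and file-name pattern below are hypothetical, and the abbreviated payload omits most of the settings shown above) is to build one settings dictionary per camera and submit them to the service one at a time:

```python
import copy
import json

# Abbreviated base payload; extend it with the full render_settings from above.
BASE_SETTINGS = json.loads("""{
  "usd_file": "omniverse://localhost/NVIDIA/Samples/Astronaut/Astronaut.usd",
  "render_settings": {
    "camera": "/Root/LongView",
    "file_name": "Capture",
    "file_type": ".png"
  },
  "load_async": true
}""")

def settings_for_cameras(base, camera_paths):
    """Yield one settings dict per camera, each with a unique output file name."""
    for index, camera in enumerate(camera_paths):
        settings = copy.deepcopy(base)  # don't mutate the shared base payload
        settings["render_settings"]["camera"] = camera
        settings["render_settings"]["file_name"] = f"Capture_cam{index:02d}"
        yield settings

# Hypothetical camera prim paths -- replace with cameras from your stage.
payloads = list(settings_for_cameras(BASE_SETTINGS, ["/Root/LongView", "/Root/TopView"]))
```

Each payload can then be passed to the service one at a time, e.g. awaited via `_services.render.run(**payload)` in the Script Editor, or POSTed to the /render/run endpoint described below.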

You can also run this from the command line once you have installed the Rendering Service.
If you save the snippet to a file called do_render.py, for example, you can then run:
/omni.create.next.kit.bat (.sh for Linux) --enable omni.services.render --exec "do_render.py"

You can also use this service from outside Create (Create needs to be running, but you can call into it from external applications).
If you browse to http://localhost:8011/docs after launching Create and enabling the Rendering Service, you will see a /render/run endpoint. You can call it directly from the web browser, or from any other script that can make an HTTP POST request, by passing it the settings dictionary.

For example:

import json
import requests

settings = """{
  "usd_file": "omniverse://localhost/NVIDIA/Samples/Astronaut/Astronaut.usd",
  "render_start_delay": 10,
  "render_stage_load_timeout": 50,
  "render_settings": {
    "camera": "/Root/LongView",
    "range_type": 0,
    "capture_every_nth_frames": 1,
    "fps": "24",
    "start_frame": 1,
    "end_frame": 1,
    "start_time": 0,
    "end_time": 0,
    "res_width": 1920,
    "res_height": 1080,
    "render_preset": 0,
    "debug_material_type": 0,
    "spp_per_iteration": 0,
    "path_trace_spp": 64,
    "ptmb_subframes_per_frame": 5,
    "ptmb_fso": 0,
    "ptmb_fsc": 0.5,
    "output_folder": "C:/render_output",
    "file_name": "Capture",
    "file_name_num_pattern": ".####",
    "file_type": ".png",
    "save_alpha": false,
    "hdr_output": false
  },
  "load_async": true
}"""

def main():
  render_settings = json.loads(settings)
  # This will block until the render is done:
  resp = requests.post("http://localhost:8011/render/run", json=render_settings)
  print(resp.status_code)

main()
Hope that helps and please let me or @dfagnou know if you have any further questions.

Thanks,
Jozef


One Line Summary: Thank you. I am still unable to render a frame and need a little more help.

Update:
With Kit version 100.1
I did not find a “Rendering Service” extension.
But I was able to find these extensions and enable them:

  1. omni.services.core
  2. omni.services.client

Next: Omniverse doesn’t ship with the requests module, so a simple command line pip install requests did the trick.

Cleaning up whitespace and tabs, I got to a 404 error.
Then I changed the port to my Nucleus port and got a 405 error.

Then I realized I was conflating your two sets of instructions. And I was convinced that there must be a Rendering Service.

Sure enough… …found the Rendering Service in Create’s extensions list.
I tried your instructions for rendering the frame in the Script Editor from Create.
AFAIK it executes without error, but no frame appears to be rendered.

Reran a few times… I see an error:
“Task exception was never retrieved
future: <Task finished coro…”

(There doesn’t appear to be a way to copy and paste from the Script Editor output. This is officially a feature request.)