Simple render USD scene to PNG

In the spirit of the Kit quick start guide, it would be awesome to have a quick tutorial / example that introduces simply using Python to render a USD scene to a PNG. Even better: if I have a list of poses for a camera, how could I render them out?

Seems like this should be trivial. I’m 6 hours into watching videos, poking around, and reading forums, and I haven’t made anything yet. I’m having trouble getting those first 30 lines of relevant material together and working.

Hi,

Hope you are doing well and thank you for reaching out.

I have attached a quick example of how to render a USD scene and generate a PNG that will hopefully help you with tackling some of the other ideas you might have.

This uses the Rendering Service which in turn uses the Capture Extension within Kit.
The Capture Extension is also what is used by the embedded, but not scriptable, Movie Maker (under the menu Rendering → Movie Maker).

You can get the Rendering Service extension from the Extension Manager (Window → Extension Manager, search for Rendering Service). Once you have it installed and enabled, you can use the following snippet from the Script Editor (Window → Script Editor):

import asyncio
import json

from omni.services.core import main as _main
from omni.services import client

settings = """{
  "usd_file": "omniverse://localhost/NVIDIA/Samples/Astronaut/Astronaut.usd",
  "render_start_delay": 10,
  "render_stage_load_timeout": 50,
  "render_settings": {
    "camera": "/Root/LongView",
    "range_type": 0,
    "capture_every_nth_frames": 1,
    "fps": "24",
    "start_frame": 1,
    "end_frame": 1,
    "start_time": 0,
    "end_time": 0,
    "res_width": 1920,
    "res_height": 1080,
    "render_preset": 0,
    "debug_material_type": 0,
    "spp_per_iteration": 0,
    "path_trace_spp": 64,
    "ptmb_subframes_per_frame": 5,
    "ptmb_fso": 0,
    "ptmb_fsc": 0.5,
    "output_folder": "C:/render_output",
    "file_name": "Capture",
    "file_name_num_pattern": ".####",
    "file_type": ".png",
    "save_alpha": false,
    "hdr_output": false
  },
  "load_async": true
}"""

def main():
    render_settings = json.loads(settings)
    # Client for talking to services running inside this Kit process.
    _services = client.AsyncClient("local://", _main.get_app())

    # Schedule the render on Kit's event loop; it runs asynchronously.
    asyncio.ensure_future(_services.render.run(**render_settings))

main()

The above script will try to render the default Astronaut scene using the /Root/LongView camera.
It will render frame 1 at 64 samples per pixel (path_trace_spp) in path-traced mode (render_preset) and write it out to a folder on the C drive. The output folder can also be a path on Nucleus or on Linux.
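
On the original question of rendering out a list of camera poses: one possible approach (just a sketch, not a confirmed workflow; the pose values, prim paths, and file names below are made up for illustration) is to author one camera per pose with the USD Python API, save the stage, and then call the render service once per camera, reusing the settings string from the snippet above:

import asyncio
import json

from pxr import Gf, Usd, UsdGeom

from omni.services.core import main as _main
from omni.services import client

# Hypothetical poses: (translation, XYZ rotation in degrees) per camera.
poses = [
    (Gf.Vec3d(0, 100, 400), Gf.Vec3f(0, 0, 0)),
    (Gf.Vec3d(300, 100, 300), Gf.Vec3f(0, 45, 0)),
]

# Author one camera prim per pose and save them into the scene so the
# render service can find them when it loads the file.
stage = Usd.Stage.Open("omniverse://localhost/NVIDIA/Samples/Astronaut/Astronaut.usd")
camera_paths = []
for i, (translation, rotation) in enumerate(poses):
    cam = UsdGeom.Camera.Define(stage, f"/Root/PoseCam_{i}")
    xform = UsdGeom.Xformable(cam.GetPrim())
    xform.AddTranslateOp().Set(translation)
    xform.AddRotateXYZOp().Set(rotation)
    camera_paths.append(str(cam.GetPath()))
stage.GetRootLayer().Save()

async def render_all():
    services = client.AsyncClient("local://", _main.get_app())
    for i, cam_path in enumerate(camera_paths):
        render_settings = json.loads(settings)  # the settings string from above
        render_settings["render_settings"]["camera"] = cam_path
        render_settings["render_settings"]["file_name"] = f"Capture_pose_{i}"
        # Assumes render.run does not return until the frame is written;
        # if it returns early, you may need to wait between calls.
        await services.render.run(**render_settings)

asyncio.ensure_future(render_all())

Each pose would then render to its own Capture_pose_<i>.####.png in the output folder.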

You can also run this from the commandline once you have installed the rendering service.
If you save the snippet to a file called do_render.py, for example, you can then run:
/omni.create.next.kit.bat (.sh for Linux) --enable omni.services.render --exec "do_render.py"

You can also use this service from outside Create (Create would need to be running, but you can call into it from external applications).
If you browse to http://localhost:8011/docs after launching Create and enabling the rendering service, you will see a /render/run endpoint. You can call it directly from the web browser, or from any other script that can make an HTTP POST request, by passing it the settings dictionary.

i.e.:

import json
import requests

settings = """{
  "usd_file": "omniverse://localhost/NVIDIA/Samples/Astronaut/Astronaut.usd",
  "render_start_delay": 10,
  "render_stage_load_timeout": 50,
  "render_settings": {
    "camera": "/Root/LongView",
    "range_type": 0,
    "capture_every_nth_frames": 1,
    "fps": "24",
    "start_frame": 1,
    "end_frame": 1,
    "start_time": 0,
    "end_time": 0,
    "res_width": 1920,
    "res_height": 1080,
    "render_preset": 0,
    "debug_material_type": 0,
    "spp_per_iteration": 0,
    "path_trace_spp": 64,
    "ptmb_subframes_per_frame": 5,
    "ptmb_fso": 0,
    "ptmb_fsc": 0.5,
    "output_folder": "C:/render_output",
    "file_name": "Capture",
    "file_name_num_pattern": ".####",
    "file_type": ".png",
    "save_alpha": false,
    "hdr_output": false
  },
  "load_async": true
}"""

def main():
    render_settings = json.loads(settings)
    # This will wait until the render is done:
    resp = requests.post("http://localhost:8011/render/run", json=render_settings)
    print(resp.status_code)

main()

Hope that helps and please let me or @dfagnou know if you have any further questions.

Thanks,
Jozef

One Line Summary: Thank you. I am still unable to render a frame and need a little more help.

Update:
With Kit version 100.1
I did not find a “Rendering Service” extension.
But I was able to find and enable these extensions:

  1. omni.services.core
  2. omni.services.client

Next, Omniverse doesn’t ship with the requests module, so a simple pip install requests from the command line did the trick.

After cleaning up whitespace and tabs, I got a 404 error.
Then I changed the port to my Nucleus port and got a 405 error.

Then I realized I was conflating your instructions. Still, I was convinced that there must be a rendering service.

Sure enough… I found the Rendering Service in Create’s extensions list.
I tried your instructions for the frame, running in the Script Editor from Create.
AFAIK it executes without error, but no frame appears to be rendered.

Reran a few times… I see an error:
“Task exception was never retrieved
future: <Task finished coro…”

(There doesn’t appear to be a way to copy and paste from the Script Editor output. This is officially a feature request.)

Hi,
I have a similar goal: generating a preview thumbnail from a .usd.
I’ve tried your snippet, but it threw an error: ServiceNotFoundError: /render/run cannot be found.
I think this is a version problem; I’m using Create 2021.3.8 and the related Kit is 102.1.1.
Also, I have already enabled all the extensions with ‘rendering’ in their name, but there is no extension named “Rendering Service”, and localhost:8011 is not available at all.

Is there any update on this rendering service right now?

One more question: I also want to show a preview of an animation file. The simplest way I can think of is to display a related .gif file, but I don’t know how to do that using omni.ui. Could you please give me some advice? An example would be great.

Thanks a lot !

Hello @CeaserWayneCSW! Thanks for your interest in Omniverse! I need to reach out to the team to answer your first question. I’ll post back here when I have more information.

As for your second question, I posted some help in your other forum post: Display dynamic image(gif). I have asked the development team for further help on this as well.
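
In the meantime, here is a rough sketch of one possible approach for the gif preview. It assumes the gif’s frames have already been extracted to numbered .png files (omni.ui does not play gifs directly, as far as I know); the paths and timing below are illustrative only:

import omni.ui as ui
import omni.kit.app

# Hypothetical pre-extracted frames (e.g. via Pillow or ffmpeg).
frame_paths = [f"C:/preview/frame_{i:03d}.png" for i in range(24)]

window = ui.Window("Animation Preview", width=280, height=280)
with window.frame:
    image = ui.Image(frame_paths[0], width=256, height=256)

state = {"frame": 0, "ticks": 0}

def _on_update(event):
    # Advance one frame every few app updates; tune to taste.
    state["ticks"] += 1
    if state["ticks"] % 5 == 0:
        state["frame"] = (state["frame"] + 1) % len(frame_paths)
        image.source_url = frame_paths[state["frame"]]

# Keep a reference to the subscription or it will be garbage collected.
subscription = (
    omni.kit.app.get_app()
    .get_update_event_stream()
    .create_subscription_to_pop(_on_update, name="gif preview tick")
)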

Thank you!
I have noticed the reply to my second question; I will try the advice, and I’m looking forward to an answer to the first one ^^

@jeenbergen
Thank you for your reply. I am still unable to render a frame.
With Create 2021.3.8, I did not find a “Rendering Service” extension.
I ran the following snippet from the Script Editor, and the console showed this error:

2022-02-23 02:49:08  [Error] [asyncio] [e:\nvidiaov\create-2021.3.8\kit\python\lib\asyncio\base_events.py:1619] Task exception was never retrieved
future: <Task finished coro=<BaseStandin.__call__() done, defined at e:\nvidiaov\create-2021.3.8\kit\extscore\omni.services.transport.client.base\omni\services\transport\client\base\consumer.py:29> exception=ServiceNotFoundError('/render/run cannot be found. Make sure the url is correct and the right arguements are passed')>
Traceback (most recent call last):
  File "e:\nvidiaov\create-2021.3.8\kit\extscore\omni.services.transport.client.base\omni\services\transport\client\base\consumer.py", line 34, in __call__
    return await self._transport(uri, *args, __method__=__method__, **kwargs)
  File "e:\nvidiaov\create-2021.3.8\kit\extscore\omni.services.client\omni\services\client\transports\local.py", line 71, in __call__
    f"{uri} cannot be found. Make sure the url is correct and the right arguements are passed"
omni.services.transport.client.base.exceptions.ServiceNotFoundError: /render/run cannot be found. Make sure the url is correct and the right arguements are passed
The snippet:

import asyncio
import json

from omni.services.core import main as _main
from omni.services import client

settings = """{
  "usd_file": "omniverse://localhost/NVIDIA/Samples/Astronaut/Astronaut.usd",
  "render_start_delay": 10,
  "render_stage_load_timeout": 50,
  "render_settings": {
    "camera": "/Root/LongView",
    "range_type": 0,
    "capture_every_nth_frames": 1,
    "fps": "24",
    "start_frame": 1,
    "end_frame": 1,
    "start_time": 0,
    "end_time": 0,
    "res_width": 1920,
    "res_height": 1080,
    "render_preset": 0,
    "debug_material_type": 0,
    "spp_per_iteration": 0,
    "path_trace_spp": 64,
    "ptmb_subframes_per_frame": 5,
    "ptmb_fso": 0,
    "ptmb_fsc": 0.5,
    "output_folder": "E:/NvidiaOV/TestOut",
    "file_name": "Capture",
    "file_name_num_pattern": ".####",
    "file_type": ".png",
    "save_alpha": false,
    "hdr_output": false
  },
  "load_async": true
}"""

def main():
  render_settings = json.loads(settings)
  _services = client.AsyncClient("local://", _main.get_app())

  asyncio.ensure_future(_services.render.run(**render_settings))

main()

How can I fix this? Could you please give me some advice?
Thanks

Hi,

Apologies for the delay in the reply.
We have actually moved that service out of Create itself and made it available via Omniverse Farm.
Within the jobs directory of the agent you will find the render service, which you could run interactively much as above, but we’d recommend running it through OV Farm instead. Documentation on OV Farm can be found here:
https://docs.omniverse.nvidia.com/app_farm/app_farm/overview.html
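
If you want to submit to the Farm queue programmatically rather than through its UI, something like the following may work; note that the port, endpoint path, user name, and task_type here are assumptions, so check your own queue’s http://localhost:8222/docs page first:

import requests

# All of these values are assumptions -- verify them against your
# Farm queue's /docs page before relying on them.
payload = {
    "user": "my-user",                 # hypothetical submitter name
    "task_type": "create-render",      # hypothetical render job type
    "task_args": {
        "usd_file": "omniverse://localhost/NVIDIA/Samples/Astronaut/Astronaut.usd",
        # ...plus the same render settings as in the snippets above...
    },
    "task_comment": "Render Astronaut frame 1",
}

resp = requests.post("http://localhost:8222/queue/management/tasks/submit", json=payload)
print(resp.status_code, resp.text)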

Hope that helps.
Thanks,
Jozef

Hi, jeenbergen
Thank you for your prompt reply.
I will try OV Farm.
I will reply here regardless of the outcome.
Thanks a bunch

Hi
After I installed OV Farm, it works perfectly, THANKS!
By the way, Movie Capture can only select one camera; how can I render with multiple cameras via OV Farm?
If I can’t, can I use Kit to write an extension that supports multi-camera rendering? Which documents do I need to read?
I would appreciate an early reply.

Sure @mebstyne. It’s really amazing to find tons of videos on other subjects, but on something as basic as a tutorial on how to produce a .png image of the viewport, as simply as possible, in Python inside Code or Create… nothing.

And when you look at the existing code, you get the feeling that Omniverse is a “gas factory” of overcomplicated machinery, and that any learning will be extremely long and tedious… but I think that is precisely the opposite of the feeling NVIDIA would want, if it is to foster a developer ecosystem around Omniverse.

Perhaps NVIDIA should pay more attention to the step-by-step tutorial side, taking inspiration from the forum requests. But again, @WendyGram, the delay in getting a real response/support from the team is also a bad omen for getting into serious development with Omniverse.

We understand that everyone at NVIDIA already knows how to run, but we, as beginners with their system, would just like to walk as fast as possible without drowning in a glass of water.

And we can hope that, unlike with other solutions, it isn’t necessary to create an Enterprise account with NVIDIA from the outset just so it can share some Python snippets with newcomer developers to help accelerate adoption, or to always be answered that we must wait for a much more user-friendly solution in upcoming releases of Omniverse.

In summary, Omniverse is very complete and very well architected and thought out for extension development, but it lacks a tutorial on extension development that lets you immerse yourself step by step, and eventually to the deepest, in the ocean of available capabilities, and that would then be the reference, always kept up to date as new versions of Omniverse come out. (Otherwise the only option right now is to get used to it by unpacking the code of the available extensions, which is actually not too bad given the quantity and diversity of extensions already available, but it takes more time.)

@didjich
You took the words right out of my mouth.

@user76666 So you can see too that it’s not worth giving our test/experience feedback… written on the 5th and still no feedback on the 21st… a bad impression for a first user experience. Too busy with marketing?