Need help on developing a simple render server in Kit

Hello,
I want to develop a rendering server that would accept a request over a websocket, configure the camera, lighting, etc. in the scene, render a frame using the RTX path tracer, and then send it back to the requesting client.
So my question is: what's the best way to actually do these things, and which extensions and functions should I use, specifically for the following (a rough sketch of what I have in mind follows the list):

  • Loading a USD scene from a file
  • Receiving/sending websocket packets (should I just use Python's websockets module?)
  • Rendering a frame and getting access to the actual uncompressed bytes of the image, so that I can send them over the websocket. Ideally it should be possible to access an incomplete frame, i.e. without waiting for all the samples per pixel to be rendered.
  • Setting the camera in the scene: position, rotation, etc.
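
To make this concrete, here is roughly the shape of the server loop I have in mind. This is an untested sketch: render_frame_bytes is a placeholder for whatever capture API turns out to be appropriate, and I am assuming omni.usd and Python's websockets module can be used inside a Kit extension.

```python
import asyncio
import json

import omni.usd
import websockets
from pxr import Gf, UsdGeom


def load_scene(usd_path: str):
    # Open the USD file in Kit's default USD context.
    omni.usd.get_context().open_stage(usd_path)
    return omni.usd.get_context().get_stage()


def set_camera(stage, cam_path: str, position, rotation_xyz_deg):
    # Author translation + XYZ euler rotation (so roll is the Z component).
    cam = UsdGeom.Camera.Get(stage, cam_path)
    xform = UsdGeom.XformCommonAPI(cam.GetPrim())
    xform.SetTranslate(Gf.Vec3d(*position))
    xform.SetRotate(Gf.Vec3f(*rotation_xyz_deg))


async def render_frame_bytes() -> bytes:
    # Placeholder: the actual capture call (omni.kit.capture, viewport
    # capture utilities, ...) would go here.
    raise NotImplementedError


async def handle_client(websocket):  # older websockets versions also pass a `path` argument
    async for message in websocket:
        request = json.loads(message)
        stage = load_scene(request["usd_path"])
        set_camera(stage, request["camera_path"],
                   request["position"], request["rotation"])
        await websocket.send(await render_frame_bytes())


async def main():
    async with websockets.serve(handle_client, "0.0.0.0", 8765):
        await asyncio.Future()  # run until cancelled
```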

In the future I would ideally also use the WebRTC livestream, but it doesn't work for me (same error as described here), so I'm left with the websocket stream, which is very laggy and unusable.

If there are any examples of doing similar things, those would be very useful as well.

Hello @jakubwardyn! Thanks for reaching out. Take a look at our documentation here and let me know if this answers your questions!

https://docs.omniverse.nvidia.com/app_agent-and-queue/app_agent-and-queue/overview.html

Not really. I was looking for examples of and advice on using the Kit API. I partly found what I needed in the tests that ship with Kit (especially omni.rtx.tests), but the API docs are still very lacking: I can't find any documentation on “omni.kit.viewport” (not viewport_widget).
I'm also wondering how to set the camera's view matrix directly (I need to set roll too, which is why a target/position pair is not enough).
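
For concreteness, the closest I have gotten is authoring the full camera transform myself with pxr.Gf, which does let me bake in roll. An untested sketch, assuming a Y-up stage and that overwriting the camera's existing xformOp stack is acceptable:

```python
from pxr import Gf, Usd, UsdGeom


def set_camera_view(stage: Usd.Stage, cam_path: str,
                    eye: Gf.Vec3d, target: Gf.Vec3d, roll_deg: float):
    # World-to-camera (view) matrix, then a roll about the camera's
    # viewing axis applied in camera space.
    view = Gf.Matrix4d().SetLookAt(eye, target, Gf.Vec3d(0, 1, 0))
    roll = Gf.Matrix4d().SetRotate(Gf.Rotation(Gf.Vec3d(0, 0, 1), roll_deg))
    # The camera prim's local transform is the inverse of the view matrix.
    cam_to_world = (view * roll).GetInverse()

    xformable = UsdGeom.Xformable(stage.GetPrimAtPath(cam_path))
    xformable.ClearXformOpOrder()
    xformable.AddTransformOp().Set(cam_to_world)
```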
Is it possible to write a Kit extension in C++? I'm writing a real-time application, and Python already seems too slow for the task.

Ok @jakubwardyn! I will get a Kit developer to jump in and help answer your questions.

Hi @jakubwardyn!

As of today, we don't have a published set of documentation for native C++ Extensions the way we do for Python Extensions. We do, however, have support and documentation for Omniverse Native Interfaces (ONI), which allow you to write C++ code (along with associated Python bindings!). This approach may be of interest to you whether you are looking to assemble existing workflows or to build your own: https://docs.omniverse.nvidia.com/py/kit/source/extensions/omni.example.greet/docs/index.html

Regarding the workflow you described, you may be interested in the micro-service architecture that Omniverse Agent and Omniverse Queue offer, which @WendyGram wisely alluded to: https://docs.omniverse.nvidia.com/app_agent-and-queue/app_agent-and-queue/overview.html Once installed, you will find that Omniverse Queue exposes APIs that allow you to send requests to execute tasks in the background, exposed locally at http://localhost:8222/docs
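
Since those local docs are typically backed by an OpenAPI schema, you can also discover the available endpoints programmatically. A small sketch; the /openapi.json path is an assumption based on the usual FastAPI layout behind a /docs page:

```python
import requests

# Fetch the machine-readable schema behind the interactive /docs page
# (assumes the standard FastAPI layout; adjust if your build differs).
schema = requests.get("http://localhost:8222/openapi.json").json()
for path, operations in schema["paths"].items():
    print(path, list(operations))
```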

More specifically, Create's rendering micro-service may be relevant to your use case; it comes bundled in Omniverse Agent under the omni.services.render Extension. It leverages the omni.kit.capture features to offer batch rendering capabilities, and its sources are available for you to consult and extend at will.
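
If you prefer to prototype a single-frame capture directly, recent Kit builds also ship viewport capture helpers. Whether these are available depends on your Kit version, so treat the following as a sketch to verify against your build rather than a guaranteed API:

```python
import omni.kit.app
from omni.kit.viewport.utility import capture_viewport_to_file, get_active_viewport


async def capture_one_frame(file_path: str):
    viewport = get_active_viewport()
    # Let a few updates run first so the path tracer can accumulate
    # samples before the capture (tune the count to taste).
    app = omni.kit.app.get_app()
    for _ in range(8):
        await app.next_update_async()
    capture_viewport_to_file(viewport, file_path)
```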

Regarding streaming, we are aware of a number of issues currently affecting the performance of the experience and are looking into providing a more adequate solution. In the meantime, the sources of the omni.services.streamclient.websocket Extension will show you how to build your own stream decoding solution, should your use case require it.
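
As a starting point for a custom client, a bare-bones receive loop could look like the sketch below. The actual message framing and encoding must be taken from the omni.services.streamclient.websocket sources; the URL and the assumption that each binary message is one compressed frame are illustrative only:

```python
import asyncio
import io

import websockets
from PIL import Image


async def receive_frames(url: str):
    async with websockets.connect(url) as ws:
        async for message in ws:
            if isinstance(message, bytes):
                # Illustrative only: the real protocol may interleave
                # metadata or use a different codec entirely.
                frame = Image.open(io.BytesIO(message))
                print("received frame", frame.size)

# The endpoint below is hypothetical; check the extension sources for the real one.
# asyncio.run(receive_frames("ws://localhost:8899/streaming/client"))
```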

Thank you for your feedback!

Hi,

  • I wish to do headless, offscreen rendering of RTX scenes and stream the data to our web frontend for real-time simulation purposes, similar to what @jakubwardyn described: effectively, turn off all Kit services except rendering the 3D scene (what I have tried so far is sketched after this list).

  • Is it possible to do this using a C++ API?

  • Is it possible to do this without using the rest of Omniverse, and instead simply link against a library that can render USD on NVIDIA RTX cards?
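
For context, what I have been experimenting with so far is launching Kit headless with a minimal extension set. The flag names are my assumption from recent Kit builds (check kit --help), and the choice of extension is a guess, so please correct me if there is a better-suited entry point:

```python
import subprocess

# Launch Kit without a window and with only the rendering micro-service
# enabled (hypothetical extension choice; adjust to your needs).
subprocess.run([
    "kit",
    "--no-window",
    "--enable", "omni.services.render",
])
```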