Fullscreen render viewport

I’d like to have the Omniverse render viewport on a second screen (e.g. on a projector) with a specific FPS, resolution, etc. The position of the camera view is live-updated via a Python script.
At the moment I simply move the Create window to the second screen, specify the resolution in the viewport settings, and press F11.

Is this the preferred way to go? Or how could I create a little python kit app that live renders the viewport with certain settings specified in the script?

thank you and best regards

+1 I would also like to know this

Hi rust,

How are you controlling your camera python script? Are you responding to keyboard/mouse events? Or is it updating with parameters to a function call?


You can script all the steps that set the FPS, resolution, and switch to fullscreen.
I am not sure you can move the Kit window to the other monitor itself via scripts; I will need to check.
Pseudocode:
import carb.settings
settings = carb.settings.get_settings()
settings.set("/app/renderer/resolution/width", 1920)
settings.set("/app/renderer/resolution/height", 1080)
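
The FPS cap can be sketched the same way; the run-loop rate-limit paths below are an assumption and may differ between Kit versions:

settings.set("/app/runLoops/main/rateLimitEnabled", True)
settings.set("/app/runLoops/main/rateLimitFrequency", 60)  # target 60 FPS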

The fullscreen menu actions and steps are in the menu extension code; search for:
def _on_fullscreen(self):

If you are going to use F11 mode, then that is really enough.

You could also look at the omni.create.kit file and see how you could deactivate some of the extensions you don’t need. But as discussed, if you go fullscreen anyway you won’t gain much.

As for controlling the camera from scripts, you simply need to update the USD prim that is the active camera.

any more questions please let me know

Hi dfagnou,

Thank you for the code snippet, that works perfectly!

Some other questions for the fullscreen view:
How would I turn off the grid, axis frame and light symbol via script? I assume with omni.ui?
In fullscreen mode, I’d also like to show text on the bottom, as an overlay to the image. I’d also like to update this text.

thank you

Hi adharder,

I actually use VR controllers as input (with openvr); the render view is projected with a projector onto a canvas. In a separate process I calculate a new camera frame and put it onto a queue. The script for updating the camera looks like this:

camera = stage.GetPrimAtPath('/Root/Camera')
while True:
        frame = await queue.get()
        translate, rotateZYX = translate_and_rotateZYX_from_frame(frame)
        # write the new transform back onto the camera prim's xform ops
        camera.GetAttribute('xformOp:translate').Set(translate)
        camera.GetAttribute('xformOp:rotateZYX').Set(rotateZYX)

However, I am struggling to get the camera movement smooth. It seems there are too many frames streamed and the render view does not get updated after every frame; input and output need to be better synchronized. Maybe the process needs to be inverted: get a new frame only once the renderer is finished, or create a camera animation on the fly to tween between frames so that it is correctly buffered?
Any input on this is welcome ;)
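
One pattern that usually fixes this is to drop stale frames instead of playing them all back: on each render update, drain the queue and keep only the newest frame. A minimal sketch with plain asyncio (the function and queue names are illustrative, not part of any Omniverse API):

```python
import asyncio

async def latest_frame(queue: asyncio.Queue):
    """Wait for at least one frame, then drain the queue and return
    only the newest frame, discarding any stale ones in between."""
    frame = await queue.get()
    while not queue.empty():
        frame = queue.get_nowait()
    return frame
```

Calling `await latest_frame(queue)` in place of `await queue.get()` means the camera always jumps to the most recent VR pose, so the viewport never falls behind the controller input.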

Ok cool. I am interested in VR integration, but I have to admit my strategy is to wait for support from the omniverse team :)

Waiting for the rendered frame sounds sensible. You might find inspiration in the View XR extension ($HOME\AppData\Local\ov\pkg\view 2020.3.31_exts\omni.kit.xr\omni\kit\xr), which integrates CloudXR. CloudXR is a streaming service that acts as a broker between an HMD over the network and OpenVR.

this is the grid


some of the menu

Some are more tricky: the flags below are bitwise, so you need to assemble the mask yourself.
static const ShowFlags kShowFlagNone = 0;
static const ShowFlags kShowFlagFps = 1 << 0;
static const ShowFlags kShowFlagAxis = 1 << 1;
static const ShowFlags kShowFlagLayer = 1 << 2;
static const ShowFlags kShowFlagResolution = 1 << 3;
static const ShowFlags kShowFlagTimeline = 1 << 4;
static const ShowFlags kShowFlagCamera = 1 << 5;
static const ShowFlags kShowFlagGrid = 1 << 6;
static const ShowFlags kShowFlagSelectionOutline = 1 << 7;
static const ShowFlags kShowFlagLight = 1 << 8;
static const ShowFlags kShowFlagSkeleton = 1 << 9;
static const ShowFlags kShowFlagMesh = 1 << 10;
static const ShowFlags kShowFlagPathTracingResults = 1 << 11;
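
The C++ constants above translate directly to Python. A hedged sketch of clearing the grid, axis, and light flags from a display-options mask (the carb setting path in the trailing comment is an assumption and may differ between Kit versions):

```python
# Show-flag bit values mirrored from the C++ constants above
kShowFlagAxis = 1 << 1
kShowFlagGrid = 1 << 6
kShowFlagLight = 1 << 8

def clear_flags(mask: int, *flags: int) -> int:
    """Return the mask with the given show flags turned off."""
    for flag in flags:
        mask &= ~flag
    return mask

# Inside Kit you would read the current mask, clear the bits, and write
# it back, e.g. (setting path is an assumption):
#   import carb.settings
#   settings = carb.settings.get_settings()
#   path = "/persistent/app/viewport/displayOptions"
#   settings.set(path, clear_flags(settings.get(path),
#                kShowFlagAxis, kShowFlagGrid, kShowFlagLight))
```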

Note: the /persistent/* settings mean they will persist across sessions.