I’d like to have the Omniverse render viewport on a second screen (e.g. on a projector) with a specific FPS, resolution, etc. The position of the camera view is live-updated via a Python script.
At the moment I simply move the Create window to the second screen, specify the resolution in the viewport settings and press F11.
Is this the preferred way to go? Or how could I create a little python kit app that live renders the viewport with certain settings specified in the script?
You can script all the steps that set FPS, resolution, and switching to full screen.
I am not sure you can use scripts to move the Kit window to the other monitor itself; I will need to check.
Pseudo code:
import carb.settings
settings = carb.settings.get_settings()
settings.set("/app/renderer/resolution/width", 1920)
settings.set("/app/renderer/resolution/height", 1080)
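The same settings interface should also let you cap the frame rate, which the original question asks about. A minimal sketch; note that the run-loop rate-limit paths below are my assumption and should be verified against your Kit build:

```python
import carb.settings

settings = carb.settings.get_settings()
# Assumed Kit run-loop settings (verify the paths in your build):
settings.set("/app/runLoops/main/rateLimitEnabled", True)
settings.set("/app/runLoops/main/rateLimitFrequency", 60)  # target FPS
```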
the full screen menu actions and steps are in
…\python\extensions-bundled\omni\kit\builtin\menu.py
search for
def _on_fullscreen(self):
If you are going to use F11 mode, then that is really enough.
You could also look at the omni.create.kit file and see how you could deactivate some of the extensions you don’t need. But, as discussed, if you go full screen anyway you won’t gain much.
As for controlling the camera from scripts, you simply need to update the USD prim that is the active Camera.
Thank you for the code snippet, that works perfectly!
Some other questions for the fullscreen view:
How would I turn off the grid, axis frame and light symbol via script? I assume with omni.ui?
In fullscreen mode, I’d also like to show text on the bottom, as an overlay to the image. I’d also like to update this text.
I actually use VR controllers as input (with OpenVR); the render view is projected onto a canvas with a projector. In a separate process I calculate a new camera frame and put it onto a queue. The script for updating the camera looks like this:
camera = stage.GetPrimAtPath('/Root/Camera')
while True:
    omni.client.usd_live_wait_for_pending_updates()
    frame = await queue.get()
    translate, rotateZYX = translate_and_rotateZYX_from_frame(frame)
    camera.GetAttribute('xformOp:translate').Set(translate)
    camera.GetAttribute('xformOp:rotateZYX').Set(rotateZYX)
    stage.Save()
    omni.client.usd_live_process()
    queue.task_done()
However, I am struggling to get the camera movement smooth. It seems as if too many frames are streamed and the render view does not get updated after every frame. Input and output need to be better synchronized. Maybe the process needs to be inverted: get a new frame only once the renderer is finished, or create a camera animation to tween between frames on the fly so that it is correctly buffered?
Any input on this is welcome ;)
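For reference, here is one possible implementation of the `translate_and_rotateZYX_from_frame` helper used in the loop above. This is purely an illustrative sketch, assuming `frame` is a 4x4 row-major pose matrix and that the Euler angles are wanted in degrees; the decomposition below corresponds to R = Rz·Ry·Rx, and whether that matches USD's `rotateZYX` ordering should be double-checked:

```python
import math

def translate_and_rotateZYX_from_frame(frame):
    """Extract translation and Euler angles (degrees) from a 4x4
    row-major pose matrix. Illustrative only; verify the angle
    convention against USD's xformOp:rotateZYX."""
    # translation is the last column of the upper 3x4 block
    translate = (frame[0][3], frame[1][3], frame[2][3])
    r = frame  # rotation lives in the upper-left 3x3 block
    sy = math.hypot(r[0][0], r[1][0])
    if sy > 1e-6:
        x = math.atan2(r[2][1], r[2][2])
        y = math.atan2(-r[2][0], sy)
        z = math.atan2(r[1][0], r[0][0])
    else:  # gimbal lock: pitch is +/- 90 degrees
        x = math.atan2(-r[1][2], r[1][1])
        y = math.atan2(-r[2][0], sy)
        z = 0.0
    return translate, (math.degrees(x), math.degrees(y), math.degrees(z))
```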
Ok cool. I am interested in VR integration, but I have to admit my strategy is to wait for support from the omniverse team :)
Waiting for the rendered frame sounds sensible. You might find inspiration in the View XR extension ($HOME\AppData\Local\ov\pkg\view 2020.3.31_exts\omni.kit.xr\omni\kit\xr), which integrates CloudXR. CloudXR is a streaming service that acts as a broker between an HMD over the network and OpenVR.
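One way to realize the "apply a pose only when the renderer is ready" idea is to subscribe to Kit's per-frame update stream and drain the queue once per tick, keeping only the newest pose so the camera never falls behind. A sketch, assuming a thread-safe `queue.Queue` of poses (the original code uses an asyncio queue, so adapt the Empty exception accordingly); `start_camera_sync` and the subscription name are illustrative:

```python
import queue

def drain_latest(pose_queue):
    """Return only the newest pose in the queue (None if empty),
    discarding stale ones."""
    frame = None
    while True:
        try:
            frame = pose_queue.get_nowait()
            pose_queue.task_done()
        except queue.Empty:
            return frame

def start_camera_sync(pose_queue, camera):
    """Apply the newest pose once per Kit update tick (Omniverse-only)."""
    import omni.kit.app  # local import: only available inside a Kit app

    def _on_update(_event):
        frame = drain_latest(pose_queue)
        if frame is not None:
            # translate_and_rotateZYX_from_frame is the poster's own helper
            translate, rotateZYX = translate_and_rotateZYX_from_frame(frame)
            camera.GetAttribute('xformOp:translate').Set(translate)
            camera.GetAttribute('xformOp:rotateZYX').Set(rotateZYX)

    # keep a reference to the returned subscription, or it gets collected
    return omni.kit.app.get_app().get_update_event_stream() \
        .create_subscription_to_pop(_on_update, name="camera pose sync")
```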
This is the grid:
/persistent/app/viewport/grid/enabled
The axis:
/persistent/app/viewport/grid/showOrigin
Some of the menus:
/app/viewport/showSettingMenu
/app/viewport/showCameraMenu
/app/viewport/showRendererMenu
/app/viewport/showHideMenu
/app/viewport/showLayerMenu
Some are more tricky:
/persistent/app/viewport/displayOptions
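Putting those together, a small sketch that applies all of the settings paths quoted above via carb.settings (the False values are illustrative; they hide the corresponding element):

```python
import carb.settings

settings = carb.settings.get_settings()

# settings paths from the list above; False hides the element
viewport_settings = {
    "/persistent/app/viewport/grid/enabled": False,     # grid
    "/persistent/app/viewport/grid/showOrigin": False,  # axis
    "/app/viewport/showSettingMenu": False,
    "/app/viewport/showCameraMenu": False,
    "/app/viewport/showRendererMenu": False,
    "/app/viewport/showHideMenu": False,
    "/app/viewport/showLayerMenu": False,
}
for path, value in viewport_settings.items():
    settings.set(path, value)
```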
Hi @dfagnou can you confirm that "/app/viewport/grid/enabled": False still works? I can change it (and it gets changed) but that does not produce the desired effect!
@dfagnou
For anyone having the same problem: setting the line width to zero has the same effect ("/persistent/app/viewport/grid/lineWidth": 0).
Also, I have to mention that what is rendered by the ROS cameras is what is being shown by the viewport WITH selections/grids/everything else. It’s been a pain disabling everything and remembering this…