Export frames from viewport to OpenCV or Python in real time

Hello, I would like to know if there is any way to export the frames generated in the viewport using the Python SDK.

Hello @usuario_a1711! Welcome to the community! I informed the dev team of your question. I will post here when I hear back!

Hi @usuario_a1711,
I'd suggest using our Movie Capture under the Rendering menu, as it provides more options.
But here is a little script to save out a viewport capture.

import asyncio

import omni.kit.app
import omni.kit.viewport
import omni.renderer_capture

screen_capture_path = "D:\\test2.png"

async def capture_viewport(path):
    # Grab the LDR render-product resource of the currently active viewport window
    viewport_ldr_rp = omni.kit.viewport.get_viewport_interface().get_viewport_window(None).get_drawable_ldr_resource()
    # Ask the renderer to capture the next frame into the given file
    renderer = omni.renderer_capture.acquire_renderer_capture_interface()
    renderer.capture_next_frame_rp_resource(path, viewport_ldr_rp)
    # Wait one app update so the request is processed, then block until the capture finishes
    await omni.kit.app.get_app().next_update_async()
    renderer.wait_async_capture()

asyncio.ensure_future(capture_viewport(screen_capture_path))

There could be a better way to do this if you ask our friends in the Developer section of the forum :)
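
If you then need the captured frame in OpenCV, one simple option is to read the saved file back in. A minimal sketch, assuming OpenCV (cv2) is installed in the Python environment running the script:

import cv2

# Read the PNG written by the capture above; returns a BGR numpy array, or None on failure
frame = cv2.imread(screen_capture_path)
if frame is not None:
    print(frame.shape)  # e.g. (height, width, 3)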


Hi @esusantolim, I have a question about this, though it might be more relevant to the Isaac Sim team.

What I want to do is generate some data. With Movie Capture I've seen that it can take around 4 s per frame to render in 4K. However, when I run the Synthetic Data Recorder / ROS Bridge, it runs at almost real-time speed. How is that possible? How can I achieve the rendering speed of the data recorder while still generating all the wanted information (depth, bounding boxes, …)? Can I do that in Python? Or, at least, can you tell me what the difference is?

Hi @eliabntt94,
Sorry for the late reply.
I think this question is best suited for the Isaac Sim team. I'll forward it to them.

@WendyGram, is there a way to move this thread to the correct section?

This Python sample shows a few APIs for accessing synthetic data directly from Python.

https://docs.omniverse.nvidia.com/app_isaacsim/app_isaacsim/tutorial_replicator_offline_generation.html
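
For reference, a minimal sketch along the lines of that tutorial, assuming omni.replicator.core is available (the prim path, output directory, and writer options below are illustrative and may differ between versions):

import omni.replicator.core as rep

# Create a render product from an existing camera prim (path is illustrative)
render_product = rep.create.render_product("/OmniverseKit_Persp", (1280, 720))

# The built-in BasicWriter can write RGB, depth and 2D bounding boxes to disk
writer = rep.WriterRegistry.get("BasicWriter")
writer.initialize(
    output_dir="_out_sdrec",      # illustrative output directory
    rgb=True,
    distance_to_camera=True,      # depth
    bounding_box_2d_tight=True,
)
writer.attach([render_product])

# Step the orchestrator to render and write one frame
rep.orchestrator.step()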

Hi @Hammad_M,
I must have missed this _next function. I'll probably try it today or on Monday.

I was able to use manual render stepping. However, I am now running into this issue: https://forums.developer.nvidia.com/t/bug-help-needed-rendering-not-following-expected-behaviour/203711.
I was able to work around it with a sleep step between the two toggles; without the sleep, neither the ROS component ticking nor the ground-truth generator helper (through the synthetic data helper) consistently sees the correct data. However, I don't completely like this approach. Also, some ghosting sometimes remains in the generated picture; for the ghosting I believe it is just a matter of correcting the render settings. Still, I would expect that when I call render() before moving on, the frame is completely rendered and published, and this does not seem to be the case.

With a much simpler scene, where the FPS shown in the UI is near real time, the problem does not seem to occur. Is sleeping the only way to ensure that the frame is completely rendered?
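
To make the question concrete: right now I put a wall-clock sleep between the two toggles before reading the ground truth. What I would prefer is to wait on app updates instead, but I am not sure the two are equivalent. A sketch of what I mean, with an arbitrary number of updates:

import omni.kit.app

async def settle_render(num_updates=4):
    # Let the app tick a few times so any in-flight rendering can finish
    # (4 updates is a guess on my part, not a documented value)
    app = omni.kit.app.get_app()
    for _ in range(num_updates):
        await app.next_update_async()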