Hello, I would like to project the RTX real-time rendering from a camera in Omniverse onto a plane mesh, in real time. I implemented it as follows.
My current approach draws the camera view in a viewport using ViewportWidget, saves it with FileCapture, and automatically reloads the saved image by binding the plane's material to that texture file.
Implemented this way, saving the image to disk is very slow.
Is there a way to draw directly from a texture buffer instead?
import asyncio

import omni.ui as ui
import omni.usd
from omni.kit.widget.viewport import ViewportWidget
from omni.kit.widget.viewport.capture import ByteCapture, FileCapture  # import paths may differ slightly between Kit versions


# Both methods below belong to a camera-capture class (class definition not shown here).
async def capture(self, viewport_api):
    self.do = True
    buffer = False
    while self.do:
        stage = omni.usd.get_context().get_stage()
        # Stop capturing once the camera prim has been deleted from the stage
        if not stage.GetPrimAtPath(f"/World/{self.name}").IsValid():
            self.destroy()
            break
        if buffer:
            # Capture the rendered frame into memory and pass it to a callback
            capture = viewport_api.schedule_capture(ByteCapture(self.on_capture_complete))
        else:
            # Capture the rendered frame to a PNG file that the material then re-reads
            capture = viewport_api.schedule_capture(FileCapture(f"{self.image_dir}/camera_{self.name}.png"))
        capture_aovs = await capture.wait_for_result()
        await asyncio.sleep(self.frame_update_time)

def capture_ready(self):
    # UI window holding a ViewportWidget that renders the named camera
    self.window = ui.Window(f"Camera: {self.name}", width=self.resolution[0], height=self.resolution[1] + 20)
    with self.window.frame:
        self.viewport_widget = ViewportWidget(camera_path=f"/World/{self.name}", resolution=self.resolution)
    viewport_api = self.viewport_widget.viewport_api
    asyncio.ensure_future(self.capture(viewport_api))
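The on_capture_complete callback used in the buffer branch is not shown above. For reference, a ByteCapture delegate invokes its callback with the raw pixel data, roughly like this; this is only a sketch, and the PyCapsule handling and the exact callback signature are assumptions that may vary between Kit versions:

import ctypes

def on_capture_complete(self, buffer, buffer_size, width, height, fmt):
    # The pixel data may arrive wrapped in a PyCapsule (depends on the Kit version),
    # so copy it out with ctypes before using it outside the callback.
    ctypes.pythonapi.PyCapsule_GetPointer.restype = ctypes.POINTER(ctypes.c_byte * buffer_size)
    ctypes.pythonapi.PyCapsule_GetPointer.argtypes = [ctypes.py_object, ctypes.c_char_p]
    pointer = ctypes.pythonapi.PyCapsule_GetPointer(buffer, None)
    pixels = bytearray(pointer.contents)  # raw pixel bytes, layout described by `fmt`
    print(f"Captured {width}x{height} frame: {len(pixels)} bytes, format {fmt}")

With something like this in place, the buffer branch of the loop avoids the PNG round-trip entirely.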
This is a really bad idea. You are essentially creating an infinite rendering loop and expecting it to work “faster”. I am surprised you have not melted your machine. You are asking the RTX renderer to render itself and then paste that texture back into itself, in a loop, all in real time. I would strongly advise against this.
I have to ask, though: why? Why do you need a real-time viewport texture of your viewport, in your viewport?
Hello
We want to draw the view seen from a specific camera onto a plane mesh.
This is not viewport-to-viewport.
I would like to connect the camera's render buffer directly to the plane mesh, without exporting it as a texture file.
Something similar to this is Unity's RenderTexture feature.
The reason is that one user operates a camera within Omniverse XR, and other users watch that camera's view inside Omniverse.
It may not be possible with the RTX Path Tracer, but I think it should be possible in the Real-Time mode.
However, if the direction Omniverse is pursuing is different, please share a different opinion.
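In case it matters for reproducing this, we would keep the viewport in the real-time mode explicitly through the carb settings; a small sketch, where the setting path and its values are my assumption and may differ between versions:

import carb.settings

settings = carb.settings.get_settings()
# "RaytracedLighting" is the RTX Real-Time mode; "PathTracing" is the RTX Path Tracer.
settings.set_string("/rtx/rendermode", "RaytracedLighting")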
Thank you
Hello
Did this answer your question?
I appreciate the extra detail; however, my response is the same. I think any “feedback” loop is a really bad idea, especially in XR. You will kill your pipeline performance. I am familiar with RenderTexture, but I am not aware of an equivalent in Omniverse. Even if it were possible, it would certainly be a bad idea in path-tracing mode.
What update frequency are you expecting? 1 fps, 10 fps, 30 fps? Or is this more like one frame a minute? Do you need this plane texture to update completely in real time?
I will see if there is anything like this in our system, but I really don't understand the workflow.
Hello.
Updating the plane in a loop at around 24-30 fps would be enough.
I didn't realize that Omniverse would fall back to the path tracer even in real-time mode.
However, I found an example of implementing a DynamicTexture from OpenCV images (a rough sketch of the idea is below). What do you think about this issue and that example?
We think this should make it possible to some extent. Is this a bad idea?
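The example works roughly along these lines: the captured pixels are pushed into an omni.ui.DynamicTextureProvider, and the plane's material points at the resulting dynamic:// URL instead of a PNG on disk. This is only a sketch of what I have in mind; the provider name, shader path, and the diffuse_texture input are placeholders for our scene, and the exact omni.ui API may differ between Kit versions:

import omni.ui as ui
import omni.usd
from pxr import Sdf, UsdShade

# A named dynamic texture, addressable in materials as "dynamic://camera_plane"
provider = ui.DynamicTextureProvider("camera_plane")

def update_plane_texture(pixels: bytes, width: int, height: int):
    # Upload the latest captured RGBA8 frame straight to the texture, no file I/O involved
    provider.set_bytes_data(list(pixels), [width, height], ui.TextureFormat.RGBA8_UNORM)

def bind_dynamic_texture_to_plane(shader_path="/World/Looks/PlaneMaterial/Shader"):
    # Point the plane's MDL shader at the dynamic texture instead of a file path
    stage = omni.usd.get_context().get_stage()
    shader = UsdShade.Shader(stage.GetPrimAtPath(shader_path))
    shader.CreateInput("diffuse_texture", Sdf.ValueTypeNames.Asset).Set(Sdf.AssetPath("dynamic://camera_plane"))

The capture loop from my first post could then call update_plane_texture() from the ByteCapture callback instead of writing the PNG every frame.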
Thank you
That sounds promising. Good luck with it. I am going to close this thread, because I have no further support to offer for this.