Creating a real-time digital twin

I would like to create a digital twin that can insert objects into the scene. I will have clients connecting to a server where the twin will be running. The clients will send the necessary object information to the server, which will add the objects to the scene in real time.

Essentially, the digital twin should be capable of setting up the scene from scratch based on data provided by a client. Assume that I have physical sensors to perceive the real world and an application that transforms the sensor data into arguments with which I call USD functions.
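To illustrate what I mean by "calling USD functions", here is a minimal sketch using the pxr Python API. The helper name, stage path, prim path, asset path, and position are all hypothetical placeholders standing in for data produced by my sensor application, not an existing API:

```python
from pxr import Usd, UsdGeom, Gf, Sdf

def insert_object(stage, prim_path, asset_url, position):
    """Reference an asset into the scene at prim_path and translate it."""
    xform = UsdGeom.Xform.Define(stage, Sdf.Path(prim_path))
    xform.GetPrim().GetReferences().AddReference(asset_url)
    xform.AddTranslateOp().Set(Gf.Vec3d(*position))
    return xform.GetPrim()

stage = Usd.Stage.Open("twin.usd")                        # the digital-twin stage
insert_object(stage, "/World/Client/Crate", "crate.usd", (100.0, 0.0, 50.0))
stage.GetRootLayer().Save()                               # persist the edit
```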

I would appreciate any pointers on how I can get started with this project and which associated omniverse extensions I might need to create the project.

This is quite a large and complex question, but I will try to give an overview.

  1. You need to create the base USD file of your digital twin. So if this is to be a house, you need to model, texture and light that house and build it in Omniverse USD Composer as a USD file. This is the majority of the work.
  2. You need to set up a central public server that hosts Omniverse and Nucleus and that customers can connect to. This could be on AWS, for example. It is fairly straightforward, but it does cost money. Another option would be hosting your digital twin on our NVIDIA GDN.
  3. You have users connect to your hosted Omniverse app or USD Composer through Nucleus on the remote server. They can upload their own CAD models and content to Nucleus and then drag and drop these assets into your digital twin (a scripted version of this step is sketched below).
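As a rough sketch of step 3 done in script rather than by drag and drop, assuming the code runs inside an Omniverse Kit app where the Omniverse Client Library (omni.client) and the Nucleus resolver are available; the host name, project paths, and prim path are placeholders:

```python
import omni.client
from pxr import Usd, UsdGeom, Sdf

NUCLEUS = "omniverse://my-nucleus-host"

# 1. Push a client's converted CAD file up to Nucleus.
omni.client.copy("C:/exports/pump.usd",
                 f"{NUCLEUS}/Projects/Twin/uploads/pump.usd")

# 2. Open the hosted twin stage and reference the uploaded asset into it.
stage = Usd.Stage.Open(f"{NUCLEUS}/Projects/Twin/twin.usd")
xform = UsdGeom.Xform.Define(stage, Sdf.Path("/World/Uploads/Pump"))
xform.GetPrim().GetReferences().AddReference(
    f"{NUCLEUS}/Projects/Twin/uploads/pump.usd")
stage.GetRootLayer().Save()
```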

Thank you for your prompt response.

I have accomplished 1 and 2, but I do have some questions regarding 3.

Is it not possible for the Omniverse app/USD Composer to automatically add objects to a pre-existing scene and render them in real time (instead of dragging and dropping them into USD Composer) by synchronizing with a separate process that accepts input from clients? In other words, is Omniverse restrictive in how a USD file can be modified in real time, i.e. only through USD Composer's own tools?

Yes, it would be possible, either with a synchronized live-workflow approach or just with some clever file structure. For the latter, you can load in a client layer as "client.usd" that starts off blank. The client then loads in their own CAD model and saves it into client.usd. As soon as they save that file locally, your master digital twin would get a notification to update, and their model would appear at the 0,0,0 point of their imported CAD model, wherever you had aligned the client layer.
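A sketch of that layered approach with the pxr API: client.usd is sublayered under the master twin, and reloading it pulls in whatever the client last saved. The URLs, paths, and the trigger for the reload (a file-change notification) are assumptions for illustration:

```python
from pxr import Usd, Sdf

TWIN_URL = "omniverse://my-nucleus-host/Projects/Twin/twin.usd"
CLIENT_URL = "omniverse://my-nucleus-host/Projects/Twin/client.usd"

stage = Usd.Stage.Open(TWIN_URL)
root = stage.GetRootLayer()

# One-time setup: add the (initially empty) client layer as a sublayer.
if CLIENT_URL not in list(root.subLayerPaths):
    root.subLayerPaths.append(CLIENT_URL)
    root.Save()

# On a change notification (e.g. from a file watcher or a client callback),
# reload the client layer so the newly saved content shows up in the twin.
client_layer = Sdf.Layer.FindOrOpen(CLIENT_URL)
if client_layer:
    client_layer.Reload()
```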
