Hi everyone! I’m working on a digital twin project of a manufacturing factory in Omniverse USD Composer and Isaac Sim. I need to stream the feeds from cameras placed around the virtual factory to a websocket (or some other endpoint) so that Metropolis modules can pick up the streams and do further processing. I couldn’t find a definitive solution for streaming camera feeds in Isaac Sim. From the information I’ve gathered, there seem to be two ways:
- One is to use Replicator to save images from the cameras and then display them sequentially so they play back like a video;
- Two is to use ROS2 to stream the camera feed.
Could you recommend materials and the standard way to do this monitoring part of a digital twin application? I have implemented the first method with Replicator, and it overloads the GPU most of the time, depending on the resolution and the rate at which images are saved and streamed back as a continuous video in real time.
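For context on why the image-saving approach gets heavy, here is a quick back-of-the-envelope bandwidth estimate (my own illustration, not from any NVIDIA doc — real Replicator overhead also includes GPU readback, encoding, and disk I/O):

```python
# Rough bandwidth estimate for moving raw RGB frames off a virtual camera.
# Illustrative arithmetic only; actual overhead depends on the pipeline.

def raw_bandwidth_mib_per_s(width: int, height: int, fps: int,
                            bytes_per_pixel: int = 3) -> float:
    """Bytes/second for uncompressed frames, converted to MiB/s."""
    return width * height * bytes_per_pixel * fps / (1024 ** 2)

# A single 1080p camera at 30 FPS:
print(round(raw_bandwidth_mib_per_s(1920, 1080, 30), 1))  # ~178.0 MiB/s per camera
```

At ~178 MiB/s per uncompressed 1080p/30fps camera, saving images to disk and re-reading them as a video saturates quickly once you add cameras, which matches the GPU/IO pressure you're seeing; dropping resolution or frame rate, or compressing before transport, scales this down linearly.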
Isaac Sim Version
4.5.0
Operating System
Windows 11
GPU Information
You can try using Isaac Sim’s ROS2 bridge to publish headless camera data to topics.
Additionally, you may take a look at the web-viewer-sample to monitor the digital twin in a browser if your application was built via isaacsim-app-template or kit-app-template.
Thank you @VickNV!
I’ve tried using the ROS2 bridge to publish camera data to topics, and it’s working.
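For anyone following along, the consumer side of the bridge boils down to slicing the flat `sensor_msgs/Image` buffer back into rows. The helper below only does that buffer math, so it runs without ROS 2 installed; in a real node it would be called from an `rclpy` subscription callback (the topic name `/camera/rgb` is my assumption — use whatever your bridge publishes):

```python
# Sketch of the consumer side for an rgb8 sensor_msgs/Image published by the
# Isaac Sim ROS2 bridge. Pure-Python buffer handling only; no ROS 2 needed here.

def image_rows(data: bytes, width: int, height: int, channels: int = 3):
    """Split a flat rgb8 Image.data buffer into per-row byte slices."""
    step = width * channels  # bytes per row (corresponds to Image.step)
    if len(data) != step * height:
        raise ValueError("buffer size does not match width*height*channels")
    return [data[i * step:(i + 1) * step] for i in range(height)]

# Inside an rclpy node this would be wired up roughly like (not runnable here):
#
#   from sensor_msgs.msg import Image
#   self.create_subscription(
#       Image, "/camera/rgb",                       # assumed topic name
#       lambda msg: image_rows(bytes(msg.data), msg.width, msg.height),
#       10)

# Tiny 2x2 example frame: 4 pixels, 3 bytes each.
frame = bytes(range(2 * 2 * 3))
rows = image_rows(frame, 2, 2)
print(len(rows), len(rows[0]))  # 2 6
```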
My task is to do multi-view object detection with Metropolis, and possibly its Video Search and Summarization (VSS) feature — that is, to identify and track multiple objects from multiple camera angles in the factory, similar to the videos showing Omniverse digital twins and Mega. The cameras are both statically placed for an overview of the factory and dynamically mounted on robots. Is it recommended to publish camera data via ROS2 with multiple cameras, as in the example image attached? Would the processing overhead be too high with ROS2? And if I integrate Metropolis, would it simply subscribe to the camera topics to perform real-time processing on the feeds?
Or would you recommend the web-viewer-sample (https://github.com/NVIDIA-Omniverse/web-viewer-sample) as a good template to modify for my requirements? Would more code, libraries (like the Omniverse Kit Viewport API), and extensions be involved if I want to do something like the attached image?
For Omniverse digital twin development, is it recommended that Metropolis modules run as extensions, as plugins, or on the same cloud server as the Omniverse/Isaac Sim digital twin? Are there ways to stream the camera feed via websocket or other methods so that Metropolis modules running elsewhere can easily access it? I would appreciate pointers to materials on these problems, such as the connection/communication between Omniverse and Metropolis for digital twin development, since I couldn’t find direct answers.
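To make the websocket idea concrete, here is a minimal wire format for pushing frames to an external consumer. This is a sketch of one possible bridge, not an official Omniverse or Metropolis interface: the header layout (camera id, dimensions, timestamp, payload size) is my own invention, and in practice a DeepStream-based Metropolis pipeline would more commonly ingest an RTSP stream than a custom websocket protocol.

```python
# Hypothetical frame envelope for relaying camera frames over a websocket.
# Header fields and layout are illustrative assumptions, not a standard.
import struct
import time

# camera_id (u16), width (u32), height (u32), timestamp (f64), payload length (u32)
HEADER = struct.Struct("!HIIdI")

def pack_frame(camera_id: int, width: int, height: int, payload: bytes) -> bytes:
    """Prepend a fixed-size header so the receiver can demux multiple cameras."""
    return HEADER.pack(camera_id, width, height, time.time(), len(payload)) + payload

def unpack_frame(blob: bytes):
    """Inverse of pack_frame: header fields plus the payload bytes."""
    camera_id, width, height, ts, n = HEADER.unpack_from(blob)
    return camera_id, width, height, ts, blob[HEADER.size:HEADER.size + n]

# With the third-party `websockets` library, the sending side would be roughly:
#   await ws.send(pack_frame(0, 1920, 1080, jpeg_bytes))
```

A fixed binary header keeps the receiver simple (read header, then exactly `n` payload bytes), and carrying the camera id in-band lets one socket multiplex all the factory cameras instead of opening a connection per camera.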