Hello community,
I’m reaching out as a non-developer who’s hit a roadblock: I’m looking for advice on where to start, or for the right technical partner, for a highly specialized interior-design project.
I’ve spoken to several teams and platforms, but I haven’t found anyone with the right Omniverse expertise to tie together LiDAR imports, AI-driven scene layout, and RTX-quality rendering in USD-based workflows. I believe Omniverse’s USD foundation and RTX path-tracing renderer are the ideal backbone for this solution.
Project Overview:
- We have fully scanned interior spaces (LiDAR/Matterport) and a custom library of scanned 3D furniture (also LiDAR).
- Our goal is to build a platform where an AI “designer” automatically places these real-scale furniture models into the scanned rooms—either by interpreting simple text prompts (e.g. “Scandinavian style”) or using predefined layout templates.
- Website visitors can also select furniture (always from a specific list) and place it in the Matterport-scanned rooms.
- The AI must not generate random rooms or random furniture.
- The final output must be photo-realistic: either rendered still images or an interactive 3D viewer streamed to a web browser.
- Maintaining true-to-scale dimensions and correct placement (avoiding collisions) is critical.
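To make the collision requirement concrete for anyone evaluating the project: a minimal placement check could compare axis-aligned bounding boxes of the real-scale furniture models. This is only a sketch under my own assumptions (the class and function names are hypothetical, not part of any Omniverse API; a real USD pipeline would compute bounds from the stage):

```python
from dataclasses import dataclass
from typing import List, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class AABB:
    # World-space axis-aligned bounding box, in meters (true-to-scale).
    min_pt: Vec3
    max_pt: Vec3

def overlaps(a: AABB, b: AABB) -> bool:
    # Two boxes collide only if their extents overlap on every axis.
    return all(
        a.min_pt[i] < b.max_pt[i] and b.min_pt[i] < a.max_pt[i]
        for i in range(3)
    )

def placement_is_valid(candidate: AABB, placed: List[AABB]) -> bool:
    # A candidate furniture placement is valid only if it collides
    # with nothing already placed in the room.
    return not any(overlaps(candidate, other) for other in placed)

# Example: a sofa already in the room, then two candidate tables.
sofa = AABB((0.0, 0.0, 0.0), (2.0, 1.0, 1.0))
table_far = AABB((5.0, 0.0, 0.0), (6.0, 1.0, 1.0))   # clear of the sofa
table_near = AABB((1.0, 0.0, 0.0), (3.0, 1.0, 1.0))  # intersects the sofa
```

In a USD-based workflow, the boxes themselves would come from the scanned assets’ computed world bounds rather than being hand-written as above.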
Would anyone on your Omniverse team or in your partner ecosystem be interested in taking on this engagement? I’d be grateful for any referrals or guidance you can offer.
I look forward to the possibility of collaborating with Omniverse experts to bring this vision to life.