Hi, naive question, but besides taking the most powerful GPU available in a data center, is there research on distributing the load over multiple instances?
Not that I know of. We're clearly focused on a single machine generating frames at high frame rate and low latency, and usually the applications themselves aren't something that can be easily distributed.
Thanks for the clarification. I don't believe it would be easy, especially with latency in mind, but I'm wondering whether, as a way to get perspective on what is yet to come for photorealism (even though I find the term problematic), it is an interesting avenue to explore.
Could you please point me toward fundamental research, theoretical or otherwise, highlighting the limitations of current architectures, in hardware or software, that prevent this type of rendering?