In your press release (https://developer.nvidia.com/blog/rapidly-generate-3d-assets-for-virtual-worlds-with-generative-ai/) there is the phrase: “GET3D is a new generative AI model that generates 3D shapes with topology, rich geometric details, and textures from data like 2D images, text, and numbers.” So it seems GET3D can somehow generate a 3D model from a picture. I mean something like this: at the inference stage, GET3D takes a PNG image as a parameter and, in response, generates not a batch of random 3D models but a 3D model similar to the PNG picture. Is it possible to use GET3D this way, and if yes, how exactly? (It would be great to get any docs or advice.)
Does GET3D have image-guided shape generation feature?
Hi, thanks for the questions! Due to time constraints, we are not currently working on image-guided shape generation for GET3D; we might explore that in the future.
Thanks for the answer!
Is there any estimate of when a version with the ability to upload an image will be available?
We do not have a specific timeline for it, but we will be actively looking into this.
From a technical standpoint, is it simply not possible right now to upload a picture and get a 3D model from it? Or does such an option exist but is not available to us (developers)?
Hi, we do not have this option at the moment. If you have a single image as input and want to generate a 3D shape from it, we would encourage developers/researchers to try PTI inversion: GitHub - danielroich/PTI: Official Implementation for "Pivotal Tuning for Latent-based editing of Real Images" (ACM TOG 2022), https://arxiv.org/abs/2106.05744
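For anyone curious what "inversion" means here: the first stage of PTI searches for a latent code whose generator output matches the target image, by gradient descent on a reconstruction loss. Below is a minimal, hedged sketch of that idea. The real pipeline uses a pretrained StyleGAN/GET3D generator and a perceptual (LPIPS) loss; here a linear map stands in for the generator so the example is self-contained and runnable. The names `generate` and `invert_latent` are illustrative, not part of any real API.

```python
import numpy as np

# Toy stand-in for a pretrained generator: a fixed linear map from a
# 16-dim latent code to a 64-dim flat "image". (Assumption: the real
# generator is a deep network; the inversion idea is the same.)
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 16))

def generate(w):
    """Toy generator: maps a latent code w to an 'image'."""
    return A @ w

def invert_latent(target, steps=500, lr=0.003):
    """Gradient descent on ||generate(w) - target||^2 with respect to w.

    In PTI this first stage finds a 'pivot' latent; a second stage then
    fine-tunes the generator weights around that pivot (not shown here).
    """
    w = np.zeros(A.shape[1])
    for _ in range(steps):
        residual = generate(w) - target   # dL/d(output)
        grad = 2.0 * A.T @ residual       # chain rule through the linear generator
        w -= lr * grad
    return w

# Invert an image that the toy generator can actually produce.
w_true = rng.standard_normal(16)
target = generate(w_true)
w_hat = invert_latent(target)
print(np.linalg.norm(generate(w_hat) - target))  # reconstruction error, near 0
```

With a real generator you would replace the hand-written gradient with autodiff (e.g. PyTorch) and the L2 loss with a perceptual loss, but the optimization loop has the same shape.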