Can a model trained with Kaolin output coordinates and pose?

Please tell me about Kaolin.
For example, suppose I use matryoshka data to train a model with Kaolin.
If a matryoshka appears in an image taken by a camera, can I get its coordinates, pose, and 3D data?

Would it be even more accurate to use a point cloud (RGBD) acquired by a Kinect V2 or D415 as the input instead of an image?
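For concreteness, here is a minimal sketch (plain PyTorch) of how a depth image from a sensor such as the Kinect V2 or D415 can be back-projected into a point cloud with a simple pinhole camera model. The intrinsics (fx, fy, cx, cy) and the depth map below are placeholders, not calibrated values from either device:

```python
import torch

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project an (H, W) depth map in meters into an (N, 3) point cloud.

    Assumes a simple pinhole camera model; fx, fy, cx, cy are camera
    intrinsics (placeholders here, not real sensor calibration values).
    """
    h, w = depth.shape
    v, u = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = torch.stack((x, y, z), dim=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no depth reading

# Example with a dummy depth map and approximate, made-up intrinsics.
depth = torch.rand(424, 512)   # Kinect V2 depth resolution is 512 x 424
cloud = depth_to_point_cloud(depth, fx=365.0, fy=365.0, cx=256.0, cy=212.0)
print(cloud.shape)             # -> torch.Size([N, 3])
```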

Hello Sruti, thank you for your interest in Kaolin!

I have a few questions so that I can better answer your request.

  1. Could you identify which Matryoshka data you’re referring to?
  2. Kaolin is a library to accelerate 3D Deep Learning research. Which model are you interested in training with this data?
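For context, here is a minimal sketch of the kind of workflow Kaolin supports: loading a mesh, sampling a point cloud from its surface, and computing a Chamfer distance against a sensor cloud. The function names follow the Kaolin API as of roughly v0.9+ as I recall it, and the file path and tensors are placeholders, so please check the current documentation before relying on this:

```python
import torch
import kaolin.io.obj
import kaolin.ops.mesh
import kaolin.metrics.pointcloud

# Load a mesh (e.g. a scanned matryoshka exported as .obj); the path is a
# placeholder.
mesh = kaolin.io.obj.import_mesh("matryoshka.obj")
vertices = mesh.vertices.unsqueeze(0).cuda()   # (1, V, 3) batch of one
faces = mesh.faces.cuda()                      # (F, 3)

# Sample a point cloud from the mesh surface, the kind of target you might
# compare against a sensor cloud from a Kinect V2 / D415.
points, _ = kaolin.ops.mesh.sample_points(vertices, faces, num_samples=2048)

# Chamfer distance between the sampled cloud and a stand-in sensor cloud;
# this is a common training loss in 3D reconstruction/detection pipelines.
# (Some Kaolin ops require CUDA tensors, hence the .cuda() calls above.)
sensor_cloud = torch.rand(1, 2048, 3, device="cuda")
loss = kaolin.metrics.pointcloud.chamfer_distance(points, sensor_cloud)
print(loss)  # per-batch Chamfer distance, shape (1,)
```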

Thank you for your reply.

I misunderstood.
I don’t want to do research on “inverse rendering”.
I asked because I thought Kaolin might be usable for solving the challenges of 3D object detection.

There is no matryoshka data yet.
I wanted to run a test first, as simply as possible, so I was thinking of testing with a matryoshka.
I found this interesting and asked without thinking it through too much.

Sorry, I’m not good at English…
Does that answer your questions?
Please point out anything that seems strange.

Regards,
jlafleche

Unfortunately, Kaolin App does not currently support ground-truth generation for skeletal joints, but that is a great suggestion for future work, thank you! Please keep an eye out for future updates as we work to improve its data generation capabilities!