Synthetic Data Recording - transform between two objects

I’m working on object pose detection and am creating a dataset in the Linemod format so that I can train EfficientPose. Similar to the post Synthetic data recording for BBox3D, I’ve had to make some changes to get my data. Basically everything is working except for the transform between my camera and my object. I need this to a) visualize the 3D bounding box, which is where I’m at now, and b) drive the model (it needs to transform the CAD object).

I can see how to get the world poses (I think) for my two objects:

        cam_pose,cam_trans,cam_rot = self.getWorld("/World/Camera/BotCamera")
        pal_pose,pal_trans,pal_rot = self.getWorld("/World/Euro_Pallet1")

        rel_pose = pal_pose - cam_pose   # ????

getWorld is defined at the end of this post; it’s adapted from code I found elsewhere in these forums.

Forgive my weakness here, but how do I get the relative pose from the camera to the object? The subtraction above works for the translation component (row 4 of the pose matrix) but fails for the rotation component. Ultimately I want a rotation matrix expressed as rotations around the x, y, and z axes.


    # requires: import omni.timeline, omni.usd
    #           import numpy as np
    #           from pxr import Gf
    def getWorld(self, prim_path):
        # Sample the world transform at the current timeline position
        timeline = omni.timeline.get_timeline_interface()
        timecode = timeline.get_current_time() * timeline.get_time_codes_per_seconds()
        stage = omni.usd.get_context().get_stage()
        curr_prim = stage.GetPrimAtPath(prim_path)
        pose = omni.usd.utils.get_world_transform_matrix(curr_prim, timecode)
        trans = np.array(pose.ExtractTranslation())
        # Decompose the world rotation into angles about the X, Y and Z axes
        abs_rotation = Gf.Rotation.DecomposeRotation3(
            pose, Gf.Vec3d.XAxis(), Gf.Vec3d.YAxis(), Gf.Vec3d.ZAxis(), 1.0)
        abs_rotation = np.array(abs_rotation)
        return pose, trans, abs_rotation

Okay, getting there. Rookie mistake. Should be:

    inv_cam_pose = np.linalg.inv(cam_pose)
    rel_pose =, pal_pose)
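In case it helps anyone following along: once rel_pose is a plain 4x4 NumPy array, the rotation angles I was after can be pulled out with a few lines of NumPy. This is only a sketch, and it assumes the common column-vector R = Rz·Ry·Rx convention; USD stores matrices in row-vector form, so depending on how you converted the matrix you may need a transpose first. The function names are just mine:

```python
import numpy as np

def euler_xyz_to_matrix(ax, ay, az):
    """Build R = Rz @ Ry @ Rx (column-vector convention) from
    angles in degrees about the X, Y and Z axes respectively."""
    ax, ay, az = np.radians([ax, ay, az])
    cx, sx = np.cos(ax), np.sin(ax)
    cy, sy = np.cos(ay), np.sin(ay)
    cz, sz = np.cos(az), np.sin(az)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def matrix_to_euler_xyz(R):
    """Recover the (x, y, z) angles in degrees from R = Rz @ Ry @ Rx.
    Assumes a pure rotation (no scale) away from the gimbal-lock
    pole, i.e. |R[2, 0]| < 1. For a row-vector USD matrix, pass in
    rel_pose[:3, :3].T; the rotation block is the upper-left 3x3."""
    ay = np.degrees(-np.arcsin(R[2, 0]))
    ax = np.degrees(np.arctan2(R[2, 1], R[2, 2]))
    az = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
    return ax, ay, az
```

A quick round trip (build a matrix from known angles, then recover them) is a handy sanity check that your axis conventions match.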


Hi Peter,

Did you get the pose detection working?


Still working on it. Very close. Using the EfficientPose model.

Key stumbling point at this exact point is getting the data and pose from Isaac SIM to make the model happy. Getting closer - hoping to be retraining the model within a week. (First time I trained, though w/ bad conversion from Isaac SIM to the model - showed very promising results.)

Great. I will stay tuned :)

Making (great?) progress. The following image shows the machine learning model detecting an object (in this case, a pallet) as well as its pose. Green is ground truth, blue is prediction. A little cherry-picked, as the model is only 5% through training at this point. So it's all looking good!

Key elements are:

  • Isaac SIM to create synthetic environment
  • Python code to run the SIM and capture the data. The provided code had to be modified a reasonable amount.
  • Conversion code to put the data into the format EfficientPose expects (close to a standard), though this also requires some conversion of the pose/translation
  • Training the EfficientPose model.
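For the conversion step above, the core of it is writing each pose into the Linemod-style gt.yml ground truth. A minimal sketch of that formatting, assuming the SIXD/BOP-style layout (cam_R_m2c flattened row-major, cam_t_m2c in millimetres); the helper name pose_to_gt_entry is just mine, and the scale factor depends on your scene units:

```python
import numpy as np

def pose_to_gt_entry(R, t, obj_id, units_to_mm=1000.0):
    """Format one object pose as a Linemod/SIXD-style gt.yml entry.

    R: 3x3 rotation (model -> camera frame).
    t: translation in scene units (metres assumed, hence the
       default 1000x scale to millimetres).
    cam_R_m2c is flattened row-major, as the Linemod ground
    truth files store it.
    """
    return {
        "cam_R_m2c": [float(v) for v in np.asarray(R).reshape(-1)],
        "cam_t_m2c": [float(v) * units_to_mm for v in np.asarray(t).reshape(-1)],
        "obj_id": int(obj_id),
    }
```

One dictionary like this per visible object per frame, keyed by frame index, then dumped to gt.yml.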

Probably worthy of a Medium article - lots of lessons learned in there!


This is great.
Peter, you can publish your pose estimation training code as an extension if you want, and share it with others.
I know many are interested in it.
