[Warning] [omni.physx.tensors.plugin] DEPRECATED: Please use getGeneralizedMassMatrices instead

Hello :)
I updated my Isaac Sim to 4.5 because I hit some bugs with the tiled rendering of cameras in 4.2. After adjusting my code, I now get this warning on every physics step: "[omni.physx.tensors.plugin] DEPRECATED: Please use getGeneralizedMassMatrices instead". Is this warning critical? What can I do? I use a slightly adapted version of the Isaac Lab factory environment, and there is no explicit usage of omni.physx.tensors.plugin in my code.
Also interesting: when I pause the simulation in the GUI, I continue to get this warning.

Thanks in advance :)

Isaac Sim Version

4.5.0

Operating System

Ubuntu 22.04
Ubuntu 20.04
Windows 11
Windows 10
Other (please specify):

GPU Information

  • Model: RTX 4060 Ti 16GB
  • Driver Version: 550.54.14, CUDA Version: 12.4

Additional Information

The warning seems to be triggered whenever this function is called:

def _compute_intermediate_values(self, dt):
    print("_compute_intermediate_values")
    self.fixed_pos = self._fixed_asset.data.root_pos_w - self.scene.env_origins
    self.fixed_quat = self._fixed_asset.data.root_quat_w

    self.held_pos = self._held_asset.data.root_pos_w - self.scene.env_origins
    self.held_quat = self._held_asset.data.root_quat_w

    self.fingertip_midpoint_pos = self._robot.data.body_pos_w[:, self.fingertip_body_idx] - self.scene.env_origins
    self.fingertip_midpoint_quat = self._robot.data.body_quat_w[:, self.fingertip_body_idx]
    self.fingertip_midpoint_linvel = self._robot.data.body_lin_vel_w[:, self.fingertip_body_idx]
    self.fingertip_midpoint_angvel = self._robot.data.body_ang_vel_w[:, self.fingertip_body_idx]   

    self.cam_pos = self._robot.data.body_pos_w[:, self.hand_body_idx] # self.scene.env_origins
    self.cam_pos += self.cam_pos_local
    self.cam_quat = torch_utils.quat_mul(self._robot.data.body_quat_w[:, self.hand_body_idx], self.cam_quat_local)
    
    #delete_prim('/World/envs/vis')
    #cfg_f = FRAME_MARKER_CFG.replace(prim_path="/World/envs/vis")
    #frame_markers = VisualizationMarkers(cfg_f)
    #frame_markers.visualize(translations=self.cam_pos,orientations=self.cam_quat)
    #w_frame_markers = VisualizationMarkers(cfg_f)
    #w_frame_markers.visualize(translations=self.held_pos,orientations=self.held_quat)

    self._wrist_camera.set_world_poses(positions=self.cam_pos, orientations=self.cam_quat, convention='world')

    self.hand_grasp_pos = self.fingertip_midpoint_pos.clone()
    #self.hand_grasp_pos[:,2] = self.hand_grasp_pos[:,2] + 0.06
    self.hand_grasp_quat = self.fingertip_midpoint_quat.clone()

    jacobians = self._robot.root_physx_view.get_jacobians()

    self.left_finger_jacobian = jacobians[:, self.left_finger_body_idx - 1, 0:6, 0:7]
    self.right_finger_jacobian = jacobians[:, self.right_finger_body_idx - 1, 0:6, 0:7]
    self.fingertip_midpoint_jacobian = (self.left_finger_jacobian + self.right_finger_jacobian) * 0.5
    self.arm_mass_matrix = self._robot.root_physx_view.get_mass_matrices()[:, 0:7, 0:7]
    self.joint_pos = self._robot.data.joint_pos.clone()
    self.joint_vel = self._robot.data.joint_vel.clone()

    # Finite-differencing results in more reliable velocity estimates.
    self.ee_linvel_fd = (self.fingertip_midpoint_pos - self.prev_fingertip_pos) / dt
    self.prev_fingertip_pos = self.fingertip_midpoint_pos.clone()

    # Add state differences if velocity isn't being added.
    rot_diff_quat = torch_utils.quat_mul(
        self.fingertip_midpoint_quat, torch_utils.quat_conjugate(self.prev_fingertip_quat)
    )
    rot_diff_quat *= torch.sign(rot_diff_quat[:, 0]).unsqueeze(-1)
    rot_diff_aa = axis_angle_from_quat(rot_diff_quat)
    self.ee_angvel_fd = rot_diff_aa / dt
    self.prev_fingertip_quat = self.fingertip_midpoint_quat.clone()

    joint_diff = self.joint_pos[:, 0:7] - self.prev_joint_pos
    self.joint_vel_fd = joint_diff / dt
    self.prev_joint_pos = self.joint_pos[:, 0:7].clone()

    #if self.print_stuff: print("Update: ", self.held_pos[0,:])
    # Keypoint tensors.
    self.held_base_quat[:], self.held_base_pos[:] = torch_utils.tf_combine(
        self.held_quat, self.held_pos, self.held_base_quat_local, self.held_base_pos_local
    )

    self.hand_base_quat[:], self.hand_base_pos[:] = torch_utils.tf_combine(
        self.hand_grasp_quat, self.hand_grasp_pos, self.hand_grasp_quat_local, self.hand_grasp_pos_local
    )

    # in this context the target_held == the held base, as the gripper should close on the held object
    self.target_held_base_quat = self.held_base_quat
    self.target_held_base_pos = self.held_base_pos


    # Compute pos of keypoints on held asset, and fixed asset in world frame
    for idx, keypoint_offset in enumerate(self.keypoint_offsets):
        self.keypoints_held[:, idx] = torch_utils.tf_combine(
            self.held_base_quat, 
            self.held_base_pos, #+ self.scene.env_origins,
            self.identity_quat, 
            keypoint_offset.repeat(self.num_envs, 1)
        )[1]
        self.keypoints_hand[:, idx] = torch_utils.tf_combine(
            self.hand_base_quat ,
            self.hand_base_pos, #+self.scene.env_origins,
            self.identity_quat,
            keypoint_offset.repeat(self.num_envs, 1),
        )[1]

    if self.debug and False:
        self.draw.clear_points()
        self.draw.clear_lines()

        """num_keypoints = len(self.keypoint_offsets)
        all_held_points = self.keypoints_held #+ self.scene.env_origins
        all_held_points = all_held_points.view(-1, 3).cpu().tolist()  # Alle Environment-Keypoints
        #all_held_points = self.keypoints_held.view(-1, 3).cpu().tolist() 
        all_hand_points = self.keypoints_hand #+ self.scene.env_origins
        all_hand_points = all_hand_points.view(-1, 3).cpu().tolist()
        #all_hand_points = self.keypoints_hand.view(-1, 3).cpu().tolist() 
        # 2. Farb- und Größenlisten generieren
        held_colors = [(1, 0, 0, 1)] * (self.num_envs * num_keypoints)  # Rot fĂĽr alle Held-Keypoints
        hand_colors = [(1, 1, 0, 1)] * (self.num_envs * num_keypoints)  # Gelb fĂĽr Hand-Keypoints
        sizes = [10] * (self.num_envs * num_keypoints)
        # 3. Batch-Zeichnung aller Punkte
        self.draw.draw_points(all_held_points, held_colors, sizes)
        self.draw.draw_points(all_hand_points, hand_colors, sizes)"""
    

    # keep the noisy action frame up to date, which helps with clipping the movements
    _, fixed_tip_pos = torch_utils.tf_combine(
        self.held_quat, self.held_pos, self.identity_quat, self.held_base_pos_local)
    self.fixed_pos_obs_frame[:] = fixed_tip_pos
    self.fixed_pos_action_frame[:] = self.fixed_pos_obs_frame + self.init_fixed_pos_obs_noise
    
    # get keypoint distances
    self.keypoint_dist = torch.norm(self.keypoints_hand - self.keypoints_held, p=2, dim=-1).mean(-1)
    self.last_update_timestamp = self._robot._data._sim_timestamp
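The finite-difference velocity trick used above (relative quaternion between two frames, converted to axis-angle and divided by dt) can be sketched in plain Python, without Isaac Sim. The helper names mirror the torch_utils functions in the code, but these are standalone reimplementations written for illustration, not the library's own:

```python
import math

def quat_conjugate(q):
    # q = (w, x, y, z), scalar-first convention as in torch_utils
    w, x, y, z = q
    return (w, -x, -y, -z)

def quat_mul(a, b):
    # Hamilton product of two quaternions (w, x, y, z).
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (
        aw * bw - ax * bx - ay * by - az * bz,
        aw * bx + ax * bw + ay * bz - az * by,
        aw * by - ax * bz + ay * bw + az * bx,
        aw * bz + ax * by - ay * bx + az * bw,
    )

def axis_angle_from_quat(q):
    # Convert a unit quaternion to an axis-angle vector (angle * axis).
    w, x, y, z = q
    sin_half = math.sqrt(x * x + y * y + z * z)
    if sin_half < 1e-9:
        return (0.0, 0.0, 0.0)
    angle = 2.0 * math.atan2(sin_half, w)
    s = angle / sin_half
    return (x * s, y * s, z * s)

def angvel_fd(curr_quat, prev_quat, dt):
    # Relative rotation from the previous frame to the current one.
    diff = quat_mul(curr_quat, quat_conjugate(prev_quat))
    # Keep the shortest-path representation (w >= 0), as the env code
    # does with torch.sign(rot_diff_quat[:, 0]).
    if diff[0] < 0.0:
        diff = tuple(-c for c in diff)
    aa = axis_angle_from_quat(diff)
    return tuple(c / dt for c in aa)
```

For a small rotation of 0.01 rad about z over dt = 0.01 s, `angvel_fd` returns an angular velocity of roughly 1 rad/s about z, matching what `ee_angvel_fd` computes per environment in the batched version.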

Error Messages

2025-03-26 20:22:19 [19,768ms] [Warning] [omni.physx.tensors.plugin] DEPRECATED: Please use getGeneralizedMassMatrices instead.
2025-03-26 20:22:19 [19,785ms] [Warning] [omni.physx.tensors.plugin] DEPRECATED: Please use getGeneralizedMassMatrices instead.
2025-03-26 20:22:19 [19,801ms] [Warning] [omni.physx.tensors.plugin] DEPRECATED: Please use getGeneralizedMassMatrices instead.
2025-03-26 20:22:19 [19,818ms] [Warning] [omni.physx.tensors.plugin] DEPRECATED: Please use getGeneralizedMassMatrices instead.

The answer was to change:
self.arm_mass_matrix = self._robot.root_physx_view.get_mass_matrices()[:, 0:7, 0:7]
to:
self.arm_mass_matrix = self._robot.root_physx_view.get_generalized_mass_matrices()[:, 0:7, 0:7]


Hi @ujixa, that is correct and glad you found the answer. Also, thanks for sharing it!
