Isaac Sim Version
4.5.0
Operating System
Ubuntu 22.04
Topic Description
I have trained a locomotion policy for the Unitree Go2 robot, following the example in `velocity_env_cfg.py`, and obtained the `policy.pt` and `env.yaml` files in the exported results. I then tried to follow the Deploying Policies in Isaac Sim documentation, the examples in `standalone_examples.api.isaacsim.robot.policy.examples`, and the policy controller example in `exts.isaacsim.robot.policy.examples.robots`.
However, the `_compute_observation()` method in those examples contains no observation terms from the height scanner, which is included in my PolicyCfg in Isaac Lab, and so far I haven't found any documentation covering this case. Is there a simple way to deploy a policy with height-scanner observations in Isaac Sim?
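For reference, a height-scan observation in Isaac Lab is produced by a `RayCaster` sensor with a grid ray pattern. The grid itself is easy to reproduce outside Isaac Lab; below is a minimal sketch of the base-frame ray start points, assuming the default `GridPatternCfg` values (`resolution=0.1`, `size=(1.6, 1.0)`) from the rough-terrain velocity env. Verify these numbers, and the point ordering, against your own `env.yaml`, since the policy expects the exact same layout it was trained with.

```python
import numpy as np

def grid_pattern(size=(1.6, 1.0), resolution=0.1):
    """Grid of ray start points in the base frame (z is ignored here).

    Assumption: size/resolution mirror Isaac Lab's GridPatternCfg defaults
    for the height scanner; the point ordering here may differ from your
    trained sensor, so check it against env.yaml.
    """
    half_x, half_y = size[0] / 2, size[1] / 2
    x = np.arange(-half_x, half_x + 1e-9, resolution)
    y = np.arange(-half_y, half_y + 1e-9, resolution)
    gx, gy = np.meshgrid(x, y, indexing="ij")
    return np.stack([gx.ravel(), gy.ravel(), np.zeros(gx.size)], axis=-1)

points = grid_pattern()
print(points.shape)  # (187, 3): a 17 x 11 grid around the base
```

With these defaults the scanner contributes 187 extra observation values on top of the 48 proprioceptive ones.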
Here is the `_compute_observation` method from the ANYmal configuration:
def _compute_observation(self, command):
    """Compute the observation vector for the policy.

    Argument:
        command (np.ndarray) -- the robot command (v_x, v_y, w_z)

    Returns:
        np.ndarray -- The observation vector.
    """
    lin_vel_I = self.robot.get_linear_velocity()
    ang_vel_I = self.robot.get_angular_velocity()
    pos_IB, q_IB = self.robot.get_world_pose()
    R_IB = quat_to_rot_matrix(q_IB)
    R_BI = R_IB.transpose()
    lin_vel_b = np.matmul(R_BI, lin_vel_I)
    ang_vel_b = np.matmul(R_BI, ang_vel_I)
    gravity_b = np.matmul(R_BI, np.array([0.0, 0.0, -1.0]))
    # height_map
    obs = np.zeros(48)
    # Base lin vel
    obs[:3] = lin_vel_b
    # Base ang vel
    obs[3:6] = ang_vel_b
    # Gravity
    obs[6:9] = gravity_b
    # Command
    obs[9:12] = command
    # Joint states
    current_joint_pos = self.robot.get_joint_positions()
    current_joint_vel = self.robot.get_joint_velocities()
    obs[12:24] = current_joint_pos - self.default_pos
    obs[24:36] = current_joint_vel
    # Previous Action
    obs[36:48] = self._previous_action
    return obs
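One way to extend this in Isaac Sim is to compute the height-scan term yourself and append it to the observation. The sketch below assumes the Isaac Lab `mdp.height_scan` convention (scan value = base height minus terrain hit height minus a fixed offset, clipped), with yaw-only alignment of the grid to the base. The `query_ground_height` callback, the 0.5 m offset, the clip range, and the 187-point count are all assumptions to verify against your training config; in practice the callback would be a per-point PhysX raycast against the ground.

```python
import numpy as np

NUM_SCAN = 187        # grid pattern points; check against your env.yaml
HEIGHT_OFFSET = 0.5   # assumed mdp.height_scan offset; verify in your config

def height_scan_obs(base_pos, base_yaw, scan_points, query_ground_height):
    """Height-scan term: base_z - hit_z - offset for each grid point,
    with the grid following the base under yaw-only rotation.

    query_ground_height is user-supplied (e.g. a raycast per point) and
    maps world (x, y) -> terrain z.
    """
    c, s = np.cos(base_yaw), np.sin(base_yaw)
    R_yaw = np.array([[c, -s], [s, c]])
    # Rotate the base-frame grid into the world frame, translate to the base.
    world_xy = base_pos[:2] + scan_points[:, :2] @ R_yaw.T
    hit_z = np.array([query_ground_height(x, y) for x, y in world_xy])
    # Clip range is an assumption; mirror the ObsTerm clip used in training.
    return np.clip(base_pos[2] - hit_z - HEIGHT_OFFSET, -1.0, 1.0)

# Flat-ground example: base at z = 0.5 makes the scan term all zeros.
scan_points = np.zeros((NUM_SCAN, 3))
obs_scan = height_scan_obs(np.array([0.0, 0.0, 0.5]), 0.0,
                           scan_points, lambda x, y: 0.0)
obs = np.concatenate([np.zeros(48), obs_scan])  # 48 proprio + 187 scan
print(obs.shape)  # (235,)
```

The key constraint is that the appended values must match the training-time sensor exactly: same grid ordering, same offset, same clipping, and placed at the same position in the observation vector as in your Isaac Lab `PolicyCfg`.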