Using IMU in RL learning training for a walking biped

Hi, I have implemented a nice bipedal walking RL program for a custom-built robot on Isaac Gym Preview 3, but now I want to add an IMU (ideally an MPU-6050) to the RL training using Omniverse Isaac Sim.

I’ve got my robot up and running in Isaac Sim now. I initially tried to add a contact sensor for the feet, but that wasn’t useful because I couldn’t find a way to integrate the sensor into my RL pipeline across my 512 actors (there seems to be no way of making a tensorised view?), so I went with reading the rigid body net contact forces instead.

How do I implement the IMU across all the actors so it can be one of the observations for RL training?


Hi there, you can check out the RigidContactView APIs (Core [omni.isaac.core] — isaac_sim 2022.2.1-beta.29 documentation) for vectorized APIs on retrieving net contact forces on bodies.

Thanks Kelly.
I will look at the RigidContactView implementation - is this a new feature of v2022.2? It’s hard to tell, as there doesn’t seem to be version-specific documentation. But I solved this already, as stated in my original post.
My question was: how do I use the IMU for RL training data in a tensorised form, so I can feed it into the observations for training all 512 of my robots?

Hi @sujitvasanth - To use the IMU for RL training data in a tensorized form, you would need to extract the IMU sensor data from each robot and convert it into a tensor that can be used as input for your RL model. Here’s a general approach on how you can do this:

  1. Extract IMU Data: For each robot, you would need to extract the IMU sensor data. In Isaac Sim, you can use the omni.isaac.sensor extension to simulate the IMU sensor and extract the linear acceleration and angular velocity of the robot.

Thanks @rthaker for the quick response

  • I think there may be a truncation in your response, as point 1 and point 2 are the same - please can you update? I have experimented, and yes, I can extract a single IMU’s data within RL as intended… but how do I tensorise the data from all 512 instances into a single tensor, all on the GPU? Could you show an example/gist of how to do this across all the scenes, please? I’d prefer to use PyTorch.
    Presumably the data is already all on the GPU, so it just needs a view?

Hi @sujitvasanth - Apologies. Here is the updated response:

  1. Add IMU Sensor to Robots: First, you need to add an IMU sensor to each of your robots. In Isaac Sim, you can simulate the IMU sensor by extracting the linear acceleration and angular velocity of the body from the physics engine and applying the local transformation of the sensor to simulate the sensor output via the omni.isaac.sensor extension.

  2. Retrieve Sensor Readings: To retrieve readings from the added sensors, you can create an ArticulationView and use the get_force_sensor_forces() API, which will return a buffer of dimension (num_articulations, num_sensors_per_articulation, 6).

  3. Tensorize the Data: Once you have the sensor readings, you can convert them into a tensor form suitable for feeding into your RL model. You can use libraries like NumPy or PyTorch for this - for example, torch.tensor() if you’re using PyTorch.

  4. Feed into Observations for Training: Now that you have your sensor readings in tensor form, you can feed them into your RL model as observations for training. The exact way to do this will depend on the specifics of your RL model and training setup.
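A minimal sketch of steps 2-4 above, with a zero tensor standing in for the (num_articulations, num_sensors_per_articulation, 6) buffer that get_force_sensor_forces() would return in the simulator (the sensor count and the extra observation terms are assumptions for illustration):

```python
import torch

num_envs, num_sensors = 512, 2  # assumed counts for illustration

# Stand-in for ArticulationView.get_force_sensor_forces(); in the sim this
# buffer would come from a view over all robot prims, e.g. (hypothetical path)
#   robot_view = ArticulationView(prim_paths_expr="/World/envs/.*/pinky")
sensor_forces = torch.zeros(num_envs, num_sensors, 6)

# Step 3: flatten each robot's sensor readings into one row per environment
sensor_obs = sensor_forces.reshape(num_envs, -1)  # shape: (512, 12)

# Step 4: concatenate with the rest of the observation terms
other_obs = torch.zeros(num_envs, 10)  # placeholder for joint states etc.
obs_buf = torch.cat([sensor_obs, other_obs], dim=-1)  # shape: (512, 22)
```

The key point is that the whole batch stays in one tensor, so no per-environment Python loop is needed at training time.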

Thanks, but you have a sensor extension that specifically caters for IMUs: Isaac Sensor Extension [omni.isaac.sensor] — isaac_sim 2022.2.1-beta.29 documentation.
I have done the approach you suggested previously in Isaac Gym (Raw accelerometer, gyroscope, magnetometer - #6 by sujitvasanth).

So I have already attached one of these IMU sensors to my robot USD asset in Omniverse, but I wanted to know how to tensorise them so they can be used as training data… please be specific on how to do this in PyTorch.

Hi there, the IMU from the omni.isaac.sensor extension does not have tensorized support. For tensorized APIs, we provide the RigidContactView class for retrieving contact forces.
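For the contact side, a rough sketch of how the vectorized contact-force data is typically batched (the prim path expression and foot count below are assumptions; in the simulator the buffer would come from the RigidContactView APIs, with a zero tensor standing in for it here):

```python
import torch

num_envs, feet_per_robot = 512, 2  # assumed counts for illustration

# In the sim, something like (hypothetical path expression):
#   feet_view = RigidContactView(prim_paths_expr="/World/envs/.*/pinky/.*_foot", ...)
#   net_forces = feet_view.get_net_contact_forces()
# which returns one 3-vector per body; a zero tensor stands in for it here.
net_forces = torch.zeros(num_envs * feet_per_robot, 3)

# Group the per-foot forces back into one row per environment
contact_obs = net_forces.view(num_envs, feet_per_robot * 3)  # shape: (512, 6)
```

Because the view returns all bodies in one buffer, a single reshape gives per-environment rows ready to append to the observations.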

Your simulated IMU seems more advanced than just taking the forces acting on a rigid body - that’s important for later sim-to-real implementation of the resultant neural network.

Something like this:

Pull from the IMU interfaces:

```python
import torch

def get_imu_readings(IMU, num_envs):
    sensor_paths = [f"/World/envs/env_{i}/pinky/base_link/Imu_Sensor" for i in range(num_envs)]
    lin_acc_readings = []
    ang_vel_readings = []
    # Iterate over sensor paths
    for path in sensor_paths:
        # Get sensor reading with custom interpolation function
        reading = IMU.get_sensor_reading(path)

        # Append linear accelerometer and angular velocity readings
        lin_acc_readings.append((reading.lin_acc_x, reading.lin_acc_y, reading.lin_acc_z))
        ang_vel_readings.append((reading.ang_vel_x, reading.ang_vel_y, reading.ang_vel_z))

    return lin_acc_readings, ang_vel_readings
```

Concatenate the individual readings into tensors:

```python
    readings = get_imu_readings(self._imu_interface, self._num_envs)

    # Reshape to have one row for each environment and 3 columns (x, y, z)
    lin_acc_readings_tensor = torch.tensor(readings[0]).view(self._num_envs, 3)
    ang_vel_readings_tensor = torch.tensor(readings[1]).view(self._num_envs, 3)
```
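To then feed these into the RL observations, the two (num_envs, 3) tensors can be moved to the training device once per step and concatenated into one observation slice (the device name and stand-in zero tensors below are assumptions for a self-contained sketch):

```python
import torch

num_envs = 512
device = "cpu"  # "cuda:0" in the actual training pipeline

# Stand-ins for the (num_envs, 3) tensors built from the IMU readings above
lin_acc_readings_tensor = torch.zeros(num_envs, 3)
ang_vel_readings_tensor = torch.zeros(num_envs, 3)

# One (num_envs, 6) IMU observation slice, on the training device
imu_obs = torch.cat([lin_acc_readings_tensor, ang_vel_readings_tensor], dim=-1).to(device)
```

Note that because get_sensor_reading() is called per path in a Python loop, the readings start on the host, so the single .to(device) transfer per step keeps the overhead to one copy rather than 512.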