[Bug report] Error when using the Robot Assembler extension on a CUDA device

Hi, I’m trying to use the Robot Assembler extension in Isaac Lab, which runs Isaac Sim on a CUDA device with the torch backend.
When I try to assemble robots, it throws the following error. It seems the extension is not designed to be used on CUDA?

  File "/isaac-sim/exts/omni.isaac.robot_assembler/omni/isaac/robot_assembler/robot_assembler.py", line 446, in assemble_articulations
    assemblage = self.assemble_rigid_bodies(
  File "/isaac-sim/exts/omni.isaac.robot_assembler/omni/isaac/robot_assembler/robot_assembler.py", line 377, in assemble_rigid_bodies
    self._move_obj_b_to_local_pos(
  File "/isaac-sim/exts/omni.isaac.robot_assembler/omni/isaac/robot_assembler/robot_assembler.py", line 555, in _move_obj_b_to_local_pos
    a_rot = quats_to_rot_matrices(a_orient)
  File "/isaac-sim/exts/omni.isaac.core/omni/isaac/core/utils/numpy/rotations.py", line 120, in quats_to_rot_matrices
    rot = Rotation.from_quat(q)
  File "_rotation.pyx", line 637, in scipy.spatial.transform._rotation.Rotation.from_quat
  File "_rotation.pyx", line 514, in scipy.spatial.transform._rotation.Rotation.__init__
  File "/isaac-sim/exts/omni.isaac.ml_archive/pip_prebundle/torch/_tensor.py", line 1064, in __array__
    return self.numpy().astype(dtype, copy=False)
TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.

This can be reproduced in Isaac Sim with the world = World(device="cuda:0", backend="torch") setting.
(Tested with Isaac Sim 4.0.0)
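
For anyone trying to reproduce it, a minimal sketch along these lines should trigger the same error. The robot prim paths and mount frames are placeholders, and the assemble_articulations arguments should be double-checked against the RobotAssembler docs for your Isaac Sim version:

from omni.isaac.core import World
from omni.isaac.robot_assembler import RobotAssembler

# the torch backend on a CUDA device is what triggers the bug
world = World(device="cuda:0", backend="torch")
# ... add two articulated robots to the stage, e.g. /World/robot_a and /World/robot_b ...

assembler = RobotAssembler()
assembler.assemble_articulations(
    "/World/robot_a",   # base robot prim path (placeholder)
    "/World/robot_b",   # robot to attach (placeholder)
    "/base_link",       # mount frame on the base robot (placeholder)
    "/base_link",       # mount frame on the attached robot (placeholder)
)
# -> TypeError: can't convert cuda:0 device type tensor to numpy ...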

After digging into this further, I’ve found the reason for the crash. The method _move_obj_b_to_local_pos in robot_assembler.py contains the line a_trans, a_orient = XFormPrim(base_mount_path).get_world_pose(). Depending on the backend, this call returns either numpy arrays or torch tensors. The next line (a_rot = quats_to_rot_matrices(a_orient)) does not expect a_orient to be a tensor, so it crashes.
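
The same failure can be reproduced in isolation, without Isaac Sim: scipy's Rotation.from_quat converts its input with np.asarray, and a CUDA tensor refuses that conversion. A minimal sketch, assuming a CUDA-capable torch install:

import torch
from scipy.spatial.transform import Rotation

q = torch.tensor([0.0, 0.0, 0.0, 1.0], device="cuda:0")  # identity quaternion on the GPU

try:
    Rotation.from_quat(q)   # np.asarray(q) calls Tensor.__array__, which rejects CUDA tensors
except TypeError as e:
    print(e)                # can't convert cuda:0 device type tensor to numpy ...

Rotation.from_quat(q.cpu().numpy())  # fine once the data is copied back to the host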

I was able to avoid the crash with the following changes. I don’t think this is a proper fix, but at least I can use the extension. If anyone needs this feature right now, you can modify robot_assembler.py as follows.

# note: this requires `import torch` near the top of robot_assembler.py if it is not already imported
a_trans, a_orient = XFormPrim(base_mount_path).get_world_pose()
# add these 3 lines: with the torch backend the pose is a (CUDA) tensor, so copy it to the host as numpy
if isinstance(a_trans, torch.Tensor):
    a_trans = a_trans.cpu().numpy()
    a_orient = a_orient.cpu().numpy()
t_bc, q_bc = XFormPrim(attach_mount_path).get_local_pose()
# add these 3 lines: same conversion for the local pose of the attach mount frame
if isinstance(t_bc, torch.Tensor):
    t_bc = t_bc.cpu().numpy()
    q_bc = q_bc.cpu().numpy()
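
A slightly more general shape for the fix could be to normalize whatever get_world_pose/get_local_pose returns to numpy right where it is read, so the rest of the method stays untouched. This is just a sketch, and the _as_numpy helper is my own name, not part of the extension:

import numpy as np

def _as_numpy(x):
    # torch tensors (CPU or CUDA) expose .cpu(); numpy arrays pass straight through np.asarray
    if hasattr(x, "cpu"):
        x = x.cpu()
    return np.asarray(x)

a_trans, a_orient = (_as_numpy(v) for v in XFormPrim(base_mount_path).get_world_pose())
t_bc, q_bc = (_as_numpy(v) for v in XFormPrim(attach_mount_path).get_local_pose())

Checking for a .cpu attribute keeps robot_assembler.py from having to import torch at all, so the numpy backend path is unaffected.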

Could someone please look into this bug and provide a proper fix? Thanks.