Bug in get_world_from_local() inside the torch backend transformation.py

I have recently noticed a bug in the get_world_from_local() implementation of the torch backend. I am implementing a reinforcement learning task with Omniverse Gym, and I used a RigidPrimView to randomize the starting position of a prim inside my environment. During development I noticed that calling the set_local_poses() function of the RigidPrimView did not correctly update the position of prims with respect to their parent's translation. After some debugging I tracked the problem down to the torch backend implementation of the get_world_from_local() function.
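
For reference, here is a minimal sketch of how I trigger the behaviour. The stage setup is omitted, and the /World/Env_*/Cube paths are placeholders for my actual environment; each Env_* Xform is assumed to be translated away from the origin:

from omni.isaac.kit import SimulationApp
simulation_app = SimulationApp({"headless": True})

import torch
from omni.isaac.core import World
from omni.isaac.core.prims import RigidPrimView

world = World(backend="torch")
# ... stage setup omitted: each /World/Env_* Xform (placeholder paths) is
# translated away from the origin and contains a rigid prim named Cube ...
view = RigidPrimView(prim_paths_expr="/World/Env_*/Cube")
world.scene.add(view)
world.reset()

# Place every cube 0.5 above its parent's origin (identity wxyz orientation).
translations = torch.zeros((view.count, 3))
translations[:, 2] = 0.5
orientations = torch.tensor([[1.0, 0.0, 0.0, 0.0]]).repeat(view.count, 1)
view.set_local_poses(translations, orientations)

# Expected: world position = parent translation + (0, 0, 0.5). With the bug,
# the parent translation is not applied correctly.
positions, _ = view.get_world_poses()
print(positions)

Here is the original implementation of get_world_from_local():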

def get_world_from_local(parent_transforms, translations, orientations, device):
    calculated_positions = create_zeros_tensor(shape=[translations.shape[0], 3], dtype="float32", device=device)
    calculated_orientations = create_zeros_tensor(shape=[translations.shape[0], 4], dtype="float32", device=device)
    my_local_transforms = tf_matrices_from_poses(translations=translations, orientations=orientations, device=device)
    # TODO: vectorize this
    for i in range(translations.shape[0]):
        world_transform = torch.matmul(parent_transforms[i], my_local_transforms[i])  # <-- the problematic line
        transform = Gf.Transform()
        transform.SetMatrix(Gf.Matrix4d(torch.transpose(world_transform, 0, 1).tolist()))
        calculated_positions[i] = torch.tensor(transform.GetTranslation(), dtype=torch.float32, device=device)
        calculated_orientations[i] = gf_quat_to_tensor(transform.GetRotation().GetQuat())
    return calculated_positions, calculated_orientations

This implementation of the function is missing a transpose operation on the first line inside the for loop. My best guess is that tf_matrices_from_poses() builds column-vector matrices (which is why the result is transposed before being handed to Gf.Matrix4d), while parent_transforms arrive in USD's row-vector convention, so the parent transform has to be transposed before the multiplication. Here is my fix:

def get_world_from_local(parent_transforms, translations, orientations, device):
    calculated_positions = create_zeros_tensor(shape=[translations.shape[0], 3], dtype="float32", device=device)
    calculated_orientations = create_zeros_tensor(shape=[translations.shape[0], 4], dtype="float32", device=device)
    my_local_transforms = tf_matrices_from_poses(translations=translations, orientations=orientations, device=device)
    # TODO: vectorize this
    for i in range(translations.shape[0]):
        # fix: transpose the parent transform into the same convention as the local transform
        world_transform = torch.matmul(torch.transpose(parent_transforms[i], 0, 1), my_local_transforms[i])
        transform = Gf.Transform()
        transform.SetMatrix(Gf.Matrix4d(torch.transpose(world_transform, 0, 1).tolist()))
        calculated_positions[i] = torch.tensor(transform.GetTranslation(), dtype=torch.float32, device=device)
        calculated_orientations[i] = gf_quat_to_tensor(transform.GetRotation().GetQuat())
    return calculated_positions, calculated_orientations
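
To illustrate why I believe the transpose is needed, here is a small standalone check, independent of Isaac Sim's helpers (the conventions are my best guess): a parent translated by (1, 2, 3) with a child at local offset (0, 0, 5) should end up at a world position of (1, 2, 8):

import torch
from pxr import Gf

# Child local transform in column-vector convention (translation in the last
# column), which is what tf_matrices_from_poses() appears to produce, given
# that the result is transposed before being handed to Gf.Matrix4d.
local = torch.eye(4)
local[:3, 3] = torch.tensor([0.0, 0.0, 5.0])

# Parent transform in USD's row-vector convention (translation in the last
# row), which is presumably how parent_transforms arrive from the stage.
parent = torch.eye(4)
parent[3, :3] = torch.tensor([1.0, 2.0, 3.0])

for name, world in (
    ("original", torch.matmul(parent, local)),
    ("fixed", torch.matmul(torch.transpose(parent, 0, 1), local)),
):
    transform = Gf.Transform()
    transform.SetMatrix(Gf.Matrix4d(torch.transpose(world, 0, 1).tolist()))
    print(name, transform.GetTranslation())

Only the "fixed" variant prints the expected (1, 2, 8).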

This seems to fix the problem for me, but I have not yet had the time to do a full analysis of the issue.

As far as I can tell, the omni.isaac.core package is not available on GitHub, where I would have been able to open an issue or a pull request, right? Can someone please confirm this bug and my suggested fix?