AnymalTerrain adaptation to new robot does not work (OmniIsaacGym)

I am currently using NVIDIA-Omniverse/OmniIsaacGymEnvs (Reinforcement Learning Environments for the Omniverse Isaac Gym framework, https://github.com/NVIDIA-Omniverse/OmniIsaacGymEnvs). Running the example “AnymalTerrain” works perfectly. I therefore tried adapting the task so that it uses the Unitree A1 instead of the Anymal robot. However, when I do so, I get the following error:

> RuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling cublasSgemmStridedBatched( handle, opa, opb, m, n, k, &alpha, a, lda, stridea, b, ldb, strideb, &beta, c, ldc, stridec, num_batches)

This error occurs at line 335 of https://github.com/NVIDIA-Omniverse/OmniIsaacGymEnvs/blob/main/omniisaacgymenvs/tasks/anymal_terrain.py (adapted to my Unitree A1 robot; again, there is no such problem with the example file “anymal_terrain.py”). The error occurs in “quat_rotate_inverse”.
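For context, quat_rotate_inverse in omni.isaac.core’s torch rotation utilities is built from elementwise operations plus one batched matrix multiply, and it is that batched matmul which calls cublasSgemmStridedBatched in the error above. The sketch below is my reconstruction, assuming the w-first quaternion layout used by Isaac Sim; it is only meant to show the expected shapes, not to stand in for the installed implementation:

```python
import torch

def quat_rotate_inverse(q: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    """Rotate v by the inverse of q. q: (N, 4) in (w, x, y, z) order, v: (N, 3)."""
    q_w = q[:, 0]
    q_vec = q[:, 1:]
    a = v * (2.0 * q_w ** 2 - 1.0).unsqueeze(-1)
    b = torch.cross(q_vec, v, dim=-1) * q_w.unsqueeze(-1) * 2.0
    # The batched matmul below is what reaches cuBLAS (cublasSgemmStridedBatched);
    # if q or v is already corrupted, the failure gets reported here.
    c = q_vec * torch.bmm(q_vec.view(q.shape[0], 1, 3), v.view(q.shape[0], 3, 1)).squeeze(-1) * 2.0
    return a - b + c

# At the failing call, q is self.base_quat with shape (num_envs, 4) and
# v is the linear part of self.base_velocities with shape (num_envs, 3).
```

So the cuBLAS error itself is probably a downstream symptom: the inputs to that matmul were already broken by whatever produced them.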

I tried printing “self.base_quat” and “self.base_velocities”, which produced the following error:

> RuntimeError: numel: integer multiplication overflow

The assignment statement itself does not produce any error message.
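That pattern (the assignment passes silently, the print blows up) is typical of asynchronous CUDA execution: kernel launches are only checked at the next synchronization point, so the error surfaces far away from the operation that caused it. A rough way to localize it is sketched below; the commented statement names are just illustrative placeholders for whatever line 335 and its surroundings look like in your task:

```python
import os

# Make every kernel launch synchronous so the traceback points at the real
# failing op. This must happen before CUDA is initialized; equivalently,
# export CUDA_LAUNCH_BLOCKING=1 in the shell before launching the training script.
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"

import torch


def checkpoint(label: str) -> None:
    """Raise any CUDA error still pending on the device, tagged with a label."""
    torch.cuda.synchronize()
    print(f"[ok] {label}")


# Sprinkled around the suspicious statements in the task, e.g.:
#
#   self.base_quat = ...          # the assignment that "passes"
#   checkpoint("base_quat assigned")
#   self.base_velocities = ...
#   checkpoint("base_velocities assigned")
#   projected = quat_rotate_inverse(self.base_quat, self.base_velocities[:, 0:3])
#   checkpoint("quat_rotate_inverse done")
```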

By “adapting” I mean that I changed almost nothing: I just swapped the USD files and renamed everything from “anymal” to “unitree” where appropriate. My Unitree A1 files (config, robot, view and task files) are otherwise identical to AnymalTerrain, except that my A1 USD is not an instanceable asset.

I have the same problem with the Unitree Go1 robot. I even tried making the needed assets instanceable, but to no avail.

When printing the base_quat object, I get the following error:

> …/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [36,0,0], thread: [64,0,0] Assertion index >= -sizes[i] && index < sizes[i] && "index out of bounds" failed.

> RuntimeError: CUDA error: device-side assert triggered
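Of the two messages, the IndexKernel assert is the informative one: some index tensor used on the GPU is outside the bounds of the tensor it indexes, and the later errors (the cuBLAS failure, the numel overflow) are most likely fallout from the resulting broken CUDA context. A minimal, self-contained reproduction of that class of failure:

```python
import torch

# Indexing a CUDA tensor with an out-of-range index fires the device-side
# assert in IndexKernel.cu ("index >= -sizes[i] && index < sizes[i]").
x = torch.zeros(4, device="cuda")
idx = torch.tensor([5], device="cuda")  # 5 is out of bounds for a size-4 tensor

y = x[idx]   # the gather kernel is launched asynchronously; no error raised yet
print(y)     # the error typically surfaces here, at the next synchronization point
```

That points to a size mismatch somewhere upstream: an index or buffer in the adapted task is being built against a different number of elements than the tensors it is used on.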

Did you reach any solution for this issue?

I solved the error. It has to do with the “count” variable in the RigidPrimView class that is the parent of AnymalView.
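For anyone who lands here with the same problem: count is the number of prims the view actually matched on the stage. If the non-instanceable USD makes the view’s prim_paths_expr match a different number of prims than the stock Anymal asset does, buffers sized from count no longer line up with num_envs, and the out-of-bounds indexing above follows. A quick sanity check, with all attribute names assumed from the stock AnymalTerrain task (self._anymals with _base and _knees sub-views); adapt them to your Unitree view class:

```python
# Hypothetical check to run once after the scene is set up; every attribute
# name below is an assumption based on the stock task, not verified API.
def check_view_counts(task, num_envs: int) -> None:
    print("articulations matched:", task._anymals.count)         # expected: num_envs
    print("base prims matched:   ", task._anymals._base.count)   # expected: num_envs
    print("knee prims matched:   ", task._anymals._knees.count)  # expected: a fixed multiple of num_envs
    assert task._anymals.count == num_envs, (
        "the view matched a different number of robots than num_envs; "
        "check the prim_paths_expr against the prim names in your USD"
    )
```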