AssertionError

Hi,

I changed the numEnvs from 1024 to 2 and ran the following command.

… python rlg_train.py --task Ant

And then, I got the following error:
assert(self.batch_size % self.minibatch_size == 0)
AssertionError
As far as I understand, I should also change or adapt batch_size and minibatch_size to get rid of this error. But I could not find these two variables in any file. Does anybody know where I can find them?

Thanks in advance!

I get the same error as well when changing num_envs and running any RL example.
I am on a completely fresh install of Ubuntu 20.04, so I don’t really understand what’s wrong.

As for batch_size and minibatch_size, check out this thread; maybe it works for you: Gym cuda error: running out of memory - #4 by toni.sm

-Anton

Minibatch size is in the PPO YAML. I’ve had the error before, and the message seems somewhat misleading: it usually appears when I drastically reduce the number of environments. The check runs the other way round from what I expected, the minibatch size has to divide the batch size, so a batch built from only 2 environments ends up smaller than the default minibatch.

When I’ve reduced the minibatch it works, though if it’s too close to the number of environments it appears to cause pauses in the physics rendering (I just made a post about the pauses this morning and figured out this relationship as I looked into it more).

This is a requirement of the rl_games RL library. Make sure that the parameters in the training YAML config file (under isaacgymenvs/cfg/train) satisfy the assertion self.batch_size % self.minibatch_size == 0. batch_size is computed as horizon_length * num_envs.
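For example, this is roughly how the two knobs sit in the train config. A sketch, with values I believe match the stock AntPPO.yaml (check your local copy):

params:
  config:
    horizon_length: 16     # batch_size = horizon_length * num_envs
    minibatch_size: 32768  # must divide batch_size exactly

With the default numEnvs of 4096, batch_size = 16 * 4096 = 65536 and 65536 % 32768 == 0, so training runs. With numEnvs of 2, batch_size = 16 * 2 = 32 and 32 % 32768 != 0, which is exactly the AssertionError above; dropping minibatch_size to a divisor of the new batch size, e.g. 32, fixes it.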

Hi, I was able to solve this on my GTX 1660 Ti simply by dividing both the minibatch size and the env count by 16.

I think the problem is that the implementation of rl_games doesn’t tolerate extreme ratios of batch size to minibatch size.
Add these to the relevant YAML files as below:

AnymalTerrainPPO.yaml (1.8 KB)
horizon_length: 24
minibatch_size: 1024

AnymalTerrain.yaml (4.1 KB)
env:
  numEnvs: ${resolve_default:256,${...num_envs}}
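As a sanity check, those numbers satisfy the assertion from earlier in the thread:

# batch_size = horizon_length * numEnvs = 24 * 256 = 6144
# 6144 % 1024 == 0, so minibatch_size: 1024 passes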

You can also simplify the terrain to a 4x4 block:
mapLength: 6.
mapWidth: 6.
numLevels: 4
numTerrains: 4

I rewrote the terrain curriculum generation and tweaked the terrain curriculum graduation to be optimised for a 4x4 layout (the robots all start on flat ground, and on completing the levels they move back to a randomised flat section, so it prioritises good flat walking as well as the terrains).

anymal_terrain.py (37.6 KB)

It takes about 5 minutes to train if forward velocity is prioritised in the command randomisations as per the default example, or about 10 minutes if you increase the command velocities to linear_x = -1 to 1, linear_y = -1 to 1, and yaw = -3.14 to 3.14.
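For reference, those ranges correspond to the command randomisation block in the task YAML. A sketch assuming the key names of the stock AnymalTerrain.yaml (double-check against your copy):

env:
  randomCommandVelocityRanges:
    linear_x: [-1., 1.]   # m/s
    linear_y: [-1., 1.]   # m/s
    yaw: [-3.14, 3.14]    # rad/s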

In my version I also removed the height sensors, which reduced the number of observations per environment to 48; this also needs changing in the YAML (I think the uploaded version has this updated).
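If you make the same change, the matching key to edit lives in the task YAML. A minimal sketch, assuming the stock observation layout where 140 of the 188 observations are the height samples (verify against your files):

env:
  numObservations: 48   # stock config uses 188; dropping the 140 height samples leaves 48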