Is anyone studying the Anymal and AnymalTerrain examples? We can discuss them together under this post.
I am studying it now. How can I contact you?
I’m trying to build a quadruped project right now.
I tried the GitHub leggedrobotics repo (Anymal Terrain) by nikitardian.
Can I join?
Here is what I’ve been practicing. I really want to study this area.
I would love it if you guys could share your experience. I am having a really hard time making it work.
@CaptainMaoli
@user38580
@MinW00
Hi, I have it working without problems.
My setup:
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060
Nvidia driver version: 515.48.07
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Libc version: glibc-2.31
Python version: 3.8.10 (default, Mar 15 2022, 12:22:08) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.13.0-52-generic-x86_64-with-glibc2.29
[pip3] numpy==1.19.5
[pip3] torch==1.12.0+cu116
[pip3] torchaudio==0.12.0+cu116
[pip3] torchvision==0.13.0+cu116
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
From the terminal:
cd Desktop/legged_gym-master/legged_gym/scripts
python train.py --task=anymal_c_rough
Press V in the display window to speed up training (it toggles viewer sync).
python play.py --task=anymal_c_rough
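If your GPU is short on memory, train.py also accepts --headless and --num_envs flags (from memory, so double-check with --help), e.g.:
python train.py --task=anymal_c_rough --headless --num_envs=2048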
I am planning to make a YouTube video about legged_gym, but there is also this one: Set up Isaac Gym with Legged Robots: Reinforcement Learning - YouTube
Please post any questions you have.
I found it worked better with a 12 GB GPU, 24 GB of system RAM, and a fast CPU.
Sincerely,
Sujit Vasanth
Does anyone know the max epoch number for AnymalTerrain?
Is it episodeLength_s / dt = 4000?
But it only reaches about 1000. Thank you!
Hi Kumiko, perhaps I am being too simplistic, but you can set the max_epochs variable in
AnymalTerrainPPO.yaml
It is set by default to 1500, and when it saved my model on completion of training, the last checkpoint was ‘runs/AnymalTerrain/nn/last_AnymalTerrainep1501rew[13.42].pth’
and the command line printed MAX EPOCHS NUM!
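As for the 4000 vs ~1000 puzzle: the episode limit is counted in policy steps, not raw sim steps, because the policy only acts every decimation sim steps. A quick sketch of the arithmetic, assuming the stock AnymalTerrain values from memory (episodeLength_s = 20, sim dt = 0.005, control decimation = 4), so verify against your own yaml:

# Back-of-envelope arithmetic; the three values below are assumed defaults.
episode_length_s = 20.0  # env.episodeLength_s in AnymalTerrain.yaml
sim_dt = 0.005           # sim.dt
decimation = 4           # control.decimation: sim steps per policy step

control_dt = sim_dt * decimation                    # 0.02 s per policy step
max_episode_length = episode_length_s / control_dt  # 1000 policy steps
sim_steps = episode_length_s / sim_dt               # 4000 raw sim steps

# max_epochs (1500 in AnymalTerrainPPO.yaml) is unrelated: it caps the number
# of PPO training iterations, not steps within an episode.
print(sim_steps, max_episode_length)  # 4000.0 1000.0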
For anyone struggling to run AnymalTerrain on a lower-spec device: I was able to run it easily on a GTX 1660 Ti, which has 6 GB of graphics memory, with 16 GB of system RAM.
To do this:
Reduce the minibatch size and the number of environments by dividing both by 16.
I think the problem is that the rl_games implementation doesn’t tolerate extreme ratios of batch size to minibatch size (there is a small sanity-check sketch after the snippets below).
Add these to the relevant YAML files as below:
AnymalTerrainPPO.yaml (1.8 KB)
horizon_length: 24
minibatch_size: 1024
AnymalTerrain.yaml (4.1 KB)
env:
numEnvs: ${resolve_default:256,${...num_envs}}
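Why the two numbers have to shrink together: rl_games builds its PPO batch as horizon_length * num_envs and, as far as I know, asserts that the batch is an integer multiple of minibatch_size. A minimal sanity check with the values above:

# Sanity check for the rl_games batch/minibatch relationship (my understanding
# of the constraint; the variable names mirror the yaml keys).
horizon_length = 24
num_envs = 256
minibatch_size = 1024

batch_size = horizon_length * num_envs  # 6144 transitions per PPO epoch
assert batch_size % minibatch_size == 0, "rl_games will refuse this combination"
print(batch_size // minibatch_size)     # 6 minibatches per epoch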
You can also simplify the terrain to a 4x4 block:
mapLength: 6.
mapWidth: 6.
numLevels: 4
numTerrains: 4
I rewrote the terrain curriculum generation and tweaked the terrain curriculum graduation to be optimised for a 4x4 layout: the robots all start on flat ground, and on completing the levels they move back to a randomised flat section, so it prioritises good flat walking as well as the terrains (a rough sketch of the idea follows the attachment).
anymal_terrain.py (37.6 KB)
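I have not copied the uploaded file here, but the graduation rule is roughly the following. This is a minimal sketch with hypothetical names (update_curriculum, distance, threshold), not the actual code in anymal_terrain.py:

# Hypothetical sketch of the 4x4 curriculum graduation described above.
import torch

num_levels, num_terrains = 4, 4

def update_curriculum(levels, cols, distance, threshold):
    # Robots that walked far enough this episode move up one terrain level.
    levels = torch.where(distance > threshold, levels + 1, levels)
    # Robots that finish the top level restart on a randomised flat section
    # (level 0), so good flat walking keeps being reinforced.
    finished = levels >= num_levels
    levels = torch.where(finished, torch.zeros_like(levels), levels)
    cols = torch.where(finished, torch.randint_like(cols, num_terrains), cols)
    return levels, cols

levels = torch.zeros(8, dtype=torch.long)
cols = torch.randint(num_terrains, (8,))
levels, cols = update_curriculum(levels, cols, torch.rand(8) * 10.0, threshold=5.0)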
It takes about 5 minutes to train if forward velocity is prioritised in the command randomisations, as per the default example, or about 10 minutes if you increase the command velocity ranges to linear_x = -1 to 1, linear_y = -1 to 1, and yaw = -3.14 to 3.14.
In my version I also removed the height sensors, which reduced the number of observations per environment to 48; this also needs changing in the yaml (I think the uploaded version has this updated).
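For reference, the 48 falls out of the stock observation layout once the height samples are removed. The breakdown below is from memory (the default env has 188 observations, of which 140 are height points), so treat the numbers as assumptions:

# Assumed AnymalTerrain observation breakdown -- verify against the yaml.
obs = {
    "base_lin_vel": 3, "base_ang_vel": 3, "projected_gravity": 3,
    "commands": 3, "dof_pos": 12, "dof_vel": 12, "last_actions": 12,
    "height_samples": 140,
}
total = sum(obs.values())                        # 188, the stock numObservations
without_heights = total - obs["height_samples"]  # 48, matching the post above
print(total, without_heights)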
Thanks a lot!