I spent some time reworking the legged_gym and rsl_rl (PPO implementation) libraries into a form I can understand better. I merged the two libraries by moving all of their files into a single folder and updating the dependencies and imports in each file so that everything runs from that one folder without errors. It is configured for a single task: you only need to copy your URDF file and meshes into the specified folder, and update the robot name and number of DOFs in the legged_robot_config file accordingly. Since all the files live in the same folder, no installation or setup is required.
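To give a sense of the kind of edits meant above, here is a minimal sketch of a single-robot config, assuming the merged files keep the upstream legged_gym config layout (class-based configs like LeggedRobotCfg). The URDF path, robot name, joint names, and DOF count below are placeholders, not values from the actual files.

```python
# Minimal sketch of the per-robot settings you would adjust in legged_robot_config.
# Attribute names follow the upstream legged_gym LeggedRobotCfg convention;
# all concrete values here are placeholders for your own robot.

class LeggedRobotCfg:
    class env:
        num_actions = 12  # set to the number of actuated DOFs in your URDF

    class asset:
        # point this at the URDF you copied next to the scripts
        file = "resources/robots/my_robot/urdf/my_robot.urdf"
        name = "my_robot"

    class init_state:
        # default joint angles keyed by the joint names in your URDF
        default_joint_angles = {
            "FL_hip_joint": 0.0,
            # ... one entry per actuated joint
        }
```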
In total, 18 files are involved in building the simulation and running the RL training.
If it seems helpful, send me a message and I will share the files with you.