I would like to make the training of the agent from franka.py depend on variables from ppo.py (for instance num_learning_iterations). What would be the best way to use variables from ppo.py in other task scripts such as franka.py? Is there an example script I can refer to?
You can access variables from ppo.py via the VecTask class in vec_task.py. For example, you can modify ppo.py to set a variable on your task with self.vec_env.task.num_learning_iterations = num_learning_iterations. Alternatively, you can give your task direct access to the PPO object in python/rlgpu/train.py’s train() by setting task.ppo = ppo, or pass just the value you need with task.num_learning_iterations = ppo_iterations.
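If it helps, here is a small, self-contained sketch of that wiring. It uses stand-in classes rather than the actual rlgpu ones (FrankaTask, VecEnv, and PPO below are placeholders for the classes in franka.py, vec_task.py, and ppo.py), and names like num_learning_iterations and compute_reward are only illustrative, so adapt them to your own code.

```python
# Minimal, self-contained sketch of the pattern (not the actual rlgpu classes).
# It shows how a value owned by PPO can be exposed to the task via vec_env.task.

class FrankaTask:
    """Stand-in for the task defined in franka.py."""
    def __init__(self):
        self.num_learning_iterations = None  # filled in by PPO or by train()

    def compute_reward(self, iteration):
        # Hypothetical use of the PPO variable, e.g. to anneal a reward term
        # over the course of training.
        frac = iteration / self.num_learning_iterations
        return 1.0 - frac


class VecEnv:
    """Stand-in for the VecTask wrapper in vec_task.py; it holds the task."""
    def __init__(self, task):
        self.task = task


class PPO:
    """Stand-in for the PPO class in ppo.py."""
    def __init__(self, vec_env):
        self.vec_env = vec_env

    def run(self, num_learning_iterations):
        # The line suggested above: push the PPO-side value into the task
        # so franka.py can read it during training.
        self.vec_env.task.num_learning_iterations = num_learning_iterations
        for it in range(num_learning_iterations):
            reward = self.vec_env.task.compute_reward(it)


# Equivalent wiring done in train.py instead of inside ppo.py:
task = FrankaTask()
ppo = PPO(VecEnv(task))
task.ppo = ppo                      # give the task direct access to the PPO object
ppo.run(num_learning_iterations=5)  # the task now sees num_learning_iterations == 5
```

Either approach works; setting a single attribute keeps the coupling between ppo.py and the task small, while task.ppo = ppo is convenient if the task needs several PPO variables.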