How to save/load a trained agent for an RL problem?

Hi all,

How does one efficiently save/load a trained agent? Somehow, the existing examples do not cover this part.

Hi @hyungjoo237,

That depends on how you trained it. If you used the built-in rl-pytorch framework with the script, you can simply pass the --resume= flag with an epoch number, plus the --test flag to indicate that it shouldn't try to train further.
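A minimal sketch of what that invocation might look like, assuming the training entry point is train.py and a task named Ant (both illustrative; substitute your own script and task names):

```shell
# Hypothetical example: resume the rl-pytorch agent from the checkpoint
# saved at epoch 1000, and only evaluate it (--test disables further training).
python train.py --task=Ant --resume=1000 --test
```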

If you’re using rl_games and the script, use --checkpoint= and explicitly pass one of the networks it saves in the nn directory, rather than using --resume. You’ll still want to use --test as well, though.

I also typically pass --num_envs with a lower number of environments so that things run and render faster.
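Putting the rl_games flags together, a sketch might look like this (again assuming train.py and a task named Ant; the checkpoint path is illustrative, so point it at a file rl_games actually wrote under nn):

```shell
# Hypothetical example: load a specific saved network from the nn directory,
# run in evaluation mode, and use fewer environments for faster rendering.
python train.py --task=Ant --checkpoint=nn/Ant.pth --test --num_envs=4
```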

Take care,