Please take a look at this website:
In the code, right above the sentence “We should see our cartpole policy running!”, inference is run on the trained neural network in this line:
> action, _states = model.predict(obs)
I am using NVIDIA-Omniverse/OmniIsaacGymEnvs (Reinforcement Learning Environments for Omniverse Isaac Gym) to train a neural network. That framework does not use Stable Baselines; instead it uses Denys88/rl_games.
I have not found a way to run inference manually on a neural network trained with this framework. I know you can run inference by setting test=True on the command line and passing a checkpoint, but I want to do it the way the code above does, i.e. directly in the code itself. The reason is that I want to run inference on different neural networks depending on the current state of the environment. Ideally I would have, for example, three neural networks, model_0, model_1, and model_2, and switch between them at runtime within the same program.
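To make the goal concrete, here is a minimal sketch of the dispatch pattern I have in mind. Note the `predict` stand-ins, the `select_model` thresholds, and the observation layout are all made up for illustration; they are placeholders for however rl_games actually exposes a loaded policy, which is exactly the part I have not figured out.

```python
from typing import Callable, Dict, List

# Placeholder types; in practice these would be numpy arrays or torch tensors.
Obs = List[float]
Action = int

def make_dummy_model(action: Action) -> Callable[[Obs], Action]:
    """Stand-in for a trained policy; always returns a fixed action.

    In the real program this would wrap a checkpoint loaded via rl_games.
    """
    def predict(obs: Obs) -> Action:
        return action
    return predict

# Hypothetical stand-ins for model_0, model_1, model_2.
models: Dict[int, Callable[[Obs], Action]] = {
    0: make_dummy_model(0),
    1: make_dummy_model(1),
    2: make_dummy_model(2),
}

def select_model(obs: Obs) -> int:
    """Hypothetical state-based rule: choose a policy from the first
    observation component (thresholds invented for illustration)."""
    if obs[0] < -0.5:
        return 0
    if obs[0] > 0.5:
        return 2
    return 1

def act(obs: Obs) -> Action:
    """Dispatch the observation to whichever policy the state selects."""
    return models[select_model(obs)](obs)
```

The open question is what replaces `make_dummy_model`: how to load an rl_games checkpoint into a policy object inside my own script and call it per-step, the way `model.predict(obs)` works in Stable Baselines.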