GPU acceleration of a reinforcement learning algorithm

I have seven shallow pre-trained neural networks in PyTorch, saved in .pt format. I am using the Q-learning algorithm, which updates a lookup table based on predictions from a neural network. My current code runs a for loop that, on each iteration, randomly loads one of the models, gets its predictions, and updates the lookup table with the learning update rule. I need to repeat this about half a million times. Can I run this code on my GPU from Python to make it faster?
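A simplified sketch of the loop is below. The model paths, input shape, table size, and reward logic are placeholders (my real code differs); the stand-in networks replace the actual .pt files so the snippet runs on its own. It shows the setup I'd like to speed up: models loaded once and moved to the GPU, inference under `torch.no_grad()`, and the Q-table kept on the same device.

```python
import random
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# In the real code the models come from disk, e.g.:
#   models = [torch.load(path, map_location=device).eval() for path in paths]
# Stand-in shallow networks so this sketch is self-contained:
models = [
    nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4)).to(device).eval()
    for _ in range(7)
]

n_states, n_actions = 100, 4                      # placeholder table size
q_table = torch.zeros(n_states, n_actions, device=device)
alpha, gamma = 0.1, 0.99                          # learning rate, discount factor

for step in range(1000):                          # ~500_000 in the real run
    model = random.choice(models)                 # pick one of the 7 models
    state = random.randrange(n_states)
    action = random.randrange(n_actions)
    next_state = random.randrange(n_states)       # placeholder transition
    x = torch.rand(1, 8, device=device)           # placeholder input features
    with torch.no_grad():                         # inference only, no autograd graph
        reward = model(x).max().item()            # placeholder: prediction as reward
    # Q-learning update rule
    q_table[state, action] += alpha * (
        reward + gamma * q_table[next_state].max() - q_table[state, action]
    )
```

Loading each model once up front (instead of inside the loop) and keeping the table on the GPU already helps, but I'd like to know if the loop itself can be pushed onto the GPU.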

Any suggestions would be helpful.

Thanks and Regards,