GPU usage very "discontinuous" while training with a high batch size (TensorFlow)

(yeah, maybe the title is too long)
I’m using the nvidia-system-monitor GUI to monitor my GPU usage (it’s an MX130).
While training SqueezeNet with a “high” batch size (20), the usage graph gets… I don’t know, see for yourself:

I’m still finding my way into DL, so this is probably a so-called “noobie” question, but I’m curious about what’s actually happening.
(SqueezeNet v1.1 by the way)
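In case it helps anyone reproduce or quantify this without a GUI: assuming the NVIDIA driver tools are installed, the same jagged utilization pattern should be visible by polling `nvidia-smi` from a terminal while training runs, e.g. once per second:

```shell
# Print GPU utilization (%) and memory used every second while training runs.
# Stop with Ctrl+C. Requires the NVIDIA driver's nvidia-smi to be on PATH.
nvidia-smi --query-gpu=utilization.gpu,memory.used --format=csv -l 1
```

If utilization repeatedly spikes and then drops toward zero between steps, that usually points at the GPU waiting on something (often the input pipeline) rather than the model itself.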