TensorFlow memory usage and extra processes/threads. How can we prevent so many resources from being used?

I have some issues with TensorFlow on a couple of Jetson Nanos I picked up. I am using Python 3 and TensorFlow 2. The issue is that when I load my program, it spawns between 15 and 20 processes, all using a large amount of RAM: almost 3 GB on one of the Nanos, but only up to 1 GB on the other. Even 1 GB seems steep, as my models are only about 2 MB each (there are 3 of them). Below is what they look like.

My question is: how can we tweak TensorFlow's multiprocessing/threading features? In TF 2 there are no longer sessions, and I am unsure how to configure these machines (2 Nanos and a Xavier) to run my neural networks without using a horrible amount of RAM and starting so many processes. Any help is greatly appreciated. It is curious that I installed the same versions of the libraries on both Nanos, yet one takes 1177 MB and the other 287 MB. This is using TensorFlow 1.15 and Keras 2.3.1. Something is different in the CUDA or TensorFlow setup. I installed JetPack the same way, with the same image, on both, and deployed the same user code. Any ideas, anyone? Thanks!!!
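For reference, this is the kind of configuration I have been experimenting with (these calls are from the `tf.config` module in TF 2; the thread counts of 2 are just values I picked, and whether any of this also reins in the process count and GPU memory grab on the Jetsons is exactly what I'm unsure about):

```python
import tensorflow as tf

# Cap the CPU thread pools. These must be called before TensorFlow
# starts doing real work, or they raise a RuntimeError.
tf.config.threading.set_intra_op_parallelism_threads(2)  # threads within a single op
tf.config.threading.set_inter_op_parallelism_threads(2)  # ops run concurrently

# Ask TensorFlow to allocate GPU memory on demand instead of
# reserving nearly all of it up front (the default behavior).
for gpu in tf.config.experimental.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)
```

On a machine with no visible GPU the loop simply does nothing, so the same snippet runs on all three boards.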

I have verified that no other code is affecting this. It is the CUDA and TensorFlow libraries that are spinning up the processes and using all of the RAM.