Use all graphics memory in DGX Station

I have bought a DGX Station with a total of 128 GB of GPU memory, but whenever I try to train a TensorFlow/Keras model that needs more than 32 GB of memory (i.e. more than one Tesla V100 card can hold) it crashes. How do I enable the full memory pool? I have tried

config = tf.ConfigProto()
config.gpu_options.experimental.use_unified_memory = True

but it still fails.

The 128 GB is not one memory pool: a DGX Station has four Tesla V100s with 32 GB each, and by default TensorFlow places your model on a single one of them. To use the memory of all four cards you need to parallelize training across the GPUs, for example with Horovod https://github.com/uber/horovod or TF’s distribution strategies https://www.tensorflow.org/guide/distributed_training.
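As a minimal sketch of the second option, assuming a TF version with tf.distribute (the toy model and the random training data below are placeholders, not anything from your setup), you wrap model construction in a MirroredStrategy scope and Keras handles the replication:

import numpy as np
import tensorflow as tf

# MirroredStrategy replicates the model on every visible GPU and splits
# each batch across them (data parallelism).
strategy = tf.distribute.MirroredStrategy()
print("Number of devices:", strategy.num_replicas_in_sync)

with strategy.scope():
    # Build and compile inside the strategy scope so the variables are
    # mirrored across all four V100s.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(1024, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

# Placeholder data; scale the global batch size with the number of
# replicas so each GPU still gets a reasonable per-device batch.
x = np.random.random((256, 784)).astype("float32")
y = np.random.randint(0, 10, size=(256,))
model.fit(x, y, batch_size=64 * strategy.num_replicas_in_sync, epochs=1)

Note that this is data parallelism: each GPU still holds a full copy of the model, and only the batch is split, so it helps when the memory pressure comes from batch size or activations rather than from a model that is itself larger than 32 GB. Horovod works along the same lines, wrapping your optimizer with hvd.DistributedOptimizer and running one process per GPU.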