Treat multiple GPUs as a single GPU with combined memory


Is there any way to use multiple GPUs as a single GPU with combined memory for machine learning with the TensorFlow framework?
For example, the DGX Station has four GPUs (V100 32 GB). In this case, could a user treat the workstation as having 128 GB of memory, 20,480 CUDA cores, and 2,560 tensor cores?

Best regards.

There is no way to make TensorFlow transparently span all four GPUs as a single logical device. You would need to explicitly assign different sections of your graph to different GPUs using with tf.device() scopes.
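A minimal sketch of this manual placement, assuming TensorFlow 2.x with eager execution; soft device placement is enabled so the example also runs on a machine without two GPUs (it silently falls back to CPU there):

```python
import tensorflow as tf

# Assumption for illustration: fall back to CPU when a requested GPU
# is absent, so the sketch runs even without a multi-GPU machine.
tf.config.set_soft_device_placement(True)

# Place one part of the computation on the first GPU...
with tf.device('/GPU:0'):
    a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    b = tf.matmul(a, a)

# ...and a later part on the second GPU. Intermediate tensors are
# copied between devices automatically, but memory is NOT pooled:
# each GPU only holds the tensors explicitly placed on it.
with tf.device('/GPU:1'):
    c = b + 1.0

print(c.numpy())
```

Note that this is model parallelism by hand: each tensor still has to fit in a single GPU's 32 GB, so the four devices never appear as one 128 GB pool.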