Tesla K80 dual GPU

I’m working on a VM with multiple GPU nodes; each node has an NVIDIA Tesla K80 dual GPU with 24 GB of GPU RAM in total (each of the two GPUs on the card has 12 GB of RAM).
I need to run a machine learning model on a single GPU with 24 GB of RAM, so I want to know: is there any possible configuration of the GPU card that would make “nvidia-smi” detect it as one GPU with a single large shared memory?