CUDA out of memory error when running Torch project

When I run my machine learning project with Torch, I get a ‘CUDA out of memory’ error, even though my machine has two GTX 1080 Ti cards with 11 GB of memory each, joined by an SLI bridge.
I monitored GPU memory usage with ‘nvidia-smi’ while the project was running (Ubuntu 16.04). It shows the first card’s memory as nearly full while the second card’s is nearly empty. Why don’t the two cards share memory? Or do I need to modify the code, and if so, where?

Thank you very much.

You need to modify your Torch code if you want to use two GPUs.

Just google “pytorch multiple gpus” and start reading.
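
For what it’s worth, the usual starting point in PyTorch is torch.nn.DataParallel, which replicates the model on each visible GPU and splits every input batch between them. A minimal sketch (the toy model and layer sizes are placeholders, not your project’s code):

```python
import torch
import torch.nn as nn

# Toy model; any nn.Module works the same way.
model = nn.Sequential(
    nn.Linear(1024, 512),
    nn.ReLU(),
    nn.Linear(512, 10),
)

if torch.cuda.device_count() > 1:
    # Replicate the model on every visible GPU and split each
    # input batch across them along the batch dimension.
    model = nn.DataParallel(model)

model = model.cuda()

x = torch.randn(64, 1024).cuda()  # batch gets scattered across both cards
out = model(x)                    # outputs are gathered back on GPU 0
```

Note that each replica still has to fit on a single card: DataParallel spreads the batch, it does not pool the two 11 GB memories into one.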

Either Torch or the application using Torch is responsible for splitting work across multiple GPUs in a system.
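
For example, an application can do the splitting itself by pinning different parts of the model to different devices (model parallelism). A rough PyTorch sketch, with a made-up two-layer network:

```python
import torch
import torch.nn as nn

class SplitNet(nn.Module):
    """Hypothetical network with its two halves pinned to different GPUs."""
    def __init__(self):
        super().__init__()
        self.first_half = nn.Linear(1024, 512).to('cuda:0')
        self.second_half = nn.Linear(512, 10).to('cuda:1')

    def forward(self, x):
        x = self.first_half(x.to('cuda:0'))
        # Hand the intermediate activations over to the second card.
        return self.second_half(x.to('cuda:1'))

net = SplitNet()
out = net(torch.randn(64, 1024))
```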

Does Torch have configuration settings that allow programmers to specify the use of multiple GPUs? Does it offer a “set device” API call that lets applications specify which particular GPU subsequent API calls should affect?

If you don’t know the answers to these questions, the fastest way to make forward progress is probably to either do an internet search for these topics, or ask on a forum / mailing list dedicated to Torch.
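
(For the record, PyTorch does expose both: torch.cuda.set_device() selects the default GPU for subsequent allocations, torch.cuda.device() scopes that choice, and individual tensors can be placed explicitly. A quick sketch:)

```python
import torch

# Make GPU 1 the default device for subsequent CUDA allocations.
torch.cuda.set_device(1)
a = torch.randn(4096, 4096).cuda()      # allocated on cuda:1

# Or scope the choice with a context manager:
with torch.cuda.device(0):
    b = torch.randn(4096, 4096).cuda()  # allocated on cuda:0

# Per-tensor placement also works:
c = torch.randn(4096, 4096, device='cuda:1')
```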

[Oops, should have pressed F5 first]