I currently have one H100 GPU. If I add a second H100 and connect the two with NVLink, will it be possible to run inference on 70B or 80B models? Will it increase the available VRAM pool? I want to take a Hugging Face model, load it across both GPUs, and use it in a LangChain pipeline.
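To make it concrete, this is roughly what I'd like to run, a sketch assuming `transformers` with accelerate's `device_map="auto"` to split the layers across both cards and the `langchain-huggingface` wrapper on top (the model ID is just a placeholder for any ~70B causal LM):

```python
# Sketch: shard a ~70B Hugging Face model across both H100s and wrap it for LangChain.
# Assumes transformers, accelerate, and langchain-huggingface are installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from langchain_huggingface import HuggingFacePipeline

model_id = "meta-llama/Llama-3.1-70B-Instruct"  # placeholder ~70B model

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # ~2 bytes per parameter -> roughly 140 GB of weights at 70B
    device_map="auto",          # accelerate places layers across all visible GPUs
)

generator = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=256,
)

llm = HuggingFacePipeline(pipeline=generator)
print(llm.invoke("Summarize what NVLink does in one sentence."))
```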