Can I connect two H100 GPUs with NVLink to run inference on a 70B model?

I currently have one H100 GPU. If I add a second H100 and connect the two with NVLink, will it be possible to run inference on 70B or 80B models? Will it increase the available VRAM pool? I want to take a model from Hugging Face, put it in a LangChain pipeline, and use it.
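For context on the VRAM question: NVLink does not merge the two cards into one pool that appears as a single device, but frameworks such as Hugging Face `transformers` (with `device_map="auto"` via `accelerate`) can shard a model's weights across both GPUs, so the combined 160 GB is effectively usable. A rough back-of-the-envelope check, as a sketch (weights only; KV cache, activations, and framework overhead add more on top):

```python
# Rough weight-memory estimate for a large language model.
# This is weights only -- a sketch, not a full serving-memory model.

def model_vram_gb(num_params_billion: float, bytes_per_param: float) -> float:
    """Approximate memory (GiB) needed just to hold the model weights."""
    return num_params_billion * 1e9 * bytes_per_param / 1024**3

fp16_gb = model_vram_gb(70, 2)    # fp16/bf16: 2 bytes per parameter
int4_gb = model_vram_gb(70, 0.5)  # 4-bit quantized: 0.5 bytes per parameter

print(f"70B fp16 weights:  ~{fp16_gb:.0f} GiB")  # ~130 GiB
print(f"70B 4-bit weights: ~{int4_gb:.0f} GiB")  # ~33 GiB
print("Two H100 80GB cards: 160 GB total")
```

So two 80 GB H100s should fit a 70B model's fp16 weights across the pair (with headroom left for the KV cache), whereas a single card would need quantization.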