Launch NVIDIA NIM (llama3-8b-instruct) for LLMs locally

We followed the Getting Started - NVIDIA Docs document step by step (both the API catalog and the NGC approach) to launch NVIDIA NIM (llama3-8b-instruct) for LLMs locally, but we are facing the error below:

Hi @rawnak.kumar – a couple of debugging steps:

  • Can you ensure that the server you are deploying on has network access, and specifically that it can reach ngc.nvidia.com?
  • Can you try launching the NIM container again? Occasionally ngc.nvidia.com has transient network errors that disappear on a retry.

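To make the first check concrete, here is a small, hedged sketch (not from the NIM docs) that tests whether the host can open a TCP connection on port 443 to the NGC endpoints a NIM deployment talks to. `nvcr.io` is included as an assumption, since container image pulls go through the NVIDIA registry rather than ngc.nvidia.com itself.

```python
import socket


def can_reach(host: str, port: int = 443, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        # create_connection performs DNS resolution and the TCP handshake;
        # any failure (DNS error, refused, timed out) raises an OSError.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    # Hosts a NIM deployment typically needs; adjust for proxies or mirrors.
    for host in ("ngc.nvidia.com", "nvcr.io"):
        print(host, "reachable" if can_reach(host) else "NOT reachable")
```

Note that a successful TCP connect only rules out DNS and firewall problems; TLS interception or proxy misconfiguration can still break the actual download, so a `curl -v https://ngc.nvidia.com` from the same host is a useful follow-up.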
Hey @rawnak.kumar, I saw your other post in the VIA forum – have you looked at the “Serving models from Local Assets” section of this page? It might help with your use case.
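For reference, a hedged sketch of the launch pattern from the NIM Getting Started docs, using a host-mounted cache directory so that model assets downloaded once can be served from local assets on later runs. The cache path, image tag, and resource flags are assumptions to adjust for your environment, and `NGC_API_KEY` is assumed to be exported already.

```shell
# Local directory that persists downloaded model assets between runs.
export LOCAL_NIM_CACHE=~/.cache/nim
mkdir -p "$LOCAL_NIM_CACHE"

# Launch the llama3-8b-instruct NIM; on the first run it pulls model
# assets from NGC into the mounted cache, and subsequent runs reuse them.
docker run -it --rm \
  --gpus all \
  --shm-size=16GB \
  -e NGC_API_KEY \
  -v "$LOCAL_NIM_CACHE:/opt/nim/.cache" \
  -u "$(id -u)" \
  -p 8000:8000 \
  nvcr.io/nim/meta/llama3-8b-instruct:latest
```

If the container fails partway through a download, retrying with the same cache mount lets it resume from the assets already on disk rather than re-fetching everything.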

1. Our server is able to access ngc.nvidia.com.
2. We have tried launching it many times, but it throws the same error.