Launch NVIDIA NIM (llama3-8b-instruct) for LLMs locally

Hey @rawnak.kumar, I saw your other post in the VIA forum – have you looked at the “Serving models from Local Assets” section of this page? It might help with your use case.
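
For reference, once the NIM container is up and serving the model (whether it pulled it from NGC or loaded it from local assets as that section describes), it exposes an OpenAI-compatible API. Here's a minimal sketch of querying it from Python, assuming the container is listening on localhost:8000 and serving meta/llama3-8b-instruct (adjust the port and model name to your deployment):

```python
# Minimal sketch: query a locally running NIM instance through its
# OpenAI-compatible endpoint. Assumes the container listens on
# localhost:8000 and serves the meta/llama3-8b-instruct model.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # local NIM endpoint (assumed port)
    api_key="not-used",                   # a real key isn't needed for a local NIM
)

response = client.chat.completions.create(
    model="meta/llama3-8b-instruct",
    messages=[{"role": "user", "content": "Say hello from my local NIM."}],
    max_tokens=64,
)
print(response.choices[0].message.content)
```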