I have it up and running with a front-end chatbox that sends messages and gets responses. However, everything documented seems focused on running Docker locally. I would be interested in finding docs on accessing more of this NIM's utilities from OpenShift.
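For reference, once a NIM service is exposed through an OpenShift route, its OpenAI-compatible `/v1/chat/completions` endpoint can be called like any HTTP API. A minimal sketch follows; the route hostname and model name are placeholders, not values from this deployment:

```python
import json
import urllib.request

# Assumed OpenShift route hostname -- replace with the output of
# `oc get route` for your NIM service.
NIM_URL = "https://nim-llm-myproject.apps.example.com/v1/chat/completions"


def build_payload(model: str, user_message: str) -> dict:
    """Construct an OpenAI-compatible chat-completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "max_tokens": 256,
    }


def chat(model: str, user_message: str, url: str = NIM_URL) -> str:
    """POST the payload to the NIM endpoint and return the reply text."""
    req = urllib.request.Request(
        url,
        data=json.dumps(build_payload(model, user_message)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Usage would be `chat("meta/llama3-8b-instruct", "Hello")`, assuming that model is what the NIM container serves; the request/response shape mirrors the OpenAI chat API that NIM exposes.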
