I'm trying to configure a Kubernetes manifest (YAML) to run the llama-3_2-nv-embedqa-1b-v2 NIM image from behind a corporate firewall. I can pull the image, but the container won't start. I've set the HTTP_PROXY and HTTPS_PROXY environment variables, but I'm not sure how to configure the corporate CA certificate correctly. When the container spins up, it tries to download the models and fails with this error: InvalidCertificate(UnknownIssuer). Any help would be appreciated.
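
For reference, here is roughly what I have in the Deployment spec. The image path/tag, proxy URL, and ConfigMap name are placeholders for my environment; my assumption (not confirmed anywhere in the NIM docs I've read) is that mounting our corporate root CA and pointing `SSL_CERT_FILE` / `REQUESTS_CA_BUNDLE` at it is the right way to get the model download to trust the proxy's certificate:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nv-embedqa
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nv-embedqa
  template:
    metadata:
      labels:
        app: nv-embedqa
    spec:
      containers:
        - name: nim
          # Placeholder image reference for the embedding NIM
          image: nvcr.io/nim/nvidia/llama-3_2-nv-embedqa-1b-v2:latest
          env:
            # Corporate proxy (placeholder URL)
            - name: HTTP_PROXY
              value: "http://proxy.corp.example.com:8080"
            - name: HTTPS_PROXY
              value: "http://proxy.corp.example.com:8080"
            - name: NO_PROXY
              value: "localhost,127.0.0.1,.svc,.cluster.local"
            # My guess: point OpenSSL- and Python-based clients at the corporate CA
            - name: SSL_CERT_FILE
              value: /etc/ssl/certs/corp-ca.crt
            - name: REQUESTS_CA_BUNDLE
              value: /etc/ssl/certs/corp-ca.crt
          volumeMounts:
            - name: corp-ca
              mountPath: /etc/ssl/certs/corp-ca.crt
              subPath: corp-ca.crt
              readOnly: true
      volumes:
        - name: corp-ca
          configMap:
            name: corp-ca-bundle   # created from our corporate root CA .crt
```

The ConfigMap is created with `kubectl create configmap corp-ca-bundle --from-file=corp-ca.crt` (our root CA in PEM format). Is this the supported way to add a custom CA for the model download, or does the NIM container expect the certificate somewhere else?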