I'm trying to configure a Kubernetes manifest YAML file to use the llama-3_2-nv-embedqa-1b-v2 image while behind a corporate firewall. I can download the image, but the container fails to start. I set the HTTP_PROXY and HTTPS_PROXY environment variables, but I'm unsure how to configure the certificate correctly. When the container spins up, it attempts to download the models and fails with this error: InvalidCertificate(UnknownIssuer). Any help would be appreciated. A sketch of what I'm attempting is below.
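For reference, here is roughly what my manifest looks like. The ConfigMap name, proxy address, and mount path are placeholders, and I'm not certain which of the TLS-related variables (if any) the model downloader inside the container actually honors:

```yaml
# Sketch only: mount the corporate root CA from a ConfigMap and point the
# proxy and common TLS variables at it. Names like "corp-ca-bundle" and
# proxy.example.com are placeholders for my environment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nv-embedqa
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nv-embedqa
  template:
    metadata:
      labels:
        app: nv-embedqa
    spec:
      containers:
        - name: nim
          image: nvcr.io/nim/nvidia/llama-3.2-nv-embedqa-1b-v2:latest
          env:
            - name: HTTP_PROXY
              value: "http://proxy.example.com:8080"
            - name: HTTPS_PROXY
              value: "http://proxy.example.com:8080"
            - name: NO_PROXY
              value: "localhost,127.0.0.1,.svc,.cluster.local"
            # Assumption: the in-container downloader respects one of these
            # standard certificate-bundle variables.
            - name: SSL_CERT_FILE
              value: /etc/ssl/corp/corp-root-ca.pem
            - name: REQUESTS_CA_BUNDLE
              value: /etc/ssl/corp/corp-root-ca.pem
          volumeMounts:
            - name: corp-ca
              mountPath: /etc/ssl/corp
              readOnly: true
      volumes:
        - name: corp-ca
          configMap:
            name: corp-ca-bundle   # created beforehand from the corporate root CA
```

The ConfigMap itself would be created from the corporate CA file, e.g. `kubectl create configmap corp-ca-bundle --from-file=corp-root-ca.pem=./corp-root-ca.pem`. Is this the right approach for getting the NIM to trust the proxy's certificate, or does the container expect the CA somewhere else?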