Error Fetching config.json When Running NIM Llama 3.2 1B Container

Hi NVIDIA Support Team,

I’m currently trying to run the NVIDIA Inference Microservice (NIM) container for meta/llama-3.2-1b-instruct using the following command:

docker run -it --rm \
  --gpus all \
  --shm-size=16GB \
  -e NGC_API_KEY \
  -v "$LOCAL_NIM_CACHE:/opt/nim/.cache" \
  -u $(id -u) \
  -p 8000:8000 \
  nvcr.io/nim/meta/llama-3.2-1b-instruct:latest
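For context, the host-side environment behind this command was set up roughly as follows (the key value here is just a placeholder):

# host-side setup assumed by the command above
export NGC_API_KEY=nvapi-xxxxxxxx
export LOCAL_NIM_CACHE=~/.cache/nim
mkdir -p "$LOCAL_NIM_CACHE"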

The container starts, but I’m getting the following repeated warning:

WARNING Error in fetching config.json from NGC

The container gets stuck trying to fetch config.json metadata from NGC, and the inference service never starts.

What I’ve verified so far:

  1. The API key is valid: the container image is pulled successfully from nvcr.io (see the login check below)
  2. Network access is not blocked: I can reach https://api.ngc.nvidia.com via curl and a browser from the same machine
  3. The API key's scopes only include: Secrets Manager, Public API Endpoints, NGC Catalog
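For reference, this is how I verified the key against the registry (the username is the literal string $oauthtoken, per the standard nvcr.io login):

# log in to nvcr.io using the NGC API key read from the environment
echo "$NGC_API_KEY" | docker login nvcr.io --username '$oauthtoken' --password-stdin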

I'd appreciate any help on what could be wrong or missing to get this running properly.
Thanks a lot!

Hi @dhea.fajriati,

Can you share the full output from the log, starting from when the inference server tries to launch?

Thanks.

Sophie

Hi @sophwats, thank you for your reply.

Below is the log output:

datawizard@datawizard-data:~/nvidia$ docker run -it --rm --gpus all --shm-size=16GB -e NGC_API_KEY -v "$LOCAL_NIM_CACHE:/opt/nim/.cache" -u $(id -u) -p 8000:8000 nvcr.io/nim/meta/llama-3.2-1b-instruct:latest

===========================================
== NVIDIA Inference Microservice LLM NIM ==
===========================================

NVIDIA Inference Microservice LLM NIM Version 1.10.1
Model: meta/llama-3.2-1b-instruct

Container image Copyright (c) 2016-2025, NVIDIA CORPORATION & AFFILIATES. All rights reserved.

The NIM container is governed by the NVIDIA Software License Agreement (https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-software-license-agreement/) and Product-Specific Terms for AI Products (https://www.nvidia.com/en-us/agreements/enterprise-software/product-specific-terms-for-ai-products/)


A copy of this license can be found under /opt/nim/LICENSE.


The use of this model is governed by the NVIDIA Community Model License (found at https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-community-models-license/).

ADDITIONAL INFORMATION: : Llama 3.2 Community License Agreement (https://www.llama.com/llama3_2/license/). Built with Llama.

INFO 07-25 15:06:50 [__init__.py:256] Automatically detected platform cuda.
WARNING 2025-07-25 15:18:10.976 utils.py:31] Error in fetching config.json from NGC
WARNING 2025-07-25 15:27:24.561 utils.py:31] Error in fetching config.json from NGC
WARNING 2025-07-25 15:34:00.928 utils.py:31] Error in fetching config.json from NGC
WARNING 2025-07-25 15:40:08.773 utils.py:31] Error in fetching config.json from NGC
WARNING 2025-07-25 15:48:55.670 utils.py:31] Error in fetching config.json from NGC
WARNING 2025-07-25 15:59:20.50 utils.py:31] Error in fetching config.json from NGC
WARNING 2025-07-25 16:13:11.492 utils.py:31] Error in fetching config.json from NGC
WARNING 2025-07-25 16:23:28.371 utils.py:31] Error in fetching config.json from NGC
WARNING 2025-07-25 16:36:12.123 utils.py:31] Error in fetching config.json from NGC
WARNING 2025-07-25 16:46:20.483 utils.py:31] Error in fetching config.json from NGC
WARNING 2025-07-25 16:46:48.96 utils.py:31] Error in fetching config.json from NGC
WARNING 2025-07-25 16:56:10.80 utils.py:31] Error in fetching config.json from NGC
WARNING 2025-07-25 16:56:51.365 utils.py:31] Error in fetching config.json from NGC
WARNING 2025-07-25 17:19:20.676 utils.py:31] Error in fetching config.json from NGC
WARNING 2025-07-25 18:08:54.372 utils.py:31] Error in fetching config.json from NGC
WARNING 2025-07-25 18:56:29.284 utils.py:31] Error in fetching config.json from NGC
WARNING 2025-07-25 19:47:08.516 utils.py:31] Error in fetching config.json from NGC
WARNING 2025-07-25 20:39:50.628 utils.py:31] Error in fetching config.json from NGC
WARNING 2025-07-25 21:18:00.292 utils.py:31] Error in fetching config.json from NGC
WARNING 2025-07-25 22:09:08.196 utils.py:31] Error in fetching config.json from NGC
WARNING 2025-07-25 22:54:48.420 utils.py:31] Error in fetching config.json from NGC
WARNING 2025-07-25 23:31:48.452 utils.py:31] Error in fetching config.json from NGC
WARNING 2025-07-26 00:12:13.284 utils.py:31] Error in fetching config.json from NGC
WARNING 2025-07-26 00:56:39.780 utils.py:31] Error in fetching config.json from NGC
WARNING 2025-07-26 01:41:59.524 utils.py:31] Error in fetching config.json from NGC
WARNING 2025-07-26 02:29:50.820 utils.py:31] Error in fetching config.json from NGC
WARNING 2025-07-26 03:08:25.60 utils.py:31] Error in fetching config.json from NGC
WARNING 2025-07-26 03:35:02.601 utils.py:31] Error in fetching config.json from NGC
WARNING 2025-07-26 03:42:10.628 utils.py:31] Error in fetching config.json from NGC
WARNING 2025-07-26 03:57:52.10 utils.py:31] Error in fetching config.json from NGC
WARNING 2025-07-26 04:14:14.732 utils.py:31] Error in fetching config.json from NGC
WARNING 2025-07-26 04:17:21.599 utils.py:31] Error in fetching config.json from NGC
WARNING 2025-07-26 04:33:42.140 utils.py:31] Error in fetching config.json from NGC
WARNING 2025-07-26 04:47:15.30 utils.py:31] Error in fetching config.json from NGC
WARNING 2025-07-26 04:55:55.842 utils.py:31] Error in fetching config.json from NGC
WARNING 2025-07-26 04:56:09.650 utils.py:31] Error in fetching config.json from NGC
WARNING 2025-07-26 05:00:01.211 utils.py:31] Error in fetching config.json from NGC
WARNING 2025-07-26 05:14:17.835 utils.py:31] Error in fetching config.json from NGC
WARNING 2025-07-26 05:29:12.310 utils.py:31] Error in fetching config.json from NGC
WARNING 2025-07-26 05:32:49.119 utils.py:31] Error in fetching config.json from NGC
WARNING 2025-07-26 05:49:32.48 utils.py:31] Error in fetching config.json from NGC
WARNING 2025-07-26 06:05:54.468 utils.py:31] Error in fetching config.json from NGC
WARNING 2025-07-26 06:07:12.914 utils.py:31] Error in fetching config.json from NGC
WARNING 2025-07-26 06:16:02.462 utils.py:31] Error in fetching config.json from NGC
WARNING 2025-07-26 06:16:27.356 utils.py:31] Error in fetching config.json from NGC
WARNING 2025-07-26 06:31:42.858 utils.py:31] Error in fetching config.json from NGC
WARNING 2025-07-26 06:35:45.842 utils.py:31] Error in fetching config.json from NGC
WARNING 2025-07-26 06:47:51.466 utils.py:31] Error in fetching config.json from NGC
WARNING 2025-07-26 06:58:17.172 utils.py:31] Error in fetching config.json from NGC
WARNING 2025-07-26 07:06:44.459 utils.py:31] Error in fetching config.json from NGC
WARNING 2025-07-26 07:07:07.865 utils.py:31] Error in fetching config.json from NGC
INFO 2025-07-26 07:07:07.865 ngc_profile.py:360] Running NIM without LoRA. Only looking for compatible profiles that do not support LoRA.
INFO 2025-07-26 07:07:07.865 ngc_profile.py:362] Detected 1 compatible profile(s).
INFO 2025-07-26 07:07:07.866 ngc_injector.py:158] Valid profile: 4f904d571fe60ff24695b5ee2aa42da58cb460787a968f1e8a09f5a7e862728d (vllm-bf16-tp1-pp1) on GPUs [0]
INFO 2025-07-26 07:07:07.866 ngc_injector.py:322] Selected profile: 4f904d571fe60ff24695b5ee2aa42da58cb460787a968f1e8a09f5a7e862728d (vllm-bf16-tp1-pp1)
INFO 2025-07-26 07:07:07.867 ngc_injector.py:330] Profile metadata: feat_lora: false
INFO 2025-07-26 07:07:07.867 ngc_injector.py:330] Profile metadata: llm_engine: vllm
INFO 2025-07-26 07:07:07.867 ngc_injector.py:330] Profile metadata: pp: 1
INFO 2025-07-26 07:07:07.867 ngc_injector.py:330] Profile metadata: precision: bf16
INFO 2025-07-26 07:07:07.867 ngc_injector.py:330] Profile metadata: tp: 1
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/opt/nim/llm/nim_llm_sdk/entrypoints/launch.py", line 649, in <module>
    asyncio.run(main())
  File "/usr/lib/python3.12/asyncio/runners.py", line 194, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/asyncio/base_events.py", line 687, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "/opt/nim/llm/nim_llm_sdk/entrypoints/launch.py", line 512, in main
    inference_env = prepare_environment()
                    ^^^^^^^^^^^^^^^^^^^^^
  File "/opt/nim/llm/nim_llm_sdk/entrypoints/args.py", line 214, in prepare_environment
    engine_args, extracted_name = inject_ngc_hub(engine_args)
                                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/nim/llm/nim_llm_sdk/hub/ngc_injector.py", line 353, in inject_ngc_hub
    engine_args = prepare_workspace_from_workspace(
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/nim/llm/nim_llm_sdk/hub/ngc_injector.py", line 183, in prepare_workspace_from_workspace
    return prepare_workspace_from_repo(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/nim/llm/nim_llm_sdk/hub/ngc_injector.py", line 203, in prepare_workspace_from_repo
    cached = repo.get_all()
             ^^^^^^^^^^^^^^
Exception: ConnectionError: Check your ability to access the remote source and any network/dns/firewall/proxy settings. Details: reqwest::Error { kind: Request, url: "https://xfiles.ngc.nvidia.com/org/nim/team/meta/models/llama-3.2-1b-instruct/versions/hf-9213176-tool_calling/files/generation_config.json?ssec-algo=AES256&versionId=B8sLN3vF7wHO5xH2V2fpsGeE_ndN3goU&ssec-key=DAEoow%2F%2B%2FgMrF%2FrrGq4Xt1wVX8yyMAxGCQTlVopB3vlR%2BPqduySV3nCmuax22xRaxMPI%2BU3pRNXnqMOuEhe77fMvVJcvv08Srs9Z%2FpF9mkwRI23DJe6TPm93uGsdDEvnV4LW7oDY1bOdGIqwf%2FZ1XIAs%2FtFqbIpU27RmvinJVE3UoXiLaQ5GK%2BHMzGUXrsRKdVTOlT2cCFfyOj35EZzrlEwT82H4Ph23X5tNPd7A27zDvLd9Y9eAXTjitPK09KirX6vpYb8G4EhM%2BLSC%2FsbK0z4ZUkX1Hmt93abKscyMWLQJsRhBYNK%2F3gDTbdzLv6alX4KN4yZiDCfHv5QuocRHh%2FpZqRwlfNEW%2Bww%2Bh5gg0%2B7fh%2FQQGurts9Avgws8cIBSRSoWxqDJIM1LclN%2BZxfaa%2BWLbrKUJ5lWZX9oKrxS8KJjwrB2lp86smm67T4YSnbwk51LABUWBkd%2FX%2FOgcwH8pwQyBx27CgxFnTk5zEBd3HydrrLYoFDojLR%2B6jq5rFvT&kid=bXJrLWU3OGM1M2FhZjE4YzRiNmJiNjlkYmRhZjcxNjA3YWEw&ssec-enabled=true&Expires=1753542412&Signature=xZ2VlhYwUTcaHZnpcGkLHOZ0l9JhwZ0-IgxJYZaeBVdoQdcDSGJyahWz-CeDEz5wYdsyDFYC00ybrJPmMcAWQsVayuWCp7t-ZiA7WM5shueRepm7EuKb9-vPlmTkUSD7~ul2SkIDrP7v2b0kfrywY3b2cZ1HfF9HxuuK64Uu7FeTJNtEkmHgvUrlCymKHKQdngrwZ4u2LYGHQO86nvTX~qIYUGe7G5YpBaLCNWyr5kT1~O64lc6QvKoDjSTfTXs2bwaAht~rsWjoJpQ7NsDRzUB83-2B-7j5q9oAgoUi8XBqZwP8zJTW8rlwwKwZPSJS4N9fTwSrVLz4BloyqjMSyA__&Key-Pair-Id=KCX06E8E9L60W", source: hyper_util::client::legacy::Error(Connect, Custom { kind: Other, error: Custom { kind: InvalidData, error: InvalidCertificate(UnknownIssuer) } }) }
sys:1: RuntimeWarning: coroutine 'main.<locals>.shutdown' was never awaited

Thanks for sharing. I've reached out to the team to see how we can resolve this.

Best,

Sophie

Please can you replace -e NGC_API_KEY in your docker command with -e NGC_API_KEY=$NGC_API_KEY, and share the output again?
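For context: with the bare -e NGC_API_KEY form, Docker only forwards the variable if it is actually set in your host shell, whereas the explicit form expands it on the host. A quick sanity check before running the container:

# prints "set" only if the variable is non-empty in the current shell
echo "${NGC_API_KEY:+set}"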

Thanks,

Sophie

Hi @sophwats,

I replaced -e NGC_API_KEY with -e NGC_API_KEY=$NGC_API_KEY, and I still got the same WARNING Error in fetching config.json from NGC.

docker run -it --rm --gpus all --shm-size=16GB -e NGC_API_KEY=$NGC_API_KEY -v "$LOCAL_NIM_CACHE:/opt/nim/.cache" -u 0 -p 8000:8000 nvcr.io/nim/meta/llama-3.2-1b-instruct:latest

===========================================
== NVIDIA Inference Microservice LLM NIM ==
===========================================

NVIDIA Inference Microservice LLM NIM Version 1.10.1
Model: meta/llama-3.2-1b-instruct

Container image Copyright (c) 2016-2025, NVIDIA CORPORATION & AFFILIATES. All rights reserved.

The NIM container is governed by the NVIDIA Software License Agreement (https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-software-license-agreement/) and Product-Specific Terms for AI Products (https://www.nvidia.com/en-us/agreements/enterprise-software/product-specific-terms-for-ai-products/)


A copy of this license can be found under /opt/nim/LICENSE.


The use of this model is governed by the NVIDIA Community Model License (found at https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-community-models-license/).

ADDITIONAL INFORMATION: : Llama 3.2 Community License Agreement (https://www.llama.com/llama3_2/license/). Built with Llama.

INFO 07-29 09:15:17 [__init__.py:256] Automatically detected platform cuda.
WARNING 2025-07-29 09:15:22.371 utils.py:31] Error in fetching config.json from NGC

Thanks - is the rest of the message the same? Please can we get the full output again?

Sophie

Please can you also try adding list-model-profiles to the end of your docker command, and sharing the full output? Thanks!

docker run -it --rm --gpus all --shm-size=16GB -e NGC_API_KEY=$NGC_API_KEY -v "$LOCAL_NIM_CACHE:/opt/nim/.cache" -u 0 -p 8000:8000 nvcr.io/nim/meta/llama-3.2-1b-instruct:latest list-model-profiles

Hi @sophwats,

I’ve identified the issue. The office Wi-Fi is blocking some of the links that the NIM container tries to access during startup. I confirmed this because the container runs fine when I connect through my mobile hotspot.

Could you please share a list of the URLs or domains that the container needs to reach? I'd like to forward them to our network team so they can whitelist the necessary endpoints.
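Judging from my logs so far, the hosts involved seem to be api.ngc.nvidia.com and xfiles.ngc.nvidia.com, plus nvcr.io for the image pull (this is just what I can see, not an official list). A quick reachability check from the host:

# print only the HTTP status line for each host
for host in api.ngc.nvidia.com xfiles.ngc.nvidia.com nvcr.io; do
  echo -n "$host: "; curl -sSI "https://$host" | head -n 1
done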

Hi @sophwats,

So I got an error when accessing this link. I can download it from my terminal and browser on the same machine, but not from inside the container. I think it relates to an SSL certificate issue in the container: if I import the certificate manually while the container is running, I can then download the link.

https://xfiles.ngc.nvidia.com/org/nim/team/meta/models/llama-3.2-1b-instruct/versions/hf-9213176-tool_calling/files/generation_config.json?ssec-algo=AES256&versionId=B8sLN3vF7wHO5xH2V2fpsGeE_ndN3goU&ssec-key=J3HTtMzq1jyfJs2ae22sI99X3rdZsF1RGp%2Bpoy4Mqpb74SOe%2F9kYnJIRMYY24IpFq4c%2BFu1Yq3sQmvjpFejugf77oj%2Bthl%2BcPfQf6AJTshYglA0Fp1NFxB4jRuYLUiUe%2BbNCamzrwK58oh25hxxT9YX0O1O9J%2Bb7OyYRf%2BFG75k%2FKthV2%2BUdAU%2F6onM7emUzEG0laBiffmbVXwxSljXipyfZBaPurmUEmiSTniiOb9%2BY8GTzRQ2yzQmmYfAuZLB9p8xZfUsHW0YfvIATli8WEBemFpW4V38NfIXYVR3c%2FPbE4q6bsOfFttM3nJguIWHZWLyjuMrWcMFfvqfkpf7Uh04o42yt1urkc%2FlFKJMn22gY2abv4GCoPp9VkGYYH6uFfs93K9sxV%2BX%2FB%2BTV27ktuSmLeWp6k0RL2yYntX5Ad9tvns1WcHlp2hwgaHMNVvJiZkQdaYfXBklK9oXnJQUWI%2BU%2BIN4Xw%2BRfzswJVJSucOLPg%2BnWmuN9%2F8XmI0mK4FNI&kid=bXJrLWU3OGM1M2FhZjE4YzRiNmJiNjlkYmRhZjcxNjA3YWEw&ssec-enabled=true&Expires=1754040510&Signature=sIxvqzJCoYmfz6hHPb1uWo0C3XYBQ78gl7zBF~74ClOJ7NOVFJyHb7xGmA-3liZazHF3y7VJ6eNNMPtKynBnPasAj4qddR~mbLyC0wmi6N2-8ZGCfkjTT-jEjDnDNu4Xw9fvJhy0BIZbO-ngcsR6Eq36ab8sOB0QTAs85-50XIwQiJivimlE2OZjh4v6K2cRR3O4bCp6YDTbRSzufsh2U34W2g34dDjjdcVSL7hcW7tpmN5wcLwpGMP4frEpvjNUxIgP404BVZ6ef9-UORoAfA9uWW6gCG2smZb7yP3viG0H9rrlNpY4zCAzsX3mO~G3y7wfnx2qt5heZLG8uMOuIQ__&Key-Pair-Id=KCX06E8E9L60W

But how can I make this work directly in a single docker run command, without doing it manually inside the container?
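What I'm experimenting with now is mounting our corporate proxy's root CA into the container and pointing the common CA-bundle environment variables at it. This is only a sketch: the certificate path and filename below are placeholders from my setup, and it assumes the clients inside the container honor these standard variables:

# corp-proxy-ca.crt is a hypothetical path to our proxy's root certificate.
# SSL_CERT_FILE is honored by OpenSSL-based (and many Rust/Go) clients;
# REQUESTS_CA_BUNDLE covers Python requests. Note that SSL_CERT_FILE
# replaces the default bundle for clients that honor it, so the file may
# need to contain the full system bundle with the proxy CA appended.
docker run -it --rm \
  --gpus all \
  --shm-size=16GB \
  -e NGC_API_KEY=$NGC_API_KEY \
  -v "$LOCAL_NIM_CACHE:/opt/nim/.cache" \
  -v /usr/local/share/ca-certificates/corp-proxy-ca.crt:/etc/ssl/certs/corp-proxy-ca.crt:ro \
  -e SSL_CERT_FILE=/etc/ssl/certs/corp-proxy-ca.crt \
  -e REQUESTS_CA_BUNDLE=/etc/ssl/certs/corp-proxy-ca.crt \
  -u 0 \
  -p 8000:8000 \
  nvcr.io/nim/meta/llama-3.2-1b-instruct:latest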

Below is the log output from my latest attempt, where I set NIM_DISABLE_SSL_VERIFY=true:

datawizard@datawizard-data:~$ docker run -it --gpus all --shm-size=16GB -e NGC_API_KEY=$NGC_API_KEY -e NIM_DISABLE_SSL_VERIFY=true -v "$LOCAL_NIM_CACHE:/opt/nim/.cache" -u 0 -p 8000:8000 nvcr.io/nim/meta/llama-3.2-1b-instruct:latest

===========================================
== NVIDIA Inference Microservice LLM NIM ==
===========================================

NVIDIA Inference Microservice LLM NIM Version 1.10.1
Model: meta/llama-3.2-1b-instruct

Container image Copyright (c) 2016-2025, NVIDIA CORPORATION & AFFILIATES. All rights reserved.

The NIM container is governed by the NVIDIA Software License Agreement (https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-software-license-agreement/) and Product-Specific Terms for AI Products (https://www.nvidia.com/en-us/agreements/enterprise-software/product-specific-terms-for-ai-products/)


A copy of this license can be found under /opt/nim/LICENSE.


The use of this model is governed by the NVIDIA Community Model License (found at https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-community-models-license/).

ADDITIONAL INFORMATION: : Llama 3.2 Community License Agreement (https://www.llama.com/llama3_2/license/). Built with Llama.

INFO 07-31 09:28:28 [__init__.py:256] Automatically detected platform cuda.
INFO 2025-07-31 09:28:29.781 ngc_profile.py:360] Running NIM without LoRA. Only looking for compatible profiles that do not support LoRA.
INFO 2025-07-31 09:28:29.781 ngc_profile.py:362] Detected 1 compatible profile(s).
INFO 2025-07-31 09:28:29.781 ngc_injector.py:158] Valid profile: 4f904d571fe60ff24695b5ee2aa42da58cb460787a968f1e8a09f5a7e862728d (vllm-bf16-tp1-pp1) on GPUs [0]
INFO 2025-07-31 09:28:29.781 ngc_injector.py:322] Selected profile: 4f904d571fe60ff24695b5ee2aa42da58cb460787a968f1e8a09f5a7e862728d (vllm-bf16-tp1-pp1)
INFO 2025-07-31 09:28:29.782 ngc_injector.py:330] Profile metadata: feat_lora: false
INFO 2025-07-31 09:28:29.782 ngc_injector.py:330] Profile metadata: llm_engine: vllm
INFO 2025-07-31 09:28:29.782 ngc_injector.py:330] Profile metadata: pp: 1
INFO 2025-07-31 09:28:29.782 ngc_injector.py:330] Profile metadata: precision: bf16
INFO 2025-07-31 09:28:29.782 ngc_injector.py:330] Profile metadata: tp: 1
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/opt/nim/llm/nim_llm_sdk/entrypoints/launch.py", line 649, in <module>
    asyncio.run(main())
  File "/usr/lib/python3.12/asyncio/runners.py", line 194, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/asyncio/base_events.py", line 687, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "/opt/nim/llm/nim_llm_sdk/entrypoints/launch.py", line 512, in main
    inference_env = prepare_environment()
                    ^^^^^^^^^^^^^^^^^^^^^
  File "/opt/nim/llm/nim_llm_sdk/entrypoints/args.py", line 214, in prepare_environment
    engine_args, extracted_name = inject_ngc_hub(engine_args)
                                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/nim/llm/nim_llm_sdk/hub/ngc_injector.py", line 353, in inject_ngc_hub
    engine_args = prepare_workspace_from_workspace(
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/nim/llm/nim_llm_sdk/hub/ngc_injector.py", line 183, in prepare_workspace_from_workspace
    return prepare_workspace_from_repo(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/nim/llm/nim_llm_sdk/hub/ngc_injector.py", line 203, in prepare_workspace_from_repo
    cached = repo.get_all()
             ^^^^^^^^^^^^^^
Exception: ConnectionError: Check your ability to access the remote source and any network/dns/firewall/proxy settings. Details: reqwest::Error { kind: Request, url: "https://xfiles.ngc.nvidia.com/org/nim/team/meta/models/llama-3.2-1b-instruct/versions/hf-9213176-tool_calling/files/generation_config.json?ssec-algo=AES256&versionId=B8sLN3vF7wHO5xH2V2fpsGeE_ndN3goU&ssec-key=J3HTtMzq1jyfJs2ae22sI99X3rdZsF1RGp%2Bpoy4Mqpb74SOe%2F9kYnJIRMYY24IpFq4c%2BFu1Yq3sQmvjpFejugf77oj%2Bthl%2BcPfQf6AJTshYglA0Fp1NFxB4jRuYLUiUe%2BbNCamzrwK58oh25hxxT9YX0O1O9J%2Bb7OyYRf%2BFG75k%2FKthV2%2BUdAU%2F6onM7emUzEG0laBiffmbVXwxSljXipyfZBaPurmUEmiSTniiOb9%2BY8GTzRQ2yzQmmYfAuZLB9p8xZfUsHW0YfvIATli8WEBemFpW4V38NfIXYVR3c%2FPbE4q6bsOfFttM3nJguIWHZWLyjuMrWcMFfvqfkpf7Uh04o42yt1urkc%2FlFKJMn22gY2abv4GCoPp9VkGYYH6uFfs93K9sxV%2BX%2FB%2BTV27ktuSmLeWp6k0RL2yYntX5Ad9tvns1WcHlp2hwgaHMNVvJiZkQdaYfXBklK9oXnJQUWI%2BU%2BIN4Xw%2BRfzswJVJSucOLPg%2BnWmuN9%2F8XmI0mK4FNI&kid=bXJrLWU3OGM1M2FhZjE4YzRiNmJiNjlkYmRhZjcxNjA3YWEw&ssec-enabled=true&Expires=1754040510&Signature=sIxvqzJCoYmfz6hHPb1uWo0C3XYBQ78gl7zBF~74ClOJ7NOVFJyHb7xGmA-3liZazHF3y7VJ6eNNMPtKynBnPasAj4qddR~mbLyC0wmi6N2-8ZGCfkjTT-jEjDnDNu4Xw9fvJhy0BIZbO-ngcsR6Eq36ab8sOB0QTAs85-50XIwQiJivimlE2OZjh4v6K2cRR3O4bCp6YDTbRSzufsh2U34W2g34dDjjdcVSL7hcW7tpmN5wcLwpGMP4frEpvjNUxIgP404BVZ6ef9-UORoAfA9uWW6gCG2smZb7yP3viG0H9rrlNpY4zCAzsX3mO~G3y7wfnx2qt5heZLG8uMOuIQ__&Key-Pair-Id=KCX06E8E9L60W", source: hyper_util::client::legacy::Error(Connect, Custom { kind: Other, error: Custom { kind: InvalidData, error: InvalidCertificate(UnknownIssuer) } }) }
sys:1: RuntimeWarning: coroutine 'main.<locals>.shutdown' was never awaited