E: Failed to fetch https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/Packages.gz File has unexpected size (213860 != 185496)

1. Issue description

When building from nvidia/cuda:10.1-cudnn7-runtime-ubuntu18.04, I get the following error:

Step 1/15 : FROM nvidia/cuda:10.1-cudnn7-runtime-ubuntu18.04 as base
10.1-cudnn7-runtime-ubuntu18.04: Pulling from nvidia/cuda
7ddbc47eeb70: Already exists
c1bbdc448b72: Already exists
8c3b70e39044: Already exists
45d437916d57: Already exists
d8f1569ddae6: Already exists
85386706b020: Already exists
ee9b457b77d0: Already exists
be4f3343ecd3: Already exists
51f6bbaddf34: Pull complete
Digest: sha256:963696628c9a0d27e9e5c11c5a588698ea22eeaf138cc9bff5368c189ff79968
Status: Downloaded newer image for nvidia/cuda:10.1-cudnn7-runtime-ubuntu18.04
---> e135227729c4
Step 2/15 : ARG BRANCH
---> Running in 587ffc7ac0f1
Removing intermediate container 587ffc7ac0f1
---> d535b6b4531d
Step 3/15 : ENV LANG=C.UTF-8 LC_ALL=C.UTF-8
---> Running in b53d25042d11
Removing intermediate container b53d25042d11
---> 31d7f259a203
Step 4/15 : ENV FORCE_CUDA=1
---> Running in 5cb13c9bc552
Removing intermediate container 5cb13c9bc552
---> ae6e76c8255a
Step 5/15 : ENV PATH /opt/conda/bin:$PATH
---> Running in b6a73b911776
Removing intermediate container b6a73b911776
---> eed35d589f50
Step 6/15 : ENV TORCH_CUDA_ARCH_LIST "6.0 6.1 7.2+PTX 7.5+PTX"
---> Running in a0064ff4325d
Removing intermediate container a0064ff4325d
---> aebea41002f9
Step 7/15 : RUN apt-get update --fix-missing && apt-get install -y --no-install-recommends wget bzip2 ca-certificates libglib2.0-0 libxext6 libsm6 libxrender1 git mercurial subversion build-essential
---> Running in f3ad3de52931
Get:1 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
Ign:2 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64 InRelease
Get:3 http://archive.ubuntu.com/ubuntu bionic InRelease [242 kB]
Ign:4 https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64 InRelease
Get:5 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64 Release [697 B]
Get:6 https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64 Release [564 B]
Get:7 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64 Release.gpg [836 B]
Get:8 https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64 Release.gpg [833 B]
Ign:9 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64 Packages
Get:10 https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64 Packages [41.2 kB]
Get:9 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64 Packages [185 kB]
Err:9 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64 Packages
File has unexpected size (213860 != 185496). Mirror sync in progress? [IP: 152.195.19.142 443]
Hashes of expected file:

  • Filesize:185496 [weak]
  • SHA256:f7e866fdb738e4bda095feb338ba61fc8150f584fb4ab1b39a025f58c0e01932
  • SHA1:4462a72d5e4952c9cb747af9678d0a1d85857837 [weak]
  • MD5Sum:4be0b5ad7f6bbec51557955b4906e4eb [weak]

Release file created at: Wed, 24 Jun 2020 19:48:37 +0000
E: Some index files failed to download. They have been ignored, or old ones used instead.
The command '/bin/sh -c apt-get update --fix-missing' returned a non-zero code: 100

2. To reproduce the issue
Build this Dockerfile:
FROM nvidia/cuda:10.1-cudnn7-runtime-ubuntu18.04
RUN apt-get update
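
A minimal retry-based workaround sketch (assuming the failure really is just a transient mirror-sync window, as the error message suggests): retry apt-get update a few times inside the RUN step and fail the build only if every attempt fails.

FROM nvidia/cuda:10.1-cudnn7-runtime-ubuntu18.04
# Sketch: retry the index fetch up to five times; the final test fails the build if none succeeded.
RUN ok=; for i in 1 2 3 4 5; do \
        apt-get update && { ok=1; break; }; \
        echo "apt-get update failed (attempt $i), retrying in 15s"; sleep 15; \
    done; [ -n "$ok" ]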

Fixed at 15:05 ET.

How?

The issue seems to have recurred. I get the following error when running apt-get update in my Dockerfile:

E: Failed to fetch https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64/Packages.gz File has unexpected size (47871 != 49498). Mirror sync in progress? [IP: 152.195.19.142 443]
Hashes of expected file:
- Filesize:49498 [weak]
- SHA256:332f3ee4e353b8a5e5a2bdd8fdbd47cf140c73822b82b328815f122e09e195a0
- SHA1:4dc8ef9a3ee3c97b3c26d46e07fdd83997e6880b [weak]
- MD5Sum:bbff3b9c3462257479d72521ee78ec29 [weak]
Release file created at: Wed, 23 Sep 2020 22:09:13 +0000
E: Some index files failed to download. They have been ignored, or old ones used instead.
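
To confirm that the CDN is serving a stale index, a quick check (a sketch, run from any machine; the URL is the machine-learning repo from the error above) is to compare the size the server currently returns with the size declared in the signed Release file:

REPO=https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64
# Size the CDN is serving right now for the compressed index
curl -sI "$REPO/Packages.gz" | grep -i content-length
# Size and hashes apt expects, taken from the SHA256 section of the Release file
curl -s "$REPO/Release" | grep -A 10 '^SHA256:'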

Same issue.

After a few more tests, this only occurs on AWS.

Also seeing this in Azure.

Any update on a workaround?
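
In the meantime, one possible workaround (a sketch, assuming no packages from the NVIDIA repos need to be installed via apt during the build; the list file names below are the ones shipped in the standard nvidia/cuda 18.04 images and may differ in other tags) is to drop those repo lists before running apt-get update, so a mirror hiccup cannot fail the step:

# Remove the NVIDIA apt sources so apt-get update only touches the Ubuntu mirrors
RUN rm -f /etc/apt/sources.list.d/cuda.list \
          /etc/apt/sources.list.d/nvidia-ml.list && \
    apt-get update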
