Kubernetes cluster on Jetson TX2: node stuck in NotReady

Hi everyone, can anyone please help me solve this issue?

I am trying to create a Kubernetes cluster on a Jetson TX2 board running Ubuntu 16.04, with the JetPack software installed.

I installed Docker (Docker version 1.13.1, build 092cba3).

I also installed kubeadm, kubectl, kubelet, and kubernetes-cni.

I used the kubeadm init command with the Calico network plugin to create the cluster. The cluster is created successfully, but the node always stays in the NotReady state.
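For completeness, the sequence I ran was roughly the following (the Calico manifest URL is the one from the v3.0 hosted-install docs and is an assumption here; substitute whichever manifest you actually applied):

```shell
# Initialize the control plane. Calico's self-hosted manifest expects the
# 192.168.0.0/16 pod CIDR by default.
sudo kubeadm init --pod-network-cidr=192.168.0.0/16

# Make kubectl usable for the current (non-root) user.
mkdir -p "$HOME/.kube"
sudo cp /etc/kubernetes/admin.conf "$HOME/.kube/config"
sudo chown "$(id -u):$(id -g)" "$HOME/.kube/config"

# Install the Calico network plugin (v3.0 hosted manifest; adjust the URL
# to the Calico version you are actually using).
kubectl apply -f https://docs.projectcalico.org/v3.0/getting-started/kubernetes/installation/hosted/kubeadm/1.7/calico.yaml
```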

So I checked the pods on the node using the kubectl get pods --all-namespaces command, and it shows:

NAMESPACE     NAME                                       READY   STATUS             RESTARTS   AGE
kube-system   calico-etcd-s4pvh                          0/1     CrashLoopBackOff   9          25m
kube-system   calico-kube-controllers-79dccdc4cc-qxv7g   0/1     Pending            0          25m
kube-system   calico-node-f4jv7                          0/2     CrashLoopBackOff   18         25m
kube-system   etcd-tegra-ubuntu                          1/1     Running            0          26m
kube-system   kube-apiserver-tegra-ubuntu                1/1     Running            0          26m
kube-system   kube-controller-manager-tegra-ubuntu       1/1     Running            0          26m
kube-system   kube-dns-5c9ff5597-sqbtq                   0/3     Pending            0          27m
kube-system   kube-proxy-288dq                           1/1     Running            0          27m
kube-system   kube-scheduler-tegra-ubuntu                1/1     Running            0          26m

The Calico pods above stay in the CrashLoopBackOff status, and the kube-dns pod stays in the Pending status.
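As a sanity check, the non-Running pods can be pulled out of that listing mechanically. The snippet below runs over a saved copy of a few rows from the output; on the live cluster you would pipe kubectl get pods --all-namespaces --no-headers into the same awk:

```shell
# Saved copy of some 'kubectl get pods --all-namespaces' rows from above.
pods='kube-system calico-etcd-s4pvh 0/1 CrashLoopBackOff 9 25m
kube-system calico-node-f4jv7 0/2 CrashLoopBackOff 18 25m
kube-system kube-dns-5c9ff5597-sqbtq 0/3 Pending 0 27m
kube-system etcd-tegra-ubuntu 1/1 Running 0 26m'

# STATUS is the 4th whitespace-separated field when --all-namespaces is used.
printf '%s\n' "$pods" | awk '$4 != "Running" { print $1, $2, $4 }'
```

For the crashing containers themselves, kubectl logs -n kube-system calico-node-f4jv7 -c calico-node (and -p for the previous crashed attempt) is what should show the actual error.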

So I checked the available Docker images using the docker images command:

REPOSITORY                                 TAG       IMAGE ID       CREATED         SIZE
quay.io/coreos/etcd                        latest    b73ea5d87ea1   6 days ago      39.4 MB
arm64v8/ubuntu                             latest    0ff926db4f76   7 days ago      74.1 MB
k8s.gcr.io/kube-proxy-arm64                v1.10.4   0a4b69d28a6f   7 days ago      101 MB
k8s.gcr.io/kube-apiserver-arm64            v1.10.4   d58d0145769f   7 days ago      219 MB
k8s.gcr.io/kube-controller-manager-arm64   v1.10.4   88861ed2aedc   7 days ago      142 MB
k8s.gcr.io/kube-scheduler-arm64            v1.10.4   965113f16909   7 days ago      48.5 MB
quay.io/calico/node                        latest    7eca10056c8e   13 days ago     248 MB
quay.io/calico/node                        v3.0.8    6e991381712e   13 days ago     248 MB
quay.io/calico/kube-controllers            latest    240a82836573   13 days ago     55 MB
quay.io/calico/cni                         latest    9f355e076ea7   13 days ago     68.8 MB
quay.io/calico/cni                         v2.0.6    dbeb77ece97f   13 days ago     69.1 MB
quay.io/calico/node                        v3.0.7    411358ca98dc   3 weeks ago     248 MB
quay.io/calico/cni                         v2.0.5    bf296711e770   7 weeks ago     69.1 MB
arm64v8/hello-world                        latest    993097e7b835   2 months ago    4.75 kB
k8s.gcr.io/etcd-arm64                      3.1.12    db579ddd596b   3 months ago    181 MB
k8s.gcr.io/pause-arm64                     3.1       6cf7c80fe444   5 months ago    525 kB
quay.io/coreos/etcd                        v3.1.10   47bb9dd99916   11 months ago   34.6 MB
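Filtering that listing confirms none of the kube-dns images are present yet. A minimal check over a saved copy of the repository column (on the live machine you would just run docker images | grep k8s-dns):

```shell
# Repository names copied from the 'docker images' listing above.
images='quay.io/coreos/etcd
arm64v8/ubuntu
k8s.gcr.io/kube-proxy-arm64
k8s.gcr.io/kube-apiserver-arm64
k8s.gcr.io/kube-controller-manager-arm64
k8s.gcr.io/kube-scheduler-arm64
quay.io/calico/node
quay.io/calico/kube-controllers
quay.io/calico/cni
arm64v8/hello-world
k8s.gcr.io/etcd-arm64
k8s.gcr.io/pause-arm64'

# Print matching DNS images, or a note if there are none.
printf '%s\n' "$images" | grep k8s-dns || echo "no kube-dns images present"
```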

My question is: the kubeadm init command should automatically pull the kube-apiserver, kube-controller-manager, kube-scheduler, etcd, and kube-dns images, but here it is not pulling the DNS images. I also checked the kube-dns pod status using the kubectl describe pod --namespace=kube-system kube-dns-5c9ff5597-sqbtq command, and it shows:

Name:           kube-dns-5c9ff5597-sqbtq
Namespace:      kube-system
Node:
Labels:         k8s-app=kube-dns
                pod-template-hash=175991153
Annotations:
Status:         Pending
IP:
Controlled By:  ReplicaSet/kube-dns-5c9ff5597
Containers:
  kubedns:
    Image:       k8s.gcr.io/k8s-dns-kube-dns-arm64:1.14.8
    Ports:       10053/UDP, 10053/TCP, 10055/TCP
    Host Ports:  0/UDP, 0/TCP, 0/TCP
    Args:
      --domain=cluster.local.
      --dns-port=10053
      --config-dir=/kube-dns-config
      --v=2
    Limits:
      memory:  170Mi
    Requests:
      cpu:     100m
      memory:  70Mi
    Liveness:   http-get http://:10054/healthcheck/kubedns delay=60s timeout=5s period=10s #success=1 #failure=5
    Readiness:  http-get http://:8081/readiness delay=3s timeout=5s period=10s #success=1 #failure=3
    Environment:
      PROMETHEUS_PORT:  10055
    Mounts:
      /kube-dns-config from kube-dns-config (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-dns-token-gr99l (ro)
  dnsmasq:
    Image:       k8s.gcr.io/k8s-dns-dnsmasq-nanny-arm64:1.14.8
    Ports:       53/UDP, 53/TCP
    Host Ports:  0/UDP, 0/TCP
    Args:
      -v=2
      -logtostderr
      -configDir=/etc/k8s/dns/dnsmasq-nanny
      -restartDnsmasq=true
      --
      -k
      --cache-size=1000
      --no-negcache
      --log-facility=-
      --server=/cluster.local/127.0.0.1#10053
      --server=/in-addr.arpa/127.0.0.1#10053
      --server=/ip6.arpa/127.0.0.1#10053
    Requests:
      cpu:     150m
      memory:  20Mi
    Liveness:  http-get http://:10054/healthcheck/dnsmasq delay=60s timeout=5s period=10s #success=1 #failure=5
    Environment:
    Mounts:
      /etc/k8s/dns/dnsmasq-nanny from kube-dns-config (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-dns-token-gr99l (ro)
  sidecar:
    Image:      k8s.gcr.io/k8s-dns-sidecar-arm64:1.14.8
    Port:       10054/TCP
    Host Port:  0/TCP
    Args:
      --v=2
      --logtostderr
      --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,SRV
      --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,SRV
    Requests:
      cpu:     10m
      memory:  20Mi
    Liveness:  http-get http://:10054/metrics delay=60s timeout=5s period=10s #success=1 #failure=5
    Environment:
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-dns-token-gr99l (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  kube-dns-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kube-dns
    Optional:  true
  kube-dns-token-gr99l:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kube-dns-token-gr99l
    Optional:    false
QoS Class:       Burstable
Node-Selectors:
Tolerations:     CriticalAddonsOnly
                 node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ---                 ----               -------
  Warning  FailedScheduling  7s (x107 over 30m)  default-scheduler  0/1 nodes are available: 1 node(s) were not ready.
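One thing worth noting (as far as I understand it): the kubelet only pulls a pod's images once the pod has been scheduled onto a node, so a Pending pod never triggers a pull, which would explain the missing DNS images. They can still be pulled by hand to rule out image problems; the tags below are copied from the describe output above:

```shell
# Pre-pull the kube-dns images referenced in the pod spec (versions are the
# ones shown by 'kubectl describe pod' above).
for img in \
    k8s.gcr.io/k8s-dns-kube-dns-arm64:1.14.8 \
    k8s.gcr.io/k8s-dns-dnsmasq-nanny-arm64:1.14.8 \
    k8s.gcr.io/k8s-dns-sidecar-arm64:1.14.8; do
  docker pull "$img"
done
```

Even with the images present, though, the pod should stay Pending until the node is Ready, and the node will not become Ready until the Calico (CNI) pods come up. So the CrashLoopBackOff pods look like the real thing to fix; journalctl -u kubelet and kubectl logs on the Calico containers should show why they keep restarting.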

Please help me solve this issue.

Were you able to resolve this issue? I am also facing the same problem. Any help will be appreciated.