Kubernetes on BlueField-2

I started the cluster but MetalLB (load balancer) had some problems. Here is the output:

root@themis:/home//kubernetes# kubectl describe pods controller-fb659dc8-szpps -n metallb-system

Name: controller-fb659dc8-szpps
Namespace: metallb-system
Priority: 0
Node: bluefield/10.93.231.112
Start Time: Wed, 25 Aug 2021 15:01:39 -0700
Labels: app=metallb
component=controller
pod-template-hash=fb659dc8
Annotations: prometheus.io/port: 7472
prometheus.io/scrape: true
Status: Pending
IP:
IPs:
Controlled By: ReplicaSet/controller-fb659dc8
Containers:
controller:
Container ID:
Image: metallb/controller:v0.9.3
Image ID:
Port: 7472/TCP
Host Port: 0/TCP
Args:
--port=7472
--config=config
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Limits:
cpu: 100m
memory: 100Mi
Requests:
cpu: 100m
memory: 100Mi
Environment:
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dlj54 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-dlj54:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional:
DownwardAPI: true
QoS Class: Guaranteed
Node-Selectors: beta.kubernetes.io/os=linux
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message


Normal Scheduled 4m18s default-scheduler Successfully assigned metallb-system/controller-fb659dc8-szpps to bluefield
Warning FailedCreatePodSandBox kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "a716d44d4f731be49a59535749d6e1e1672d17904dc92821e87b6ccb168d0ae7" network for pod "controller-fb659dc8-szpps": networkPlugin cni failed to set up pod "controller-fb659dc8-szpps_metallb-system" network: open /run/flannel/subnet.env: no such file or directory
Warning FailedCreatePodSandBox kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "d28fd55e242a7a86d4d16fb12a53f647db24e62f8ae12d9e7f168c382c51ffe3" network for pod "controller-fb659dc8-szpps": networkPlugin cni failed to set up pod "controller-fb659dc8-szpps_metallb-system" network: open /run/flannel/subnet.env: no such file or directory
Warning FailedCreatePodSandBox kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "d628920de97b7e9362a135fa15ace49b4a4cfdbb6bd1f650f05c11e7c7b52b70" network for pod "controller-fb659dc8-szpps": networkPlugin cni failed to set up pod "controller-fb659dc8-szpps_metallb-system" network: open /run/flannel/subnet.env: no such file or directory
Warning FailedCreatePodSandBox kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "f7b7ea08f193d259ca38b0f3095d088b0e8017489d8f0995e3d46a6573f88b80" network for pod "controller-fb659dc8-szpps": networkPlugin cni failed to set up pod "controller-fb659dc8-szpps_metallb-system" network: open /run/flannel/subnet.env: no such file or directory
Warning FailedCreatePodSandBox kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "3937e47cbcfde38363b16e9837e18ab22fc493ca5cf2853da401f90ce1c1e43a" network for pod "controller-fb659dc8-szpps": networkPlugin cni failed to set up pod "controller-fb659dc8-szpps_metallb-system" network: open /run/flannel/subnet.env: no such file or directory
Warning FailedCreatePodSandBox kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "dab4235ca3b1c8d28127e255c5e9b077a3765cd6d06644ce2c6b38b0cd7aeb40" network for pod "controller-fb659dc8-szpps": networkPlugin cni failed to set up pod "controller-fb659dc8-szpps_metallb-system" network: open /run/flannel/subnet.env: no such file or directory
Warning FailedCreatePodSandBox kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "74184f5223fe14fec7df1af7cfe0884ba378c7cb2da924725b03d3d887ffdafd" network for pod "controller-fb659dc8-szpps": networkPlugin cni failed to set up pod "controller-fb659dc8-szpps_metallb-system" network: open /run/flannel/subnet.env: no such file or directory
Warning FailedCreatePodSandBox kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "d38ce6ffc211f6159fbe4a7b8e6d75a08c65b88ac5cc462dc7f723784af473fb" network for pod "controller-fb659dc8-szpps": networkPlugin cni failed to set up pod "controller-fb659dc8-szpps_metallb-system" network: open /run/flannel/subnet.env: no such file or directory
Warning FailedCreatePodSandBox kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "76eb8891f87af41717f45399a51cd98cf52a2d8066f6161e262c36a10dfefa94" network for pod "controller-fb659dc8-szpps": networkPlugin cni failed to set up pod "controller-fb659dc8-szpps_metallb-system" network: open /run/flannel/subnet.env: no such file or directory
Normal SandboxChanged (x12 over ) kubelet Pod sandbox changed, it will be killed and re-created.
Warning FailedCreatePodSandBox (x4 over ) kubelet (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "44d9f791df9eb912628e5905fca8903e537844187be85824b6f2477bc8812a77" network for pod "controller-fb659dc8-szpps": networkPlugin cni failed to set up pod "controller-fb659dc8-szpps_metallb-system" network: open /run/flannel/subnet.env: no such file or directory
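For context, /run/flannel/subnet.env is written by the flanneld agent once Flannel is running on a node; the CNI plugin reads it to learn that node's pod subnet. If Flannel never comes up on the BlueField, the file never appears and every sandbox creation fails exactly like this. A rough check directly on the BlueField (standard Flannel paths, assuming a default kube-flannel deployment):

cat /run/flannel/subnet.env   # absent here, per the events above
ls /etc/cni/net.d/            # CNI config normally installed by the Flannel DaemonSet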

Do you still have a Flannel pod trying to run on the BF?

MetalLB is dependent on Flannel (my understanding), which is why we deployed it. Do you think we should use another CNI for the BlueField? Like this one: Docker Hub

root@themis:kubectl get pods -A -o wide

NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-78fcd69978-gfcxl 1/1 Running 0 43m 10.93.120.2 themis
kube-system coredns-78fcd69978-gqdfh 1/1 Running 0 43m 10.93.120.3 themis
kube-system etcd-themis 1/1 Running 6 43m 10.93.226.77 themis
kube-system kube-apiserver-themis 1/1 Running 3 43m 10.93.226.77 themis
kube-system kube-controller-manager-themis 1/1 Running 14 43m 10.93.226.77 themis
kube-system kube-flannel-ds-g2pvr 0/1 CrashLoopBackOff 8 ( ago) 21m 10.93.231.112 bluefield
kube-system kube-flannel-ds-rwhjl 1/1 Running 0 21m 10.93.226.77 themis
kube-system kube-proxy-vwzkp 1/1 Running 0 43m 10.93.226.77 themis
kube-system kube-proxy-zjwhg 1/1 Running 0 43m 10.93.231.112 bluefield
kube-system kube-scheduler-themis 1/1 Running 11 43m 10.93.226.77 themis
metallb-system controller-fb659dc8-szpps 0/1 ContainerCreating 0 17m bluefield
metallb-system speaker-4b7nb 1/1 Running 0 17m 10.93.226.77 themis
metallb-system speaker-bzr2k 1/1 Running 0 17m 10.93.231.112 bluefield
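The kube-flannel-ds-g2pvr pod on the bluefield node is in CrashLoopBackOff, which matches the missing subnet.env above. Its logs should show why it keeps crashing; a minimal set of commands to gather that, using the names from the table above (the podCIDR check is there because flanneld typically refuses to start on a node with no pod CIDR assigned):

kubectl -n kube-system logs kube-flannel-ds-g2pvr --previous
kubectl -n kube-system describe pod kube-flannel-ds-g2pvr
kubectl get node bluefield -o jsonpath='{.spec.podCIDR}'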

  1. How did you deploy MetalLB, via a YAML manifest?

If yes, can you try to add this "host network" part (shown below) to your deployment manifest and then redeploy:

hostNetwork: true
nodeSelector:
  kubernetes.io/arch: arm64
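To be explicit about placement (a trimmed sketch, matching the full manifest further down): both keys go under the pod template spec of the controller Deployment, next to containers:, not at the top level of the Deployment:

spec:
  template:
    spec:
      hostNetwork: true
      nodeSelector:
        kubernetes.io/arch: arm64
      containers:
      - name: controller
        image: metallb/controller:v0.9.3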

  2. There is currently no CNI support for BlueField; only "host network" is supported today. I will double-check the link you sent, but as far as I know we are still working on a CNI, which will be available soon.
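A quick way to confirm that plain host-network pods do run on the DPU is a throwaway test pod pinned to the node (hostnet-test is just a hypothetical name; busybox publishes an arm64 image):

apiVersion: v1
kind: Pod
metadata:
  name: hostnet-test
spec:
  nodeName: bluefield
  hostNetwork: true
  containers:
  - name: test
    image: busybox
    command: ["sleep", "3600"]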

Thanks for the suggestion. I tried it, but without success. Below is the manifest file.

$ vim metallb-system.yaml

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  labels:
    app: metallb
  name: controller
  namespace: metallb-system
spec:
  allowPrivilegeEscalation: false
  allowedCapabilities:
  allowedHostPaths:
  defaultAddCapabilities:
  defaultAllowPrivilegeEscalation: false
  fsGroup:
    ranges:
    - max: 65535
      min: 1
    rule: MustRunAs
  hostIPC: false
  hostNetwork: false
  hostPID: false
  privileged: false
  readOnlyRootFilesystem: true
  requiredDropCapabilities:
  - ALL
  runAsUser:
    ranges:
    - max: 65535
      min: 1
    rule: MustRunAs
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    ranges:
    - max: 65535
      min: 1
    rule: MustRunAs
  volumes:
  - configMap
  - secret
  - emptyDir
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  labels:
    app: metallb
  name: speaker
  namespace: metallb-system
spec:
  allowPrivilegeEscalation: false
  allowedCapabilities:
  - NET_ADMIN
  - NET_RAW
  - SYS_ADMIN
  allowedHostPaths:
  defaultAddCapabilities:
  defaultAllowPrivilegeEscalation: false
  fsGroup:
    rule: RunAsAny
  hostIPC: false
  hostNetwork: true
  hostPID: false
  hostPorts:
  - max: 7472
    min: 7472
  privileged: true
  readOnlyRootFilesystem: true
  requiredDropCapabilities:
  - ALL
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
  - configMap
  - secret
  - emptyDir
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app: metallb
  name: controller
  namespace: metallb-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app: metallb
  name: speaker
  namespace: metallb-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app: metallb
  name: metallb-system:controller
rules:
- apiGroups:
  - ''
  resources:
  - services
  verbs:
  - get
  - list
  - watch
  - update
- apiGroups:
  - ''
  resources:
  - services/status
  verbs:
  - update
- apiGroups:
  - ''
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - policy
  resourceNames:
  - controller
  resources:
  - podsecuritypolicies
  verbs:
  - use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app: metallb
  name: metallb-system:speaker
rules:
- apiGroups:
  - ''
  resources:
  - services
  - endpoints
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ''
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - policy
  resourceNames:
  - speaker
  resources:
  - podsecuritypolicies
  verbs:
  - use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    app: metallb
  name: config-watcher
  namespace: metallb-system
rules:
- apiGroups:
  - ''
  resources:
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    app: metallb
  name: pod-lister
  namespace: metallb-system
rules:
- apiGroups:
  - ''
  resources:
  - pods
  verbs:
  - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app: metallb
  name: metallb-system:controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: metallb-system:controller
subjects:
- kind: ServiceAccount
  name: controller
  namespace: metallb-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app: metallb
  name: metallb-system:speaker
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: metallb-system:speaker
subjects:
- kind: ServiceAccount
  name: speaker
  namespace: metallb-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    app: metallb
  name: config-watcher
  namespace: metallb-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: config-watcher
subjects:
- kind: ServiceAccount
  name: controller
- kind: ServiceAccount
  name: speaker
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    app: metallb
  name: pod-lister
  namespace: metallb-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-lister
subjects:
- kind: ServiceAccount
  name: speaker
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: metallb
    component: speaker
  name: speaker
  namespace: metallb-system
spec:
  selector:
    matchLabels:
      app: metallb
      component: speaker
  template:
    metadata:
      annotations:
        prometheus.io/port: '7472'
        prometheus.io/scrape: 'true'
      labels:
        app: metallb
        component: speaker
    spec:
      containers:
      - args:
        - --port=7472
        - --config=config
        env:
        - name: METALLB_NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: METALLB_HOST
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
        - name: METALLB_ML_BIND_ADDR
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: METALLB_ML_LABELS
          value: "app=metallb,component=speaker"
        - name: METALLB_ML_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: METALLB_ML_SECRET_KEY
          valueFrom:
            secretKeyRef:
              name: memberlist
              key: secretkey
        image: metallb/speaker:v0.9.3
        imagePullPolicy: Always
        name: speaker
        ports:
        - containerPort: 7472
          name: monitoring
        resources:
          limits:
            cpu: 100m
            memory: 100Mi
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_ADMIN
            - NET_RAW
            - SYS_ADMIN
            drop:
            - ALL
          readOnlyRootFilesystem: true
      hostNetwork: true
      nodeSelector:
        kubernetes.io/arch: arm64
        beta.kubernetes.io/os: linux
      serviceAccountName: speaker
      terminationGracePeriodSeconds: 2
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: metallb
    component: controller
  name: controller
  namespace: metallb-system
spec:
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      app: metallb
      component: controller
  template:
    metadata:
      annotations:
        prometheus.io/port: '7472'
        prometheus.io/scrape: 'true'
      labels:
        app: metallb
        component: controller
    spec:
      containers:
      - args:
        - --port=7472
        - --config=config
        image: metallb/controller:v0.9.3
        imagePullPolicy: Always
        name: controller
        ports:
        - containerPort: 7472
          name: monitoring
        resources:
          limits:
            cpu: 100m
            memory: 100Mi
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - all
          readOnlyRootFilesystem: true
      hostNetwork: true
      nodeSelector:
        kubernetes.io/arch: arm64
        beta.kubernetes.io/os: linux
      securityContext:
        runAsNonRoot: true
        runAsUser: 65534
      serviceAccountName: controller
      terminationGracePeriodSeconds: 0
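A side note on this manifest: both containers are launched with --config=config, and the speaker reads a Secret named memberlist, but neither object is defined above. With MetalLB v0.9.x they are created separately; a minimal Layer 2 example (the address range is only a placeholder and would have to be a free range on the 10.93.x.x network):

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.93.231.200-10.93.231.210

kubectl create secret generic -n metallb-system memberlist \
  --from-literal=secretkey="$(openssl rand -base64 128)"

This is separate from the sandbox/CNI failure shown earlier; it only matters once the controller container actually starts.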

Thanks for the detailed response. Since this is taking a while and has become entangled with the Kubernetes thread, could you please open a new thread for this specific TLS issue, and we'll continue there.

Meanwhile, I'll try to reproduce your issue on a setup on my side using the details you provided.
Eyal.