Restarting Clara platform on a new image

I moved my Clara instance to a new VM at a different IP address, and it will not start when I run clara platform start. It appears that the pod is still referencing my old IP address.

Error: Get https://ip-address:6443/api/v1/namespaces/kube-system/pods?labelSelector=app%3Dhelm%2Cname%3Dtiller: dial tcp ip-address:6443: connect: connection refused

What should I do to reset things and get the Clara platform back up and running? Thanks

Hi,

Moving to a new VM requires setting Kubernetes up again, because the cluster configuration (the kubeconfig and the certificates embed the node's IP address) is invalidated by the move. Are you able to rerun the bootstrap on the new VM?
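
To confirm the stale address is the culprit, you can check where it is recorded (a quick look, assuming the standard kubeadm layout under /etc/kubernetes):

# the admin kubeconfig records the API server endpoint
grep 'server:' /etc/kubernetes/admin.conf

# the API server certificate lists the node IP in its SANs
sudo openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'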

Alternatively, you can run the following script to reset Kubernetes to use the new IP:

# stop the kubelet and docker so their state can be moved safely
sudo systemctl stop kubelet docker

cd /etc/

# back up the old kubernetes data (clear any stale backups first)
sudo rm -rf kubernetes-backup /var/lib/kubelet-backup

sudo mv kubernetes kubernetes-backup
sudo mv /var/lib/kubelet /var/lib/kubelet-backup
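# note: /var/lib/etcd is left in place on purpose so the cluster state survives the reinit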

# restore certificates
sudo mkdir -p kubernetes
sudo cp -r kubernetes-backup/pki kubernetes
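# apiserver.* and etcd/peer.* embed the old IP address in their SANs,
# so drop them; kubeadm init below regenerates them for the new address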
sudo rm kubernetes/pki/{apiserver.*,etcd/peer.*}

sudo systemctl start docker

# reinit the master with the data still in etcd
# add --kubernetes-version and --token options if needed; adjust
# --pod-network-cidr to match your CNI plugin (10.244.0.0/16 is flannel's default)
sudo kubeadm init --ignore-preflight-errors=DirAvailable--var-lib-etcd --pod-network-cidr="10.244.0.0/16"

# update kubectl config

mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown -Rf $(id -u):$(id -g) $HOME/.kube
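
# sanity check: the reported endpoint should now show the new IP
kubectl cluster-info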


# give the cluster a moment to settle, then delete the stale node entry
sleep 30

# list nodes oldest-first to spot the stale entry for the old VM
kubectl get nodes --sort-by=.metadata.creationTimestamp

# pick the node whose conditions report Unknown (the old, unreachable entry) and delete it
NODE_TO_REMOVE="$(kubectl get nodes -o jsonpath='{.items[?(@.status.conditions[0].status=="Unknown")].metadata.name}')"
[ -n "${NODE_TO_REMOVE}" ] && kubectl delete node "${NODE_TO_REMOVE}"

# check running pods
kubectl get pods --all-namespaces

# allow pods to be scheduled on the master again (single-node cluster)
kubectl taint nodes --all node-role.kubernetes.io/master-
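
Once the node shows Ready and the kube-system pods are back to Running, clara platform start should be able to reach the API server at the new address.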