Kubernetes - ETCD backup & restore
Alasta 25 August 2024 kubernetes etcd backup
Description: Kubernetes, backup & restore
ETCD
Backing up resources declaratively
kubectl get all --all-namespaces -o yaml > all-deploy-services.yaml
Or use a solution like Velero.
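As an illustration, assuming Velero is already installed and configured against an object store, a backup and restore could look like this (the backup name and namespace are hypothetical):
# Back up one namespace (name and namespace are examples)
velero backup create prod-backup --include-namespaces production
# Recreate the resources from that backup later
velero restore create --from-backup prod-backup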
Backup ETCD
On the control plane, the ETCD data directory:
- data-dir: /var/lib/etcd
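To confirm where the data directory actually sits on a given control plane, one way is to look at the running process, or at the static pod manifest on a kubeadm cluster:
# Flag on the running etcd process
ps -ef | grep etcd | grep data-dir
# Or in the static pod manifest (kubeadm)
grep data-dir /etc/kubernetes/manifests/etcd.yaml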
etcd.service
ExecStart=/usr/local/bin/etcd \\
--name ${ETCD_NAME} \\
--cert-file=/etc/etcd/kubernetes.pem \\
--key-file=/etc/etcd/kubernetes-key.pem \\
--peer-cert-file=/etc/etcd/kubernetes.pem \\
--peer-key-file=/etc/etcd/kubernetes-key.pem \\
--trusted-ca-file=/etc/etcd/ca.pem \\
--peer-trusted-ca-file=/etc/etcd/ca.pem \\
--peer-client-cert-auth \\
--client-cert-auth \\
--initial-advertise-peer-urls https://${INTERNAL_IP}:2380 \\
--listen-peer-urls https://${INTERNAL_IP}:2380 \\
--listen-client-urls https://${INTERNAL_IP}:2379 \\
--advertise-client-urls https://${INTERNAL_IP}:2379 \\
--initial-cluster-token etcd-cluster-0 \\
--initial-cluster controller-0=https://${CONTROLLER0_IP}:2380,controller-1=https://${CONTROLLER1_IP}:2380,controller-2=https://${CONTROLLER2_IP}:2380 \\
--initial-cluster-state new \\
--data-dir=/var/lib/etcd
On a managed Kubernetes cluster, you will not have access to ETCD!
Built-in backup tool
ETCDCTL_API=3 etcdctl snapshot save snapshot.db
Note: the snapshot is written to the current directory; you can instead give a path to a backup directory.
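In practice etcdctl also needs the endpoint and the TLS material to reach ETCD. A sketch reusing the certificate paths from the etcd.service above (adapt them to your cluster, e.g. /etc/kubernetes/pki/etcd/ on kubeadm; /opt/backup is an example destination):
ETCDCTL_API=3 etcdctl snapshot save /opt/backup/snapshot.db \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/etcd/ca.pem \
--cert=/etc/etcd/kubernetes.pem \
--key=/etc/etcd/kubernetes-key.pem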
Snapshot status
ETCDCTL_API=3 etcdctl snapshot status snapshot.db
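The same command accepts --write-out=table for a more readable view (hash, revision, total keys, total size):
ETCDCTL_API=3 etcdctl snapshot status snapshot.db --write-out=table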
ETCD restore
Restoring the ETCD data
ETCDCTL_API=3 etcdctl \
snapshot restore snapshot.db \
--data-dir /var/lib/etcd-from-backup
In a cluster:
ETCDCTL_API=3 etcdctl \
snapshot restore snapshot.db \
--data-dir /var/lib/etcd-from-backup \
--initial-cluster master-1=https://192.168.5.11:2380,master-2=https://192.168.5.12:2380 \
--initial-cluster-token etcd-cluster-1 \
--initial-advertise-peer-urls https://${INTERNAL_IP}:2380
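Note that in a multi-member cluster this restore has to be run on every member, each with its own --name and --initial-advertise-peer-urls, while --initial-cluster and --initial-cluster-token stay identical everywhere. As an illustration, on the second member (values reused from the command above):
ETCDCTL_API=3 etcdctl \
snapshot restore snapshot.db \
--data-dir /var/lib/etcd-from-backup \
--initial-cluster master-1=https://192.168.5.11:2380,master-2=https://192.168.5.12:2380 \
--initial-cluster-token etcd-cluster-1 \
--name master-2 \
--initial-advertise-peer-urls https://192.168.5.12:2380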
Stopping kube-apiserver
service kube-apiserver stop
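When kube-apiserver runs as a static pod (kubeadm) rather than a systemd service, the equivalent is to move its manifest out of the watched directory; kubelet then stops the pod, and moving the file back restarts it:
# Stops the static pod
mv /etc/kubernetes/manifests/kube-apiserver.yaml /tmp/
# Once the restore is done, bring it back
mv /tmp/kube-apiserver.yaml /etc/kubernetes/manifests/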
Modifying the ETCD service
etcd.service
ExecStart=/usr/local/bin/etcd \\
--name ${ETCD_NAME} \\
--cert-file=/etc/etcd/kubernetes.pem \\
--key-file=/etc/etcd/kubernetes-key.pem \\
--peer-cert-file=/etc/etcd/kubernetes.pem \\
--peer-key-file=/etc/etcd/kubernetes-key.pem \\
--trusted-ca-file=/etc/etcd/ca.pem \\
--peer-trusted-ca-file=/etc/etcd/ca.pem \\
--peer-client-cert-auth \\
--client-cert-auth \\
--initial-advertise-peer-urls https://${INTERNAL_IP}:2380 \\
--listen-peer-urls https://${INTERNAL_IP}:2380 \\
--listen-client-urls https://${INTERNAL_IP}:2379 \\
--advertise-client-urls https://${INTERNAL_IP}:2379 \\
--initial-cluster-token etcd-cluster-0 \\
--initial-cluster controller-0=https://${CONTROLLER0_IP}:2380,controller-1=https://${CONTROLLER1_IP}:2380,controller-2=https://${CONTROLLER2_IP}:2380 \\
--initial-cluster-state new \\
--data-dir=/var/lib/etcd-from-backup
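If etcd runs under a dedicated etcd user (common with package installs), the restored directory may also need the right ownership before the restart:
chown -R etcd:etcd /var/lib/etcd-from-backup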
Restarting the services
systemctl daemon-reload
systemctl restart etcd
systemctl start kube-apiserver
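Once the services are back, it is worth checking that the restored member answers and that the cluster is healthy (same TLS assumptions as the snapshot save example above):
ETCDCTL_API=3 etcdctl member list \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/etcd/ca.pem \
--cert=/etc/etcd/kubernetes.pem \
--key=/etc/etcd/kubernetes-key.pem
kubectl get nodes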
ETCD running as a pod
If etcd runs as a pod, the volume in its manifest (/etc/kubernetes/manifests/etcd.yaml) has to be updated:
volumes:
- hostPath:
    path: /var/lib/etcd-from-backup
    type: DirectoryOrCreate
  name: etcd-data
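kubelet detects the manifest change and recreates the etcd pod on its own; depending on the setup, the --data-dir flag and the volumeMounts in the same manifest may need the same update. To watch the pod come back (crictl works even while the API server is down):
kubectl -n kube-system get pods --watch
crictl ps | grep etcd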