Background: in a kubeadm-deployed cluster, add a new etcd member to improve etcd availability without changing the overall architecture.

Environment: current k8s architecture is 3 masters with stacked etcd, plus n workers

ip         name      role
10.0.0.6   master01  master+etcd
10.0.0.7   master02  master+etcd
10.0.0.8   master03  master+etcd
10.0.0.9   worker01  node
10.0.0.10  worker02  node
...        worker n  node

Goal: join a host (new or existing) to the etcd cluster as a new etcd member; the resulting architecture is:

ip         name      role
10.0.0.6   master01  master+etcd
10.0.0.7   master02  master+etcd
10.0.0.8   master03  master+etcd
10.0.0.9   etcd      etcd
10.0.0.10  worker02  node
...        worker n  node

Steps:

1. Install kubelet on the new host and join it to the current k8s cluster (omitted)
2. Generate the kubeadm configs used to create the new node's etcd certificates
cat etcd-init.sh
# Replace the HOST0, HOST1, HOST2 and HOST3 IP addresses with your hosts' IPs
export HOST0=10.0.0.6
export HOST1=10.0.0.7
export HOST2=10.0.0.8
export HOST3=10.0.0.9

# Update NAME0, NAME1, NAME2 and NAME3 with your host names
export NAME0="10.0.0.6"
export NAME1="10.0.0.7"
export NAME2="10.0.0.8"
export NAME3="10.0.0.9"

# Create temp directories to hold the files that will be distributed to the other hosts
mkdir -p /tmp/${HOST0}/ /tmp/${HOST1}/ /tmp/${HOST2}/ /tmp/${HOST3}/

HOSTS=(${HOST0} ${HOST1} ${HOST2} ${HOST3})
NAMES=(${NAME0} ${NAME1} ${NAME2} ${NAME3})

for i in "${!HOSTS[@]}"; do
HOST=${HOSTS[$i]}
NAME=${NAMES[$i]}
cat << EOF > /tmp/${HOST}/kubeadmcfg.yaml
---
apiVersion: "kubeadm.k8s.io/v1beta3"
kind: InitConfiguration
nodeRegistration:
    name: ${NAME}
localAPIEndpoint:
    advertiseAddress: ${HOST}
---
apiVersion: "kubeadm.k8s.io/v1beta3"
kind: ClusterConfiguration
etcd:
    local:
        serverCertSANs:
        - "${HOST}"
        peerCertSANs:
        - "${HOST}"
        extraArgs:
            initial-cluster: ${NAMES[0]}=https://${HOSTS[0]}:2380,${NAMES[1]}=https://${HOSTS[1]}:2380,${NAMES[2]}=https://${HOSTS[2]}:2380,${NAMES[3]}=https://${HOSTS[3]}:2380
            initial-cluster-state: new
            name: ${NAME}
            listen-peer-urls: https://${HOST}:2380
            listen-client-urls: https://${HOST}:2379
            advertise-client-urls: https://${HOST}:2379
            initial-advertise-peer-urls: https://${HOST}:2380
EOF
done
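The initial-cluster value the loop writes can be sanity-checked locally. A minimal sketch (plain bash, no cluster needed) that rebuilds the same comma-joined name=peer-url string from the arrays:

```shell
# Same arrays as in etcd-init.sh above
HOSTS=(10.0.0.6 10.0.0.7 10.0.0.8 10.0.0.9)
NAMES=(10.0.0.6 10.0.0.7 10.0.0.8 10.0.0.9)

# Join each member as NAME=https://HOST:2380, comma-separated
CLUSTER=""
for i in "${!HOSTS[@]}"; do
  CLUSTER+="${CLUSTER:+,}${NAMES[$i]}=https://${HOSTS[$i]}:2380"
done
echo "${CLUSTER}"
```

The printed string should match the initial-cluster line in every generated kubeadmcfg.yaml, and it is the same value that must later be pasted into etcd.yaml on the new node.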

3. Obtain the etcd CA cert and key. Since the cluster already exists, copy them from any master node (they could also be generated with the command below, not verified):
kubeadm init phase certs etcd-ca

# This creates the following two files:
# /etc/kubernetes/pki/etcd/ca.crt
# /etc/kubernetes/pki/etcd/ca.key

# If copied from a master, place the CA files at the same paths
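Before reusing a copied CA it can be worth confirming that ca.crt and ca.key actually pair up. A local sketch, demonstrated on a throwaway self-signed CA so it runs anywhere; on a real master, point the last two openssl commands at /etc/kubernetes/pki/etcd/ca.crt and ca.key instead:

```shell
# Generate a throwaway CA pair purely for demonstration (hypothetical paths)
TMP=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=etcd-ca" \
  -keyout "${TMP}/ca.key" -out "${TMP}/ca.crt" 2>/dev/null

# The public key extracted from the cert must equal the one derived from the key
CRT_PUB=$(openssl x509 -in "${TMP}/ca.crt" -pubkey -noout)
KEY_PUB=$(openssl pkey -in "${TMP}/ca.key" -pubout 2>/dev/null)
[ "${CRT_PUB}" = "${KEY_PUB}" ] && echo "ca.crt and ca.key match"
```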

4. Generate the etcd certificates for the new member 10.0.0.9. Run the commands on one master node, and back up /etc/kubernetes before starting
ssh root@10.0.0.6

# Back up the k8s files
cp -a /etc/kubernetes /etc/kubernetes-backup

# Generate the etcd certificates
kubeadm init phase certs etcd-server --config=/tmp/10.0.0.9/kubeadmcfg.yaml
kubeadm init phase certs etcd-peer --config=/tmp/10.0.0.9/kubeadmcfg.yaml
kubeadm init phase certs etcd-healthcheck-client --config=/tmp/10.0.0.9/kubeadmcfg.yaml

# Certificates land in /etc/kubernetes/pki by default; stage the directory under
# /tmp/10.0.0.9, then restore the master's original config (remove the regenerated
# tree first, otherwise mv would nest the backup inside /etc/kubernetes)
cp -R /etc/kubernetes/pki /tmp/10.0.0.9/
rm -rf /etc/kubernetes
mv /etc/kubernetes-backup /etc/kubernetes

5. Copy the certificates to the 10.0.0.9 node
# After this step all required certificates are in place on the new node
scp -r /tmp/10.0.0.9/* root@10.0.0.9:/etc/kubernetes/

6. Edit the kubelet config to allow static pods, then adapt an etcd.yaml taken from a running master for this node
# Edit the kubelet config
vim /var/lib/kubelet/config.yaml
# add the following line
staticPodPath: /etc/kubernetes/manifests

# Copy etcd.yaml from any master to the 10.0.0.9 node and modify the following flags:
cat /etc/kubernetes/manifests/etcd.yaml
   # ... omitted ...
    - --advertise-client-urls=https://10.0.0.9:2379
    - --initial-advertise-peer-urls=https://10.0.0.9:2380
    - --initial-cluster=10.0.0.6=https://10.0.0.6:2380,10.0.0.7=https://10.0.0.7:2380,10.0.0.8=https://10.0.0.8:2380,10.0.0.9=https://10.0.0.9:2380
    - --listen-client-urls=https://127.0.0.1:2379,https://10.0.0.9:2379
    - --listen-peer-urls=https://10.0.0.9:2380
    - --name=10.0.0.9
    - --initial-cluster-state=existing
    # ... leave the rest unchanged

# Restart kubelet to apply the change
systemctl restart kubelet

7. On an existing master node, add the new member using the new node's info (with --initial-cluster-state=existing, the new etcd pod cannot join until this member add completes)
# Add the new member to the etcd cluster
etcdctl --cacert=xxx --cert=xxx --key=xxx --endpoints=https://10.0.0.6:2379,https://10.0.0.7:2379,https://10.0.0.8:2379 member add 10.0.0.9 --peer-urls=https://10.0.0.9:2380

# Check the cluster members
etcdctl --cacert=xxx --cert=xxx --key=xxx --endpoints=https://10.0.0.6:2379,https://10.0.0.7:2379,https://10.0.0.8:2379,https://10.0.0.9:2379 endpoint status -w table
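The TLS flags are easy to mistype when repeated on every call (note the flag is --cacert, not --cacrt), so a small wrapper can keep them in one place. The cert paths below are the usual kubeadm defaults on a master node, but that is an assumption; adjust them for your cluster:

```shell
# Hypothetical convenience wrapper; cert paths assume kubeadm defaults on a master.
e() {
  ETCDCTL_API=3 etcdctl \
    --cacert=/etc/kubernetes/pki/etcd/ca.crt \
    --cert=/etc/kubernetes/pki/etcd/peer.crt \
    --key=/etc/kubernetes/pki/etcd/peer.key \
    --endpoints=https://10.0.0.6:2379,https://10.0.0.7:2379,https://10.0.0.8:2379,https://10.0.0.9:2379 \
    "$@"
}

# Guarded so the sketch is harmless on machines without etcdctl installed:
command -v etcdctl >/dev/null 2>&1 && e member list -w table || true
```

With the wrapper in place, the member add and status checks above become `e member add 10.0.0.9 --peer-urls=https://10.0.0.9:2380` and `e endpoint status -w table`.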

8. Update the kube-apiserver config on every master node to add the new etcd endpoint, then restart each apiserver
cat /etc/kubernetes/manifests/kube-apiserver.yaml
#... omitted ...
    - --etcd-servers=https://10.0.0.6:2379,https://10.0.0.7:2379,https://10.0.0.8:2379,https://10.0.0.9:2379

# Rolling-restart the apiserver by moving its manifest out of the static pod path and back
mv /etc/kubernetes/manifests/kube-apiserver.yaml /etc/kubernetes/
# wait until `docker ps | grep apiserver` shows the apiserver has stopped, then move the manifest back to restart it
mv /etc/kubernetes/kube-apiserver.yaml /etc/kubernetes/manifests/

# Repeat the same steps on the other 2 masters
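An easy mistake in --etcd-servers is repeating one endpoint while omitting another. The flag value can be checked mechanically, with no cluster needed; a sketch:

```shell
# Paste the --etcd-servers value here and confirm each endpoint appears exactly once
SERVERS="https://10.0.0.6:2379,https://10.0.0.7:2379,https://10.0.0.8:2379,https://10.0.0.9:2379"

# Count entries before and after de-duplication; they must be equal
TOTAL=$(echo "${SERVERS}" | tr ',' '\n' | wc -l)
UNIQ=$(echo "${SERVERS}" | tr ',' '\n' | sort -u | wc -l)
if [ "$((TOTAL))" -eq "$((UNIQ))" ]; then
  echo "ok: $((TOTAL)) unique endpoints"
else
  echo "duplicate endpoints detected"
fi
```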

Reference: https://kubernetes.io/zh-cn/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/

posted on 2022-07-26 23:01 shelterCJJ