|NO.Z.00082|——————————|^^ Deployment ^^|——|Kubernetes&kubeadm.V11|5 servers|——|Kubernetes components|calico|
一、Installing the Kubernetes components
### --- Download the component source package (run on the k8s-master01 node)
~~~ Clone the k8s-ha-install source package
[root@k8s-master01 ~]# cd /root/ ; git clone https://github.com/dotbalo/k8s-ha-install.git
Cloning into 'k8s-ha-install'...
remote: Enumerating objects: 652, done.
remote: Counting objects: 100% (220/220), done.
remote: Compressing objects: 100% (141/141), done.
remote: Total 652 (delta 109), reused 141 (delta 62), pack-reused 432
Receiving objects: 100% (652/652), 19.60 MiB | 6.12 MiB/s, done.
Resolving deltas: 100% (256/256), done.
### --- Switch to the manual-installation-v1.21.x branch
~~~ Check out the 1.21.x version branch
[root@k8s-master01 ~]# cd /root/k8s-ha-install && git checkout manual-installation-v1.21.x
Branch manual-installation-v1.21.x set up to track remote branch manual-installation-v1.21.x from origin.
Switched to a new branch 'manual-installation-v1.21.x'
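If the checkout output scrolls past, the active branch can be confirmed directly. A minimal sketch, using a throwaway repository as a stand-in for /root/k8s-ha-install; on the real node, run the same `git symbolic-ref` command inside the clone:

```shell
# Demo repo stands in for /root/k8s-ha-install (an assumption for this sketch).
demo=$(mktemp -d)
git init -q "$demo"
git -C "$demo" checkout -q -b manual-installation-v1.21.x   # mirrors the tutorial's checkout
branch=$(git -C "$demo" symbolic-ref --short HEAD)          # prints the branch HEAD points at
echo "$branch"
```

`git branch --show-current` (git >= 2.22) does the same job; `symbolic-ref --short` also works on the older git shipped with CentOS 7.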
二、Installing the calico component (run on the k8s-master01 node)
### --- Switch to the 1.21.x version branch and enter the calico installation directory
~~~ Check out the 1.21.x version branch
[root@k8s-master01 ~]# cd /root/k8s-ha-install && git checkout manual-installation-v1.21.x && cd calico/
Branch manual-installation-v1.21.x set up to track remote branch manual-installation-v1.21.x from origin.
Switched to a new branch 'manual-installation-v1.21.x'
### --- Modify the calico-etcd.yaml configuration file
[root@k8s-master01 calico]# sed -i 's#etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"#etcd_endpoints: "https://192.168.1.11:2379,https://192.168.1.12:2379,https://192.168.1.13:2379"#g' calico-etcd.yaml
[root@k8s-master01 calico]# ETCD_CA=`cat /etc/kubernetes/pki/etcd/ca.crt | base64 | tr -d '\n'`
[root@k8s-master01 calico]# ETCD_CERT=`cat /etc/kubernetes/pki/etcd/server.crt | base64 | tr -d '\n'`
[root@k8s-master01 calico]# ETCD_KEY=`cat /etc/kubernetes/pki/etcd/server.key | base64 | tr -d '\n'`
[root@k8s-master01 calico]# sed -i "s@# etcd-key: null@etcd-key: ${ETCD_KEY}@g; s@# etcd-cert: null@etcd-cert: ${ETCD_CERT}@g; s@# etcd-ca: null@etcd-ca: ${ETCD_CA}@g" calico-etcd.yaml
[root@k8s-master01 calico]# sed -i 's#etcd_ca: ""#etcd_ca: "/calico-secrets/etcd-ca"#g; s#etcd_cert: ""#etcd_cert: "/calico-secrets/etcd-cert"#g; s#etcd_key: "" #etcd_key: "/calico-secrets/etcd-key" #g' calico-etcd.yaml
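The certificates above are embedded as single-line base64, which is what the Secret in calico-etcd.yaml expects; `tr -d '\n'` strips the line wrapping that `base64` adds. A minimal round-trip sketch, using a demo file as a stand-in for the real /etc/kubernetes/pki/etcd/ca.crt:

```shell
# Demo file stands in for /etc/kubernetes/pki/etcd/ca.crt (an assumption for this sketch).
crt=$(mktemp)
printf 'demo certificate bytes\n' > "$crt"
ETCD_CA=$(base64 < "$crt" | tr -d '\n')        # one unwrapped line, as Secret data requires
# Decoding the embedded value must reproduce the original file byte-for-byte:
echo "$ETCD_CA" | base64 -d | cmp -s - "$crt" && echo "round-trip OK"
```

The same decode-and-compare check can be run against the values already written into calico-etcd.yaml to make sure no newline or truncation slipped in.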
[root@k8s-master01 calico]# POD_SUBNET=`cat /etc/kubernetes/manifests/kube-controller-manager.yaml | grep cluster-cidr= | awk -F= '{print $NF}'`
[root@k8s-master01 calico]# echo $POD_SUBNET    # the Pod subnet is defined in kube-controller-manager.yaml under /etc/kubernetes/manifests, so it can be read straight into a variable
172.168.0.0/12
[root@k8s-master01 calico]# sed -i 's@# - name: CALICO_IPV4POOL_CIDR@- name: CALICO_IPV4POOL_CIDR@g; s@# value: "192.168.0.0/16"@ value: '"${POD_SUBNET}"'@g' calico-etcd.yaml
### --- Note on changing the Pod address (do not execute)
~~~ # Pod address note: the following step changes the subnet under CALICO_IPV4POOL_CIDR in calico-etcd.yaml to your own Pod subnet, i.e. it replaces 192.168.x.x/16 with your cluster's subnet and uncomments the lines:
[root@k8s-master01 calico]# vim calico-etcd.yaml
- name: CALICO_IPV4POOL_CIDR
value: "192.168.0.0/12"
~~~ # So when substituting, make sure the subnet in this step was not clobbered by the global replacement; if it was, change it back:
[root@k8s-master01 calico]# sed -i 's@# - name: CALICO_IPV4POOL_CIDR@- name: CALICO_IPV4POOL_CIDR@g; s@# value: "192.168.0.0/12"@ value: '"${POD_SUBNET}"'@g' calico-etcd.yaml
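The two text transforms in this step can be sketched against throwaway input, leaving the real calico-etcd.yaml untouched. The subnet value below is a stand-in; on the real node it comes from kube-controller-manager's --cluster-cidr flag:

```shell
# 1) awk -F= '{print $NF}' keeps everything after the last '=' on the flag line.
line='    - --cluster-cidr=172.16.0.0/12'      # stand-in for the real manifest line
POD_SUBNET=$(printf '%s\n' "$line" | grep -- 'cluster-cidr=' | awk -F= '{print $NF}')
echo "$POD_SUBNET"
# 2) The sed pair uncomments CALICO_IPV4POOL_CIDR and swaps in that subnet.
demo=$(mktemp)
printf '%s\n' '# - name: CALICO_IPV4POOL_CIDR' '# value: "192.168.0.0/16"' > "$demo"
sed -i 's@# - name: CALICO_IPV4POOL_CIDR@- name: CALICO_IPV4POOL_CIDR@g; s@# value: "192.168.0.0/16"@ value: '"${POD_SUBNET}"'@g' "$demo"
cat "$demo"    # both lines now uncommented, with the CIDR replaced
```

Note the sed patterns are literal, so they only match if the commented lines in the file carry exactly this spacing and value, which is why the note above warns against clobbering the fallback /12 variant.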
### --- Verify that the calico-etcd.yaml configuration took effect
[root@k8s-master01 calico]# pwd
/root/k8s-ha-install/calico
[root@k8s-master01 calico]# vim calico-etcd.yaml
~~~ Note 1: the key, cert, and ca have been imported
etcd-key: LS0tLS1CRUdJTiBSU0EgU
etcd-cert: LS0tLS1CRUdJTiBDRVJUSUZJQ0
etcd-ca: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0t
~~~ Note 2: the endpoints have been updated
etcd_endpoints: "https://192.168.1.11:2379,https://192.168.1.12:2379,https://192.168.1.13:2379"
~~~ Note 3: the Pod address has been updated
- name: CALICO_IPV4POOL_CIDR
value: "192.168.0.0/12"
### --- Deploy the calico component
~~~ Deploy the calico component service
[root@k8s-master01 calico]# kubectl apply -f calico-etcd.yaml
secret/calico-etcd-secrets created
configmap/calico-config created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
三、Checking service and cluster status
### --- Verify that calico deployed successfully
~~~ Check whether the calico components have finished starting
[root@k8s-master01 ~]# kubectl get po -n kube-system -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
calico-kube-controllers-cdd5755b9-hjkcq 1/1 Running 0 2m39s 192.168.1.15 k8s-node02 <none> <none>
calico-node-jsprh 1/1 Running 0 2m39s 192.168.1.12 k8s-master02 <none> <none>
calico-node-n97ff 1/1 Running 0 2m39s 192.168.1.11 k8s-master01 <none> <none>
calico-node-tk4kz 1/1 Running 0 2m39s 192.168.1.15 k8s-node02 <none> <none>
calico-node-vfcxf 1/1 Running 0 2m39s 192.168.1.13 k8s-master03 <none> <none>
calico-node-wvwbh 1/1 Running 0 2m39s 192.168.1.14 k8s-node01 <none> <none>
### --- Check the node status
~~~ Check that each node's status is Ready
[root@k8s-master01 ~]# kubectl get node -owide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-master01 Ready control-plane,master 69m v1.21.2 192.168.1.11 <none> CentOS Linux 7 (Core) 4.19.12-1.el7.elrepo.x86_64 docker://19.3.15
k8s-master02 Ready control-plane,master 48m v1.21.2 192.168.1.12 <none> CentOS Linux 7 (Core) 4.19.12-1.el7.elrepo.x86_64 docker://19.3.15
k8s-master03 Ready control-plane,master 33m v1.21.2 192.168.1.13 <none> CentOS Linux 7 (Core) 4.19.12-1.el7.elrepo.x86_64 docker://19.3.15
k8s-node01 Ready <none> 25m v1.21.2 192.168.1.14 <none> CentOS Linux 7 (Core) 4.19.12-1.el7.elrepo.x86_64 docker://19.3.15
k8s-node02 Ready <none> 23m v1.21.2 192.168.1.15 <none> CentOS Linux 7 (Core) 4.19.12-1.el7.elrepo.x86_64 docker://19.3.15
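With every node Ready, a quick smoke test is to schedule a throwaway Pod and check that it receives an address from the Pod subnet. The Pod name and image below are illustrative assumptions, not part of the original deployment:

```yaml
# Hypothetical smoke-test Pod; the name and image are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: net-test
spec:
  containers:
  - name: busybox
    image: busybox:1.34
    command: ["sleep", "3600"]
```

Apply it with `kubectl apply -f net-test.yaml`; `kubectl get po net-test -owide` should then show an IP inside the cluster's Pod subnet. Delete the Pod afterwards with `kubectl delete po net-test`.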