Binary deployment of k8s
This document renders best when copied into Typora.
Architecture:
master: 101, 102, 103
VIP (keepalived): 109
node: 104, 105, 106, 107, 108
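The loops below address the masters as m1–m3 and the nodes as n1–n5; those shorthand names are assumed to resolve. One way to provide them (an assumption — the aliases themselves are never defined in this write-up) is /etc/hosts entries on the machine driving the installation:

```bash
# Hypothetical host aliases matching the architecture above
cat >> /etc/hosts << EOF
192.168.11.101 k8s-master-01 m1
192.168.11.102 k8s-master-02 m2
192.168.11.103 k8s-master-03 m3
192.168.11.104 k8s-node-01 n1
192.168.11.105 k8s-node-02 n2
192.168.11.106 k8s-node-03 n3
192.168.11.107 k8s-node-04 n4
192.168.11.108 k8s-node-05 n5
EOF
```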
# Binary installation of k8s

### 1. Workflow

```bash
1. masters (highly available)
    kube-apiserver
    kube-controller-manager
    kube-scheduler
    kubelet
    kube-proxy
    etcd
    coreDNS
    flannel

2. nodes
    kubelet
    kube-proxy
    coreDNS
    flannel

3. passwordless SSH to every machine
[root@k8s-master-01 ~]# ssh-keygen
[root@k8s-master-01 ~]# for i in 101 102 103 104 105 106 107 108; do ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.11.$i; done
```

### 2. Install the certificate tooling (cfssl)

```bash
# Download page: https://github.com/cloudflare/cfssl/releases
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64

[root@k8s-master-01 ~]# chmod +x cfssl_linux-amd64 cfssljson_linux-amd64
[root@k8s-master-01 ~]# mv cfssl_linux-amd64 /usr/local/bin/cfssl
[root@k8s-master-01 ~]# mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
```

### 3. Deploy etcd

#### 3.1 Create the CA certificate

```bash
mkdir -p /opt/cert/ca
cd /opt/cert/ca

cat > /opt/cert/ca/ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "8760h"
      }
    }
  }
}
EOF

# CA signing request (CN and names chosen to match the etcd CSR in 3.2)
cat > /opt/cert/ca/ca-csr.json << EOF
{
  "CN": "etcd",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "ShangHai",
      "L": "ShangHai"
    }
  ]
}
EOF

[root@k8s-master-01 ca]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
```

#### 3.2 Create the etcd certificate

```bash
mkdir -p /opt/cert/etcd
cd /opt/cert/etcd

cat > etcd-csr.json << EOF
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.11.101",
    "192.168.11.102",
    "192.168.11.103",
    "192.168.11.104",
    "192.168.11.105",
    "192.168.11.106",
    "192.168.11.107",
    "192.168.11.108",
    "192.168.11.109"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "ShangHai",
      "L": "ShangHai"
    }
  ]
}
EOF

[root@k8s-master-01 etcd]# cfssl gencert -ca=../ca/ca.pem -ca-key=../ca/ca-key.pem -config=../ca/ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
```

#### 3.3 Distribute the certificates

```bash
for ip in m1 m2 m3
do
  ssh root@${ip} "mkdir -pv /etc/etcd/ssl"
  scp ../ca/ca*.pem root@${ip}:/etc/etcd/ssl
  scp ./etcd*.pem root@${ip}:/etc/etcd/ssl
done
```

#### 3.4 Install etcd

```bash
# Download the etcd release
wget https://mirrors.huaweicloud.com/etcd/v3.3.24/etcd-v3.3.24-linux-amd64.tar.gz

# Unpack
[root@k8s-master-01 data]# tar xf etcd-v3.3.24-linux-amd64.tar.gz

# Distribute to the other masters
for i in m1 m2 m3
do
  scp ./etcd-v3.3.24-linux-amd64/etcd* root@$i:/usr/local/bin/
done

# Verify the installation
[root@k8s-master-01 data]# etcd --version
etcd Version: 3.3.24
Git SHA: bdd57848d
Go Version: go1.12.17
Go OS/Arch: linux/amd64
```

#### 3.5 Register the etcd service

```bash
mkdir -pv /etc/kubernetes/conf/etcd

ETCD_NAME=`hostname`
INTERNAL_IP=`hostname -i`
INITIAL_CLUSTER=k8s-master-01=https://192.168.11.101:2380,k8s-master-02=https://192.168.11.102:2380,k8s-master-03=https://192.168.11.103:2380

cat << EOF | sudo tee /usr/lib/systemd/system/etcd.service
[Unit]
Description=etcd
Documentation=https://github.com/coreos

[Service]
ExecStart=/usr/local/bin/etcd \\
  --name ${ETCD_NAME} \\
  --cert-file=/etc/etcd/ssl/etcd.pem \\
  --key-file=/etc/etcd/ssl/etcd-key.pem \\
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \\
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \\
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \\
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \\
  --peer-client-cert-auth \\
  --client-cert-auth \\
  --initial-advertise-peer-urls https://${INTERNAL_IP}:2380 \\
  --listen-peer-urls https://${INTERNAL_IP}:2380 \\
  --listen-client-urls https://${INTERNAL_IP}:2379,https://127.0.0.1:2379 \\
  --advertise-client-urls https://${INTERNAL_IP}:2379 \\
  --initial-cluster-token etcd-cluster \\
  --initial-cluster ${INITIAL_CLUSTER} \\
  --initial-cluster-state new \\
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
```
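Because 3.5 bakes the local hostname and IP into the unit file, it has to be executed on each of the three masters. A small convenience sketch (assuming the m1–m3 aliases used above) to enable the service everywhere once the unit files exist:

```bash
# Enable (but not yet start) etcd at boot on all three masters
for i in m1 m2 m3; do
  ssh root@$i "systemctl daemon-reload && systemctl enable etcd"
done
```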
#### 3.6 Start etcd

On first start etcd waits for a quorum of peers, so start it on all three masters at roughly the same time.

```bash
systemctl daemon-reload
systemctl start etcd
```

#### 3.7 Test the etcd cluster

```bash
ETCDCTL_API=3 etcdctl \
  --cacert=/etc/etcd/ssl/ca.pem \
  --cert=/etc/etcd/ssl/etcd.pem \
  --key=/etc/etcd/ssl/etcd-key.pem \
  --endpoints="https://192.168.11.101:2379,https://192.168.11.102:2379,https://192.168.11.103:2379" \
  endpoint status --write-out='table'

ETCDCTL_API=3 etcdctl \
  --cacert=/etc/etcd/ssl/ca.pem \
  --cert=/etc/etcd/ssl/etcd.pem \
  --key=/etc/etcd/ssl/etcd-key.pem \
  --endpoints="https://192.168.11.101:2379,https://192.168.11.102:2379,https://192.168.11.103:2379" \
  member list --write-out='table'
```
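Beyond the two tables above, `endpoint health` gives a quick per-member liveness probe with the same flags:

```bash
ETCDCTL_API=3 etcdctl \
  --cacert=/etc/etcd/ssl/ca.pem \
  --cert=/etc/etcd/ssl/etcd.pem \
  --key=/etc/etcd/ssl/etcd-key.pem \
  --endpoints="https://192.168.11.101:2379,https://192.168.11.102:2379,https://192.168.11.103:2379" \
  endpoint health
```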
### 4. Master nodes

> kube-apiserver
> kube-controller-manager
> kube-scheduler
> kubelet
> kube-proxy
>
> etcd
> coreDNS
> flannel

#### 4.1 Create certificates

- Create the CA certificate

```bash
[root@k8s-master-01 ~]# mkdir /opt/cert/k8s
[root@k8s-master-01 ~]# cd /opt/cert/k8s

cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json << EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "ShangHai",
      "ST": "ShangHai"
    }
  ]
}
EOF

[root@k8s-master-01 k8s]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
```

- Generate the apiserver certificate

```bash
cat > server-csr.json << EOF
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.11.101",
    "192.168.11.102",
    "192.168.11.103",
    "192.168.11.104",
    "192.168.11.105",
    "192.168.11.106",
    "192.168.11.107",
    "192.168.11.108",
    "192.168.11.109",
    "10.96.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "ShangHai",
      "ST": "ShangHai"
    }
  ]
}
EOF

[root@k8s-master-01 k8s]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
```

- kube-controller-manager certificate

```bash
cat > kube-controller-manager-csr.json << EOF
{
  "CN": "system:kube-controller-manager",
  "hosts": [
    "127.0.0.1",
    "192.168.11.101",
    "192.168.11.102",
    "192.168.11.103",
    "192.168.11.104",
    "192.168.11.105",
    "192.168.11.106",
    "192.168.11.107",
    "192.168.11.108",
    "192.168.11.109"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:kube-controller-manager",
      "OU": "System"
    }
  ]
}
EOF

[root@k8s-master-01 k8s]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
```

- kube-scheduler certificate

```bash
cat > kube-scheduler-csr.json << EOF
{
  "CN": "system:kube-scheduler",
  "hosts": [
    "127.0.0.1",
    "192.168.11.101",
    "192.168.11.102",
    "192.168.11.103",
    "192.168.11.104",
    "192.168.11.105",
    "192.168.11.106",
    "192.168.11.107",
    "192.168.11.108",
    "192.168.11.109"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:kube-scheduler",
      "OU": "System"
    }
  ]
}
EOF

[root@k8s-master-01 k8s]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
```

- kube-proxy certificate

```bash
cat > kube-proxy-csr.json << EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:kube-proxy",
      "OU": "System"
    }
  ]
}
EOF

[root@k8s-master-01 k8s]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
```

- Administrator certificate

```bash
cat > admin-csr.json << EOF
{
  "CN": "admin",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

[root@k8s-master-01 k8s]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
```

- Distribute the certificates

```bash
mkdir -pv /etc/kubernetes/ssl

cp -p ./{ca*pem,server*pem,kube-controller-manager*pem,kube-scheduler*.pem,kube-proxy*pem,admin*.pem} /etc/kubernetes/ssl

for i in m1 m2 m3; do
  ssh root@$i "mkdir -pv /etc/kubernetes/ssl"
  scp /etc/kubernetes/ssl/* root@$i:/etc/kubernetes/ssl
done
```
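The generated certificates can be sanity-checked at any point; for example, the apiserver certificate must carry the VIP and the cluster service IP in its SANs. A quick look with openssl (assumed installed; `cfssl certinfo -cert server.pem` works as well):

```bash
# Show the SANs and the validity window of the apiserver certificate
openssl x509 -in /etc/kubernetes/ssl/server.pem -noout -text | grep -A1 "Subject Alternative Name"
openssl x509 -in /etc/kubernetes/ssl/server.pem -noout -dates
```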
#### 4.2 Create the kubeconfig files

##### 4.2.1 Create kube-controller-manager.kubeconfig

```bash
export KUBE_APISERVER="https://192.168.11.109:8443"

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-controller-manager.kubeconfig

# Set client credentials
kubectl config set-credentials "kube-controller-manager" \
  --client-certificate=/etc/kubernetes/ssl/kube-controller-manager.pem \
  --client-key=/etc/kubernetes/ssl/kube-controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-controller-manager.kubeconfig

# Set the context (ties the cluster parameters and the user together)
kubectl config set-context default \
  --cluster=kubernetes \
  --user="kube-controller-manager" \
  --kubeconfig=kube-controller-manager.kubeconfig

# Make it the default context
kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig
```

##### 4.2.2 Create kube-scheduler.kubeconfig

```bash
export KUBE_APISERVER="https://192.168.11.109:8443"

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-scheduler.kubeconfig

# Set client credentials
kubectl config set-credentials "kube-scheduler" \
  --client-certificate=/etc/kubernetes/ssl/kube-scheduler.pem \
  --client-key=/etc/kubernetes/ssl/kube-scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-scheduler.kubeconfig

# Set the context (ties the cluster parameters and the user together)
kubectl config set-context default \
  --cluster=kubernetes \
  --user="kube-scheduler" \
  --kubeconfig=kube-scheduler.kubeconfig

# Make it the default context
kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig
```

##### 4.2.3 Create kube-proxy.kubeconfig

```bash
export KUBE_APISERVER="https://192.168.11.109:8443"

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

# Set client credentials
kubectl config set-credentials "kube-proxy" \
  --client-certificate=/etc/kubernetes/ssl/kube-proxy.pem \
  --client-key=/etc/kubernetes/ssl/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

# Set the context (ties the cluster parameters and the user together)
kubectl config set-context default \
  --cluster=kubernetes \
  --user="kube-proxy" \
  --kubeconfig=kube-proxy.kubeconfig

# Make it the default context
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
```

##### 4.2.4 Create admin.kubeconfig

```bash
export KUBE_APISERVER="https://192.168.11.109:8443"

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=admin.kubeconfig

# Set client credentials
kubectl config set-credentials "admin" \
  --client-certificate=/etc/kubernetes/ssl/admin.pem \
  --client-key=/etc/kubernetes/ssl/admin-key.pem \
  --embed-certs=true \
  --kubeconfig=admin.kubeconfig

# Set the context (ties the cluster parameters and the user together)
kubectl config set-context default \
  --cluster=kubernetes \
  --user="admin" \
  --kubeconfig=admin.kubeconfig

# Make it the default context
kubectl config use-context default --kubeconfig=admin.kubeconfig
```

##### 4.2.5 Create the TLS Bootstrapping configuration file

```bash
# The token must be the one generated on your own machine
TLS_BOOTSTRAPPING_TOKEN=`head -c 16 /dev/urandom | od -An -t x | tr -d ' '`

cat > token.csv << EOF
${TLS_BOOTSTRAPPING_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

export KUBE_APISERVER="https://192.168.11.109:8443"

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kubelet-bootstrap.kubeconfig

# Set client credentials; the token here must match the one written to token.csv above
kubectl config set-credentials "kubelet-bootstrap" \
  --token=${TLS_BOOTSTRAPPING_TOKEN} \
  --kubeconfig=kubelet-bootstrap.kubeconfig

# Set the context (ties the cluster parameters and the user together)
kubectl config set-context default \
  --cluster=kubernetes \
  --user="kubelet-bootstrap" \
  --kubeconfig=kubelet-bootstrap.kubeconfig

# Make it the default context
kubectl config use-context default --kubeconfig=kubelet-bootstrap.kubeconfig
```

##### 4.2.6 Distribute the configuration files

```bash
for i in m1 m2 m3; do
  ssh root@$i "mkdir -p /etc/kubernetes/cfg"
  scp token.csv kube-scheduler.kubeconfig kube-controller-manager.kubeconfig admin.kubeconfig kube-proxy.kubeconfig kubelet-bootstrap.kubeconfig root@$i:/etc/kubernetes/cfg
done
```
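Each kubeconfig can be inspected before it ships; `kubectl config view` prints the cluster, user, and context it contains (embedded certificate data shows as REDACTED):

```bash
# Sanity-check the admin kubeconfig (any of the files works the same way)
kubectl config view --kubeconfig=admin.kubeconfig
```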
#### 4.3 Install kube-apiserver

##### 4.3.1 Download the server binaries

```bash
# Download
wget https://dl.k8s.io/v1.18.8/kubernetes-server-linux-amd64.tar.gz

# Unpack
[root@k8s-master-01 data]# tar -xf kubernetes-server-linux-amd64.tar.gz

# The binaries live in kubernetes/server/bin inside the tarball
[root@k8s-master-01 data]# cd kubernetes/server/bin

# Distribute
for i in m1 m2 m3; do
  scp kube-apiserver kube-controller-manager kube-scheduler kubectl kube-proxy kubelet root@$i:/usr/local/bin/
done
```

##### 4.3.2 Write the apiserver configuration

```bash
KUBE_APISERVER_IP=`hostname -i`

cat > /etc/kubernetes/cfg/kube-apiserver.conf << EOF
KUBE_APISERVER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/var/log/kubernetes \\
--advertise-address=${KUBE_APISERVER_IP} \\
--default-not-ready-toleration-seconds=360 \\
--default-unreachable-toleration-seconds=360 \\
--max-mutating-requests-inflight=2000 \\
--max-requests-inflight=4000 \\
--default-watch-cache-size=200 \\
--delete-collection-workers=2 \\
--bind-address=0.0.0.0 \\
--secure-port=6443 \\
--allow-privileged=true \\
--service-cluster-ip-range=10.96.0.0/16 \\
--service-node-port-range=30000-60000 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--enable-bootstrap-token-auth=true \\
--token-auth-file=/etc/kubernetes/cfg/token.csv \\
--kubelet-client-certificate=/etc/kubernetes/ssl/server.pem \\
--kubelet-client-key=/etc/kubernetes/ssl/server-key.pem \\
--tls-cert-file=/etc/kubernetes/ssl/server.pem \\
--tls-private-key-file=/etc/kubernetes/ssl/server-key.pem \\
--client-ca-file=/etc/kubernetes/ssl/ca.pem \\
--service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/var/log/kubernetes/k8s-audit.log \\
--etcd-servers=https://192.168.11.101:2379,https://192.168.11.102:2379,https://192.168.11.103:2379 \\
--etcd-cafile=/etc/etcd/ssl/ca.pem \\
--etcd-certfile=/etc/etcd/ssl/etcd.pem \\
--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem"
EOF
```

##### 4.3.3 Register the apiserver service

```bash
cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
EnvironmentFile=/etc/kubernetes/cfg/kube-apiserver.conf
ExecStart=/usr/local/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure
RestartSec=10
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
```

##### 4.3.4 Start the apiserver service

```bash
# Create the kubernetes log directory
mkdir -p /var/log/kubernetes/

systemctl daemon-reload
systemctl enable --now kube-apiserver
```
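A quick liveness probe once the service is up. This sketch assumes v1.18's default RBAC binding (system:public-info-viewer) that leaves /healthz readable without credentials; if that has been tightened, present the admin client certificate instead:

```bash
curl -k https://127.0.0.1:6443/healthz
# expected output: ok
```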
#### 4.4 Set up control-plane high availability

##### 4.4.1 Install the HA software

```bash
yum install -y keepalived haproxy
```

##### 4.4.2 Configure the HA software

```bash
cat > /etc/haproxy/haproxy.cfg << EOF
global
  maxconn 2000
  ulimit-n 16384
  log 127.0.0.1 local0 err
  stats timeout 30s

defaults
  log global
  mode http
  option httplog
  timeout connect 5000
  timeout client 50000
  timeout server 50000
  timeout http-request 15s
  timeout http-keep-alive 15s

frontend monitor-in
  bind *:33305
  mode http
  option httplog
  monitor-uri /monitor

listen stats
  bind *:8006
  mode http
  stats enable
  stats hide-version
  stats uri /stats
  stats refresh 30s
  stats realm Haproxy\ Statistics
  stats auth admin:admin

frontend k8s-master
  bind 0.0.0.0:8443
  bind 127.0.0.1:8443
  mode tcp
  option tcplog
  tcp-request inspect-delay 5s
  default_backend k8s-master

backend k8s-master
  mode tcp
  option tcplog
  option tcp-check
  balance roundrobin
  default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
  server k8s-master-01 192.168.11.101:6443 check inter 2000 fall 2 rise 2 weight 100
  server k8s-master-02 192.168.11.102:6443 check inter 2000 fall 2 rise 2 weight 100
  server k8s-master-03 192.168.11.103:6443 check inter 2000 fall 2 rise 2 weight 100
EOF

mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf_bak
cd /etc/keepalived

KUBERNETES_HOSTNAME=`hostname`
KUBERNETES_IP=`hostname -i`

cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived
global_defs {
    router_id ${KUBERNETES_HOSTNAME}
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    mcast_src_ip ${KUBERNETES_IP}
    virtual_router_id 51
    priority 80
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.11.109
    }
}
EOF
```

##### 4.4.3 Start

```bash
[root@k8s-master-01 keepalived]# systemctl enable --now keepalived haproxy
```
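A sketch to confirm the failover pieces: the VIP should sit on exactly one master, and haproxy's 8443 frontend should answer through it (eth0 as in the keepalived config above; same unauthenticated /healthz assumption as in 4.3):

```bash
# Which master currently holds the VIP?
ip addr show eth0 | grep 192.168.11.109

# Does the VIP + haproxy path reach an apiserver?
curl -k https://192.168.11.109:8443/healthz
```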
#### 4.5 Install kube-controller-manager

##### 4.5.1 Write the configuration

```bash
cat > /etc/kubernetes/cfg/kube-controller-manager.conf << EOF
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/var/log/kubernetes \\
--leader-elect=true \\
--cluster-name=kubernetes \\
--bind-address=127.0.0.1 \\
--allocate-node-cidrs=true \\
--cluster-cidr=10.244.0.0/12 \\
--service-cluster-ip-range=10.96.0.0/16 \\
--cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \\
--root-ca-file=/etc/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \\
--kubeconfig=/etc/kubernetes/cfg/kube-controller-manager.kubeconfig \\
--tls-cert-file=/etc/kubernetes/ssl/kube-controller-manager.pem \\
--tls-private-key-file=/etc/kubernetes/ssl/kube-controller-manager-key.pem \\
--experimental-cluster-signing-duration=87600h0m0s \\
--controllers=*,bootstrapsigner,tokencleaner \\
--use-service-account-credentials=true \\
--node-monitor-grace-period=10s \\
--horizontal-pod-autoscaler-use-rest-clients=true"
EOF
```

##### 4.5.2 Register the service

```bash
cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
EnvironmentFile=/etc/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/usr/local/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
```

##### 4.5.3 Start

```bash
systemctl daemon-reload
systemctl enable --now kube-controller-manager
systemctl status kube-controller-manager
```

#### 4.6 Install kube-scheduler

##### 4.6.1 Write the configuration

```bash
cat > /etc/kubernetes/cfg/kube-scheduler.conf << EOF
KUBE_SCHEDULER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/var/log/kubernetes \\
--kubeconfig=/etc/kubernetes/cfg/kube-scheduler.kubeconfig \\
--leader-elect=true \\
--master=http://127.0.0.1:8080 \\
--bind-address=127.0.0.1"
EOF
```

##### 4.6.2 Register the service

```bash
cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
EnvironmentFile=/etc/kubernetes/cfg/kube-scheduler.conf
ExecStart=/usr/local/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
```

##### 4.6.3 Start

```bash
# Run on all three master nodes
systemctl daemon-reload
systemctl enable --now kube-scheduler
```

#### 4.7 Install kube-proxy

##### 4.7.1 Write the configuration

```bash
cat > /etc/kubernetes/cfg/kube-proxy.conf << EOF
KUBE_PROXY_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/var/log/kubernetes \\
--config=/etc/kubernetes/cfg/kube-proxy-config.yml"
EOF

KUBE_APISERVER_IP=`hostname -i`
KUBE_APISERVER_HOSTNAME=`hostname`

cat > /etc/kubernetes/cfg/kube-proxy-config.yml << EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: ${KUBE_APISERVER_IP}
healthzBindAddress: ${KUBE_APISERVER_IP}:10256
metricsBindAddress: ${KUBE_APISERVER_IP}:10249
clientConnection:
  burst: 200
  kubeconfig: /etc/kubernetes/cfg/kube-proxy.kubeconfig
  qps: 100
hostnameOverride: ${KUBE_APISERVER_HOSTNAME}
# pod network CIDR (matches --cluster-cidr in 4.5.1 and the flannel Network in 4.11.5)
clusterCIDR: 10.244.0.0/12
enableProfiling: true
mode: "ipvs"
iptables:
  masqueradeAll: false
ipvs:
  scheduler: rr
  excludeCIDRs: []
EOF
```

##### 4.7.2 Register the service

```bash
cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=/etc/kubernetes/cfg/kube-proxy.conf
ExecStart=/usr/local/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure
RestartSec=10
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
```

##### 4.7.3 Start

```bash
systemctl daemon-reload; systemctl enable --now kube-proxy; systemctl status kube-proxy
```
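With `mode: "ipvs"`, the virtual-server table kube-proxy programs can be listed directly. A sketch assuming the ipvsadm package and loaded ip_vs kernel modules (neither is covered above):

```bash
yum install -y ipvsadm
# List the IPVS services and real servers kube-proxy created
ipvsadm -Ln
```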
#### 4.8 Install kubelet

##### 4.8.1 Set up TLS bootstrapping

```bash
kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap
```

##### 4.8.2 Write the kubelet configuration

```bash
KUBE_HOSTNAME=`hostname`
KUBE_HOSTNAME_IP=`hostname -i`

cat > /etc/kubernetes/cfg/kubelet.conf << EOF
KUBELET_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/var/log/kubernetes \\
--hostname-override=${KUBE_HOSTNAME} \\
--container-runtime=docker \\
--kubeconfig=/etc/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/etc/kubernetes/cfg/kubelet-bootstrap.kubeconfig \\
--config=/etc/kubernetes/cfg/kubelet-config.yml \\
--cert-dir=/etc/kubernetes/ssl \\
--image-pull-progress-deadline=15m \\
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/k8sos/pause:3.2"
EOF

cat > /etc/kubernetes/cfg/kubelet-config.yml << EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: ${KUBE_HOSTNAME_IP}
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
  - 10.96.0.2
clusterDomain: cluster.local
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/ssl/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
EOF
```

##### 4.8.3 Register the kubelet service

```bash
cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service

[Service]
EnvironmentFile=/etc/kubernetes/cfg/kubelet.conf
ExecStart=/usr/local/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
RestartSec=10
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
```

##### 4.8.4 Start the kubelet

```bash
systemctl daemon-reload; systemctl enable --now kubelet; systemctl status kubelet.service
```

#### 4.9 Join the master nodes to the cluster

##### 4.9.1 Check the apiserver status

```bash
[root@k8s-master-01 keepalived]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
```

##### 4.9.2 List the pending join requests

```bash
[root@k8s-master-01 keepalived]# kubectl get csr
```

##### 4.9.3 Approve them

```bash
kubectl certificate approve `kubectl get csr | grep "Pending" | awk '{print $1}'`
```

##### 4.9.4 Check

```bash
[root@k8s-master-01 keepalived]# kubectl get nodes
NAME            STATUS     ROLES    AGE   VERSION
k8s-master-01   NotReady   <none>   2s    v1.18.8
k8s-master-03   NotReady   <none>   2s    v1.18.8
```

##### 4.9.5 Create a super-admin binding

```bash
kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=kubernetes
```

#### 4.10 Install DNS

##### 4.10.1 Download and deploy coreDNS

```bash
# Source: https://github.com/coredns/deployment/tree/coredns-1.14.0
[root@k8s-master-01 kubernetes]# pwd
/root/deployment-coredns-1.14.0/kubernetes

# Deploy
[root@k8s-master-01 kubernetes]# ./deploy.sh -i 10.96.0.2 -s | kubectl apply -f -
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created

# Verify
[root@k8s-master-01 kubernetes]# kubectl get pods -A
NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE
kube-system   coredns-8686db44f5-xlw46   1/1     Running   0          73s
```

#### 4.11 Install the flannel network plugin

##### 4.11.1 Download flannel

```bash
[root@kubernetes-master-01 /opt]# wget https://github.com/coreos/flannel/releases/download/v0.13.1-rc1/flannel-v0.13.1-rc1-linux-amd64.tar.gz
```

##### 4.11.2 Unpack

```bash
[root@k8s-master-01 ~]# tar -xf flannel-v0.13.1-rc1-linux-amd64.tar.gz
```

##### 4.11.3 Distribute flannel

```bash
for i in m1 m2 m3 n1 n2 n3 n4 n5; do
  scp flanneld mk-docker-opts.sh root@$i:/usr/local/bin
done
```

##### 4.11.4 Patch the Docker unit file

```bash
# Comment out the stock ExecStart and inject flannel's DOCKER_NETWORK_OPTIONS plus
# /run/flannel/subnet.env on every machine. The loop already includes 101 (this
# master), so the seds must not additionally be run locally beforehand.
for i in 101 102 103 104 105 106 107 108
do
  ssh root@192.168.11.$i "sed -i '/ExecStart/s/\(.*\)/#\1/' /usr/lib/systemd/system/docker.service"
  ssh root@192.168.11.$i "sed -i '/ExecReload/a ExecStart=/usr/bin/dockerd \$DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock' /usr/lib/systemd/system/docker.service"
  ssh root@192.168.11.$i "sed -i '/ExecReload/a EnvironmentFile=-/run/flannel/subnet.env' /usr/lib/systemd/system/docker.service"
done
```

##### 4.11.5 Write the flannel configuration into etcd

```bash
# flanneld reads this through etcd's v2 API, so force etcdctl into v2 mode
ETCDCTL_API=2 etcdctl \
  --ca-file=/etc/etcd/ssl/ca.pem \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  --endpoints="https://192.168.11.101:2379,https://192.168.11.102:2379,https://192.168.11.103:2379" \
  mk /coreos.com/network/config '{"Network":"10.244.0.0/12", "SubnetLen": 21, "Backend": {"Type": "vxlan", "DirectRouting": true}}'
```

##### 4.11.6 Register the flannel service

```bash
cat > /usr/lib/systemd/system/flanneld.service << EOF
[Unit]
Description=Flanneld address
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
ExecStart=/usr/local/bin/flanneld \\
  -etcd-cafile=/etc/etcd/ssl/ca.pem \\
  -etcd-certfile=/etc/etcd/ssl/etcd.pem \\
  -etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \\
  -etcd-endpoints=https://192.168.11.101:2379,https://192.168.11.102:2379,https://192.168.11.103:2379 \\
  -etcd-prefix=/coreos.com/network \\
  -ip-masq
ExecStartPost=/usr/local/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=always
RestartSec=5
StartLimitInterval=0

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
EOF
```

##### 4.11.7 Start flannel

```bash
systemctl daemon-reload
systemctl start flanneld
systemctl restart docker
```
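Once flanneld is running, each machine should hold a subnet lease and Docker should have picked it up. A quick check (flannel.1 is the interface the vxlan backend creates):

```bash
# Subnet handed to this node; mk-docker-opts.sh turns this into DOCKER_NETWORK_OPTIONS
cat /run/flannel/subnet.env

# The vxlan device flannel created
ip -4 addr show flannel.1
```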
#### 4.12 Test the cluster

```bash
[root@k8s-master-01 ~]# kubectl run test -it --rm --image=busybox:1.28.3
If you don't see a command prompt, try pressing enter.
/ # nslookup kubernetes
Server:    10.96.0.2
Address 1: 10.96.0.2 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
```

### 5. Worker nodes

> kubelet
> kube-proxy
> flannel

#### 5.1 Deploy the kubelet service

##### 5.1.1 Download the kubelet package

```bash
# Same server tarball as in 4.3.1
[root@k8s-master-01 data]# wget https://dl.k8s.io/v1.18.8/kubernetes-server-linux-amd64.tar.gz
```

##### 5.1.2 Distribute the binaries

```bash
[root@k8s-master-01 bin]# for i in n1 n2 n3 n4 n5; do scp kubelet kube-proxy root@$i:/usr/local/bin/; done
```

##### 5.1.3 Distribute the certificates

```bash
[root@k8s-master-01 bin]# cd /etc/kubernetes/ssl/
[root@k8s-master-01 ssl]# for i in n1 n2 n3 n4 n5; do
  ssh root@$i "mkdir -pv /etc/kubernetes/ssl"
  scp -pr ./{ca*.pem,admin*pem,kube-proxy*pem} root@$i:/etc/kubernetes/ssl
done
```

##### 5.1.4 Copy the kubelet configuration

```bash
[root@k8s-master-01 kubernetes]# for ip in n1 n2 n3 n4 n5; do
  ssh root@${ip} "mkdir -pv /var/log/kubernetes"
  ssh root@${ip} "mkdir -pv /etc/kubernetes/cfg/"
  scp /etc/kubernetes/cfg/{kubelet-config.yml,kubelet.conf} root@${ip}:/etc/kubernetes/cfg
  scp /usr/lib/systemd/system/kubelet.service root@${ip}:/usr/lib/systemd/system
done

# Then fix the hostname and IP in each copy — every pair of seds runs on its own node:
[root@k8s-node-01 ~]# sed -i 's#master-01#node-01#g' /etc/kubernetes/cfg/kubelet.conf
[root@k8s-node-01 ~]# sed -i 's#192.168.11.101#192.168.11.104#g' /etc/kubernetes/cfg/kubelet-config.yml

[root@k8s-node-02 ~]# sed -i 's#master-01#node-02#g' /etc/kubernetes/cfg/kubelet.conf
[root@k8s-node-02 ~]# sed -i 's#192.168.11.101#192.168.11.105#g' /etc/kubernetes/cfg/kubelet-config.yml

[root@k8s-node-03 ~]# sed -i 's#master-01#node-03#g' /etc/kubernetes/cfg/kubelet.conf
[root@k8s-node-03 ~]# sed -i 's#192.168.11.101#192.168.11.106#g' /etc/kubernetes/cfg/kubelet-config.yml

[root@k8s-node-04 ~]# sed -i 's#master-01#node-04#g' /etc/kubernetes/cfg/kubelet.conf
[root@k8s-node-04 ~]# sed -i 's#192.168.11.101#192.168.11.107#g' /etc/kubernetes/cfg/kubelet-config.yml

[root@k8s-node-05 ~]# sed -i 's#master-01#node-05#g' /etc/kubernetes/cfg/kubelet.conf
[root@k8s-node-05 ~]# sed -i 's#192.168.11.101#192.168.11.108#g' /etc/kubernetes/cfg/kubelet-config.yml
```

##### 5.1.5 Start the kubelet service

```bash
systemctl daemon-reload; systemctl enable --now kubelet; systemctl status kubelet.service
```

#### 5.2 Deploy kube-proxy

##### 5.2.1 Copy the configuration

```bash
[root@k8s-master-01 ssl]# for ip in n1 n2 n3 n4 n5; do
  scp /etc/kubernetes/cfg/{kube-proxy-config.yml,kube-proxy.conf} root@${ip}:/etc/kubernetes/cfg/
  scp /usr/lib/systemd/system/kube-proxy.service root@${ip}:/usr/lib/systemd/system/
done

[root@k8s-master-01 ~]# for i in n1 n2 n3 n4 n5; do
  scp /etc/kubernetes/cfg/kube-proxy.kubeconfig root@$i:/etc/kubernetes/cfg/kube-proxy.kubeconfig
  scp /etc/kubernetes/cfg/kubelet-bootstrap.kubeconfig root@$i:/etc/kubernetes/cfg/kubelet-bootstrap.kubeconfig
done
```

##### 5.2.2 Adjust the configuration per node

```bash
# As in 5.1.4, each pair of seds runs on its own node:

# on k8s-node-01
sed -i 's#192.168.11.101#192.168.11.104#g' /etc/kubernetes/cfg/kube-proxy-config.yml
sed -i 's#master-01#node-01#g' /etc/kubernetes/cfg/kube-proxy-config.yml

# on k8s-node-02
sed -i 's#192.168.11.101#192.168.11.105#g' /etc/kubernetes/cfg/kube-proxy-config.yml
sed -i 's#master-01#node-02#g' /etc/kubernetes/cfg/kube-proxy-config.yml

# on k8s-node-03
sed -i 's#192.168.11.101#192.168.11.106#g' /etc/kubernetes/cfg/kube-proxy-config.yml
sed -i 's#master-01#node-03#g' /etc/kubernetes/cfg/kube-proxy-config.yml

# on k8s-node-04
sed -i 's#192.168.11.101#192.168.11.107#g' /etc/kubernetes/cfg/kube-proxy-config.yml
sed -i 's#master-01#node-04#g' /etc/kubernetes/cfg/kube-proxy-config.yml

# on k8s-node-05
sed -i 's#192.168.11.101#192.168.11.108#g' /etc/kubernetes/cfg/kube-proxy-config.yml
sed -i 's#master-01#node-05#g' /etc/kubernetes/cfg/kube-proxy-config.yml
```
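The per-node substitutions in 5.1.4 and 5.2.2 can also be driven from the master in one loop. A sketch assuming the n1–n5 aliases and that node N sits at 192.168.11.(103+N):

```bash
for n in 1 2 3 4 5; do
  ip=$((103 + n))
  # hostname fixups in both the kubelet and kube-proxy configs
  ssh root@n$n "sed -i 's#master-01#node-0$n#g' /etc/kubernetes/cfg/kubelet.conf /etc/kubernetes/cfg/kube-proxy-config.yml"
  # IP fixups in both files that embed the node address
  ssh root@n$n "sed -i 's#192.168.11.101#192.168.11.$ip#g' /etc/kubernetes/cfg/kubelet-config.yml /etc/kubernetes/cfg/kube-proxy-config.yml"
done
```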
##### 5.2.3 Start the kube-proxy service

```bash
[root@k8s-node-01 ~]# systemctl daemon-reload; systemctl enable --now kube-proxy; systemctl status kube-proxy
```

#### 5.3 Deploy the flannel network plugin

##### 5.3.1 Distribute the plugin binaries

```bash
[root@k8s-master-01 ~]# for i in n1 n2 n3 n4 n5; do scp flanneld mk-docker-opts.sh root@$i:/usr/local/bin; done
```

##### 5.3.2 Sync the etcd certificates

```bash
[root@k8s-master-01 ~]# for i in n1 n2 n3 n4 n5; do
  ssh root@$i "mkdir -pv /etc/etcd/ssl"
  scp -p /etc/etcd/ssl/*.pem root@$i:/etc/etcd/ssl
done
```

##### 5.3.3 Distribute the flannel unit file

```bash
[root@k8s-master-01 ~]# for i in n1 n2 n3 n4 n5; do scp /usr/lib/systemd/system/flanneld.service root@$i:/usr/lib/systemd/system; done
```

##### 5.3.4 Distribute the patched Docker unit file

```bash
[root@k8s-master-01 ~]# for ip in n1 n2 n3 n4 n5; do scp /usr/lib/systemd/system/docker.service root@${ip}:/usr/lib/systemd/system; done
```

##### 5.3.5 Restart flannel and docker (on each node)

```bash
systemctl daemon-reload
systemctl restart flanneld
systemctl restart docker
```

#### 5.4 Join the cluster

##### 5.4.1 List the pending requests

```bash
[root@k8s-master-01 ~]# kubectl get csr
```

##### 5.4.2 Approve them

```bash
[root@k8s-master-01 ~]# kubectl certificate approve `kubectl get csr | grep "Pending" | awk '{print $1}'`
```

##### 5.4.3 Check the result

```bash
[root@k8s-master-01 ~]# kubectl get nodes
NAME            STATUS     ROLES    AGE   VERSION
k8s-master-01   Ready      <none>   21h   v1.18.8
k8s-master-02   Ready      <none>   13h   v1.18.8
k8s-master-03   Ready      <none>   21h   v1.18.8
k8s-node-01     NotReady   <none>   4s    v1.18.8
k8s-node-02     NotReady   <none>   4s    v1.18.8
k8s-node-03     NotReady   <none>   4s    v1.18.8
k8s-node-04     NotReady   <none>   4s    v1.18.8
k8s-node-05     NotReady   <none>   4s    v1.18.8
```

### 6. Enable kubectl command completion

```bash
yum install -y bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
```

### 7. Assign cluster roles

```bash
# Pattern:
#   kubectl label nodes <node-name> node-role.kubernetes.io/master=<node-name>
#   kubectl label nodes <node-name> node-role.kubernetes.io/worker=<node-name>

[root@k8s-master-01 ~]# for i in 1 2 3; do kubectl label nodes k8s-master-0$i node-role.kubernetes.io/master=k8s-master-0$i; done
[root@k8s-master-01 ~]# for i in 1 2 3 4 5; do kubectl label nodes k8s-node-0$i node-role.kubernetes.io/worker=k8s-node-0$i; done
```

### 8. Taint the master nodes

```bash
# Pattern:
#   kubectl taint nodes <node-name> node-role.kubernetes.io/master=<node-name>:NoSchedule --overwrite

[root@k8s-master-01 ~]# for i in 1 2 3; do kubectl taint nodes k8s-master-0$i node-role.kubernetes.io/master=k8s-master-0$i:NoSchedule --overwrite; done

# Verify
[root@k8s-master-01 ~]# kubectl describe nodes k8s-master-01
```

That's a wrap.