Deploying a Kubernetes cluster with kubeadm

I. System environment preparation

1. Environment overview

OS          Role             IP              Components                          K8s version
CentOS 7.9  kubeadm-master1  192.168.100.41  docker, kubeadm, kubelet, kubectl    v1.20.0
CentOS 7.9  kubeadm-master2  192.168.100.42  docker, kubeadm, kubelet, kubectl    v1.20.0
CentOS 7.9  kubeadm-master3  192.168.100.43  docker, kubeadm, kubelet, kubectl    v1.20.0
CentOS 7.9  kubeadm-node1    192.168.100.44  docker, kubeadm, kubelet, kubectl    v1.20.0
CentOS 7.9  kubeadm-node2    192.168.100.45  docker, kubeadm, kubelet, kubectl    v1.20.0
            VIP              192.168.100.46  VIP used when initializing the masters with kubeadm

2. Initial environment configuration

2.1 Disable the firewall

systemctl stop firewalld

systemctl disable --now firewalld

2.2 Disable SELinux

# Disable permanently (takes effect after reboot)
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config

# Disable temporarily (current boot)
setenforce 0

2.3 Disable swap

sed -ri 's/.*swap.*/#&/' /etc/fstab

swapoff -a && sysctl -w vm.swappiness=0

cat /etc/fstab


# Parameter notes:

# -ri: edits the file in place using extended regular expressions (-r enables extended regex, -i modifies the file directly).

# 's/.*swap.*/#&/': sed expression that finds every line in /etc/fstab containing "swap" and prepends # to comment it out.

# /etc/fstab: the file system table, which controls which filesystems (including swap) are mounted at boot.

# swapoff -a: turns off all currently enabled swap devices.

# sysctl -w vm.swappiness=0: sets vm.swappiness to 0 so the kernel avoids swapping while physical memory is available.
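
A quick way to confirm swap is fully disabled (just a verification step; free should report 0 swap and swapon should print nothing):

free -h
swapon --show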

2.4 Time synchronization

# Add a cron job that syncs the time every 5 minutes
echo "*/5 * * * *    /usr/sbin/ntpdate ntp1.aliyun.com" > /var/spool/cron/root

# Sync the time now
/usr/sbin/ntpdate ntp.aliyun.com

# Write the system time to the hardware clock
hwclock --systohc

2.5 Pass bridged IPv4 traffic to the iptables chains

# Before proceeding, make sure the br_netfilter module is loaded
lsmod | grep br_netfilter
modprobe br_netfilter

# Then write the sysctl settings
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-iptables  = 1
EOF

# Apply the settings
sysctl --system 
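
You can verify the three values actually took effect (each should print 1):

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward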

2.6 Switch CentOS 7 to the Aliyun yum mirrors

# Back up the official CentOS repo file
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup

# Download the Aliyun CentOS repo file
wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo

# Configure the EPEL repo
wget -O /etc/yum.repos.d/epel.repo https://mirrors.aliyun.com/repo/epel-7.repo

# Clear the yum cache
yum clean all

# Rebuild the cache
yum makecache

2.7 Configure the Docker and Kubernetes repos

# Download the Docker CE repo file
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

# Configure the Kubernetes repo
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Refresh the yum cache
yum makecache fast
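
An optional check that both new repos are enabled before installing anything:

yum repolist enabled | grep -Ei "docker-ce|kubernetes"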

2.8 Set hostnames on all nodes and configure /etc/hosts

# Set the hostname on each node (use the names from the environment table above)
hostnamectl set-hostname <hostname>

# Add hosts entries
cat > /etc/hosts << EOF
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.100.41 kubeadm-master1
192.168.100.42 kubeadm-master2
192.168.100.43 kubeadm-master3
192.168.100.44 kubeadm-node1
192.168.100.45 kubeadm-node2
EOF

2.9 Configure passwordless SSH login

yum install -y sshpass

ssh-keygen -f /root/.ssh/id_rsa -P ''

export IP="192.168.100.41 192.168.100.42 192.168.100.43 192.168.100.44 192.168.100.45"
export SSHPASS=086530
for HOST in $IP;do
     sshpass -e ssh-copy-id -o StrictHostKeyChecking=no $HOST
done

# This script installs the sshpass tool and uses it to copy this machine's SSH public key to several remote hosts, enabling SSH logins without typing a password.

# Details:

# 1. `yum install -y sshpass`: installs sshpass via the package manager so the sshpass command is available.

# 2. `ssh-keygen -f /root/.ssh/id_rsa -P ''`: generates an SSH key pair. The private key id_rsa and public key id_rsa.pub are created under /root/.ssh with no passphrase (-P '' is empty), so ssh-copy-id can run non-interactively.

# 3. `export IP="192.168.100.41 192.168.100.42 192.168.100.43 192.168.100.44 192.168.100.45"`: sets the IP environment variable to a space-separated list of the remote hosts that should receive the public key.

# 4. `export SSHPASS=086530`: sets the SSHPASS environment variable to the SSH password sshpass should use (here "086530"), so sshpass can log in automatically.

# 5. `for HOST in $IP;do`: loops over each IP address in $IP, assigning the current address to HOST.

# 6. `sshpass -e ssh-copy-id -o StrictHostKeyChecking=no $HOST`: copies the local public key to the remote host. -e tells sshpass to read the password from the SSHPASS environment variable, and -o StrictHostKeyChecking=no skips the interactive host key confirmation.

# In short, this distributes the local SSH public key to all the listed hosts so you can SSH to them without entering a password.
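
A quick check that passwordless login now works on every host (it should print each hostname without asking for a password):

for HOST in $IP; do
    ssh -o BatchMode=yes root@$HOST hostname
done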

II. Install docker, kubeadm, kubelet, and kubectl on all nodes

The Kubernetes components are installed from yum packages; a specific version (v1.20.0) is pinned below.

Kubernetes needs a container runtime (CRI); this guide uses Docker as the container runtime.

Container runtime installation reference: https://kubernetes.io/zh/docs/setup/production-environment/container-runtimes/

Related downloads:

Link: https://pan.baidu.com/s/1DB95Izwn54u8Za4tjNBdWA
Extraction code: 4yxs

# List the available kubeadm versions
yum list kubeadm --showduplicates | sort -r

# Install the pinned versions
yum install -y containerd.io-1.2.13 docker-ce-19.03.11 docker-ce-cli-19.03.11 kubelet-1.20.0 kubeadm-1.20.0 kubectl-1.20.0 kubernetes-cni-0.8.6-0.x86_64

# Start and enable Docker
systemctl daemon-reload && systemctl restart docker && systemctl enable docker

# Configure a registry mirror and set the cgroup driver to systemd
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

# Restart Docker
systemctl restart docker

# Verify the cgroup driver is now systemd
docker info | grep "Cgroup Driver"

# Enable kubelet at boot
systemctl enable --now kubelet

# Check the installed versions
docker --version
Docker version 19.03.11, build 42e35e61f3

kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.0", GitCommit:"af46c47ce925f4c4ad5cc8d1fca46c7b77d13b38", GitTreeState:"clean", BuildDate:"2020-12-08T17:57:36Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}

III. Set up high availability on the master nodes

Building HA for the masters simply means putting a reverse proxy in front of every kube-apiserver. You could use a cloud SLB or a separate proxy VM; in this guide, nginx (stream upstream) + keepalived is deployed on every master node to reverse-proxy kube-apiserver.

3.1 Enable IPVS support for kube-proxy

#IPVS stands for IP Virtual Server.
#Run the following on all master nodes (the worker nodes will need the same modules once kube-proxy runs in IPVS mode)
yum -y install ipvsadm ipset sysstat conntrack libseccomp

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules

#Check that the IPVS modules are loaded
[root@kubeadm-master1 ~]# lsmod | grep -e ip_vs -e nf_conntrack_ipv4
ip_vs_sh               12688  0 
ip_vs_wrr              12697  0 
ip_vs_rr               12600  0 
ip_vs                 145458  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack_ipv4      19149  7 
nf_defrag_ipv4         12729  1 nf_conntrack_ipv4
nf_conntrack          143411  9 ip_vs,nf_nat,nf_nat_ipv4,nf_nat_ipv6,xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_netlink,nf_conntrack_ipv4,nf_conntrack_ipv6
libcrc32c              12644  4 xfs,ip_vs,nf_nat,nf_conntrack

#You should see ip_vs, ip_vs_rr, ip_vs_wrr, ip_vs_sh, and nf_conntrack_ipv4 loaded successfully

3.2 Deploy nginx and keepalived

#Install nginx, the nginx-all-modules.noarch package (provides the stream module), and keepalived on all master nodes
yum -y install nginx keepalived nginx-all-modules.noarch
systemctl start keepalived && systemctl enable keepalived
systemctl start nginx && systemctl enable nginx

3.3 Configure the nginx upstream reverse proxy

#On all master nodes: show the current config without comments and blank lines
cat /etc/nginx/nginx.conf | grep -vE "(^[ \t]*#|^[ \t]*$)"

#Write the new nginx configuration
cat > /etc/nginx/nginx.conf <<EOF
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
include /usr/share/nginx/modules/*.conf;
events {
    worker_connections 1024;
}
stream {
	log_format proxy '\$remote_addr \$remote_port - [\$time_local] \$status \$protocol '
	'"$upstream_addr" "\$upstream_bytes_sent" "\$upstream_connect_time"' ;
	access_log /var/log/nginx/nginx-proxy.log proxy;
	upstream k8s-apiserver {
		server 192.168.100.41:6443 weight=5 max_fails=3 fail_timeout=30s; 
		server 192.168.100.42:6443 weight=5 max_fails=3 fail_timeout=30s;
		server 192.168.100.43:6443 weight=5 max_fails=3 fail_timeout=30s;
	}
	server {
		listen 7443;
		proxy_connect_timeout 30s;
		proxy_timeout 30s;
		proxy_pass k8s-apiserver;
	}	
}
http {
    log_format  main  '\$remote_addr - \$remote_user [\$time_local] "\$request" '
                      '\$status \$body_bytes_sent "\$http_referer" '
                      '"\$http_user_agent" "\$http_x_forwarded_for"';
    access_log  /var/log/nginx/access.log  main;
    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 4096;
    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;
    include /etc/nginx/conf.d/*.conf;
    server {
        listen       80;
        listen       [::]:80;
        server_name  _;
        root         /usr/share/nginx/html;
        include /etc/nginx/default.d/*.conf;
        error_page 404 /404.html;
        location = /404.html {
        }
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }
    }
}
EOF

#Notes:
# The stream block provides Layer 4 load balancing for the master kube-apiserver components
stream {
......
}

#Port the proxy listens on
listen 7443;

## Upstream list of master apiserver IP:PORT backends
    upstream k8s-apiserver {
	server 192.168.100.41:6443 weight=5 max_fails=3 fail_timeout=30s;
	server 192.168.100.42:6443 weight=5 max_fails=3 fail_timeout=30s;
	server 192.168.100.43:6443 weight=5 max_fails=3 fail_timeout=30s;
    }

#Copy the nginx configuration to master2 and master3
[root@kubeadm-master1 ~]# scp /etc/nginx/nginx.conf 192.168.100.42:/etc/nginx/
nginx.conf                                                           100% 1725     2.1MB/s   00:00    
[root@kubeadm-master1 ~]# scp /etc/nginx/nginx.conf 192.168.100.43:/etc/nginx/
nginx.conf             

#Check the nginx configuration syntax
nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

3.4 Configure keepalived

#Configure keepalived.conf on all master nodes (this is the master1 version; see the notes below for the values to change on master2/master3)
cat > /etc/keepalived/keepalived.conf <<EOF
global_defs {
notification_email {
root@localhost
}
notification_email_from root@k8s.com
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id kubeadm_master1
}

vrrp_script chk_nginx {
script "/etc/keepalived/nginx_check.sh"
interval 2
weight -20
}

vrrp_instance VI_1 {
state MASTER
interface ens33
virtual_router_id 88
advert_int 1
priority 110
authentication {
auth_type PASS
auth_pass 1234abcd
}

track_script {
chk_nginx
}

virtual_ipaddress {
192.168.100.46/24
}
}
EOF

#Create the nginx_check.sh health-check script
cat > /etc/keepalived/nginx_check.sh <<EOF
#!/bin/bash
export LANG="en_US.UTF-8"
if [ ! -f "/run/nginx.pid" ]; then
    /usr/bin/systemctl restart nginx
    sleep 2
    if [ ! -f "/run/nginx.pid" ]; then
       /bin/kill -9 \$(head -n 1 /var/run/keepalived.pid)
    fi
fi
EOF

chmod a+x /etc/keepalived/nginx_check.sh

[root@kubeadm-master1 ~]# scp /etc/keepalived/keepalived.conf 192.168.100.42:/etc/keepalived/
keepalived.conf                                                      100%  472   398.4KB/s   00:00  
[root@kubeadm-master1 ~]# scp /etc/keepalived/nginx_check.sh 192.168.100.42:/etc/keepalived/
nginx_check.sh                                                       100%  228   206.4KB/s   00:00 
  
[root@kubeadm-master1 ~]# scp /etc/keepalived/keepalived.conf 192.168.100.43:/etc/keepalived/
keepalived.conf                                                      100%  474   628.9KB/s   00:00    
[root@kubeadm-master1 ~]# scp /etc/keepalived/nginx_check.sh 192.168.100.43:/etc/keepalived/
nginx_check.sh                                                       100%  228   241.5KB/s   00:00    

#Changes required on master2 and master3:
#1> Change "interface ens33" to the node's actual NIC name
#2> Set router_id on the three nodes to kubeadm_master1, kubeadm_master2, kubeadm_master3 respectively
#3> Set state on the three nodes to MASTER, BACKUP, BACKUP respectively
#4> Set priority on the three nodes to 110, 100, 90 respectively


#Field notes:
router_id kubeadm_master1  #router_id must be different on each machine
script "/etc/keepalived/nginx_check.sh"  ## path of the script that checks the nginx status
interval 2  ## check interval in seconds
weight -20  ## subtract 20 from the priority when the check fails
state MASTER   #set to BACKUP on the other nodes
interface ens33  #NIC name; change it to match your own NIC
virtual_router_id 88 # VRRP virtual router ID; use the same value on every node of this instance, and keep it unique per VRRP instance on the network
priority 110 # priority; set the backup servers to 100 and 90
advert_int 1    # VRRP advertisement (heartbeat) interval, default 1 second
chk_nginx #runs the nginx health check defined above
192.168.100.46/24  #the virtual IP address (VIP)

#Restart nginx and keepalived on all master nodes
systemctl restart nginx && systemctl restart keepalived

#Watch the keepalived logs
journalctl -f -u keepalived

#From any host on the same network, verify the VIP is reachable
ping 192.168.100.46
#Verify that port 7443 on the nginx VIP is reachable
ssh -v -p 7443 192.168.100.46
#This line in the output means the connection works
debug1: Connection established.
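
You can also check which master currently holds the VIP (assuming the NIC is ens33 as configured above); it should normally be the node with the highest keepalived priority:

ip -4 addr show ens33 | grep 192.168.100.46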

IV. Run kubeadm init on master1

4.1 Generate the kubeadm-init.yaml file

#On master1, generate a default configuration file
kubeadm config print init-defaults > kubeadm-init.yaml

#Edit kubeadm-init.yaml; the final version used here is written out below
cat > kubeadm-init.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.100.41
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: kubeadm-master1
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
controlPlaneEndpoint: "192.168.100.46:7443"
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.20.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
EOF

#Notes on the edited fields:
advertiseAddress: 192.168.100.41      #this master's own IP address
name: kubeadm-master1                   #this master's hostname
controlPlaneEndpoint: "192.168.100.46:7443"   #the cluster apiserver endpoint, i.e. the VIP plus the nginx proxy port
imageRepository: registry.aliyuncs.com/google_containers #k8s.gcr.io is hard to reach from China, so the Aliyun mirror is used instead
kubernetesVersion: v1.20.0  #set to the actual Kubernetes version
podSubnet: 10.244.0.0/16  #add the pod network CIDR

---                                                   #appended kube-proxy configuration enabling IPVS mode
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"

4.2 Run kubeadm init

#Initialize the cluster
kubeadm init --config kubeadm-init.yaml

#Example output of the initialization
[root@kubeadm-master1 ~]# kubeadm init --config kubeadm-init.yaml
[init] Using Kubernetes version: v1.20.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubeadm-master1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.100.41 192.168.100.46]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kubeadm-master1 localhost] and IPs [192.168.100.41 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kubeadm-master1 localhost] and IPs [192.168.100.41 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 31.517700 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node kubeadm-master1 as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node kubeadm-master1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 192.168.100.46:7443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:ff0b34df599a5d9dc637df64f056db4f31b3d3eedd0ad0b2bedd17414d146a4e \
    --control-plane 

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.100.46:7443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:ff0b34df599a5d9dc637df64f056db4f31b3d3eedd0ad0b2bedd17414d146a4e 

##You can also pull the images first and then run the initialization
kubeadm config images pull --config kubeadm-init.yaml

Follow the instructions printed in the output:

#After kubeadm init completes, run these locally on master1
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=/etc/kubernetes/admin.conf
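
At this point kubectl should already be able to reach the apiserver through the VIP (the node will stay NotReady until the CNI plugin is installed later):

kubectl cluster-info
kubectl get nodes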

V. Join master2 and master3 to the cluster

5.1 Copy the shared certificates to the other two masters

#On master2 and master3, create the etcd pki directory
mkdir -p /etc/kubernetes/pki/etcd

#From master1, copy the shared certificates to master2 and master3
master="192.168.100.42 192.168.100.43"
for host in ${master}; do
	scp /etc/kubernetes/pki/ca.* $host:/etc/kubernetes/pki/
	scp /etc/kubernetes/pki/sa.* $host:/etc/kubernetes/pki/
	scp /etc/kubernetes/pki/front-proxy-ca.* $host:/etc/kubernetes/pki/
	scp /etc/kubernetes/pki/etcd/ca.* $host:/etc/kubernetes/pki/etcd/
	scp /etc/kubernetes/admin.conf $host:/etc/kubernetes/
done

#Check the result on master2
[root@kubeadm-master2 ~]# tree /etc/kubernetes
/etc/kubernetes
├── admin.conf
├── manifests
└── pki
    ├── ca.crt
    ├── ca.key
    ├── etcd
    │   ├── ca.crt
    │   └── ca.key
    ├── front-proxy-ca.crt
    ├── front-proxy-ca.key
    ├── sa.key
    └── sa.pub

3 directories, 9 files
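
As an alternative to copying the certificates by hand (just a sketch; this guide sticks with the manual copy above), kubeadm can distribute the control-plane certificates itself. Running the phase below on master1 prints a certificate key, which the other masters then pass to their control-plane join command:

kubeadm init phase upload-certs --upload-certs
# on master2/master3, append to the --control-plane join command:
#   --certificate-key <key printed above>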

5.2 Join the other two masters

#Run on master2 and master3 (this is the control-plane join command printed by kubeadm init):
kubeadm join 192.168.100.46:7443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:ff0b34df599a5d9dc637df64f056db4f31b3d3eedd0ad0b2bedd17414d146a4e \
    --control-plane
    
#Then, as instructed by the output:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

5.3 Verify the cluster

#On any master node, check the pod and svc status; the control-plane pods should all be Running (CoreDNS stays Pending until the CNI plugin is installed)
kubectl get pod,svc --all-namespaces -o wide

[root@kubeadm-master1 ~]# kubectl get pod,svc --all-namespaces -o wide
NAMESPACE     NAME                                          READY   STATUS    RESTARTS   AGE     IP               NODE              NOMINATED NODE   READINESS GATES
kube-system   pod/coredns-7f89b7bc75-8lk9k                  0/1     Pending   0          24m     <none>           <none>            <none>           <none>
kube-system   pod/coredns-7f89b7bc75-j4f9g                  0/1     Pending   0          24m     <none>           <none>            <none>           <none>
kube-system   pod/etcd-kubeadm-master1                      1/1     Running   0          24m     192.168.100.41   kubeadm-master1   <none>           <none>
kube-system   pod/etcd-kubeadm-master2                      1/1     Running   0          5m      192.168.100.42   kubeadm-master2   <none>           <none>
kube-system   pod/etcd-kubeadm-master3                      1/1     Running   0          4m4s    192.168.100.43   kubeadm-master3   <none>           <none>
kube-system   pod/kube-apiserver-kubeadm-master1            1/1     Running   0          24m     192.168.100.41   kubeadm-master1   <none>           <none>
kube-system   pod/kube-apiserver-kubeadm-master2            1/1     Running   0          5m4s    192.168.100.42   kubeadm-master2   <none>           <none>
kube-system   pod/kube-apiserver-kubeadm-master3            1/1     Running   0          4m18s   192.168.100.43   kubeadm-master3   <none>           <none>
kube-system   pod/kube-controller-manager-kubeadm-master1   1/1     Running   1          24m     192.168.100.41   kubeadm-master1   <none>           <none>
kube-system   pod/kube-controller-manager-kubeadm-master2   1/1     Running   0          5m3s    192.168.100.42   kubeadm-master2   <none>           <none>
kube-system   pod/kube-controller-manager-kubeadm-master3   1/1     Running   0          4m18s   192.168.100.43   kubeadm-master3   <none>           <none>
kube-system   pod/kube-proxy-g5cxd                          1/1     Running   0          4m19s   192.168.100.43   kubeadm-master3   <none>           <none>
kube-system   pod/kube-proxy-gdckm                          1/1     Running   0          24m     192.168.100.41   kubeadm-master1   <none>           <none>
kube-system   pod/kube-proxy-qdgkh                          1/1     Running   0          5m5s    192.168.100.42   kubeadm-master2   <none>           <none>
kube-system   pod/kube-scheduler-kubeadm-master1            1/1     Running   1          24m     192.168.100.41   kubeadm-master1   <none>           <none>
kube-system   pod/kube-scheduler-kubeadm-master2            1/1     Running   0          5m4s    192.168.100.42   kubeadm-master2   <none>           <none>
kube-system   pod/kube-scheduler-kubeadm-master3            1/1     Running   0          4m18s   192.168.100.43   kubeadm-master3   <none>           <none>

NAMESPACE     NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE   SELECTOR
default       service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP                  24m   <none>
kube-system   service/kube-dns     ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   24m   k8s-app=kube-dns

#Run the following on any master node to verify that all three masters have registered
[root@kubeadm-master1 ~]# kubectl get node
NAME              STATUS     ROLES                  AGE     VERSION
kubeadm-master1   NotReady   control-plane,master   24m     v1.20.0
kubeadm-master2   NotReady   control-plane,master   5m34s   v1.20.0
kubeadm-master3   NotReady   control-plane,master   4m48s   v1.20.0
#The NotReady status above is expected until the CNI network plugin is installed

VI. Install the CNI network plugin

#Download the flannel manifest on one of the master nodes
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

#Apply it
kubectl apply -f kube-flannel.yml
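
You can wait for the flannel DaemonSet to finish rolling out before running the DNS test (the namespace is kube-flannel in the manifest used here; older manifests deploy into kube-system instead):

kubectl -n kube-flannel rollout status ds/kube-flannel-ds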

#Test CoreDNS from inside a pod
kubectl run -it --rm dns-test --image=busybox:1.28.4 sh
/# nslookup kubernetes
/# ping kubernetes
/# nslookup 163.com
/# ping 163.com

#Check the nodes again; they should now be Ready
[root@kubeadm-master1 ~]# kubectl get nodes
NAME              STATUS   ROLES                  AGE   VERSION
kubeadm-master1   Ready    control-plane,master   50m   v1.20.0
kubeadm-master2   Ready    control-plane,master   31m   v1.20.0
kubeadm-master3   Ready    control-plane,master   30m   v1.20.0

[root@kubeadm-master1 ~]# kubectl get pods --all-namespaces
NAMESPACE      NAME                                      READY   STATUS    RESTARTS   AGE
kube-flannel   kube-flannel-ds-fcwf4                     1/1     Running   0          5m25s
kube-flannel   kube-flannel-ds-k5pl5                     1/1     Running   0          5m25s
kube-flannel   kube-flannel-ds-v6tkp                     1/1     Running   0          5m25s
kube-system    coredns-7f89b7bc75-8lk9k                  1/1     Running   0          52m
kube-system    coredns-7f89b7bc75-j4f9g                  1/1     Running   0          52m
kube-system    etcd-kubeadm-master1                      1/1     Running   0          52m
kube-system    etcd-kubeadm-master2                      1/1     Running   0          32m
kube-system    etcd-kubeadm-master3                      1/1     Running   0          32m
kube-system    kube-apiserver-kubeadm-master1            1/1     Running   0          52m
kube-system    kube-apiserver-kubeadm-master2            1/1     Running   0          33m
kube-system    kube-apiserver-kubeadm-master3            1/1     Running   0          32m
kube-system    kube-controller-manager-kubeadm-master1   1/1     Running   2          52m
kube-system    kube-controller-manager-kubeadm-master2   1/1     Running   2          33m
kube-system    kube-controller-manager-kubeadm-master3   1/1     Running   0          32m
kube-system    kube-proxy-g5cxd                          1/1     Running   0          32m
kube-system    kube-proxy-gdckm                          1/1     Running   0          52m
kube-system    kube-proxy-qdgkh                          1/1     Running   0          33m
kube-system    kube-scheduler-kubeadm-master1            1/1     Running   2          52m
kube-system    kube-scheduler-kubeadm-master2            1/1     Running   2          33m
kube-system    kube-scheduler-kubeadm-master3            1/1     Running   0          32m

VII. Join the worker nodes

7.1 Join the worker nodes

#Run on node1 and node2 (this is the worker join command printed by kubeadm init):
kubeadm join 192.168.100.46:7443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:ff0b34df599a5d9dc637df64f056db4f31b3d3eedd0ad0b2bedd17414d146a4e

7.2 Verify

#Watch the kubelet logs with journalctl
journalctl -f -u kubelet

#On any master node, verify the worker nodes joined successfully
kubectl get node | grep node

#Expected result
kubeadm-node1     NotReady   <none>                 2m46s   v1.20.0
kubeadm-node2     NotReady   <none>                 80s     v1.20.0

#A node in NotReady state means its kube-flannel / kube-proxy pods have not finished deploying yet; check them with:
kubectl -n kube-system get pods
[root@kubeadm-master1 ~]# kubectl -n kube-system get pods
NAME                                      READY   STATUS    RESTARTS   AGE
coredns-7f89b7bc75-8lk9k                  1/1     Running   0          60m
coredns-7f89b7bc75-j4f9g                  1/1     Running   0          60m
etcd-kubeadm-master1                      1/1     Running   0          60m
etcd-kubeadm-master2                      1/1     Running   0          40m
etcd-kubeadm-master3                      1/1     Running   0          39m
kube-apiserver-kubeadm-master1            1/1     Running   0          60m
kube-apiserver-kubeadm-master2            1/1     Running   0          40m
kube-apiserver-kubeadm-master3            1/1     Running   0          40m
kube-controller-manager-kubeadm-master1   1/1     Running   2          60m
kube-controller-manager-kubeadm-master2   1/1     Running   2          40m
kube-controller-manager-kubeadm-master3   1/1     Running   0          40m
kube-proxy-6lv24                          1/1     Running   0          2m28s
kube-proxy-g5cxd                          1/1     Running   0          40m
kube-proxy-gdckm                          1/1     Running   0          60m
kube-proxy-gdcth                          1/1     Running   0          3m54s
kube-proxy-qdgkh                          1/1     Running   0          40m
kube-scheduler-kubeadm-master1            1/1     Running   2          60m
kube-scheduler-kubeadm-master2            1/1     Running   2          40m
kube-scheduler-kubeadm-master3            1/1     Running   0          40m
#All pods showing READY 1/1 means the rollout is complete

#Check again; every node should now report Ready
[root@kubeadm-master1 ~]# kubectl get nodes
NAME              STATUS   ROLES                  AGE     VERSION
kubeadm-master1   Ready    control-plane,master   60m     v1.20.0
kubeadm-master2   Ready    control-plane,master   41m     v1.20.0
kubeadm-master3   Ready    control-plane,master   40m     v1.20.0
kubeadm-node1     Ready    <none>                 4m3s    v1.20.0
kubeadm-node2     Ready    <none>                 2m37s   v1.20.0

VIII. Deploy the Dashboard and verify the cluster

#Deploy from master1
#Download the Dashboard yaml
#Project home page: https://github.com/kubernetes/dashboard
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.1.0/aio/deploy/recommended.yaml

#By default the Dashboard is only reachable from inside the cluster; change the Service to NodePort to expose it externally:
vim recommended.yaml
...
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort   # added
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001   # added
  selector:
    k8s-app: kubernetes-dashboard

---
...

kubectl apply -f recommended.yaml

#Verify
kubectl -n kubernetes-dashboard get pod,svc
#The pods being in the Running state means the deployment succeeded

#Open the Dashboard in a browser using any node's IP
https://NodeIP:30001

#Create a service account and bind it to the built-in cluster-admin cluster role:
kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')

#Log in to the Dashboard with the token from the output above
https://NodeIP:30001
#The display language can be changed in the Dashboard settings

IX. Install etcdctl

#Download the etcd release tarball on master1
wget https://github.com/etcd-io/etcd/releases/download/v3.4.13/etcd-v3.4.13-linux-amd64.tar.gz

#Unpack etcd-v3.4.13-linux-amd64.tar.gz and install etcdctl
tar -xzf etcd-v3.4.13-linux-amd64.tar.gz
cp etcd-v3.4.13-linux-amd64/etcdctl /usr/bin/

#Using etcdctl
#---Check the cluster health
etcdctl --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key --endpoints="https://192.168.100.41:2379,https://192.168.100.42:2379,https://192.168.100.43:2379" endpoint health

#Persist the etcdctl settings as environment variables (append to ~/.bashrc; without -a, tee would overwrite the file)
cat <<EOF | sudo tee -a ~/.bashrc 
export ETCDCTL_API=3
export ETCDCTL_DIAL_TIMEOUT=3s
export ETCDCTL_CACERT=/etc/kubernetes/pki/etcd/ca.crt
export ETCDCTL_CERT=/etc/kubernetes/pki/etcd/server.crt
export ETCDCTL_KEY=/etc/kubernetes/pki/etcd/server.key
EOF
source ~/.bashrc 

#Show the cluster status as a table
etcdctl --endpoints="https://192.168.100.41:2379" -w table endpoint status --cluster

#List all keys
etcdctl --endpoints="https://192.168.100.41:2379" --keys-only=true get --from-key ''
#or
etcdctl --endpoints="https://192.168.100.41:2379" --prefix --keys-only=true get /

#List the keys under a given prefix
etcdctl --endpoints="https://192.168.100.41:2379" --prefix --keys-only=true get /registry/pods/

#Show the value of a specific key as JSON
etcdctl --endpoints="https://192.168.100.41:2379" --prefix --keys-only=false -w json get /registry/pods/kube-system/etcd-kubeadm-master1
#More etcdctl commands: https://github.com/etcd-io/etcd/tree/master/etcdctl
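
A common follow-up task is backing up etcd with a snapshot (a sketch; the output path /root/etcd-snapshot.db is just an example, and the certificate environment variables above must be set):

etcdctl --endpoints="https://192.168.100.41:2379" snapshot save /root/etcd-snapshot.db
etcdctl -w table snapshot status /root/etcd-snapshot.db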

X. Enable kubectl command auto-completion

yum install bash-completion -y
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc

Reference: https://www.jianshu.com/p/351b61a87c17
