K8s Part 4: Installing a 2-master + 1-node cluster with kubeadm — installing the control plane with kubeadm, adding a control-plane node, adding a worker node, installing the Calico network plugin, and installing CoreDNS
6. Installing the K8s control plane with kubeadm
6.1 Create the kubeadm-config.yaml file on nflmaster1:
[root@nflmaster1 ~]# vim /root/kubeadm-config.yaml #create a new kubeadm-config.yaml file with the following content
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.20.6
controlPlaneEndpoint: 192.168.10.200:16443
imageRepository: registry.aliyuncs.com/google_containers
apiServer:
  certSANs:
  - 192.168.10.201
  - 192.168.10.202
  - 192.168.10.211
  - 192.168.10.200
networking:
  podSubnet: 192.168.200.0/24
  serviceSubnet: 192.168.201.0/24
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs

Explanation:
apiVersion: the API version of the kubeadm configuration;
kind: the resource type of this configuration: ClusterConfiguration;
kubernetesVersion: the Kubernetes version, v1.20.6;
controlPlaneEndpoint: for a multi-master cluster, the VIP address and port (192.168.10.200:16443); for a single master, the control node's own address and port;
imageRepository: the image registry. The default, k8s.gcr.io, is unreachable without a proxy, so it is replaced with registry.aliyuncs.com/google_containers to pull the images from Aliyun (an offline image package is also prepared later for this).
apiServer:
certSANs: the IP addresses to include in the generated certificates: master 1, master 2, the VIP, and worker node 1.
networking:
podSubnet: the subnet planned for Pod resources, e.g. 192.168.200.0/24;
serviceSubnet: the subnet planned for Service resources, e.g. 192.168.201.0/24.
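As a quick sanity check before running kubeadm init, the SAN list can be extracted from the file and counted. This is a minimal sketch; the awk pattern assumes the two-space indentation used in the config above, and the heredoc stands in for the real /root/kubeadm-config.yaml:

```shell
# Write the apiServer section to a temp file (same shape as kubeadm-config.yaml).
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
apiServer:
  certSANs:
  - 192.168.10.201
  - 192.168.10.202
  - 192.168.10.211
  - 192.168.10.200
EOF

# Print every SAN entry: "- <ip>" lines that follow the certSANs: key.
sans=$(awk '/certSANs:/{f=1;next} f && /^  - /{print $2} f && !/^  - /{f=0}' "$cfg")
echo "$sans"
count=$(printf '%s\n' "$sans" | grep -c .)
echo "certSANs count: $count"
rm -f "$cfg"
```

With the planned 2-master + 1-node + VIP layout the count should be 4; a missing address here means a certificate without that SAN, which surfaces later as a TLS error when joining.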
6.2 On control node 1, upload the prepared offline image package k8simage-1-20-6.tar.gz, then copy it to the nflmaster2 and nflnode1 nodes
Link: https://pan.baidu.com/s/1CVaBka55niAb96NnEKwMnQ
Extraction code: 1234

[root@nflmaster1 ~]# scp k8simage-1-20-6.tar.gz nflmaster2:/root/ #on node 1, copy the file to /root/ on node 2
[root@nflmaster1 ~]# scp k8simage-1-20-6.tar.gz nflnode1:/root/ #on node 1, copy the file to /root/ on node1

Aside 1: the following is optional; it only demonstrates how to build an offline image package yourself with the docker save command.
Note: the k8simage-1-20-6.tar.gz file was created with `docker save image1 image2 -o k8simage-1-20-6.tar.gz`. docker save writes an uncompressed tar archive, so despite the .gz extension it cannot be unpacked with tar -zxvf.
Demonstration of the docker save -o command
[root@nflmaster1 ~]# docker images #no local images yet
[root@nflmaster1 ~]# docker pull tomcat #pull the tomcat image
[root@nflmaster1 ~]# docker pull busybox #pull the busybox image
[root@nflmaster1 ~]# docker images #list local images

[root@nflmaster1 ~]# docker save busybox tomcat -o busybox.tar.gz #pack the busybox and tomcat images into the offline package busybox.tar.gz

[root@nflmaster1 ~]# docker rmi busybox tomcat #delete the local busybox and tomcat images
[root@nflmaster1 ~]# docker load -i busybox.tar.gz #load the images back from the offline package
6.3 Load the offline image package k8simage-1-20-6.tar.gz on each node
[root@nflmaster1 ~]# docker load -i k8simage-1-20-6.tar.gz #load on the master1 node
[root@nflmaster2 ~]# docker load -i k8simage-1-20-6.tar.gz #load on the master2 node
[root@nflnode1 ~]# docker load -i k8simage-1-20-6.tar.gz #load on the node1 node

node1 is a worker node here, so it may never need the kube-apiserver and kube-controller-manager images, but loading them does no harm.
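The per-node load steps above can be scripted from master1 over SSH. This is a sketch, not part of the original procedure; the hostnames are the ones used throughout this series, and with DRYRUN=1 (the default here) the commands are only printed:

```shell
# Load the offline image package on each remote node over SSH.
PKG=/root/k8simage-1-20-6.tar.gz
NODES="nflmaster2 nflnode1"
DRYRUN=${DRYRUN:-1}

for node in $NODES; do
  cmd="ssh root@$node docker load -i $PKG"
  if [ "$DRYRUN" = "1" ]; then
    echo "would run: $cmd"   # dry run: show the command only
  else
    $cmd                      # real run: load images on the remote node
  fi
done
```

Run with `DRYRUN=0` once passwordless SSH between the nodes is set up (as done earlier in this series).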
6.4 On the master1 node, run kubeadm init in the directory containing the yaml file created earlier
[root@nflmaster1 ~]# kubeadm init --config kubeadm-config.yaml #run on master1, in the directory containing kubeadm-config.yaml
[init] Using Kubernetes version: v1.20.6
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.6. Latest validated version: 19.03
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local nflmaster1] and IPs [192.168.201.1 192.168.10.201 192.168.10.200 192.168.10.202 192.168.10.211]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost nflmaster1] and IPs [192.168.10.201 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost nflmaster1] and IPs [192.168.10.201 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 58.518181 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node nflmaster1 as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node nflmaster1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: wzowtp.d9wdeojl594jy808
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join 192.168.10.200:16443 --token wzowtp.d9wdeojl594jy808 \
--discovery-token-ca-cert-hash sha256:6873ba7438ba7755e773614ed8a25f3f4888628be50e242e091f843eae13ece6 \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.10.200:16443 --token wzowtp.d9wdeojl594jy808 \
--discovery-token-ca-cert-hash sha256:6873ba7438ba7755e773614ed8a25f3f4888628be50e242e091f843eae13ece6
Explanation:
pre-flight checks: validation of the environment prepared in the earlier steps;
the Docker version warning can be ignored.

[root@nflmaster1 ~]# mkdir -p $HOME/.kube
[root@nflmaster1 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@nflmaster1 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@nflmaster1 ~]# kubectl get nodes #verify on master1: list all nodes in the K8s cluster

At this point the Calico network plugin is not yet installed, so the node status is NotReady; after installing it the status becomes Ready.
7. Scaling the K8s cluster: adding the master2 control node
7.1 Copy the relevant certificates from master1 to master2
Create the directories (including the hidden .kube directory) on master2
[root@nflmaster2 ~]# cd /root/ && mkdir -p /etc/kubernetes/pki/etcd && mkdir -p ~/.kube/ #create the needed directories and the hidden directory on master2
On master1, copy the relevant certificates to master2
[root@nflmaster1 ~]# scp /etc/kubernetes/pki/ca.crt nflmaster2:/etc/kubernetes/pki/
[root@nflmaster1 ~]# scp /etc/kubernetes/pki/ca.key nflmaster2:/etc/kubernetes/pki/
[root@nflmaster1 ~]# scp /etc/kubernetes/pki/sa.key nflmaster2:/etc/kubernetes/pki/
[root@nflmaster1 ~]# scp /etc/kubernetes/pki/sa.pub nflmaster2:/etc/kubernetes/pki/
[root@nflmaster1 ~]# scp /etc/kubernetes/pki/front-proxy-ca.crt nflmaster2:/etc/kubernetes/pki/
[root@nflmaster1 ~]# scp /etc/kubernetes/pki/front-proxy-ca.key nflmaster2:/etc/kubernetes/pki/
[root@nflmaster1 ~]# scp /etc/kubernetes/pki/etcd/ca.crt nflmaster2:/etc/kubernetes/pki/etcd/
[root@nflmaster1 ~]# scp /etc/kubernetes/pki/etcd/ca.key nflmaster2:/etc/kubernetes/pki/etcd/
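The eight scp commands above can be collapsed into one loop. This is a sketch, not the original procedure; note that only the shared material (cluster CA, service-account keys, front-proxy CA, etcd CA) is copied, since kubeadm join generates master2's per-node certificates itself. With DRYRUN=1 (the default here) it only prints what would be copied:

```shell
# Shared PKI material a new control-plane node needs before joining.
CERTS="ca.crt ca.key sa.key sa.pub front-proxy-ca.crt front-proxy-ca.key etcd/ca.crt etcd/ca.key"
DEST=nflmaster2
DRYRUN=${DRYRUN:-1}

for f in $CERTS; do
  src=/etc/kubernetes/pki/$f
  dst=$DEST:/etc/kubernetes/pki/$f
  if [ "$DRYRUN" = "1" ]; then
    echo "would copy: $src -> $dst"  # dry run: show the transfer only
  else
    scp "$src" "$dst"                # real run: copy to master2
  fi
done
```

Run with `DRYRUN=0` on master1 after creating /etc/kubernetes/pki/etcd on master2 as shown above.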

7.2 On master2, join the K8s cluster
7.2.1 On master1, print the command for joining the cluster
[root@nflmaster1 ~]# kubeadm token create --print-join-command #print the join command on master1

Run this command on master1 before each join, because each run creates a fresh token.
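Steps 7.2.1 and 7.2.2 can also be combined: capture the printed join command and append --control-plane for a control-plane join. A sketch; the sample string below stands in for the real `kubeadm token create --print-join-command` output so the logic can be shown without a cluster:

```shell
# Normally: join_cmd=$(kubeadm token create --print-join-command)
# Sample output is hardcoded here for illustration.
join_cmd="kubeadm join 192.168.10.200:16443 --token vzzqg9.wouxes3gwz912no5 --discovery-token-ca-cert-hash sha256:6873ba7438ba7755e773614ed8a25f3f4888628be50e242e091f843eae13ece6"

# Workers run join_cmd as-is; a control-plane node appends --control-plane.
cp_join_cmd="$join_cmd --control-plane"
echo "$cp_join_cmd"
```

Running `$cp_join_cmd` on master2 is equivalent to the manual command in 7.2.2 below.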
7.2.2 On master2, run the join command copied from master1 and append the --control-plane flag
[root@nflmaster2 ~]# kubeadm join 192.168.10.200:16443 --token vzzqg9.wouxes3gwz912no5 --discovery-token-ca-cert-hash sha256:6873ba7438ba7755e773614ed8a25f3f4888628be50e242e091f843eae13ece6 --control-plane
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.6. Latest validated version: 19.03
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost nflmaster2] and IPs [192.168.10.202 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost nflmaster2] and IPs [192.168.10.202 127.0.0.1 ::1]
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local nflmaster2] and IPs [192.168.201.1 192.168.10.202 192.168.10.200 192.168.10.201 192.168.10.211]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[mark-control-plane] Marking the node nflmaster2 as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node nflmaster2 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
This node has joined the cluster and a new control plane instance was created:
* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.
To start administering your cluster from this node, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Run 'kubectl get nodes' to see this node join the cluster.
[root@nflmaster2 ~]#

Following the prompt, run on the master2 node:
[root@nflmaster2 ~]# mkdir -p $HOME/.kube
[root@nflmaster2 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@nflmaster2 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
7.2.3 Verify that master2 joined successfully
[root@nflmaster2 ~]# kubectl get nodes

NAME: the name of each node;
STATUS: NotReady until the Calico network plugin is installed;
ROLES: the node's role;
AGE: how long the node has been in the cluster;
VERSION: the kubelet version.
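A quick way to spot nodes that are not yet ready is to filter the STATUS column. A sketch; the sample text below stands in for real `kubectl get nodes --no-headers` output so the filter can be shown without a cluster:

```shell
# Normally: kubectl get nodes --no-headers | awk '$2 != "Ready" {print $1}'
# Sample output is hardcoded here for illustration.
nodes_output="nflmaster1   NotReady   control-plane,master   10m   v1.20.6
nflmaster2   NotReady   control-plane,master   2m    v1.20.6"

# Print the name of every node whose STATUS column is not "Ready".
not_ready=$(echo "$nodes_output" | awk '$2 != "Ready" {print $1}')
echo "$not_ready"
```

Both masters show up here, as expected before Calico is installed; after installing Calico (next part of this series) the list should be empty.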