Prerequisites:
a. An etcd cluster (a single etcd instance also works) is built and running; etcd cluster setup: https://www.cnblogs.com/ouyanghuanlin/p/11206009.html
b. Passwordless SSH trust is configured between all nodes; SSH trust setup: https://www.cnblogs.com/ouyanghuanlin/p/11208785.html
c. Docker is installed and running; Docker installation: https://www.cnblogs.com/ouyanghuanlin/articles/11209039.html
d. The Kubernetes tar.gz package is ready; after extraction, the ./kubernetes/server/bin/ directory contains the Kubernetes binaries
e. The environment is as follows
| Hostname / IP / role | Installed services | etcd endpoint |
| --- | --- | --- |
| k8s-01 192.168.10.111 master | docker, kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy | 192.168.10.111:2379 |
| k8s-02 192.168.10.112 node | docker, kubelet, kube-proxy | 192.168.10.112:2379 |
| k8s-03 192.168.10.113 node | docker, kubelet, kube-proxy | 192.168.10.113:2379 |
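Before starting, it helps to confirm that every etcd member in the table is reachable from the node you are working on. A minimal sketch, assuming the example addresses above and that curl is available (etcd serves a /health endpoint on the client port):

```shell
# Probe each etcd member's /health endpoint; the addresses are this
# article's example IPs -- substitute your own.
probe_etcd() {
    for ep in 192.168.10.111:2379 192.168.10.112:2379 192.168.10.113:2379; do
        if curl -s --max-time 2 "http://${ep}/health" >/dev/null 2>&1; then
            echo "${ep} reachable"
        else
            echo "${ep} unreachable"
        fi
    done
}
result="$(probe_etcd)"
echo "$result"
```

If any member reports unreachable, fix etcd (or the firewall rules) before installing the Kubernetes components.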
I. Disable the firewall, SELinux, and swap
This section references: https://blog.csdn.net/wangjunsheng/article/details/86594245
Perform the following steps on all three nodes.
Stop the firewall and disable it at boot:
[root@k8s-01 /]# systemctl stop firewalld && systemctl disable firewalld
Disable SELinux now and keep it disabled after reboot:
[root@k8s-01 /]# setenforce 0
[root@k8s-01 /]# vi /etc/selinux/config
The relevant part of /etc/selinux/config after editing:
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
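Instead of editing the file by hand with vi, the change can be made non-interactively with sed. A minimal sketch that works on a scratch copy; on a real node, point FILE at /etc/selinux/config:

```shell
# Scratch copy for demonstration; use FILE=/etc/selinux/config on a real node.
FILE="$(mktemp)"
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$FILE"

# Force SELINUX=disabled regardless of the previous value
# (the pattern does not touch the SELINUXTYPE= line).
sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$FILE"
grep '^SELINUX=' "$FILE"
```

Note that setenforce 0 takes effect immediately, while the config file change only matters for subsequent reboots.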
Check swap:
[root@k8s-01 /]# free
              total        used        free      shared  buff/cache   available
Mem:        1867264      167932      149340       16752     1549992     1418348
Swap:       4194300        2608     4191692
As shown above, swap is enabled.
Turn off swap and verify:
[root@k8s-01 ~]# swapoff -a && sysctl -w vm.swappiness=0
vm.swappiness = 0
[root@k8s-01 ~]# free
              total        used        free      shared  buff/cache   available
Mem:        1867264      169244      141604       17356     1556416     1417680
Swap:             0           0           0
Edit /etc/fstab so swap stays off after reboot: comment out the swap entry (the line below is this article's example; match it against your actual file)
# /dev/mapper/cl-swap swap swap defaults 0 0
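Commenting out the swap entry can also be scripted. A minimal sketch on a scratch copy of fstab (the device paths are made-up examples; on a real node run the sed line against /etc/fstab):

```shell
# Build a scratch fstab for demonstration; use /etc/fstab on a real node.
FSTAB="$(mktemp)"
printf '%s\n' \
    '/dev/mapper/cl-root /    xfs  defaults 0 0' \
    '/dev/mapper/cl-swap swap swap defaults 0 0' > "$FSTAB"

# Prefix '#' to any not-yet-commented line that mounts a swap filesystem.
sed -i '/ swap /s/^[^#]/#&/' "$FSTAB"
cat "$FSTAB"
```

After this, the root entry is untouched and only the swap line is commented.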
II. Install the Kubernetes components
This section references: https://blog.csdn.net/q328730422/article/details/82836953
Notes for the steps below: unless stated otherwise, a binary placed into a directory should get the same ownership and permissions as the files already there; if a file or its parent path does not exist, create it and make sure it is readable; any path named in a configuration parameter must also be created in advance.
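The directories assumed by the configuration files below can be created up front on every node. A minimal sketch; the DEST staging prefix is an assumption added here so the commands can be dry-run, set DEST='' (and run as root) to create the real paths:

```shell
# DEST is a staging prefix for a dry run; set DEST='' to create the real
# directories under / (requires root).
DEST="${DEST-$(mktemp -d)}"
mkdir -p "${DEST}/etc/kubernetes" \
         "${DEST}/var/log/kubernetes" \
         "${DEST}/var/lib/kubelet"
ls "${DEST}/etc" "${DEST}/var"
```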
1. Install the kube-apiserver service
a. Copy the kube-apiserver binary into the /usr/bin/ directory
b. Edit /etc/kubernetes/kube-apiserver with the following content
The --etcd-servers parameter may also list just a single etcd member; the etcd used by Kubernetes does not have to be a cluster.
KUBE_API_ARGS="--storage-backend=etcd3 --etcd-servers=http://192.168.10.111:2379,http://192.168.10.112:2379,http://192.168.10.113:2379 --insecure-bind-address=0.0.0.0 --insecure-port=8080 --service-cluster-ip-range=10.10.10.0/24 --service-node-port-range=10000-65535 --admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,DefaultStorageClass,ResourceQuota --logtostderr=true --log-dir=/var/log/kubernetes --v=2"
kube-apiserver will listen on port 8080 on every interface of the node, including 192.168.10.111:8080
Create the systemd unit file /usr/lib/systemd/system/kube-apiserver.service with the following content
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
# After=etcd.service
# Wants=etcd.service
[Service]
EnvironmentFile=/etc/kubernetes/kube-apiserver
ExecStart=/usr/bin/kube-apiserver $KUBE_API_ARGS
Restart=on-failure
Type=notify
[Install]
WantedBy=multi-user.target
2. Install kube-controller-manager
a. Copy the kube-controller-manager binary into the /usr/bin/ directory
b. Edit /etc/kubernetes/kube-controller-manager with the following content
KUBE_CONTROLLER_MANAGER_ARGS="--master=http://192.168.10.111:8080 --logtostderr=true --log-dir=/var/log/kubernetes --v=2"
Create the systemd unit file /usr/lib/systemd/system/kube-controller-manager.service with the following content
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=kube-apiserver.service
Requires=kube-apiserver.service
[Service]
EnvironmentFile=-/etc/kubernetes/kube-controller-manager
ExecStart=/usr/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
3. Install kube-scheduler
a. Copy the kube-scheduler binary into the /usr/bin/ directory
b. Edit /etc/kubernetes/kube-scheduler with the following content
KUBE_SCHEDULER_ARGS="--master=http://192.168.10.111:8080 --logtostderr=true --log-dir=/var/log/kubernetes --v=2"
Create the systemd unit file /usr/lib/systemd/system/kube-scheduler.service with the following content
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=kube-apiserver.service
Requires=kube-apiserver.service
[Service]
EnvironmentFile=-/etc/kubernetes/kube-scheduler
ExecStart=/usr/bin/kube-scheduler $KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
4. Install kubelet
a. Copy the kubelet binary into the /usr/bin/ directory
b. Edit /etc/kubernetes/kubelet.kubeconfig with the following content
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: http://192.168.10.111:8080
  name: local
contexts:
- context:
    cluster: local
  name: local
current-context: local
c. Edit /etc/kubernetes/kubelet with the following content
The --address and --hostname-override parameters are the IP of the node running kubelet, so they differ on every node; the master node does not have to run kubelet
The --cgroup-driver parameter must match Docker's cgroup driver, otherwise kubelet will fail to start; see item 6 of https://www.cnblogs.com/ouyanghuanlin/articles/11209039.html
KUBELET_ARGS="--address=192.168.10.111 --port=10250 --cgroup-driver=cgroupfs --hostname-override=192.168.10.111 --allow-privileged=false --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --cluster-dns=10.10.10.2 --cluster-domain=cluster.local --fail-swap-on=false --logtostderr=true --log-dir=/var/log/kubernetes --v=4"
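Since only the two IP parameters differ between nodes, this file can be generated per node instead of edited by hand. A minimal sketch; NODE_IP and the DEST staging prefix are assumptions added here, set NODE_IP to each node's address from the table and DEST='' on a real node to write /etc/kubernetes/kubelet directly:

```shell
# NODE_IP: the IP of the node this file is generated for (see the table above).
NODE_IP="${NODE_IP:-192.168.10.111}"
# DEST: staging prefix for a dry run; set DEST='' to write the real file.
DEST="${DEST-$(mktemp -d)}"
mkdir -p "${DEST}/etc/kubernetes"
# Unquoted EOF so ${NODE_IP} expands inside the here-document.
cat > "${DEST}/etc/kubernetes/kubelet" <<EOF
KUBELET_ARGS="--address=${NODE_IP} --port=10250 --cgroup-driver=cgroupfs --hostname-override=${NODE_IP} --allow-privileged=false --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --cluster-dns=10.10.10.2 --cluster-domain=cluster.local --fail-swap-on=false --logtostderr=true --log-dir=/var/log/kubernetes --v=4"
EOF
cat "${DEST}/etc/kubernetes/kubelet"
```

The same pattern applies to /etc/kubernetes/kube-proxy below, whose --hostname-override likewise varies per node.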
Create the systemd unit file /usr/lib/systemd/system/kubelet.service with the following content
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service
[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet $KUBELET_ARGS
Restart=on-failure
KillMode=process
[Install]
WantedBy=multi-user.target
5. Install kube-proxy
a. Copy the kube-proxy binary into the /usr/bin/ directory
b. Edit /etc/kubernetes/kube-proxy with the following content
The --hostname-override parameter is the IP of the node running kube-proxy, so it differs on every node; the master node does not have to run kube-proxy
KUBE_PROXY_ARGS="--master=http://192.168.10.111:8080 --hostname-override=192.168.10.111 --logtostderr=true --log-dir=/var/log/kubernetes --v=4"
Create the systemd unit file /usr/lib/systemd/system/kube-proxy.service with the following content
[Unit]
Description=Kubernetes Kube-proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.service
Requires=network.service
[Service]
EnvironmentFile=/etc/kubernetes/kube-proxy
ExecStart=/usr/bin/kube-proxy $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536
KillMode=process
[Install]
WantedBy=multi-user.target
III. Start the services
Before any component starts, etcd and Docker must already be running. If etcd itself does not run inside Docker, etcd and Docker may be started in either order.
Start the remaining components in the following order:
1. Start the kube-apiserver service
[root@k8s-01 /]# systemctl start kube-apiserver.service
Enable kube-apiserver at boot (optional)
systemctl enable kube-apiserver.service
Disable kube-apiserver at boot (optional)
systemctl disable kube-apiserver.service
Stop the kube-apiserver service (optional)
systemctl stop kube-apiserver.service
Restart the kube-apiserver service (optional)
systemctl restart kube-apiserver.service
The same optional operations apply to the services below.
2. Start the kube-controller-manager service
[root@k8s-01 /]# systemctl start kube-controller-manager.service
3. Start the kube-scheduler service
[root@k8s-01 /]# systemctl start kube-scheduler.service
4. Start the kubelet service
[root@k8s-01 /]# systemctl start kubelet.service
5. Start the kube-proxy service
[root@k8s-01 /]# systemctl start kube-proxy.service
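The five start commands above can be collapsed into one loop that preserves the required order. A sketch with a DRY_RUN switch (an assumption added here for safe testing); set DRY_RUN=0 on a real node to actually start the services:

```shell
# Components in dependency order: apiserver first, then controller-manager
# and scheduler, then the node agents.
SERVICES="kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy"
DRY_RUN="${DRY_RUN:-1}"   # 1 = only print the commands; set 0 to execute
for svc in $SERVICES; do
    if [ "$DRY_RUN" = "1" ]; then
        echo "systemctl start ${svc}.service"
    else
        systemctl start "${svc}.service"
    fi
done
```

On a node-only machine (k8s-02, k8s-03), trim SERVICES to kubelet and kube-proxy.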
6. Check the deployment result
Locate the kubectl client binary, make it executable, and run:
[root@k8s-01 /]# ./kubectl get node --server="192.168.10.111:8080" -o wide
NAME             STATUS   ROLES    AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION           CONTAINER-RUNTIME
192.168.10.111   Ready    <none>   15m   v1.12.9   192.168.10.111   <none>        CentOS Linux 7 (Core)   3.10.0-514.el7.x86_64    docker://18.6.3
192.168.10.112   Ready    <none>   14m   v1.12.9   192.168.10.112   <none>        CentOS Linux 7 (Core)   3.10.0-514.el7.x86_64    docker://18.6.3
192.168.10.113   Ready    <none>   13m   v1.12.9   192.168.10.113   <none>        CentOS Linux 7 (Core)   3.10.0-514.el7.x86_64    docker://18.6.3
If the output looks like the above, the Kubernetes deployment is complete; the cluster DNS service and the pod network will be covered in a separate article.
End