k8s deployment: 3 masters, 2 workers

Highly available kubeadm master nodes (three masters, two workers)

1. Installation requirements

Before starting, the machines used to deploy the Kubernetes cluster must meet the following requirements:

  • Five machines running CentOS 7.5 or later
  • Hardware: 2 GB RAM, 2+ vCPUs, 30 GB+ disk
  • All machines in the cluster can reach each other over the network and can access the Internet.

2. Installation steps

Role      IP
k8s-lb    192.168.50.100   (virtual IP)
master1   192.168.50.128
master2   192.168.50.129
master3   192.168.50.130
node1     192.168.50.131
node2     192.168.50.132

 

2.1 Pre-installation preparation

(1) System tuning (perform on the template machine first, then clone it to create the other nodes)

  1. Upgrade the kernel

1.1 Install the ELRepo kernel repository
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
1.2 Install the long-term (kernel-lt) kernel
yum --enablerepo=elrepo-kernel install -y kernel-lt
1.3 List the installed kernel boot entries
cat /boot/grub2/grub.cfg |grep menuentry
1.4 Set the default kernel boot entry
grub2-set-default "CentOS Linux (4.4.221-1.el7.elrepo.x86_64) 7 (Core)"
1.5 Verify the default boot entry
grub2-editenv list
1.6 Reboot so the new kernel takes effect (a reboot is required)
reboot
1.7 Check the running kernel version
uname -r

  2. Disable firewalld and SELinux, disable swap, tune kernel parameters, configure the yum repositories, and set the timezone and time synchronization, all handled by the following shell script:

#!/bin/sh
#****************************************************************#
# ScriptName: init.sh
# Author: boming
# Create Date: 2020-06-23 22:19
#***************************************************************#

# Disable the firewall and SELinux
systemctl disable --now firewalld
setenforce 0
sed -i 's/enforcing/disabled/' /etc/selinux/config
# Disable the swap partition
swapoff -a
sed -i.bak 's/^.*centos-swap/#&/g' /etc/fstab
# Tune kernel parameters
cat > /etc/sysctl.d/k8s.conf << EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
# Apply the settings immediately
sysctl --system
# Configure the Aliyun base and epel repositories
mv /etc/yum.repos.d/* /tmp
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
curl -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
# Install dnf
yum install dnf -y
dnf makecache
# Install the lrzsz upload/download tool
dnf install lrzsz -y
# Install ntpdate
dnf install ntpdate -y
# Set the timezone and sync time against Aliyun NTP
ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
ntpdate ntp.aliyun.com
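
Run the script once on the template machine (before cloning it for the other nodes), for example:

bash init.sh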

  3. Install Docker

(1) Add the Docker yum repository
curl -o /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

(2) Install the docker-ce packages
List all the versions that can be installed:
~]# dnf list docker-ce --showduplicates
docker-ce.x86_64       3:18.09.6-3.el7               docker-ce-stable
docker-ce.x86_64       3:18.09.7-3.el7               docker-ce-stable
docker-ce.x86_64       3:18.09.8-3.el7               docker-ce-stable
docker-ce.x86_64       3:18.09.9-3.el7               docker-ce-stable
docker-ce.x86_64       3:19.03.0-3.el7               docker-ce-stable
docker-ce.x86_64       3:19.03.1-3.el7               docker-ce-stable
docker-ce.x86_64       3:19.03.2-3.el7               docker-ce-stable
docker-ce.x86_64       3:19.03.3-3.el7               docker-ce-stable
docker-ce.x86_64       3:19.03.4-3.el7               docker-ce-stable
docker-ce.x86_64       3:19.03.5-3.el7               docker-ce-stable
.....
Here we install the latest version of Docker; every node needs the Docker service installed.
~]# dnf install -y  docker-ce docker-ce-cli
(3) Start Docker and enable it at boot
~]# systemctl enable --now docker
Check the version number to verify that Docker installed successfully
~]# docker --version
Docker version 19.03.12, build 48a66213fea
(4) Switch the Docker image registry source

The default registry is Docker's official one, which is very slow to reach from inside China, so switch to a personal Aliyun mirror and also change the cgroup driver:

~]# cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://pn8399pq.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"] } EOF 由于重新加载docker仓库源,所以需要重启docker ~]# systemctl restart docker

  4. Install Kubernetes

(1) Add the Kubernetes yum repository
~]# cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
It is best to regenerate the cache:

~]# dnf clean all
~]# dnf makecache
(2) Install the kubeadm, kubelet and kubectl components

All nodes need these components installed.

[root@master1 ~]# dnf list kubeadm --showduplicates  
kubeadm.x86_64                       1.17.7-0                     kubernetes
kubeadm.x86_64                       1.17.7-1                     kubernetes
kubeadm.x86_64                       1.17.8-0                     kubernetes
kubeadm.x86_64                       1.17.9-0                     kubernetes
kubeadm.x86_64                       1.18.0-0                     kubernetes
kubeadm.x86_64                       1.18.1-0                     kubernetes
kubeadm.x86_64                       1.18.2-0                     kubernetes
kubeadm.x86_64                       1.18.3-0                     kubernetes
kubeadm.x86_64                       1.18.4-0                     kubernetes
kubeadm.x86_64                       1.18.4-1                     kubernetes
kubeadm.x86_64                       1.18.5-0                     kubernetes
kubeadm.x86_64                       1.18.6-0                     kubernetes
Kubernetes versions change very quickly, so list the available versions and choose a suitable one. Here we install version 1.18.6.

[root@master1 ~]# dnf install -y kubelet-1.18.6 kubeadm-1.18.6 kubectl-1.18.6  --disableexcludes=kubernetes
(3) Enable kubelet at boot
[root@master1 ~]# systemctl enable  kubelet
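
To confirm that the expected versions were installed on each node, a quick check such as the following can be run (the output format differs slightly between releases):

[root@master1 ~]# kubeadm version -o short
[root@master1 ~]# kubelet --version
[root@master1 ~]# kubectl version --client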

(2) Configure host names

On the k8s-lb node:

hostnamectl set-hostname k8s-lb

On the master1 node:

hostnamectl set-hostname master1

On the master2 node:

hostnamectl set-hostname master2

On the master3 node:

hostnamectl set-hostname master3

On the node1 node:

hostnamectl set-hostname node1

On the node2 node:

hostnamectl set-hostname node2

Add hosts entries (run on every node):

cat >>/etc/hosts <<EOF
192.168.50.100 k8s-lb
192.168.50.128 master1
192.168.50.129 master2
192.168.50.130 master3
192.168.50.131 node1
192.168.50.132 node2
EOF

Configure passwordless SSH

Generate a key pair on the master1 node and distribute the public key to all the other hosts.
[root@master1 ~]# ssh-keygen -t rsa -b 1200
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:OoMw1dARsWhbJKAQL2hUxwnM4tLQJeLynAQHzqNQs5s root@localhost.localdomain
The key's randomart image is:
+---[RSA 1200]----+
|*=X=*o*+         |
|OO.*.O..         |
|BO= + +          |
|**o* o           |
|o E .   S        |
|   o . .         |
|    . +          |
|       o         |
|                 |
+----[SHA256]-----+

Distribute the public key

[root@master1 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub  root@master1
[root@master1 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub  root@master2
[root@master1 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub  root@master3
[root@master1 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub  root@k8s-lb
[root@master1 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub  root@node1
[root@master1 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub  root@node2
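
To verify that passwordless login works, a simple loop over the host names defined in /etc/hosts should print each remote hostname without asking for a password:

[root@master1 ~]# for h in master1 master2 master3 node1 node2; do ssh root@$h hostname; done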

(3) Configure a highly available VIP with Haproxy + Keepalived (run on two of the master nodes)

  1. Install keepalived and haproxy

[root@master1 ~]# dnf install -y keepalived haproxy 

  2. Configure the Haproxy service

The haproxy configuration is identical on all master nodes; the configuration file is /etc/haproxy/haproxy.cfg. Configure the master1 node first and then distribute the file to the master2 node.

global
  maxconn  2000
  ulimit-n  16384
  log  127.0.0.1 local0 err
  stats timeout 30s

defaults
  log global
  mode  http
  option  httplog
  timeout connect 5000
  timeout client  50000
  timeout server  50000
  timeout http-request 15s
  timeout http-keep-alive 15s

frontend monitor-in
  bind *:33305
  mode http
  option httplog
  monitor-uri /monitor

listen stats
  bind    *:8006
  mode    http
  stats   enable
  stats   hide-version
  stats   uri       /stats
  stats   refresh   30s
  stats   realm     Haproxy\ Statistics
  stats   auth      admin:admin

frontend k8s-master
  bind 0.0.0.0:8443
  bind 127.0.0.1:8443
  mode tcp
  option tcplog
  tcp-request inspect-delay 5s
  default_backend k8s-master

backend k8s-master
  mode tcp
  option tcplog
  option tcp-check
  balance roundrobin
  default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
  server master1 192.168.50.128:6443  check inter 2000 fall 2 rise 2 weight 100
  server master2 192.168.50.129:6443  check inter 2000 fall 2 rise 2 weight 100
  server master3 192.168.50.130:6443  check inter 2000 fall 2 rise 2 weight 100
# Note: adjust the IP addresses of the three master nodes above to match your own environment.
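
Once master1 is configured, the same file can simply be copied to the other haproxy hosts, for example:

[root@master1 ~]# scp /etc/haproxy/haproxy.cfg root@master2:/etc/haproxy/haproxy.cfg
[root@master1 ~]# scp /etc/haproxy/haproxy.cfg root@master3:/etc/haproxy/haproxy.cfg    # if haproxy runs on master3 as well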

  3. Configure the Keepalived service

Keepalived uses its track_script mechanism to run a script that checks whether the Kubernetes master node is down, and switches nodes accordingly to achieve high availability.

The keepalived configuration file for the master1 node lives at /etc/keepalived/keepalived.conf. Configure master1 first, then distribute the file to the master2 node.

A few points to note (remember to change the first two):

  • mcast_src_ip: the multicast source address, i.e. the IP address of the current host.
  • priority: keepalived uses this value to elect the MASTER node. Here master1 serves Kubernetes while the other nodes act as standbys, so set priority to 100 on master1 and to 99 on master2.
  • state: set the state field to MASTER on the master1 node and to BACKUP on the other nodes.
  • The cluster health check is left disabled for now; enable it once the cluster has been built (see the sketch after this list).
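
A minimal keepalived.conf sketch for master1 that matches the notes above might look like the following; the interface name ens33, the virtual_router_id and the auth_pass value are assumptions and must be adjusted to your own environment:

vrrp_script check_kubernetes {
    script "/etc/keepalived/check_kubernetes.sh"    # health check script configured in the next step
    interval 3
    fall 2
    rise 2
}

vrrp_instance VI_1 {
    state MASTER                   # BACKUP on the other master nodes
    interface ens33                # assumed NIC name; change to match the host
    mcast_src_ip 192.168.50.128    # this host's own IP address
    virtual_router_id 51           # must be identical on all masters
    priority 100                   # 99 on master2
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8S_HA_PASS      # placeholder password
    }
    virtual_ipaddress {
        192.168.50.100             # the VIP
    }
#   track_script {                 # keep commented out until the cluster is up
#       check_kubernetes
#   }
}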

   4. Configure the health check script

Place the health check script in the /etc/keepalived directory; the check_kubernetes.sh script looks like this:

#!/bin/bash
#****************************************************************#
# ScriptName: check_kubernetes.sh
# Author: boming
# Create Date: 2020-06-23 22:19
#***************************************************************#

# Try up to 5 times to detect a running kube-apiserver process
function check_kubernetes() {
 for ((i=0;i<5;i++));do
  apiserver_pid_id=$(pgrep kube-apiserver)
  if [[ ! -z $apiserver_pid_id ]];then
   return
  else
   sleep 2
  fi
  apiserver_pid_id=0
 done
}

# 1:running  0:stopped
check_kubernetes
# If no kube-apiserver process was found, stop keepalived so the VIP fails over to another master
if [[ $apiserver_pid_id -eq 0 ]];then
 /usr/bin/systemctl stop keepalived
 exit 1
else
 exit 0
fi
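
Remember to make the script executable on every master that runs keepalived, otherwise the track_script check cannot run:

chmod +x /etc/keepalived/check_kubernetes.sh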

  5. Start the Keepalived and Haproxy services

systemctl enable --now keepalived haproxy

~]# systemctl status keepalived haproxy
~]# ping 192.168.50.100                    # check that the VIP is reachable
PING 192.168.50.100 (192.168.50.100) 56(84) bytes of data.
64 bytes from 192.168.50.100: icmp_seq=1 ttl=64 time=0.778 ms
64 bytes from 192.168.50.100: icmp_seq=2 ttl=64 time=0.339 ms
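
You can also check which master currently holds the VIP; given the priorities above it should show up on master1:

[root@master1 ~]# ip addr | grep 192.168.50.100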

4. Deploy the master node

(1) Generate the initialization file

Run the following command on all master nodes:

[root@master1 ~]# kubeadm config print init-defaults > kubeadm-init.yaml

This file, kubeadm-init.yaml, is the one we use for initialization; roughly the following parameters in it need to be changed.

[root@master1 ~]# cat kubeadm-init.yaml 
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.50.100      # the VIP address; with a single master, set this to the master host's own IP instead
  bindPort:  6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master1
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:            # add the following two lines
  certSANs:
  - "192.168.50.100"         # the VIP address; not needed with a single master
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers   # Aliyun mirror, or your own Aliyun registry, e.g. registry.cn-hangzhou.aliyuncs.com/cocolone-security
controlPlaneEndpoint: "192.168.50.100:8443"    # the VIP address and port; not needed with a single master
kind: ClusterConfiguration
kubernetesVersion: v1.18.3        # the Kubernetes version
networking:
  dnsDomain: cluster.local 
  serviceSubnet: 10.96.0.0/12       # the default is fine, or define your own CIDR
  podSubnet: 10.244.0.0/16        # add the pod network CIDR
scheduler: {}
Note: the advertiseAddress value above is not the address of the current host's NIC, but the VIP address of the HA cluster.
Note: controlPlaneEndpoint is set to the VIP address, and the port is haproxy's 8443 port, i.e. this block of the haproxy configuration:

frontend k8s-master
  bind 0.0.0.0:8443
  bind 127.0.0.1:8443
  mode tcp

If you chose a different port there, remember to change the port in controlPlaneEndpoint accordingly.

(2) Pull the images in advance (needed on all master nodes)

If you run kubeadm init directly, pulling the images happens automatically as part of the process and is rather slow, so I recommend splitting it out and pulling the images ahead of time.

[root@master1 ~]# kubeadm config images pull --config kubeadm-init.yaml
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.18.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.18.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.18.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.18.0
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.1
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.4.3-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:1.6.5
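
When the pull has finished, the images should be visible locally, e.g.:

[root@master1 ~]# docker images | grep google_containers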

(3) Initialize Kubernetes on the master1 node

[root@master1 ~]# kubeadm init --config kubeadm-init.yaml --upload-certs
[init] Using Kubernetes version: v1.18.3
[preflight] Running pre-flight checks
 [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[certs] apiserver serving cert is signed for DNS names [master1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.50.128 192.168.50.100]
...           # output omitted
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 192.168.50.100:8443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:4c738bc8e2684c5d52d80687d48925613b66ab660403649145eb668d71d85648 \
    --control-plane --certificate-key 4931f39d3f53351cb6966a9dcc53cb5cbd2364c6d5b83e50e258c81fbec69539 

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.50.100:8443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:4c738bc8e2684c5d52d80687d48925613b66ab660403649145eb668
[root@master1 ~]# mkdir -p $HOME/.kube
[root@master1 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master1 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

If the initialization fails, use

journalctl -xefu kubelet    (replace kubelet with the relevant component name)
systemctl status kubelet

to inspect the logs and handle the problem accordingly,
then run kubeadm reset to reset and try again.

 

After the initialization completes, configure the environment variable:

[root@master1 ~]# cat >> ~/.bashrc <<EOF
export KUBECONFIG=/etc/kubernetes/admin.conf
EOF
[root@master1 ~]# source ~/.bashrc

5. Join the other master nodes to the Kubernetes cluster

[root@master2 ~]#  kubeadm join 192.168.50.100:8443 --token abcdef.0123456789abcdef \
     --discovery-token-ca-cert-hash sha256:4c738bc8e2684c5d52d80687d48925613b66ab660403649145eb668d71d85648 \
     --control-plane --certificate-key 4931f39d3f53351cb6966a9dcc53cb5cbd2364c6d5b83e50e258c81fbec69539

[preflight] Running pre-flight checks
 [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The 
......                                  # output omitted
[mark-control-plane] Marking the node master2 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

To start administering your cluster from this node, you need to run the following as a regular user:

 mkdir -p $HOME/.kube
 sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
 sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.
[root@master2 ~]# mkdir -p $HOME/.kube
[root@master2 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master2 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

Add the environment variable

[root@master2 ~]# cat >> ~/.bashrc <<EOF
export KUBECONFIG=/etc/kubernetes/admin.conf
EOF
[root@master2 ~]# source ~/.bashrc

View the cluster's master nodes

[root@master1 ~]# kubectl get node
NAME      STATUS     ROLES    AGE     VERSION
master1   NotReady   master   25m     v1.18.4
master2   NotReady   master   12m     v1.18.4

6. Join the worker nodes to the Kubernetes cluster (run on both node machines)

[root@node1 ~]# kubeadm join 192.168.50.100:8443 --token abcdef.0123456789abcdef \
>     --discovery-token-ca-cert-hash sha256:4c738bc8e2684c5d52d80687d48925613b66ab660403649145eb668d71d85648
[preflight] Running pre-flight checks
 [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
....
....
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

View the cluster node information

[root@master1 ~]# kubectl get nodes
NAME      STATUS     ROLES    AGE     VERSION
master1   NotReady   master   20h     v1.18.4
master2   NotReady   master   20h     v1.18.4
master3   NotReady   master   20h     v1.18.4
node1     NotReady   <none>   5m15s   v1.18.4
node2     NotReady   <none>   5m11s   v1.18.4

7. Install the network plugin (install on the master)

(1) Download the image on the master node

# Log in to the Aliyun registry; the username is the full Aliyun account name, the password is the one set when the registry service was enabled
sudo docker login --username=zhudaolong registry.cn-hangzhou.aliyuncs.com
# The image has already been pushed to a personal Aliyun repository, so it can be pulled directly from there
sudo docker pull registry.cn-hangzhou.aliyuncs.com/cocolone-security/flannel:v0.11.0-amd64
# Re-tag the image (imageid is the ID of the image just pulled)
docker tag imageid quay.io/coreos/flannel:v0.11.0-amd64

(2) Download flannel.yaml (v0.11.0) from gitee

Open https://gitee.com/cocolone/flannel/blob/master/flannel.yaml and copy the file contents.
Create a new flannel.yaml file on the master and paste the copied contents into it.
Change the image field to: image: quay.io/coreos/flannel:v0.11.0-amd64

Save and exit. Then run this command on the master node.

[root@master1 ~]# kubectl apply -f flannel.yaml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created

Check whether flannel is healthy (run on the master)

[root@master1 ~]# kubectl get pod -n kube-system | grep flannel
NAME                              READY   STATUS    RESTARTS   AGE
kube-flannel-ds-amd64-dp972       1/1     Running   0          66s
kube-flannel-ds-amd64-lkspx       1/1     Running   0          66s
kube-flannel-ds-amd64-rmsdk       1/1     Running   0          66s
kube-flannel-ds-amd64-wp668       1/1     Running   0          66s
kube-flannel-ds-amd64-zkrwh       1/1     Running   0          66s
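
Once the flannel pods are all Running, the nodes should switch from NotReady to Ready after a short while, which can be confirmed with:

[root@master1 ~]# kubectl get nodes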

8. Test the Kubernetes cluster

(1) Create an nginx pod

Run the following steps on the master node:

[root@master1 ~]# kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
[root@master1 ~]# kubectl expose deployment nginx --port=80 --type=NodePort
service/nginx exposed

Now view the pod and the service:

[root@master1 ~]# kubectl get pod,svc -o wide

In the printed result, the first half is pod information and the second half is service information. From the service/nginx line we can see that the port the service exposes on the cluster is 30249. Remember this port.

From the detailed pod information we can also see that the pod is currently running on the node2 node, whose IP address is 192.168.50.132.
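
If you prefer to pull these values out directly instead of reading the table, something like the following should work; the app=nginx label is applied automatically by kubectl create deployment:

[root@master1 ~]# kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}'
[root@master1 ~]# kubectl get pod -l app=nginx -o wide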

If creating the pod fails, it may be due to host performance problems; try rebooting the master host.

(2) Access nginx to verify the cluster

Now let's access it. Open a browser (Firefox is recommended) and visit: http://192.168.50.132:30249
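
Alternatively, a quick check from the command line, using the node IP and NodePort found above:

[root@master1 ~]# curl -I http://192.168.50.132:30249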

9. Install the dashboard

(1) Create the dashboard

First download the dashboard configuration file.

Open https://gitee.com/cocolone/cdashboard/blob/master/kubernetes-dashboard.yaml and copy the file contents.
Create a new kubernetes-dashboard.yaml file on the VM and paste the copied contents into it.

Download the dashboard image

docker pull registry.cn-hangzhou.aliyuncs.com/cocolone-security/kubernetes-dashboard-amd64:v2.0.3

By default the Dashboard can only be accessed from inside the cluster; change the Service to the NodePort type to expose it externally:

Around lines 32-44 of the file, change it to the following:

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort       # add this line
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001     # add this line; port 30001 can be any port you choose
  selector:
    k8s-app: kubernetes-dashboard
---
Also change every image field to the image downloaded above:
image: registry.cn-hangzhou.aliyuncs.com/cocolone-security/kubernetes-dashboard-amd64:v2.0.3

 

Apply this yaml file:

[root@master1 ~]# kubectl apply -f kubernetes-dashboard.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
...
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

 Check whether the dashboard is running correctly

[root@master1 ~]# kubectl get pod -n kube-system
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-694557449d-mlnl4   1/1     Running   0          2m31s
kubernetes-dashboard-9774cc786-ccvcf         1/1     Running   0          2m31s

The key column is STATUS: if it is Running, and the RESTARTS value is 0 (or at least not continuously growing), everything is fine. It looks good so far, so we can continue.

Check which node the dashboard pod is running on
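
A command such as the following lists the dashboard pods together with the node they run on:

[root@master1 ~]# kubectl get pod -n kube-system -o wide | grep dashboard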

From the above we can see that kubernetes-dashboard-9774cc786-ccvcf is running on node2, and the exposed port is 30001, so the access address is: https://192.168.50.132:30001

Access it with Firefox; you will be asked for a token when logging in, and the token value can be retrieved as follows.

[root@master1 ~]# kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')

 

 Enter the token value above to get into the dashboard.

 Although we can now log in, we do not yet have enough permissions to view the cluster information, because we have not bound a cluster role. You can try the steps above first, then do the following.

(2) Bind the cluster-admin administrator role (run on the master1 node)

[root@master1 ~]# kubectl create serviceaccount dashboard-admin -n kube-system
[root@master1 ~]# kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
[root@master1 ~]# kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')

Then log in to the dashboard with the token from the output.
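
If you only want the raw token value, the following should also work on this Kubernetes version, where service accounts still get an auto-created token secret:

[root@master1 ~]# kubectl -n kube-system get secret $(kubectl -n kube-system get sa dashboard-admin -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 -d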

 
