Detailed Steps for Building a k8s Cluster (v1.19.8) with kubeadm on CentOS 7

I. Deployment Environment


Host list:

Hostname   CentOS version   IP                Docker version   Flannel version   Spec   k8s version
master     7.8.2003         192.168.214.128   19.03.13         v0.13.0-rc2       2C2G   v1.19.8
node01     7.8.2003         192.168.214.129   19.03.13         /                 2C2G   v1.19.8
node02     7.8.2003         192.168.214.130   19.03.13         /                 2C2G   v1.19.8


Three servers in total: one master and two nodes.

II. Preparation


1. Configure hostnames


1.0 Disable the firewall and permanently disable SELinux

[root@centos7 ~] systemctl stop firewalld && systemctl disable firewalld
# Permanently disable SELinux
[root@centos7 ~] sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config;cat /etc/selinux/config
# Temporarily disable SELinux
[root@centos7 ~] setenforce 0

 

1.1 Change the hostnames


Run the matching command on each of the three machines:

[root@centos7 ~] hostnamectl set-hostname master
[root@centos7 ~] hostnamectl set-hostname node01
[root@centos7 ~] hostnamectl set-hostname node02

 

Log out and log back in, and the prompt will show the newly set hostname (e.g. master).
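Alternatively, restarting the shell picks up the new name without a full re-login (a convenience, not required):

[root@centos7 ~] exec bash    # the prompt now shows the new hostname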

1.2 Edit the hosts file

[root@master ~] cat >> /etc/hosts << EOF
192.168.214.128 master
192.168.214.129 node01
192.168.214.130 node02
EOF

 

2. Verify MAC addresses and UUIDs

[root@master ~] cat /sys/class/dmi/id/product_uuid


Make sure the MAC address and product UUID are unique on every node.
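The product_uuid check is shown above; to compare MAC addresses as well, list the interfaces on each node (interface names vary by machine):

[root@master ~] ip link show    # compare the link/ether values across nodes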

3. Disable swap


3.1 Disable temporarily

[root@master ~] swapoff -a

 

3.2 Disable permanently


For the change to survive a reboot, you must also edit /etc/fstab after disabling swap and comment out the swap entry:

[root@master ~] sed -i.bak '/swap/s/^/#/' /etc/fstab
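A quick sanity check (assuming no other swap devices are configured elsewhere):

[root@master ~] free -m                 # the Swap line should read all zeros
[root@master ~] grep swap /etc/fstab    # the swap entry should now start with '#'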

 

4. Adjust kernel parameters


This guide uses flannel for the k8s network, which requires the kernel parameter bridge-nf-call-iptables=1.

4.1 Temporary change

[root@master ~] sysctl net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-iptables = 1
[root@master ~] sysctl net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-ip6tables = 1

 

4.2 Permanent change

[root@master ~] cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
[root@master ~] sysctl -p /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
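If sysctl reports that these keys do not exist ("No such file or directory"), the br_netfilter kernel module is not loaded. A sketch of loading it now and on every boot (the file name under /etc/modules-load.d/ is arbitrary):

[root@master ~] modprobe br_netfilter
[root@master ~] echo br_netfilter > /etc/modules-load.d/br_netfilter.conf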

 

5. Configure the Kubernetes yum repo


5.1 Add the Kubernetes repo

[root@master ~] cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

 

[kubernetes]   The string in brackets is the repository id; it must be unique and identifies the repo.
name           Repository name; free-form.
baseurl        Repository URL.
enabled        Whether the repo is enabled; the default 1 means enabled.
gpgcheck       Whether to verify the signatures of packages from this repo; 1 means verify.
repo_gpgcheck  Whether to verify the repo metadata (the package list); 1 means verify.
gpgkey=URL     Location of the public key used for signature verification; required when gpgcheck=1, unnecessary when gpgcheck=0.


5.2 Refresh the cache

[root@master ~] yum clean all
[root@master ~] yum -y makecache

 

III. Install Docker (skip if already installed)

Run this part on the master and on every node.

1. Install dependencies

[root@master ~] yum install -y yum-utils device-mapper-persistent-data lvm2

 

2. Configure the Docker repo



[root@master ~] yum-config-manager \
 --add-repo \
 https://download.docker.com/linux/centos/docker-ce.repo
# The default repo above is hosted overseas and is not recommended;
# the domestic Aliyun mirror is preferred:
[root@master ~] yum-config-manager \
 --add-repo \
 http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

3. Install Docker CE


3.1 List available Docker versions

[root@master ~] yum list docker-ce --showduplicates | sort -r

 

3.2 Install Docker

[root@master ~] yum install docker-ce-19.03.13 docker-ce-cli-19.03.13 containerd.io -y

 

This pins the installed Docker version to 19.03.13.

 

4. Start Docker and enable it at boot

[root@master ~] systemctl start docker
[root@master ~] systemctl enable docker

5. Command completion


5.1 Install bash-completion

[root@master ~] yum -y install bash-completion


5.2 Load bash-completion

[root@master ~] source /etc/profile.d/bash_completion.sh

6. Registry mirror (image acceleration)

Docker Hub's servers are overseas, so pulling images can be slow; configuring a registry mirror helps. The main options are Docker's official China registry mirror, the Alibaba Cloud accelerator, and the DaoCloud accelerator. This guide uses the Alibaba Cloud accelerator as the example.

 

6.1 Log in to the Alibaba Cloud container service


Log in at https://account.aliyun.com (an Alipay account works).

 

6.2 Configure the registry mirror


Create the daemon.json file:

[root@master ~] mkdir -p /etc/docker
[root@master ~] tee /etc/docker/daemon.json <<-'EOF'
{
 "registry-mirrors": ["https://23h04een.mirror.aliyuncs.com"]
}
EOF

 

Restart the service:

[root@master ~] systemctl daemon-reload
[root@master ~] systemctl restart docker


The accelerator is now configured.

7. Verify

[root@master ~] docker --version
[root@master ~] docker run hello-world

 

Checking the Docker version and running the hello-world container verifies that Docker installed successfully.

 

8. Change the Cgroup Driver


8.1 Edit daemon.json


Edit daemon.json and add "exec-opts": ["native.cgroupdriver=systemd"]:

[root@master ~] vim /etc/docker/daemon.json
{
 "registry-mirrors": ["https://23h04een.mirror.aliyuncs.com"],
 "exec-opts": ["native.cgroupdriver=systemd"]
}


8.2 Reload Docker

[root@master ~] systemctl daemon-reload
[root@master ~] systemctl restart docker


Changing the cgroup driver eliminates this kubeadm warning:
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
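To confirm the new driver is active:

[root@master ~] docker info | grep -i 'cgroup driver'   # expect: Cgroup Driver: systemd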

IV. Install k8s

Run this part on the master and on every node.

1. List available versions

[root@master ~] yum list kubelet --showduplicates | sort -r


This guide installs kubelet 1.19.8. Docker 19.03, installed above, is among the container runtime versions validated for this Kubernetes release.

2. Install kubelet, kubeadm, and kubectl

2.1 Install the three packages

[root@master ~] yum install -y kubelet-1.19.8 kubeadm-1.19.8 kubectl-1.19.8
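A quick sanity check that the pinned versions were installed:

[root@master ~] kubeadm version -o short          # should print v1.19.8
[root@master ~] kubectl version --client --short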

 

2.2 What the packages do


kubelet  Runs on every node in the cluster and starts Pods, containers, and other objects.
kubeadm  Initializes and bootstraps the cluster.
kubectl  The command-line tool for talking to the cluster: deploy and manage applications, inspect resources, and create, delete, or update components.


2.3 Start kubelet


Start kubelet and enable it at boot:

[root@master ~] systemctl enable kubelet && systemctl start kubelet

 

2.4 kubectl command completion

 

[root@master ~] echo "source <(kubectl completion bash)" >> ~/.bash_profile
[root@master ~] source .bash_profile ​

 

2.5 Check whether kubelet is running

 

On the master, kubelet must be active and free of fatal errors before you continue. On the node machines the status will not be active at this point; as long as there is no hard error you can proceed. (The sample output below was captured after the cluster had been initialized, hence the references to coredns and flannel.)

 

[root@master ~]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Wed 2021-03-03 07:56:55 EST; 52min ago
     Docs: https://kubernetes.io/docs/
 Main PID: 1350 (kubelet)
    Tasks: 19
   Memory: 139.0M
   CGroup: /system.slice/kubelet.service
           └─1350 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.2

Mar 03 07:59:58 master kubelet[1350]: E0303 07:59:58.872371    1350 pod_workers.go:191] Error syncing pod 6e1a9d9d-148c-4866-b6f2-4c1fbfaef8fa ("coredns-f9fd979d6-tgwj5_kube-system(6e1a9d9d-148c-4866-b6f2-4c1fbfaef8fa)"), skipping: failed to "C...148c-4866-b6f2-4c1fb
Mar 03 07:59:59 master kubelet[1350]: W0303 07:59:59.292810    1350 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-f9fd979d6-ttb6f_kube-system": CNI failed to retrieve network namespace...
Mar 03 07:59:59 master kubelet[1350]: W0303 07:59:59.339552    1350 pod_container_deletor.go:79] Container "81bc507c034f6e55212e82d1f3f4d12ecd28c3634095d3230e714eb6b3d03be3" not found in pod's containers
Mar 03 07:59:59 master kubelet[1350]: W0303 07:59:59.341532    1350 cni.go:333] CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "81bc507c034f6e55212e82d1f3f4d12ecd28c3634095d3230e714eb6b3d03be3"
Mar 03 07:59:59 master kubelet[1350]: W0303 07:59:59.347622    1350 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-f9fd979d6-tgwj5_kube-system": CNI failed to retrieve network namespace...
Mar 03 07:59:59 master kubelet[1350]: W0303 07:59:59.383585    1350 pod_container_deletor.go:79] Container "b0c032c78d077d21930da33ea95a971251c97e8c64bde258cc462da3bb5fe3c3" not found in pod's containers
Mar 03 07:59:59 master kubelet[1350]: W0303 07:59:59.384676    1350 cni.go:333] CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "b0c032c78d077d21930da33ea95a971251c97e8c64bde258cc462da3bb5fe3c3"
Mar 03 08:00:04 master kubelet[1350]: E0303 08:00:04.605581    1350 docker_sandbox.go:572] Failed to retrieve checkpoint for sandbox "9fca6d888406d4620913e3ccf7bc8cca41fb44cc56158dc8daa6e78479724e5b": checkpoint is not found
Mar 03 08:00:04 master kubelet[1350]: E0303 08:00:04.661462    1350 kuberuntime_manager.go:951] PodSandboxStatus of sandbox "55306b6580519c06a5b0c78a6eeaccdf38fe8331170aa7c011c0aec567dfd16b" for pod "coredns-f9fd979d6-tgwj5_kube-system(6e1a9d9d-148c-4866-b6f2-4c1f...
Mar 03 08:00:05 master kubelet[1350]: E0303 08:00:05.695207    1350 kuberuntime_manager.go:951] PodSandboxStatus of sandbox "5380257f06e5f7ea0cc39094b97825d737afa1013f003ff9097a5d5ef34c78ae" for pod "coredns-f9fd979d6-ttb6f_kube-system(63f24147-b4d2-4635-bc61-f7da...
Hint: Some lines were ellipsized, use -l to show in full.

 

3. Download the images


3.1 Image download script


Almost all of the Kubernetes components and their Docker images are hosted on Google's own servers and may be unreachable directly. The workaround here is to pull the images from the Alibaba Cloud mirror registry and then retag them back to the default image names. This guide pulls them by running the image.sh script:

[root@master ~] vim image.sh
#!/bin/bash
url=registry.aliyuncs.com/google_containers
version=v1.19.8
images=(`kubeadm config images list --kubernetes-version=$version|awk -F '/' '{print $2}'`)
for imagename in ${images[@]} ; do
 docker pull $url/$imagename
 docker tag $url/$imagename k8s.gcr.io/$imagename
 docker rmi -f $url/$imagename
done

 

url is the Alibaba Cloud registry address; version is the Kubernetes version being installed.

 

3.2 Run the download


Run image.sh to download the images for the specified version:

[root@master ~] chmod 775 image.sh
[root@master ~] ./image.sh
[root@master ~] docker images

V. Initialize the Master

Run this part on the master node only.

1. Initialize the master

 

[root@master ~] kubeadm init --apiserver-advertise-address=192.168.214.128 --kubernetes-version v1.19.8 --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16

--apiserver-advertise-address is the master's IP address.

--pod-network-cidr must match the Network address in kube-flannel.yml:

net-conf.json: |
  {
    "Network": "10.244.0.0/16",
    "Backend": {
      "Type": "vxlan"
    }
  }


Record the kubeadm join command printed in the output; it is needed later to join the worker nodes (and any additional control-plane nodes) to the cluster.

kubeadm join 192.168.214.128:6443 --token 4xpmwx.nw6psmvn9qi4d3cj \
    --discovery-token-ca-cert-hash sha256:c7cbe95a66092c58b4da3ad20874f0fe2b6d6842d28b2762ffc8d36227d7a0a7
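If the join command is lost, or the token expires (the default TTL is 24 hours), a fresh one can be generated on the master:

[root@master ~] kubeadm token create --print-join-command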

 

1.1 Check that everything is running

It is normal for the coredns pods to sit in Pending at first; they depend on the flannel DaemonSet installed later.

kube-system   coredns-66bff467f8-c79f7                      0/1     Pending   0          24h
kube-system   coredns-66bff467f8-ncpd6                      0/1     Pending   0          24h
# Once flannel is running, these two become Running.
[root@master ~]# kubectl get pod -n kube-system -o wide
NAME                             READY   STATUS    RESTARTS   AGE    IP               NODE     NOMINATED NODE   READINESS GATES
coredns-f9fd979d6-tgwj5          1/1     Running   4          9h     10.244.0.11      master   <none>           <none>
coredns-f9fd979d6-ttb6f          1/1     Running   4          9h     10.244.0.10      master   <none>           <none>
etcd-master                      1/1     Running   8          9h     172.17.114.109   master   <none>           <none>
kube-apiserver-master            1/1     Running   5          132m   172.17.114.109   master   <none>           <none>
kube-controller-manager-master   1/1     Running   8          130m   172.17.114.109   master   <none>           <none>
kube-flannel-ds-hs79q            1/1     Running   5          9h     172.17.114.109   master   <none>           <none>
kube-flannel-ds-qtt7s            1/1     Running   2          9h     172.17.114.103   node01   <none>           <none>
kube-proxy-24dnq                 1/1     Running   2          9h     172.17.114.103   node01   <none>           <none>
kube-proxy-vbdg2                 1/1     Running   4          9h     172.17.114.109   master   <none>           <none>
kube-scheduler-master            1/1     Running   7          130m   172.17.114.109   master   <none>           <none>

 

Health check:

[root@master ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}

[root@master ~]# kubectl get ns
NAME                   STATUS   AGE
default                Active   9h
kube-node-lease        Active   9h
kube-public            Active   9h
kube-system            Active   9h
kubernetes-dashboard   Active   7h6m

2. If initialization fails

If kubeadm init fails, run kubeadm reset and then initialize again:

[root@master ~] kubeadm reset
[root@master ~] rm -rf $HOME/.kube/config

 

3. Load environment variables

1 [root@master ~] echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
2 [root@master ~] source .bash_profile​

 

All operations in this guide run as root. For a non-root user, run the following instead:

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

4. Install the flannel network

4.1 Configure hosts (optional; see the download note below)

raw.githubusercontent.com often fails to resolve from inside China. First look up its real IP at https://tool.chinaz.com/dns/?type=1&host=raw.githubusercontent.com&ip= and then add the matching hosts entry:

cat >> /etc/hosts << EOF
151.101.108.133 raw.githubusercontent.com
EOF
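With the hosts entry in place the manifest can be fetched directly. The URL below is the historical upstream location and may have moved since; the full file contents are also reproduced in the next section, so downloading is optional:

[root@master ~] wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml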

4.2 Create the flannel network on the master

Create kube-flannel.yml with vi; the file contents are as follows:

---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.13.1-rc2
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.13.1-rc2
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
[root@master ~] kubectl apply -f kube-flannel.yml
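To watch the flannel pods come up (one per node; the app=flannel label comes from the manifest above) and confirm coredns flips to Running:

[root@master ~] kubectl get pods -n kube-system -l app=flannel -o wide
[root@master ~] kubectl get pods -n kube-system | grep coredns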

 

VI. Join the nodes to the cluster

1. Join the nodes

kubeadm join 192.168.214.128:6443 --token 4xpmwx.nw6psmvn9qi4d3cj \
    --discovery-token-ca-cert-hash sha256:c7cbe95a66092c58b4da3ad20874f0fe2b6d6842d28b2762ffc8d36227d7a0a7

2. View the cluster nodes

[root@master ~] kubectl get nodes
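Nodes report NotReady until flannel is running on them; a watch (a convenience, Ctrl-C to stop) shows them flip to Ready:

[root@master ~] kubectl get nodes -w    # wait until master, node01 and node02 are Ready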

VII. Set up the Dashboard

The Dashboard provides cluster management, workload views, service discovery and load balancing, storage, ConfigMap editing, log viewing, and more.

1. Download the yaml

vi recommended.yaml

The file contents are as follows:

# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.2.0
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.6
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
          - mountPath: /tmp
            name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}

2. Configure the yaml


2.1 Change the image addresses (skipped here; the images in the file above are used directly)

[root@client ~] sed -i 's/kubernetesui/registry.cn-hangzhou.aliyuncs.com\/loong576/g' recommended.yaml

 

If the default registry is unreachable from your network, switch to the Alibaba mirror with the sed above.

 

2.2 External access

[root@client ~] sed -i '/targetPort: 8443/a\ \ \ \ \ \ nodePort: 30001\n\ \ type: NodePort' recommended.yaml


This configures a NodePort so the Dashboard is reachable externally at https://NodeIp:NodePort, here port 30001. (The recommended.yaml reproduced above already contains the nodePort: 30001 and type: NodePort settings, so this step only applies to a freshly downloaded file.)

 

3. Deploy and access


3.1 Deploy the Dashboard

[root@client ~] kubectl apply -f recommended.yaml


3.2 Check status

[root@client ~] kubectl get all -n kubernetes-dashboard

 

3.3 Generate a token


Create a service account and bind it to the built-in cluster-admin role:

# Create the user
$ kubectl create serviceaccount dashboard-admin -n kube-system
# Grant the user cluster-admin
$ kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin

 

3.4 View the token

[root@client ~] kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')

 

The token is:

eyJhbGciOiJSUzI1NiIsImtpZCI6ImhPdFJMQVZCVmRlZ25HOUJ5enNUSFV5M3Z6NnVkSTBQR0gzbktqbUl3bGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tczhtbTciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiN2JjNWY0ZTktNmQwYy00MjYxLWFiNDItMDZhM2QxNjVkNmI4Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.N5eoH_rjPVX4a4BJwi2P-f3ohVLq2W0afzkaWomFk5iBe1KMczv5zAzz0EYZwnHkNOZ-Ts7Z6Z778vo7fR5_B-6KRNvdE5rg5Cq6fZKBlRvqB9FqYA0a_3JXCc0FK1et-97ycM20XtekM1KBt0uyHkKnPzJJBkqAedtALbJ0ccWU_WmmKVHKzfgXTiQNdiM-mqyelIxQa7i0KFFdnyN7Euhh4uk_ueXeLlZeeijxWTOpu9p91jMuN45xFuny0QkxxQcWlnjL8Gz7mELemEMOQEbhZRcKSHleZ72FVpjvHwn0gg7bQuaNc-oUKUxB5VS-h7CF8aOjk-yrLvoaY3Af-g

 

3.5 Access


Open https://192.168.214.128:30001/ in a browser, accept the certificate warning, and enter the token to manage the k8s cluster graphically.

VIII. Troubleshooting

1. Problem one

Fixing this error when running kubectl on a node:

The connection to the server localhost:8080 was refused - did you specify the right host or port?

1.1 Analysis

This happens because kubectl must run with the kubernetes-admin credentials, which the node does not have by default.


1.2 Fix


Copy /etc/kubernetes/admin.conf from the master node to the same path on the node:

scp /etc/kubernetes/admin.conf ${node1}:/etc/kubernetes/admin.conf

Configure the environment variable:

echo export KUBECONFIG=/etc/kubernetes/admin.conf >> ~/.bash_profile

Apply it immediately:

source ~/.bash_profile
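kubectl on the node should now reach the API server:

[root@node01 ~] kubectl get nodes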

2. Problem two

After deploying a service, curl https://localhost:30001 succeeds on both the master and the nodes, yet port 30001 on the master cannot be reached from outside or from the nodes, while port 30001 on the nodes is reachable externally. The master's firewall is off, a plain HTTP server started on the master is reachable from outside, and everything in kubectl looks healthy.

2.1 Fix

Edit kube-controller-manager.yaml and kube-scheduler.yaml under /etc/kubernetes/manifests and comment out the --port=0 line, as shown in the sketch below.

Just save the files; no service restart is needed, as k8s restarts the static pods automatically.
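A minimal sketch of the edit (an in-place sed that comments out the flag; review the files afterwards):

[root@master ~] sed -i 's/^\(\s*\)- --port=0/\1# - --port=0/' /etc/kubernetes/manifests/kube-controller-manager.yaml
[root@master ~] sed -i 's/^\(\s*\)- --port=0/\1# - --port=0/' /etc/kubernetes/manifests/kube-scheduler.yaml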
