Installing Kubernetes (k8s): A Step-by-Step, Pitfall-Free Guide

I. Environment Overview

Hardware requirements

Memory: 2 GB of RAM or more

CPU: 2 cores or more

Disk: 30 GB or more

Environment used in this guide:

Operating system: CentOS 7.9

Kernel version: 3.10.0-1160

master: 192.168.68.106

node01: 192.168.68.107

node02: 192.168.68.108

If you republish this article, please include a link to the original at the top: https://www.cnblogs.com/Sunzz/p/15184167.html

II. Environment Preparation

1. Disable the firewall and SELinux

Disable the firewall:

systemctl stop firewalld && systemctl disable firewalld && iptables -F

Disable SELinux:

sed -i 's/enforcing/disabled/' /etc/selinux/config && setenforce 0
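To confirm the change for the current boot, you can check:

getenforce
# prints Permissive now; after a reboot with the config change above it reports Disabled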

2. Disable the swap partition

Disable it temporarily:

swapoff -a

Disable swap permanently:

sed -ri 's/.*swap.*/#&/' /etc/fstab
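To verify that swap is fully off:

free -h       # the Swap line should show 0B
swapon -s     # should print nothing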

3. Edit the hosts file

Set the hostnames (optional, but every host must have a unique name).

On master:

hostnamectl set-hostname master.local

On node01:

hostnamectl set-hostname node01.local

On node02:

hostnamectl set-hostname node02.local

Edit the local hosts file: run vi /etc/hosts and add the following

192.168.68.106 master.local
192.168.68.107 node01.local
192.168.68.108 node02.local
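A quick check that the names resolve from each machine:

ping -c 1 master.local
ping -c 1 node01.local
ping -c 1 node02.local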

4. Adjust kernel parameters

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

sysctl --system
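The net.bridge.* keys only exist once the br_netfilter module is loaded; if sysctl --system reports them as missing, load the module first (and persist it the same way as the ip_vs modules in the next step), then re-run sysctl:

modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
sysctl --system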

5. Load the ip_vs kernel modules

These modules are only required if kube-proxy runs in ipvs mode; this guide uses iptables mode.

modprobe ip_vs
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
modprobe nf_conntrack_ipv4

Load them automatically on the next boot:

cat > /etc/modules-load.d/ip_vs.conf << EOF 
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF
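To confirm the modules are loaded:

lsmod | grep -e ip_vs -e nf_conntrack_ipv4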

III. Install Docker

1. Configure the yum repository (the Aliyun mirror is used here)

yum install wget -y 
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

2. Install Docker

yum install docker-ce docker-ce-cli -y

3. Write the Docker configuration file

Create and edit /etc/docker/daemon.json:

mkdir -p /etc/docker/
cat > /etc/docker/daemon.json << EOF
{
"registry-mirrors": ["https://gqs7xcfd.mirror.aliyuncs.com","https://hub-mirror.c.163.com"],
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2"
}
EOF

4. Start the Docker service

systemctl daemon-reload && systemctl enable docker && systemctl start docker
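To confirm Docker is up and using the systemd cgroup driver configured above:

docker info | grep -i "cgroup driver"
# should print: Cgroup Driver: systemd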

5. (Optional) Install a specific Docker version

List all available Docker versions:

yum list docker-ce.x86_64 --showduplicates | sort

Pick the version you want; here Docker 19.03.9 is installed:

yum -y install docker-ce-19.03.9-3.el7 docker-ce-cli-19.03.9-3.el7

IV. Install kubeadm, kubelet, and kubectl

1. Configure the yum repository (the Aliyun mirror is used here)

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

2. Install a specific version of kubeadm, kubelet, and kubectl

yum install -y kubelet-1.18.8 kubeadm-1.18.8 kubectl-1.18.8

The same procedure works for other versions; just specify the version you want. For example, to install 1.16.9:

yum install -y kubelet-1.16.9 kubeadm-1.16.9 kubectl-1.16.9

The reason for pinning a version: without one, the latest release is installed, and the Aliyun mirror can lag behind upstream, which may make the corresponding images impossible to pull. If you can pull the latest images, feel free to use the latest version.

3. Enable kubelet at boot

systemctl enable kubelet

4. List all available versions

yum list kubelet --showduplicates

V. Deploy the Kubernetes Master Node

1. Initialize the master node

kubeadm init \
  --kubernetes-version 1.18.8 \
  --apiserver-advertise-address=0.0.0.0 \
  --service-cidr=10.96.0.0/16 \
  --pod-network-cidr=10.245.0.0/16 \
  --image-repository registry.aliyuncs.com/google_containers 

Parameter notes:

--kubernetes-version v1.18.8  specifies the Kubernetes version.
--apiserver-advertise-address  the IP address advertised to the other cluster components; normally the master node's IP.
--service-cidr  the Service network; it must not overlap with the node network.
--pod-network-cidr  the Pod network; it must not overlap with the node network or the Service network.
--image-repository registry.aliyuncs.com/google_containers  the image registry. The default registry k8s.gcr.io is unreachable from mainland China, so the Aliyun mirror is used here. For very new Kubernetes versions Aliyun may not have the images yet, and you will have to obtain them elsewhere.
--control-plane-endpoint  should be set to the address (or DNS name) and port of a load balancer (optional).

Note:

The version must match the kubelet, kubeadm, and kubectl versions installed above.

2. Wait for the images to be pulled

You can also pre-pull the images on each node ahead of time. To list the required images: kubeadm config images list --kubernetes-version 1.18.8

Once the images are pulled, kubeadm continues initializing the cluster. When initialization completes you will see a success message ending with a kubeadm join command; keep the last two lines of that output, as they are needed later.
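If you do want to pre-pull on each node, a short sketch using the same version and mirror as the init command above:

kubeadm config images pull \
  --kubernetes-version 1.18.8 \
  --image-repository registry.aliyuncs.com/google_containers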

3. Configure kubectl

These are the three commands printed at the end of a successful init:

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
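To confirm kubectl can reach the cluster (as root you can alternatively just point KUBECONFIG at /etc/kubernetes/admin.conf):

kubectl cluster-info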

4. Check the node list

kubectl get nodes

At this point only the master node is listed; the other nodes will show up once they join.

VI. Join the Node Machines to the Cluster

Each node must first go through steps II, III, and IV above before it can join the cluster.

1. Join node01 to the cluster

kubeadm join 192.168.68.106:6443 --token 1quyaw.xa7yel3xla129kfw \
    --discovery-token-ca-cert-hash sha256:470410e1180b119ebe8ee3ae2842e7a4a852e590896306ec0dab26b168d99197
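The token and hash above come from this cluster's kubeadm init output. If you no longer have them, the full join command can be regenerated on the master at any time:

kubeadm token create --print-join-command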

2. Run the same command on node02; the steps are identical, so they are not repeated here.

3. Check the cluster nodes from the master

kubectl get nodes

Both nodes show a STATUS of NotReady. This is because no network plugin has been installed yet; they will become Ready once the network plugin is deployed.

VII. Install the Network Plugin

1. Install flannel

Download the yaml file from the official repository:

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Or copy it directly from here:

---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.245.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.14.0
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.14.0
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg

Change the network setting around line 128 of the file (the Network field in net-conf.json) so that it matches the --pod-network-cidr used during kubeadm init.
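If you downloaded the upstream kube-flannel.yml, whose default Network is 10.244.0.0/16, a one-line edit along these lines does it (adjust the CIDRs to your own values):

sed -i 's#10.244.0.0/16#10.245.0.0/16#' kube-flannel.yml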

Then apply the yaml file:

kubectl apply -f kube-flannel.yml

2. Check the flannel deployment

kubectl -n kube-system get pods -o wide

 

3. Check the status of each node

kubectl get nodes

4. Switch the cluster's kube-proxy mode to iptables

Kubernetes 1.18 is fairly demanding on the kernel: running 1.18.8 in ipvs mode on the 3.10 kernel causes problems such as coredns failing to resolve, so iptables mode is used here. If your servers run a 4.x or newer kernel, either iptables or ipvs is fine.

kubectl get cm kube-proxy -n kube-system -o yaml | sed 's/mode: ""/mode: "iptables"/' | kubectl apply -f -
kubectl -n kube-system rollout restart  daemonsets.apps  kube-proxy
kubectl -n kube-system rollout restart  daemonsets.apps  kube-flannel-ds
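To confirm the change took effect, check the ConfigMap again and, assuming the standard k8s-app=kube-proxy label on the kube-proxy pods, look at the proxier reported in their logs:

kubectl -n kube-system get cm kube-proxy -o yaml | grep 'mode:'
kubectl -n kube-system logs -l k8s-app=kube-proxy --tail=50 | grep -i proxier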

For upgrading the kernel, see: https://www.cnblogs.com/Sunzz/p/15624582.html

VIII. Deploy busybox to Test Cluster Networking

busybox.yaml:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox
spec:
  replicas: 2
  selector:
    matchLabels:
      name: busybox
  template:
    metadata:
      labels:
        name: busybox
    spec:
      containers:
      - name: busybox
        image: busybox  
        imagePullPolicy: IfNotPresent
        args:
        - /bin/sh
        - -c
        - sleep 1; touch /tmp/healthy; sleep 30000
        readinessProbe:   
          exec:
            command:
            - cat
            - /tmp/healthy
          initialDelaySeconds: 1 

Apply it:

kubectl apply -f busybox.yaml

1. List all pod IPs and service IPs in the cluster
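For example:

kubectl get pods -A -o wide
kubectl get svc -A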

2. Check whether pods on different nodes can reach each other

kubectl exec -it busybox-7c84546778-h6t2d -- /bin/sh

10.245.2.6 is the IP of the other busybox pod.
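Inside the pod's shell, ping the other pod's IP (use the pod IP and pod name reported by kubectl get pods -o wide in your own cluster):

ping -c 3 10.245.2.6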

3. Check whether pods can reach each node

From inside the pod, ping each node's IP.
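For example, from the busybox shell (these are the node IPs of this guide's environment):

ping -c 3 192.168.68.106
ping -c 3 192.168.68.107
ping -c 3 192.168.68.108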

4. Check pod-to-service networking
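One quick check, assuming the busybox image ships the telnet applet, is to connect from the busybox shell to the kubernetes API service, which with the service CIDR above sits at the first address of the range (confirm the IP with kubectl get svc kubernetes):

telnet 10.96.0.1 443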

5. Test whether coredns resolves correctly
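From the busybox shell, resolving the kubernetes service name is the usual check (note that nslookup is known to be broken in some newer busybox tags; busybox:1.28 is a commonly used workaround):

nslookup kubernetes.default.svc.cluster.local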


IX. Deploy metrics-server and nginx-ingress

1. Deploy nginx-ingress

apiVersion: v1
kind: Namespace
metadata:
  name: nginx-ingress

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress 
  namespace: nginx-ingress

---
apiVersion: v1
kind: Secret
metadata:
  name: default-server-secret
  namespace: nginx-ingress
type: Opaque
data:
  tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURWekNDQWorZ0F3SUJBZ0lKQUtaRTllUmZaRlJSTUEwR0NTcUdTSWIzRFFFQkN3VUFNRUl4SHpBZEJnTlYKQkFNTUZrNUhTVTVZU1c1bmNtVnpjME52Ym5SeWIyeHNaWEl4SHpBZEJnTlZCQW9NRms1SFNVNVlTVzVuY21WegpjME52Ym5SeWIyeHNaWEl3SGhjTk1qRXdNVEkxTURjeE9USTRXaGNOTXpFd01USXpNRGN4T1RJNFdqQkNNUjh3CkhRWURWUVFEREJaT1IwbE9XRWx1WjNKbGMzTkRiMjUwY205c2JHVnlNUjh3SFFZRFZRUUtEQlpPUjBsT1dFbHUKWjNKbGMzTkRiMjUwY205c2JHVnlNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQQp0M1hVcFhYcUdDUTJBblRTd21DMjZ1bzRudGhmQWNYd3U4REdLUVlBSGQ1VlFKaDRPdStSWEJ4M1BTMVlTY3VNCmVTVXExV1R1ejQzRW9tRHZOWkdHcmNkSlJrWDhjYk05WWNBanlWZ0pnd0cxVFB0OUl2ZzVibWJBWE1INnR3OTUKRGJiOFFlSWVSeXBUWVl6bXJmMnJxaU45aXdROUMzSHhySDc3ektoZ1hxT20vZ09CdGpWRVhKc3lSeEpTcUdJZQpiNUp6NENJK0FzaUZGYWRjMHZhcmpvei8wTTJuS1JaU3VUQThmRHhsMUpTZlVJc2hYWUt1YmZvMzh0ck1paHFoCnh2Q09SMWFJUmxPaDNDTXAyekRoUm9nNHpjTFdyUHBXQnlyVkN2aGVBWWp2MitpWHlTQ1MyU2pRbmV4UE43aWoKUE5Kc0hpZVZMdVE0R0VCYTJ3RkxFUUlEQVFBQm8xQXdUakFkQmdOVkhRNEVGZ1FVSllDdmVtZFpFQVM4eGIxUAo3bVo3YjBGRFViZ3dId1lEVlIwakJCZ3dGb0FVSllDdmVtZFpFQVM4eGIxUDdtWjdiMEZEVWJnd0RBWURWUjBUCkJBVXdBd0VCL3pBTkJna3Foa2lHOXcwQkFRc0ZBQU9DQVFFQW1Nc2QwbVRtaFBXc2pvejh0NUN2dTdXMVNhT2EKTzRQTlRNakJ3eEpHZUt1ZUlGNytSZWR3WUVaWXJ0M1hiY0NHRDAxMklFMjNJK1k0eWs4ekFkdWxtMmtZRUdPZQp6UDQyRGpCcFpZelhiUmpkQkY0aVFlMWR6K0xVWGJyK096bWFoRjMwOW5MVU5KVStNTStHbVVUbmlwdy90dGxnCkszNlI3WWpuSzNpelhqaE8rdFJvWnp5eWhsdDRVcmQxYTN0Tkxacy8yNk14akZ2aGlOWUNNSmh4blMzUlVzWXYKdllxUUw0WkQ1b0VaTVIvUEN3VlR3OHpTUEVuU0hlMnFLT2dmZGlOYTEwTHpYYzc0eGNmK2FZTmlvcndNdEpXdwo5NnR1WExUN3ZNMW11Ky9oM3BCR0RaMFY3SitwZTlvV2daZSsrNVVINyswK2tpSlN5S1VCN29LMHN3PT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQ==
  tls.key: LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUV2Z0lCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQktnd2dnU2tBZ0VBQW9JQkFRQzNkZFNsZGVvWUpEWUMKZE5MQ1lMYnE2amllMkY4QnhmQzd3TVlwQmdBZDNsVkFtSGc2NzVGY0hIYzlMVmhKeTR4NUpTclZaTzdQamNTaQpZTzgxa1lhdHgwbEdSZnh4c3oxaHdDUEpXQW1EQWJWTSszMGkrRGx1WnNCY3dmcTNEM2tOdHZ4QjRoNUhLbE5oCmpPYXQvYXVxSTMyTEJEMExjZkdzZnZ2TXFHQmVvNmIrQTRHMk5VUmNtekpIRWxLb1loNXZrblBnSWo0Q3lJVVYKcDF6UzlxdU9qUC9RemFjcEZsSzVNRHg4UEdYVWxKOVFpeUZkZ3E1dCtqZnkyc3lLR3FIRzhJNUhWb2hHVTZIYwpJeW5iTU9GR2lEak53dGFzK2xZSEt0VUsrRjRCaU8vYjZKZkpJSkxaS05DZDdFODN1S004MG13ZUo1VXU1RGdZClFGcmJBVXNSQWdNQkFBRUNnZ0VBQlAvbkRhTksvK0ZzdjJCanBmeHd2N0ltWE4zVXFQMjE4OGZySG84VlRic1QKWTdGRUJZY2wxUGJKb1JjdFFzV1RUSEhnMnZQbk5pek00UWYzUE9SOFlSdi9PVFVMRGlZdVZBMmliQWhFS2hmUAowd3MvZThaNytqQStxY2gzaHFtYlNPNWxyWDMyQ1VaMEEwS052c3djODRRSUZkUEZ2aHdhMC9LWjloZlltSHVkCmFOcGcvbzJuS3dSUVJSYjZYVkpsWWNjTHcvMmE1VzdQMGM0NENWWnI0VDh5MEErWHBUaTF3YUh6VlloQTZOc1QKemE1emJBS3ljMjZ2NWRhQ3VCT1JhYjkzWTFIUFRjOG9VRGIwR1VQa3BNd2dOaW9TV01rakNHRzBWWGtRUFE4NwplWlVteDNrbXNCVDF6RGtPR3pnNm9oSmlpSVQvcFhoVWVIckc1K0doWVFLQmdRRG5vUE0xQmNkeFArTGpMWDVrCnJqamxuSzA3SlQ3N0RlZ0o4VmtUQU90VDQ2cmJVUGxTT0VQTEV4ZUF6dnBkM3VWNkYwa1dYcDI0L0oydlZrSXcKTklQOWJxZ0RQYzBldFdhT0hLMHloanRuM1N1VTJreGtaQTBGN3lHNEFnTzU5SlZidWNnYWpVOStKcDc0TEdHTQprVmUrc09WMGlNVDRmWGMyQzJLUWJtNlduUUtCZ1FES3cyL0lyM25ucUI0dU44bFVSdUZ6MnBNQytTK3RLZlFVCi8xSEFSODAvRDZOYU5xdGhKcXFLd015Z091OGVjcTBMVUhFdm43T1ZyTk9aYzBHcktFQzJnTjF6YXRXVURHcWYKY3J5NnovRjVVQVkwOTI5K004bUJMeWFKeldudWtYaUUwcURlQ2p0cDh4Y3hiZnJOZ1JBU1h3NytudW1jb0RxUAoxb0dSMnQyaUJRS0JnUUNRS1U5Vlg5eHFzdDF1a1VFS1Bwanc1NXUxcFEvV3h5ZjFFRDVsSW54VXdPejFCU2UzCnNZY1lIRERUblg2YjcvK1pCbWNad2hlZUs3T2tqaVl4eEcybHpUcEtraXRaQW9QcXpSUkt6dHFvWVRJZnVlSXoKMVVWNXZRU2FkcjZFL1NIOGJkdUtFd3MzczZmYlJCd09sZU1ycndPUWpSTXlxVHdKNmZvVmRIWGx6UUtCZ0RCUwp3WnBmajdzUkN4aFN2VTJ6a3Rtc2x1clhmbkJUbGxOR3dqSUVLcnREdTlldFBjenFqU3lDWklJdmFYdWxNdTZHClhtTk9PVnVMaytaM1hJZ3hFTE11SlJqenRqRVJnSHU5dVpNQUtmbVNnOWd0dkVta2gvcWN4UitFY0NHbVU4VzcKK1JEUitYVDN0V2hYWUxXSGM5QWREWkxMUnJ2SVNBeXR2N1dHSnRvTkFvR0JBSlJPMzhxOVhuRHZBeEVRUzdpWgphd2dOWFA2TnBiSnVIeUo0cDJ4VldIV0N4VDhrL0E3eDVrb0FrTEpudTUvRE10d0pVL09ZRUpPY1k1b0l2UUVsCnAybWkxM3dEcVpvNml4d2kvU3h5T2xRQ2ZmNHBjNTlTUzE1WVM1RGVEeVVuakcwNTNha2VyQ3R5TGQ4Z2xnOUYKSTE2Y05TQXdTbDFvVi9KM21qOGdzZHpmCi0tLS0tRU5EIFBSSVZBVEUgS0VZLS0tLS0=
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-config
  namespace: nginx-ingress
data:
  client-max-body-size: "0"
  server-names-hash-bucket-size: "1024"


---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: nginx-ingress
rules:
- apiGroups:
  - ""
  resources:
  - services
  - endpoints
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - secrets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - get
  - list
  - watch
  - update
  - create
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - list
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs:
  - list
  - watch
  - get
- apiGroups:
  - "extensions"
  resources:
  - ingresses/status
  verbs:
  - update
- apiGroups:
  - k8s.nginx.org
  resources:
  - virtualservers
  - virtualserverroutes
  verbs:
  - list
  - watch
  - get

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: nginx-ingress
subjects:
- kind: ServiceAccount
  name: nginx-ingress
  namespace: nginx-ingress
roleRef:
  kind: ClusterRole
  name: nginx-ingress
  apiGroup: rbac.authorization.k8s.io

---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "9113"
spec:
  selector:
    matchLabels:
      app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      serviceAccountName: nginx-ingress
      containers:
      - image: nginx/nginx-ingress:1.5.5
        imagePullPolicy: IfNotPresent
        name: nginx-ingress
        ports:
        - name: http
          containerPort: 80
          hostPort: 80
        - name: https
          containerPort: 443
          hostPort: 443
        - name: prometheus
          containerPort: 9113
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        args:
          - -nginx-configmaps=$(POD_NAMESPACE)/nginx-config
          - -default-server-tls-secret=$(POD_NAMESPACE)/default-server-secret
          - -enable-prometheus-metrics

Save the manifest above as nginx-ingress.yaml, then apply it:

kubectl apply -f nginx-ingress.yaml
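The controller runs as a DaemonSet, so there should be one pod on each worker node:

kubectl -n nginx-ingress get pods -o wide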

2. Deploy metrics-server

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        # Skip TLS verification; works around the error: cannot validate certificate for 192.168.65.3 because it doesn't contain any IP SANs
        - --kubelet-insecure-tls
        image: k8s.gcr.io/metrics-server/metrics-server:v0.5.0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          initialDelaySeconds: 20
          periodSeconds: 10
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
        - mountPath: /etc/localtime
          name: host-time

      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
      - name: host-time
        hostPath:
          path: /etc/localtime

---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100

Save the manifest above as metrics-server.yaml, then apply it:

kubectl apply -f metrics-server.yaml
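Once the metrics-server pod is Running (it can take a minute or two before metrics are available), resource usage can be queried with:

kubectl top nodes
kubectl top pods --all-namespaces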

 
