A Detailed Guide to Deploying a Kubernetes Cluster from Binaries

The deployment methods Kubernetes offers officially

  • minikube

Minikube is a tool that quickly runs a single-node Kubernetes locally, aimed at users trying out Kubernetes or doing day-to-day development. It is not suitable for production.

Official docs: https://kubernetes.io/docs/setup/minikube/

  • kubeadm

Kubeadm is a tool that provides kubeadm init and kubeadm join for standing up a Kubernetes cluster quickly.

Official docs: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/

  • Binary packages

Download the release binaries from the official site and deploy each component by hand to assemble a Kubernetes cluster.

Summary:
For production, only kubeadm and binary packages are realistic options. Kubeadm lowers the barrier to entry but hides many details, which makes troubleshooting hard. Here we deploy from binaries, which is also what I recommend: manual deployment is more work, but you learn how everything fits together, which pays off in later maintenance.

Software environment

Software      Version
------------  --------------
OS            CentOS 7.5 x64
Docker        19.03.4-ce
Kubernetes    1.12
etcd          v3.3.10

Server roles

Role        IP             Components
----------  -------------  -------------------------------------------------------------
k8s-master  192.168.0.114  kube-apiserver, kube-controller-manager, kube-scheduler, etcd
k8s-node1   192.168.0.29   kubelet, kube-proxy, docker, flannel, etcd
k8s-node2   192.168.0.232  kubelet, kube-proxy, docker, flannel, etcd


Architecture diagram

1. System initialization

  1. Configure hostnames (run the matching command on every node)

        hostnamectl set-hostname k8s-master

  2. Configure /etc/hosts

        192.168.0.114 k8s-master

        192.168.0.29  k8s-node1

        192.168.0.232 k8s-node2

  3. Disable SELinux and the firewall

        systemctl disable firewalld

        systemctl stop firewalld

        sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

  4. Configure PATH (all later Kubernetes commands live under /opt/kubernetes/bin; add this line to ~/.bash_profile)

        PATH=$PATH:$HOME/bin:/opt/kubernetes/bin

        source ~/.bash_profile

  5. Use a domestic Docker mirror

        cd /etc/yum.repos.d/

        wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

  6. Install and start Docker

        yum install -y docker-ce

        systemctl start docker && systemctl enable docker && systemctl status docker

  7. Prepare the deployment directories

        mkdir -p /opt/kubernetes/{cfg,bin,ssl,log}

        # Directory layout; everything lives under /opt/kubernetes:

        # tree -L 1 /opt/kubernetes/

        /opt/kubernetes/

        ├── bin   # binaries

        ├── cfg   # config files

        ├── log   # log files

        └── ssl   # certificates

  8. Set up passwordless SSH from the master to the node machines, to simplify the rest of the setup

        ssh-keygen

        ssh-copy-id root@k8s-node1

        ssh-copy-id root@k8s-node2

  9. Disable swap: swapoff -a (temporary)

        To make it permanent, comment out the swap entry in /etc/fstab
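The fstab edit can be scripted instead of done by hand in vim. A minimal sketch, demonstrated on a scratch copy so it is safe to try; on a real node apply the same sed to /etc/fstab:

```shell
# Disable swap now (needs root; failure is ignored here so the sketch runs anywhere):
swapoff -a 2>/dev/null || true

# Comment out any active swap entry so swap stays off after reboot.
# Demonstrated on a scratch copy of an fstab:
printf '/dev/sda1 / xfs defaults 0 0\n/dev/mapper/centos-swap swap swap defaults 0 0\n' > /tmp/fstab.demo
sed -i 's/^\([^#].*[[:space:]]swap[[:space:]]\)/#\1/' /tmp/fstab.demo
grep '^#' /tmp/fstab.demo
```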

 

2. CA certificate generation

The Kubernetes components encrypt their traffic with TLS. We use CloudFlare's PKI toolkit, cfssl, to generate the Certificate Authority (CA) and all other certificates.

Certificates used by each component:

etcd: ca.pem, etcd-key.pem, etcd.pem (generated in section 3)

kube-apiserver: ca.pem, kubernetes-key.pem, kubernetes.pem

kubelet: ca.pem (plus the certificate issued to it via TLS bootstrapping)

kube-proxy: ca.pem, kube-proxy-key.pem, kube-proxy.pem

kubectl: ca.pem, admin-key.pem, admin.pem

kube-controller-manager and kube-scheduler currently run on the same machine as kube-apiserver and talk to it over the insecure port, so they need no certificates.

(We will generate each component's certificate right before the step that uses it, to avoid mistakes.)

2.1 Install CFSSL (from binaries)

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64

wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64

wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64

chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64

mv cfssl-certinfo_linux-amd64 /opt/kubernetes/bin/cfssl-certinfo

mv cfssl_linux-amd64  /opt/kubernetes/bin/cfssl

mv cfssljson_linux-amd64  /opt/kubernetes/bin/cfssljson

chmod +x  /opt/kubernetes/bin/cf*

scp /opt/kubernetes/bin/cfssl* k8s-node1:/opt/kubernetes/bin

scp /opt/kubernetes/bin/cfssl* k8s-node2:/opt/kubernetes/bin

2.2 Initialize cfssl

cd /usr/local/src

mkdir ssl && cd ssl

cfssl print-defaults config > config.json

cfssl print-defaults csr > csr.json

2.3 Create the JSON config used to generate the CA

cat > ca-config.json <<EOF

{

  "signing": {

    "default": {

      "expiry": "8760h"

    },

    "profiles": {

      "kubernetes": {

        "usages": [

            "signing",

            "key encipherment",

            "server auth",

            "client auth"

        ],

        "expiry": "8760h"

      }

    }

  }

}

EOF

2.4 Create the JSON config for the CA certificate signing request (CSR)

cat > ca-csr.json <<EOF

{

  "CN": "kubernetes",

  "key": {

    "algo": "rsa",

    "size": 2048

  },

  "names": [

    {

      "C": "CN",

      "ST": "BeiJing",

      "L": "BeiJing",

      "O": "k8s",

      "OU": "System"

    }

  ]

}

EOF

2.5 Generate the CA certificate (ca.pem) and key (ca-key.pem)

cd /usr/local/src/ssl

cfssl gencert -initca ca-csr.json | cfssljson -bare ca

ls -l ca*
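Optionally sanity-check the CA before distributing it. The cfssl-certinfo tool installed in 2.1 dumps a certificate as JSON; plain openssl works too. The snippet below demonstrates the openssl check on a throwaway self-signed certificate so it runs anywhere; on the real host, point the same command at ca.pem:

```shell
# Generate a throwaway cert purely to demonstrate the inspection commands:
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout /tmp/demo-ca-key.pem -out /tmp/demo-ca.pem \
  -subj "/CN=kubernetes/O=k8s"

# Confirm the subject and validity window match what ca-csr.json requested:
openssl x509 -in /tmp/demo-ca.pem -noout -subject -dates
# cfssl-certinfo -cert ca.pem   # same idea, full JSON output
```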

2.6 Distribute the certificates

cp ca.csr ca.pem ca-key.pem ca-config.json /opt/kubernetes/ssl

scp ca.csr ca.pem ca-key.pem ca-config.json k8s-node1:/opt/kubernetes/ssl

scp ca.csr ca.pem ca-key.pem ca-config.json k8s-node2:/opt/kubernetes/ssl

 

3. Deploy the etcd cluster

3.1 Unpack, install, and distribute the etcd binaries

cd /usr/local/src

tar zxvf etcd-v3.3.10-linux-amd64.tar.gz

cd etcd-v3.3.10-linux-amd64/

chmod +x etcdctl

cp etcd etcdctl /opt/kubernetes/bin/

scp etcd etcdctl k8s-node1:/opt/kubernetes/bin/

scp etcd etcdctl k8s-node2:/opt/kubernetes/bin/

 

3.2 Generate the certificates

Create the etcd CSR file:

cat > etcd-csr.json << EOF

{

  "CN": "etcd",

  "hosts": [

    "127.0.0.1",

"192.168.0.114",

"192.168.0.29",

"192.168.0.232"

  ],

  "key": {

    "algo": "rsa",

    "size": 2048

  },

  "names": [

    {

      "C": "CN",

      "ST": "BeiJing",

      "L": "BeiJing",

      "O": "k8s",

      "OU": "System"

    }

  ]

}

EOF

Generate the certificate:

cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem \

  -ca-key=/opt/kubernetes/ssl/ca-key.pem \

  -config=/opt/kubernetes/ssl/ca-config.json \

  -profile=kubernetes etcd-csr.json | cfssljson -bare etcd

# ls -l etcd*

-rw-r--r-- 1 root root 1062 May 28 15:39 etcd.csr

-rw-r--r-- 1 root root  287 May 28 15:38 etcd-csr.json

-rw------- 1 root root 1679 May 28 15:39 etcd-key.pem

-rw-r--r-- 1 root root 1436 May 28 15:39 etcd.pem

For the certificates, knowing how to generate and use them is enough for now; there is no need to dig deeper yet.

3.3 Move the certificates into /opt/kubernetes/ssl:

cp etcd*.pem /opt/kubernetes/ssl

scp etcd*.pem k8s-node1:/opt/kubernetes/ssl

scp etcd*.pem k8s-node2:/opt/kubernetes/ssl

rm -f etcd.csr etcd-csr.json

3.4 Create the etcd config file (all 3 nodes)

!!! Adjust the IP addresses below to match each server !!!

vim /opt/kubernetes/cfg/etcd.conf

#[member]

ETCD_NAME="etcd-01" # set each node's own name (etcd-01/etcd-02/etcd-03)

ETCD_DATA_DIR="/var/lib/etcd/default.etcd"

#ETCD_SNAPSHOT_COUNTER="10000"

#ETCD_HEARTBEAT_INTERVAL="100"

#ETCD_ELECTION_TIMEOUT="1000"

ETCD_LISTEN_PEER_URLS="https://192.168.0.114:2380" # use this server's IP

ETCD_LISTEN_CLIENT_URLS="https://192.168.0.114:2379,https://127.0.0.1:2379" # use this server's IP

#ETCD_MAX_SNAPSHOTS="5"

#ETCD_MAX_WALS="5"

#ETCD_CORS=""

#[cluster]

ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.0.114:2380" # use this server's IP

# if you use different ETCD_NAME (e.g. test),

# set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."

ETCD_INITIAL_CLUSTER="etcd-01=https://192.168.0.114:2380,etcd-02=https://192.168.0.29:2380,etcd-03=https://192.168.0.232:2380"

ETCD_INITIAL_CLUSTER_STATE="new"

ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd-cluster"

ETCD_ADVERTISE_CLIENT_URLS="https://192.168.0.114:2379" # use this server's IP

#[security]

CLIENT_CERT_AUTH="true"

ETCD_CA_FILE="/opt/kubernetes/ssl/ca.pem"

ETCD_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"

ETCD_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"

PEER_CLIENT_CERT_AUTH="true"

ETCD_PEER_CA_FILE="/opt/kubernetes/ssl/ca.pem"

ETCD_PEER_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"

ETCD_PEER_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"

 

3.5 Manage etcd with systemd (all 3 nodes)

# cat  > /etc/systemd/system/etcd.service << EOF

[Unit]

Description=Etcd Server

After=network.target

[Service]


WorkingDirectory=/var/lib/etcd

EnvironmentFile=-/opt/kubernetes/cfg/etcd.conf

# set GOMAXPROCS to number of processors

ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /opt/kubernetes/bin/etcd"

Type=notify

[Install]

WantedBy=multi-user.target

EOF

3.6 Enable etcd at boot and start it (all 3 nodes)

systemctl daemon-reload

systemctl enable etcd

Create the etcd data directory on every node and start etcd (start all three at roughly the same time, or the first node will time out waiting for its peers)

mkdir /var/lib/etcd

systemctl start etcd

systemctl status etcd

Once all three are running, check the cluster health:

etcdctl --endpoints=https://192.168.0.114:2379 \

  --ca-file=/opt/kubernetes/ssl/ca.pem \

  --cert-file=/opt/kubernetes/ssl/etcd.pem \

  --key-file=/opt/kubernetes/ssl/etcd-key.pem cluster-health

If the command reports "cluster is healthy", the cluster deployed successfully. If anything fails, check the logs first: /var/log/messages and journalctl -u etcd

4. Deploy the Flannel network

How it works: (diagram omitted)

4.1 Create the Flannel certificate signing request

vim flanneld-csr.json

{

  "CN": "flanneld",

  "hosts": [],

  "key": {

    "algo": "rsa",

    "size": 2048

  },

  "names": [

    {

      "C": "CN",

      "ST": "BeiJing",

      "L": "BeiJing",

      "O": "k8s",

      "OU": "System"

    }

  ]

}

4.2 Generate the certificate

cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem \

   -ca-key=/opt/kubernetes/ssl/ca-key.pem \

   -config=/opt/kubernetes/ssl/ca-config.json \

   -profile=kubernetes flanneld-csr.json | cfssljson -bare flanneld

4.3 Distribute the certificates

cp flanneld*.pem /opt/kubernetes/ssl/

scp flanneld*.pem k8s-node1:/opt/kubernetes/ssl/

scp flanneld*.pem k8s-node2:/opt/kubernetes/ssl/

4.4 Prepare the Flannel package

cd /usr/local/src

tar zxf flannel-v0.10.0-linux-amd64.tar.gz

cp flanneld mk-docker-opts.sh /opt/kubernetes/bin/

scp flanneld mk-docker-opts.sh k8s-node1:/opt/kubernetes/bin/

scp flanneld mk-docker-opts.sh k8s-node2:/opt/kubernetes/bin/

cd /usr/local/src/kubernetes/cluster/centos/node/bin/

cp remove-docker0.sh /opt/kubernetes/bin/

scp remove-docker0.sh k8s-node1:/opt/kubernetes/bin/

scp remove-docker0.sh k8s-node2:/opt/kubernetes/bin/

4.5 Configure Flannel

vim /opt/kubernetes/cfg/flannel

FLANNEL_ETCD="-etcd-endpoints=https://192.168.0.114:2379,https://192.168.0.29:2379,https://192.168.0.232:2379"

FLANNEL_ETCD_KEY="-etcd-prefix=/kubernetes/network"

FLANNEL_ETCD_CAFILE="--etcd-cafile=/opt/kubernetes/ssl/ca.pem"

FLANNEL_ETCD_CERTFILE="--etcd-certfile=/opt/kubernetes/ssl/flanneld.pem"

FLANNEL_ETCD_KEYFILE="--etcd-keyfile=/opt/kubernetes/ssl/flanneld-key.pem"

scp /opt/kubernetes/cfg/flannel k8s-node1:/opt/kubernetes/cfg/

scp /opt/kubernetes/cfg/flannel k8s-node2:/opt/kubernetes/cfg/

4.6 Create the Flannel systemd service

vim /usr/lib/systemd/system/flannel.service

[Unit]

Description=Flanneld overlay address etcd agent

After=network.target

Before=docker.service

 

[Service]

EnvironmentFile=-/opt/kubernetes/cfg/flannel

ExecStartPre=/opt/kubernetes/bin/remove-docker0.sh

ExecStart=/opt/kubernetes/bin/flanneld --ip-masq ${FLANNEL_ETCD} ${FLANNEL_ETCD_KEY} ${FLANNEL_ETCD_CAFILE} ${FLANNEL_ETCD_CERTFILE} ${FLANNEL_ETCD_KEYFILE}

ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -d /run/flannel/docker

 

Type=notify

 

[Install]

WantedBy=multi-user.target

RequiredBy=docker.service

Copy the unit file to the other nodes

scp /usr/lib/systemd/system/flannel.service k8s-node1:/usr/lib/systemd/system/

scp /usr/lib/systemd/system/flannel.service k8s-node2:/usr/lib/systemd/system/

4.7 Integrate Flannel with CNI

Download the CNI plugins

wget https://github.com/containernetworking/plugins/releases/download/v0.7.1/cni-plugins-amd64-v0.7.1.tgz

 

mkdir /opt/kubernetes/bin/cni 

tar zxf cni-plugins-amd64-v0.7.1.tgz -C /opt/kubernetes/bin/cni

# create the same directory on the other two nodes first

mkdir /opt/kubernetes/bin/cni 

scp -r /opt/kubernetes/bin/cni/* k8s-node1:/opt/kubernetes/bin/cni/

scp -r /opt/kubernetes/bin/cni/* k8s-node2:/opt/kubernetes/bin/cni/

4.8 Create the Flannel network key in etcd (run once, from any node)

/opt/kubernetes/bin/etcdctl --ca-file /opt/kubernetes/ssl/ca.pem --cert-file /opt/kubernetes/ssl/flanneld.pem --key-file /opt/kubernetes/ssl/flanneld-key.pem \

      --no-sync -C https://192.168.0.114:2379,https://192.168.0.29:2379,https://192.168.0.232:2379 \

mk /kubernetes/network/config '{ "Network": "10.2.0.0/16", "Backend": { "Type": "vxlan", "VNI": 1 }}' >/dev/null 2>&1
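The value written to /kubernetes/network/config is what flanneld reads at startup: the Network CIDR is the overlay range it carves per-node subnets from, and Backend selects VXLAN encapsulation. Since the mk command above discards all output (>/dev/null 2>&1), it is worth validating the JSON locally first; malformed JSON only shows up later as a flanneld startup failure:

```shell
# Validate the network config JSON before writing it into etcd:
echo '{ "Network": "10.2.0.0/16", "Backend": { "Type": "vxlan", "VNI": 1 }}' \
  | python3 -m json.tool
```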

4.9 Start flannel (all 3 nodes)

systemctl daemon-reload

systemctl enable flannel

chmod +x /opt/kubernetes/bin/*

systemctl start flannel

systemctl status flannel

4.10 Configure Docker to use Flannel (edit on the master, then copy to the nodes)

vim /usr/lib/systemd/system/docker.service

[Unit] # under [Unit], modify After and add Requires

After=network-online.target firewalld.service flannel.service

Wants=network-online.target

Requires=flannel.service

 

[Service] # under [Service], add the EnvironmentFile line below

Type=notify

EnvironmentFile=-/run/flannel/docker

ExecStart=/usr/bin/dockerd $DOCKER_OPTS

 

# copy the unit file to the other nodes

scp /usr/lib/systemd/system/docker.service k8s-node1:/usr/lib/systemd/system/

scp /usr/lib/systemd/system/docker.service k8s-node2:/usr/lib/systemd/system/

4.11 Restart Docker (all 3 nodes)

systemctl daemon-reload

systemctl restart docker

ip a | egrep "flannel|docker0"

6: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN

    inet 10.2.70.0/32 scope global flannel.1

7: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1400 qdisc noqueue state DOWN

    inet 10.2.70.1/24 brd 10.2.70.255 scope global docker0

docker0 should now sit inside the flannel subnet, as above. If containers on different nodes can reach each other, Flannel is working; if not, check the logs: journalctl -u flannel

 

5. Deploy the master components

Before deploying Kubernetes itself, make sure etcd, flannel, and docker are all healthy; fix any problems before continuing.

5.1 Prepare the binaries

cd /usr/local/src/kubernetes

cp server/bin/kube-apiserver /opt/kubernetes/bin/

cp server/bin/kube-controller-manager /opt/kubernetes/bin/

cp server/bin/kube-scheduler /opt/kubernetes/bin/

5.2 Create the JSON config for the kubernetes CSR

cat > kubernetes-csr.json << EOF

{

  "CN": "kubernetes",

  "hosts": [

    "127.0.0.1",

    "192.168.0.114",

    "10.1.0.1",

    "kubernetes",

    "kubernetes.default",

    "kubernetes.default.svc",

    "kubernetes.default.svc.cluster",

    "kubernetes.default.svc.cluster.local"

  ],

  "key": {

    "algo": "rsa",

    "size": 2048

  },

  "names": [

    {

      "C": "CN",

      "ST": "BeiJing",

      "L": "BeiJing",

      "O": "k8s",

      "OU": "System"

    }

  ]

}

EOF

5.3 Generate the kubernetes certificate and private key

cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem \

   -ca-key=/opt/kubernetes/ssl/ca-key.pem \

   -config=/opt/kubernetes/ssl/ca-config.json \

   -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes

cp kubernetes*.pem /opt/kubernetes/ssl/

scp kubernetes*.pem k8s-node1:/opt/kubernetes/ssl/

scp kubernetes*.pem k8s-node2:/opt/kubernetes/ssl/

5.4 Create the client token file used by kube-apiserver

head -c 16 /dev/urandom | od -An -t x | tr -d ' '

c980b8fd73e0326707f9fe0bb646d76a

vim /opt/kubernetes/ssl/bootstrap-token.csv

c980b8fd73e0326707f9fe0bb646d76a,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
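The two steps above can be scripted so the freshly generated token is never copy-pasted by hand. A sketch (written to a temp path here; the real file is /opt/kubernetes/ssl/bootstrap-token.csv; the CSV format is token,user,uid,"group"):

```shell
# Generate a random 32-hex-char token and write the CSV kube-apiserver expects:
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
echo "${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,\"system:kubelet-bootstrap\"" > /tmp/bootstrap-token.csv
cat /tmp/bootstrap-token.csv
```

Keep the token value around: the same string goes into bootstrap.kubeconfig in section 6.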

5.5 Create the basic username/password auth file

vim /opt/kubernetes/ssl/basic-auth.csv

admin,admin,1

readonly,readonly,2

5.6 Deploy kube-apiserver

Manage kube-apiserver with systemd:

# cat /usr/lib/systemd/system/kube-apiserver.service

[Unit]

Description=Kubernetes API Server

Documentation=https://github.com/GoogleCloudPlatform/kubernetes

After=network.target

 

[Service]

ExecStart=/opt/kubernetes/bin/kube-apiserver \

  --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota,NodeRestriction \

  --bind-address=192.168.0.114 \

  --insecure-bind-address=127.0.0.1 \

  --authorization-mode=Node,RBAC \

  --runtime-config=rbac.authorization.k8s.io/v1 \

  --kubelet-https=true \

  --anonymous-auth=false \

  --basic-auth-file=/opt/kubernetes/ssl/basic-auth.csv \

  --enable-bootstrap-token-auth \

  --token-auth-file=/opt/kubernetes/ssl/bootstrap-token.csv \

  --service-cluster-ip-range=10.1.0.0/16 \

  --service-node-port-range=20000-40000 \

  --tls-cert-file=/opt/kubernetes/ssl/kubernetes.pem \

  --tls-private-key-file=/opt/kubernetes/ssl/kubernetes-key.pem \

  --client-ca-file=/opt/kubernetes/ssl/ca.pem \

  --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \

  --etcd-cafile=/opt/kubernetes/ssl/ca.pem \

  --etcd-certfile=/opt/kubernetes/ssl/kubernetes.pem \

  --etcd-keyfile=/opt/kubernetes/ssl/kubernetes-key.pem \

  --etcd-servers=https://192.168.0.114:2379,https://192.168.0.29:2379,https://192.168.0.232:2379 \

  --enable-swagger-ui=true \

  --allow-privileged=true \

  --audit-log-maxage=30 \

  --audit-log-maxbackup=3 \

  --audit-log-maxsize=100 \

  --audit-log-path=/opt/kubernetes/log/api-audit.log \

  --event-ttl=1h \

  --v=2 \

  --logtostderr=false \

  --log-dir=/opt/kubernetes/log

Restart=on-failure

RestartSec=5

Type=notify

LimitNOFILE=65536

 

[Install]

WantedBy=multi-user.target

Start it:

systemctl daemon-reload

systemctl enable kube-apiserver

systemctl restart kube-apiserver
systemctl status kube-apiserver

5.7 Deploy the kube-scheduler component

Manage kube-scheduler with systemd:

# cat /usr/lib/systemd/system/kube-scheduler.service

[Unit]

Description=Kubernetes Scheduler

Documentation=https://github.com/GoogleCloudPlatform/kubernetes

 

[Service]

ExecStart=/opt/kubernetes/bin/kube-scheduler \

  --address=127.0.0.1 \

  --master=http://127.0.0.1:8080 \

  --leader-elect=true \

  --v=2 \

  --logtostderr=false \

  --log-dir=/opt/kubernetes/log

 

Restart=on-failure

RestartSec=5

 

[Install]

WantedBy=multi-user.target

Start it:

systemctl daemon-reload

systemctl enable kube-scheduler

systemctl restart kube-scheduler

systemctl status kube-scheduler

5.8 Deploy the kube-controller-manager component

Manage kube-controller-manager with systemd:

# cat /usr/lib/systemd/system/kube-controller-manager.service

[Unit]

Description=Kubernetes Controller Manager

Documentation=https://github.com/GoogleCloudPlatform/kubernetes

 

[Service]

ExecStart=/opt/kubernetes/bin/kube-controller-manager \

  --address=127.0.0.1 \

  --master=http://127.0.0.1:8080 \

  --allocate-node-cidrs=true \

  --service-cluster-ip-range=10.1.0.0/16 \

  --cluster-cidr=10.2.0.0/16 \

  --cluster-name=kubernetes \

  --cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \

  --cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \

  --service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \

  --root-ca-file=/opt/kubernetes/ssl/ca.pem \

  --leader-elect=true \

  --v=2 \

  --logtostderr=false \

  --log-dir=/opt/kubernetes/log

 

Restart=on-failure

RestartSec=5

 

[Install]

WantedBy=multi-user.target

Start it:

systemctl daemon-reload

systemctl enable kube-controller-manager

systemctl restart kube-controller-manager

systemctl status kube-controller-manager

5.9 Deploy the kubectl command-line tool

Prepare the binary

cd /usr/local/src/kubernetes/client/bin

cp kubectl /opt/kubernetes/bin/

Create the admin certificate signing request

cd /usr/local/src/ssl/

vim admin-csr.json

{

  "CN": "admin",

  "hosts": [],

  "key": {

    "algo": "rsa",

    "size": 2048

  },

  "names": [

    {

      "C": "CN",

      "ST": "BeiJing",

      "L": "BeiJing",

      "O": "system:masters",

      "OU": "System"

    }

  ]

}

Generate the admin certificate and private key

cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem \

   -ca-key=/opt/kubernetes/ssl/ca-key.pem \

   -config=/opt/kubernetes/ssl/ca-config.json \

   -profile=kubernetes admin-csr.json | cfssljson -bare admin

ls -l admin* 

mv admin*.pem /opt/kubernetes/ssl/

Set the cluster parameters

kubectl config set-cluster kubernetes \

   --certificate-authority=/opt/kubernetes/ssl/ca.pem \

   --embed-certs=true \

   --server=https://192.168.0.114:6443

Set the client authentication parameters

kubectl config set-credentials admin \

   --client-certificate=/opt/kubernetes/ssl/admin.pem \

   --embed-certs=true \

   --client-key=/opt/kubernetes/ssl/admin-key.pem

Set the context parameters

kubectl config set-context kubernetes \

   --cluster=kubernetes \

   --user=admin

Set the default context

kubectl config use-context kubernetes

Verify with kubectl

#kubectl get cs

NAME                 STATUS    MESSAGE             ERROR

controller-manager   Healthy   ok                  

scheduler            Healthy   ok                  

etcd-1               Healthy   {"health":"true"}   

etcd-2               Healthy   {"health":"true"}   

etcd-0               Healthy   {"health":"true"}

6. Deploy the node components

Once the master apiserver has TLS enabled, a node's kubelet must present a valid CA-signed certificate before it can talk to the apiserver and join the cluster. Signing certificates by hand becomes tedious with many nodes, hence the TLS bootstrapping mechanism: the kubelet authenticates as a low-privilege user, requests a certificate from the apiserver automatically, and the certificate is signed dynamically on the apiserver side.

The authentication flow looks roughly like this: (diagram omitted)

 

6.1 Deploy kubelet

Prepare the binaries

cd /usr/local/src/kubernetes/server/bin/

cp kubelet kube-proxy /opt/kubernetes/bin/

scp kubelet kube-proxy k8s-node1:/opt/kubernetes/bin/

scp kubelet kube-proxy k8s-node2:/opt/kubernetes/bin/

 

Create the role binding

When the kubelet starts, it sends a TLS bootstrapping request to kube-apiserver. The kubelet-bootstrap user from the bootstrap token file must first be bound to the system:node-bootstrapper cluster role, or the kubelet has no permission to create certificate signing requests:

cd /usr/local/src/ssl

kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap

 

--user=kubelet-bootstrap is the username specified in /opt/kubernetes/ssl/bootstrap-token.csv; the same identity is written into the bootstrap.kubeconfig file.

 

Create the kubelet bootstrapping kubeconfig file. Set the cluster parameters:

kubectl config set-cluster kubernetes \

   --certificate-authority=/opt/kubernetes/ssl/ca.pem \

   --embed-certs=true \

   --server=https://192.168.0.114:6443 \

   --kubeconfig=bootstrap.kubeconfig

# Set the client authentication parameters

kubectl config set-credentials kubelet-bootstrap \

   --token=c980b8fd73e0326707f9fe0bb646d76a \

   --kubeconfig=bootstrap.kubeconfig

# Set the context parameters

kubectl config set-context default \

   --cluster=kubernetes \

   --user=kubelet-bootstrap \

   --kubeconfig=bootstrap.kubeconfig

# Select the default context

kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

 

cp bootstrap.kubeconfig /opt/kubernetes/cfg

scp bootstrap.kubeconfig k8s-node1:/opt/kubernetes/cfg

scp bootstrap.kubeconfig k8s-node2:/opt/kubernetes/cfg

 

Configure CNI support

mkdir -p /etc/cni/net.d

vim /etc/cni/net.d/10-default.conf

{

        "name": "flannel",

        "type": "flannel",

        "delegate": {

            "bridge": "docker0",

            "isDefaultGateway": true,

            "mtu": 1400

        }

}
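kubelet loads every config file under its --cni-conf-dir, and a JSON syntax error here tends to surface only as a NotReady node, so a quick syntax check before restarting kubelet is cheap. A sketch, demonstrated on a scratch copy of the file above:

```shell
# Write the CNI config to a scratch path and validate its JSON syntax:
cat > /tmp/10-default.conf << 'EOF'
{
        "name": "flannel",
        "type": "flannel",
        "delegate": {
            "bridge": "docker0",
            "isDefaultGateway": true,
            "mtu": 1400
        }
}
EOF
python3 -m json.tool < /tmp/10-default.conf
# On a real node: python3 -m json.tool < /etc/cni/net.d/10-default.conf
```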

 

Create the kubelet working directory

mkdir /var/lib/kubelet

 

Create the kubelet systemd service (adjust --address and --hostname-override to each node's own IP)

 cat > /usr/lib/systemd/system/kubelet.service  << EOF

[Unit]

Description=Kubernetes Kubelet

Documentation=https://github.com/GoogleCloudPlatform/kubernetes

After=docker.service

Requires=docker.service

 

[Service]

WorkingDirectory=/var/lib/kubelet

ExecStart=/opt/kubernetes/bin/kubelet \

  --address=192.168.0.29 \

  --hostname-override=192.168.0.29 \

  --pod-infra-container-image=mirrorgooglecontainers/pause-amd64:3.0 \

  --experimental-bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \

  --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \

  --cert-dir=/opt/kubernetes/ssl \

  --network-plugin=cni \

  --cni-conf-dir=/etc/cni/net.d \

  --cni-bin-dir=/opt/kubernetes/bin/cni \

  --cluster-dns=10.1.0.2 \

  --cluster-domain=cluster.local. \

  --hairpin-mode hairpin-veth \

  --allow-privileged=true \

  --fail-swap-on=false \

  --v=2 \

  --logtostderr=false \

  --log-dir=/opt/kubernetes/log

Restart=on-failure

RestartSec=5

 

[Install]

WantedBy=multi-user.target

EOF

 

Start the service and check its status

systemctl daemon-reload

systemctl enable kubelet

systemctl start kubelet

systemctl status kubelet

 

View the CSR requests. Note: run this on the master, where kubectl is configured.

kubectl get csr

 

Approve the kubelet TLS certificate requests

kubectl get csr|grep 'Pending' | awk 'NR>0{print $1}'| xargs kubectl certificate approve

#kubectl get nodes

NAME           STATUS   ROLES    AGE   VERSION

192.168.0.29   Ready    <none>   12s   v1.12.1

 

Repeat the kubelet steps above on node2.

6.2 Deploy kube-proxy

Install the prerequisites so kube-proxy can use LVS (IPVS)

yum install -y ipvsadm ipset conntrack

 

Create the kube-proxy certificate signing request

cd /usr/local/src/ssl/

cat > kube-proxy-csr.json << EOF

{

  "CN": "system:kube-proxy",

  "hosts": [],

  "key": {

    "algo": "rsa",

    "size": 2048

  },

  "names": [

    {

      "C": "CN",

      "ST": "BeiJing",

      "L": "BeiJing",

      "O": "k8s",

      "OU": "System"

    }

  ]

}

EOF

 

Generate the certificate

cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem \

   -ca-key=/opt/kubernetes/ssl/ca-key.pem \

   -config=/opt/kubernetes/ssl/ca-config.json \

   -profile=kubernetes  kube-proxy-csr.json | cfssljson -bare kube-proxy

 

Distribute the certificates to all node machines

cp kube-proxy*.pem /opt/kubernetes/ssl/

scp kube-proxy*.pem k8s-node1:/opt/kubernetes/ssl/

scp kube-proxy*.pem k8s-node2:/opt/kubernetes/ssl/

 

Create the kube-proxy kubeconfig file

# Set the cluster parameters

kubectl config set-cluster kubernetes \

   --certificate-authority=/opt/kubernetes/ssl/ca.pem \

   --embed-certs=true \

   --server=https://192.168.0.114:6443 \

   --kubeconfig=kube-proxy.kubeconfig

# Set the client authentication parameters

kubectl config set-credentials kube-proxy \

   --client-certificate=/opt/kubernetes/ssl/kube-proxy.pem \

   --client-key=/opt/kubernetes/ssl/kube-proxy-key.pem \

   --embed-certs=true \

   --kubeconfig=kube-proxy.kubeconfig

# Set the context parameters

kubectl config set-context default \

   --cluster=kubernetes \

   --user=kube-proxy \

   --kubeconfig=kube-proxy.kubeconfig

# Set the default context

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

 

Distribute the kubeconfig file

cp kube-proxy.kubeconfig /opt/kubernetes/cfg/

scp kube-proxy.kubeconfig k8s-node1:/opt/kubernetes/cfg/

scp kube-proxy.kubeconfig k8s-node2:/opt/kubernetes/cfg/

 

Create the kube-proxy systemd service (adjust --bind-address and --hostname-override to each node's own IP)

mkdir /var/lib/kube-proxy

vim /usr/lib/systemd/system/kube-proxy.service

[Unit]

Description=Kubernetes Kube-Proxy Server

Documentation=https://github.com/GoogleCloudPlatform/kubernetes

After=network.target

 

[Service]

WorkingDirectory=/var/lib/kube-proxy

ExecStart=/opt/kubernetes/bin/kube-proxy \

  --bind-address=192.168.0.29 \

  --hostname-override=192.168.0.29 \

  --kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig \

  --masquerade-all \

  --feature-gates=SupportIPVSProxyMode=true \

  --proxy-mode=ipvs \

  --ipvs-min-sync-period=5s \

  --ipvs-sync-period=5s \

  --ipvs-scheduler=rr \

  --v=2 \

  --logtostderr=false \

  --log-dir=/opt/kubernetes/log

 

Restart=on-failure

RestartSec=5

LimitNOFILE=65536

 

[Install]

WantedBy=multi-user.target

 

Start kube-proxy and check its status

systemctl daemon-reload

systemctl enable kube-proxy

systemctl start kube-proxy

systemctl status kube-proxy

 

Verify that the node deployed correctly (repeat the steps above on node2 and start its services the same way)

7. Create Kubernetes applications

7.1 Create an application with a Deployment

Create a test deployment

kubectl run net-test --image=alpine --replicas=3 -- sleep 360000

Check the result

#kubectl get pods -o wide

NAME                        READY   STATUS    RESTARTS   AGE    IP          NODE      NOMINATED NODE

net-test-5786f8b986-9q89p   1/1     Running   0          2m2s   10.2.19.3   192.168.0.29    <none>

net-test-5786f8b986-jtfqz   1/1     Running   0          2m2s   10.2.19.2   192.168.0.29    <none>

net-test-5786f8b986-pf4kb   1/1     Running   0          2m2s   10.2.78.2   192.168.0.232   <none>

 

# kubectl created a deployment named net-test: image alpine, 3 replicas

#kubectl get deployments

NAME       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE

net-test   3         3         3            3           3m30s

Test connectivity

# ping 10.2.19.3

PING 10.2.19.3 (10.2.19.3) 56(84) bytes of data.

64 bytes from 10.2.19.3: icmp_seq=1 ttl=63 time=0.465 ms

64 bytes from 10.2.19.3: icmp_seq=2 ttl=63 time=0.323 ms

64 bytes from 10.2.19.3: icmp_seq=3 ttl=63 time=0.381 ms

 

7.2 Deploy an application from a YAML file

Write an nginx deployment manifest

cat nginx-deployment.yaml  

apiVersion: apps/v1

kind: Deployment

metadata:

  name: nginx-deployment

  labels:

    app: nginx

spec:

  replicas: 3

  selector:

    matchLabels:

      app: nginx

  template:

    metadata:

      labels:

        app: nginx

    spec:

      containers:

      - name: nginx

        image: nginx:1.10.3

        ports:

        - containerPort: 80

 

Run the create

kubectl create -f nginx-deployment.yaml

# view the deployment

kubectl get deployment/nginx-deployment

# view deployment details

kubectl describe deployment/nginx-deployment

# view the pods

kubectl get pod -o wide

# view details of one pod

kubectl describe pod/nginx-deployment-75d56bb955-255w2

curl -I http://10.2.70.196

 

# update the deployment image

kubectl set image  deployment/nginx-deployment nginx=nginx:1.12.2 --record

# watch the rolling update

kubectl get deployment/nginx-deployment -o wide

curl -I 10.2.97.192

 

# view the deployment rollout history

kubectl rollout history deployment/nginx-deployment

# view a specific revision (--revision=VERSION_NUM)

kubectl rollout history deployment/nginx-deployment --revision=1

 

# roll back to the previous revision:

kubectl rollout undo deployment/nginx-deployment

kubectl get deployment/nginx-deployment -o wide

 

# scale up the pod count

kubectl scale --replicas=20 deployment nginx-deployment

kubectl get deployment/nginx-deployment -o wide

 

7.3 Create a Service

A Service abstracts away the pod IPs: even when a pod is rebuilt, the workload stays reachable through the Service. The Service also load-balances across its pods; hit it a few times and check each Nginx pod's access log with kubectl logs to confirm.

 

Create a Service for the nginx deployment

 

Write the nginx-service.yaml file

cat > nginx-service.yaml << EOF

kind: Service

apiVersion: v1

metadata:

  name: nginx-service

spec:

  selector:

    app: nginx

  ports:

  - protocol: TCP

    port: 80

    targetPort: 80

EOF

 

Create it:

kubectl create -f nginx-service.yaml

kubectl get service -o wide

curl -I http://10.1.146.223

 

# on node2, inspect IPVS: kube-proxy programs LVS to do the load balancing

ipvsadm -Ln

8. Deploy CoreDNS and the Dashboard

8.1 Deploy CoreDNS

Create the coredns.yaml file:

 

cat > coredns.yaml << EOF

apiVersion: v1

kind: ServiceAccount

metadata:

  name: coredns

  namespace: kube-system

  labels:

      kubernetes.io/cluster-service: "true"

      addonmanager.kubernetes.io/mode: Reconcile

---

apiVersion: rbac.authorization.k8s.io/v1

kind: ClusterRole

metadata:

  labels:

    kubernetes.io/bootstrapping: rbac-defaults

    addonmanager.kubernetes.io/mode: Reconcile

  name: system:coredns

rules:

- apiGroups:

  - ""

  resources:

  - endpoints

  - services

  - pods

  - namespaces

  verbs:

  - list

  - watch

---

apiVersion: rbac.authorization.k8s.io/v1

kind: ClusterRoleBinding

metadata:

  annotations:

    rbac.authorization.kubernetes.io/autoupdate: "true"

  labels:

    kubernetes.io/bootstrapping: rbac-defaults

    addonmanager.kubernetes.io/mode: EnsureExists

  name: system:coredns

roleRef:

  apiGroup: rbac.authorization.k8s.io

  kind: ClusterRole

  name: system:coredns

subjects:

- kind: ServiceAccount

  name: coredns

  namespace: kube-system

---

apiVersion: v1

kind: ConfigMap

metadata:

  name: coredns

  namespace: kube-system

  labels:

      addonmanager.kubernetes.io/mode: EnsureExists

data:

  Corefile: |

    .:53 {

        errors

        health

        kubernetes cluster.local. in-addr.arpa ip6.arpa {

            pods insecure

            upstream

            fallthrough in-addr.arpa ip6.arpa

        }

        prometheus :9153

        proxy . /etc/resolv.conf

        cache 30

    }

---

apiVersion: extensions/v1beta1

kind: Deployment

metadata:

  name: coredns

  namespace: kube-system

  labels:

    k8s-app: coredns

    kubernetes.io/cluster-service: "true"

    addonmanager.kubernetes.io/mode: Reconcile

    kubernetes.io/name: "CoreDNS"

spec:

  replicas: 2

  strategy:

    type: RollingUpdate

    rollingUpdate:

      maxUnavailable: 1

  selector:

    matchLabels:

      k8s-app: coredns

  template:

    metadata:

      labels:

        k8s-app: coredns

    spec:

      serviceAccountName: coredns

      tolerations:

        - key: node-role.kubernetes.io/master

          effect: NoSchedule

        - key: "CriticalAddonsOnly"

          operator: "Exists"

      containers:

      - name: coredns

        image: coredns/coredns:1.0.6

        imagePullPolicy: IfNotPresent

        resources:

          limits:

            memory: 170Mi

          requests:

            cpu: 100m

            memory: 70Mi

        args: [ "-conf", "/etc/coredns/Corefile" ]

        volumeMounts:

        - name: config-volume

          mountPath: /etc/coredns

        ports:

        - containerPort: 53

          name: dns

          protocol: UDP

        - containerPort: 53

          name: dns-tcp

          protocol: TCP

        livenessProbe:

          httpGet:

            path: /health

            port: 8080

            scheme: HTTP

          initialDelaySeconds: 60

          timeoutSeconds: 5

          successThreshold: 1

          failureThreshold: 5

      dnsPolicy: Default

      volumes:

        - name: config-volume

          configMap:

            name: coredns

            items:

            - key: Corefile

              path: Corefile

---

apiVersion: v1

kind: Service

metadata:

  name: coredns

  namespace: kube-system

  labels:

    k8s-app: coredns

    kubernetes.io/cluster-service: "true"

    addonmanager.kubernetes.io/mode: Reconcile

    kubernetes.io/name: "CoreDNS"

spec:

  selector:

    k8s-app: coredns

  clusterIP: 10.1.0.2  # !!! key setting !!!

  ports:

  - name: dns

    port: 53

    protocol: UDP

  - name: dns-tcp

    port: 53

    protocol: TCP

 

EOF

# note: the cluster IP must fall inside the service CIDR (10.1.0.0/16) and match kubelet's --cluster-dns

 

Create it:

kubectl create -f coredns.yaml

 

Check the result:

kubectl get deployment -n kube-system

kubectl get service -n kube-system

ipvsadm -Ln | grep -v ":80"

kubectl get pod -n kube-system -o wide

 

Verify from inside a pod:

kubectl exec -it nginx-deployment-75d56bb955-b4z4k /bin/sh

# cat /etc/resolv.conf

nameserver 10.1.0.2

search default.svc.cluster.local. svc.cluster.local. cluster.local. openstacklocal

options ndots:5
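除了进入业务 Pod 查看 resolv.conf,也可以专门起一个一次性 Pod 验证 CoreDNS 解析是否正常。下面是一个最小化的 sketch(Pod 名 dns-test、busybox 镜像版本均为假设,可按需调整):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dns-test
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox:1.28          # 假设的镜像版本,1.28 的 nslookup 行为较稳定
    command: ["sleep", "3600"]   # 保持运行,便于 exec 进去做解析测试
```

创建后执行 kubectl exec dns-test -- nslookup kubernetes.default,若能解析到 service 网段内的地址,说明 CoreDNS 工作正常;验证完用 kubectl delete pod dns-test 清理即可。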

8.2 Dashboard创建

1.拉取镜像

#拉取镜像

docker pull registry.cn-qingdao.aliyuncs.com/wangxiaoke/kubernetes-dashboard-amd64:v1.10.0

#重新打标签

docker tag registry.cn-qingdao.aliyuncs.com/wangxiaoke/kubernetes-dashboard-amd64:v1.10.0 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0

#删除无用镜像

docker image rm registry.cn-qingdao.aliyuncs.com/wangxiaoke/kubernetes-dashboard-amd64:v1.10.0

#kubernetes-dashboard.yaml路径

https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.0/src/deploy/recommended/kubernetes-dashboard.yaml

#内容如下:

# Copyright 2017 The Kubernetes Authors.

#

# Licensed under the Apache License, Version 2.0 (the "License");

# you may not use this file except in compliance with the License.

# You may obtain a copy of the License at

#

#     http://www.apache.org/licenses/LICENSE-2.0

#

# Unless required by applicable law or agreed to in writing, software

# distributed under the License is distributed on an "AS IS" BASIS,

# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

# See the License for the specific language governing permissions and

# limitations under the License.

 

# ------------------- Dashboard Secret ------------------- #

 

apiVersion: v1

kind: Secret

metadata:

  labels:

    k8s-app: kubernetes-dashboard

  name: kubernetes-dashboard-certs

  namespace: kube-system

type: Opaque

 

---

# ------------------- Dashboard Service Account ------------------- #

 

apiVersion: v1

kind: ServiceAccount

metadata:

  labels:

    k8s-app: kubernetes-dashboard

  name: kubernetes-dashboard

  namespace: kube-system

 

---

# ------------------- Dashboard Role & Role Binding ------------------- #

 

kind: Role

apiVersion: rbac.authorization.k8s.io/v1

metadata:

  name: kubernetes-dashboard-minimal

  namespace: kube-system

rules:

  # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.

- apiGroups: [""]

  resources: ["secrets"]

  verbs: ["create"]

  # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.

- apiGroups: [""]

  resources: ["configmaps"]

  verbs: ["create"]

  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.

- apiGroups: [""]

  resources: ["secrets"]

  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]

  verbs: ["get", "update", "delete"]

  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.

- apiGroups: [""]

  resources: ["configmaps"]

  resourceNames: ["kubernetes-dashboard-settings"]

  verbs: ["get", "update"]

  # Allow Dashboard to get metrics from heapster.

- apiGroups: [""]

  resources: ["services"]

  resourceNames: ["heapster"]

  verbs: ["proxy"]

- apiGroups: [""]

  resources: ["services/proxy"]

  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]

  verbs: ["get"]

 

---

apiVersion: rbac.authorization.k8s.io/v1

kind: RoleBinding

metadata:

  name: kubernetes-dashboard-minimal

  namespace: kube-system

roleRef:

  apiGroup: rbac.authorization.k8s.io

  kind: Role

  name: kubernetes-dashboard-minimal

subjects:

- kind: ServiceAccount

  name: kubernetes-dashboard

  namespace: kube-system

 

---

# ------------------- Dashboard Deployment ------------------- #

 

kind: Deployment

apiVersion: apps/v1beta2

metadata:

  labels:

    k8s-app: kubernetes-dashboard

  name: kubernetes-dashboard

  namespace: kube-system

spec:

  replicas: 1

  revisionHistoryLimit: 10

  selector:

    matchLabels:

      k8s-app: kubernetes-dashboard

  template:

    metadata:

      labels:

        k8s-app: kubernetes-dashboard

    spec:

      containers:

      - name: kubernetes-dashboard

        image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0

        ports:

        - containerPort: 8443

          protocol: TCP

        args:

          - --auto-generate-certificates

          # Uncomment the following line to manually specify Kubernetes API server Host

          # If not specified, Dashboard will attempt to auto discover the API server and connect

          # to it. Uncomment only if the default does not work.

          # - --apiserver-host=http://my-address:port

        volumeMounts:

        - name: kubernetes-dashboard-certs

          mountPath: /certs

          # Create on-disk volume to store exec logs

        - mountPath: /tmp

          name: tmp-volume

        livenessProbe:

          httpGet:

            scheme: HTTPS

            path: /

            port: 8443

          initialDelaySeconds: 30

          timeoutSeconds: 30

      volumes:

      - name: kubernetes-dashboard-certs

        secret:

          secretName: kubernetes-dashboard-certs

      - name: tmp-volume

        emptyDir: {}

      serviceAccountName: kubernetes-dashboard

      # Comment the following tolerations if Dashboard must not be deployed on master

      tolerations:

      - key: node-role.kubernetes.io/master

        effect: NoSchedule

 

---

# ------------------- Dashboard Service ------------------- #

 

kind: Service

apiVersion: v1

metadata:

  labels:

    k8s-app: kubernetes-dashboard

  name: kubernetes-dashboard

  namespace: kube-system

spec:

  ports:

    - port: 443

      targetPort: 8443

  selector:

    k8s-app: kubernetes-dashboard

 

 

2.开始安装

kubectl create -f kubernetes-dashboard.yaml

 

3.查看dashboard的POD是否正常启动,如果正常说明安装成功

kubectl get pods --namespace=kube-system

4.配置外网访问(不配置的话默认只能集群内访问)

修改service配置,将type: ClusterIP改成NodePort

kubectl edit service  kubernetes-dashboard --namespace=kube-system

查看外网暴露端口(我们可以看到外网端口是32240)

[root@node1 ~]# kubectl get service --namespace=kube-system

NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE

kube-dns               ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP   47h

kubernetes-dashboard   NodePort    10.101.221.220   <none>        443:32240/TCP   17h
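拿到 443:32240/TCP 这样的 PORT(S) 输出后,可以用一小段 shell 把 NodePort 解析出来(以下仅为 sketch,函数名 extract_nodeport 为自拟;实际端口以 kubectl 输出为准):

```shell
# 从 "443:32240/TCP" 这类字符串中取出宿主机 NodePort(冒号与斜杠之间的数字)
extract_nodeport() {
  echo "$1" | sed -n 's#^[0-9]*:\([0-9]*\)/.*#\1#p'
}

# 实际环境中也可以直接用 jsonpath 取值(示例,需要集群可用):
# kubectl get service kubernetes-dashboard -n kube-system \
#   -o jsonpath='{.spec.ports[0].nodePort}'

extract_nodeport "443:32240/TCP"   # 输出 32240
```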

5.访问dashboard

创建dashboard用户

创建admin-token.yaml文件,文件内容如下:

 

kind: ClusterRoleBinding

apiVersion: rbac.authorization.k8s.io/v1beta1

metadata:

  name: admin

  annotations:

    rbac.authorization.kubernetes.io/autoupdate: "true"

roleRef:

  kind: ClusterRole

  name: cluster-admin

  apiGroup: rbac.authorization.k8s.io

subjects:

- kind: ServiceAccount

  name: admin

  namespace: kube-system

---

apiVersion: v1

kind: ServiceAccount

metadata:

  name: admin

  namespace: kube-system

  labels:

    kubernetes.io/cluster-service: "true"

    addonmanager.kubernetes.io/mode: Reconcile

创建用户

kubectl create -f admin-token.yaml

获取登陆token

 kubectl describe secret/$(kubectl get secret -nkube-system |grep admin|awk '{print $1}') -nkube-system

 

eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi10b2tlbi1kbnpwYiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJhZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImEyNGE2MzE0LWZiYzItMTFlOS05NTljLWZhMTYzZTUzZThlZCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTphZG1pbiJ9.W9B--Y1rfxMYSGV_VYmLCzuwR71TJwPrwY2gaN7XVIJ28-GLztWkUvj1A46jWWs3Bz0oMc3UMmFWL5JBxMr4QM1W8o0I6aUe-16zXMlcpUEFnlvMBVd65ZhfDtjYHpNN42USVTAYo5XF1UOxSnwgTZwJUgnPlJcx_dnePdKrpiwoTvZARWUmIsilBQuHJmj0J8td6PCVTc4g_y_qc5sP5SvClZRo3nbD0-NIK5S9ZZLMWVanELQFJi4o2ETfwWVKi4KvykgDpO0Z_CH_kabHQHLAqfe45U4Yad3hODMiAdA8tohZGYqd9yNdLXZgtMgpjY1z3eYDi5ILukkp3xen6w
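拿到 token 后,也可以在本地把它的 payload 段解码出来,确认其中的 serviceaccount 名字和 namespace(以下为 sketch,b64pad、jwt_payload 均为自拟的辅助函数):

```shell
# JWT 由三段 base64url 组成,以 . 分隔;第二段是 payload
# b64pad:补齐 base64 解码所需的 '=' 填充
b64pad() {
  s="$1"
  while [ $(( ${#s} % 4 )) -ne 0 ]; do s="${s}="; done
  echo "$s"
}

# jwt_payload:取第二段,把 base64url 字符换回标准 base64 后解码
jwt_payload() {
  p=$(echo "$1" | cut -d. -f2 | tr '_-' '/+')
  b64pad "$p" | base64 -d
}

# 用法(TOKEN 换成上面 describe secret 得到的值):
# jwt_payload "$TOKEN"
```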

 

打开浏览器输入访问地址

访问地址:https://192.168..:37042(格式:https://节点IP:dashboard暴露端口)。认证方式选择令牌,输入刚才获取到的token,即可登录成功。

 

进入后就会看到如下界面

 

 

 

 

九、 监控展示界面安装

9.1安装Heapster、InfluxDB和Grafana

1. 简介

Heapster提供了整个集群的资源监控,并支持持久化数据存储到InfluxDB、Google Cloud Monitoring或者其他的存储后端。

Heapster从kubelet提供的API采集节点和容器的资源占用。另外,Heapster的/metrics API提供了Prometheus格式的数据。

InfluxDB是一个开源分布式时序、事件和指标数据库;而Grafana则是InfluxDB的dashboard,提供了强大的图表展示功能。它们常被组合使用展示图表化的监控数据;也可以将Zabbix作为数据源,进行zabbix的监控数据展示。

Heapster、InfluxDB、Grafana均以Pod的形式启动和运行,其中Heapster需要与Kubernetes Master进行安全连接。

下载Heapster、InfluxDB、Grafana镜像

 

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/heapster-grafana-amd64:v4.4.3

docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/heapster-grafana-amd64:v4.4.3 gcr.io/google_containers/heapster-grafana-amd64:v4.4.3

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/heapster-amd64:v1.5.3

docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/heapster-amd64:v1.5.3 gcr.io/google_containers/heapster-amd64:v1.5.3

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/heapster-influxdb-amd64:v1.3.3

docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/heapster-influxdb-amd64:v1.3.3 gcr.io/google_containers/heapster-influxdb-amd64:v1.3.3

 下载heapster的yaml文件

wget https://github.com/kubernetes/heapster/archive/v1.5.3.tar.gz

tar -zxvf v1.5.3.tar.gz

cd heapster-1.5.3/deploy/kube-config/

ll influxdb/

 -rw-rw-r-- 1 root root 2290 May  1 05:13 grafana.yaml

-rw-rw-r-- 1 root root 1114 May  1 05:13 heapster.yaml

-rw-rw-r-- 1 root root  974 May  1 05:13 influxdb.yaml

ll rbac/

-rw-rw-r-- 1 root root 264 May  1 05:13 heapster-rbac.yaml

关键点说明:

more influxdb/heapster.yaml

......

......

        - /heapster

        - --source=kubernetes:https://kubernetes.default

        - --sink=influxdb:http://monitoring-influxdb.kube-system.svc:8086

......

......

    source:配置采集源,为Master URL地址:--source=kubernetes:https://kubernetes.default

    sink:配置后端存储系统,使用InfluxDB系统:--sink=influxdb:http://monitoring-influxdb:8086

这里保持默认即可

【注意】:URL中的主机名地址使用的是InfluxDB的Service名字,这需要DNS服务正常工作,如果没有配置DNS服务,则也可以使用Service的ClusterIP地址。

另外,InfluxDB服务的名称没有加上命名空间,是因为Heapster服务与InfluxDB服务属于相同的命名空间kube-system。也可以使用上命名空间的全服务名,例如:http://monitoring-influxdb.kube-system:8086

修改 grafana.yaml文件

vim influxdb/grafana.yaml

.....

apiVersion: v1

kind: Service

metadata:

  labels:

    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)

    # If you are NOT using this as an addon, you should comment out this line.

    kubernetes.io/cluster-service: 'true'

    kubernetes.io/name: monitoring-grafana

  name: monitoring-grafana

  namespace: kube-system

spec:

  # In a production setup, we recommend accessing Grafana through an external Loadbalancer

  # or through a public IP.

  # type: LoadBalancer

  # You could also use NodePort to expose the service at a randomly-generated port

  type: NodePort # 去掉注释即可

  ports:

  - port: 80

    targetPort: 3000

  selector:

    k8s-app: grafana

......

定义端口类型为 NodePort,将Grafana暴露在宿主机Node的端口上,以便后续浏览器访问 grafana 的 admin UI 界面

 执行

cd heapster-1.5.3/deploy/kube-config

kubectl create -f influxdb/ && kubectl create -f rbac/

 

检查执行结果

kubectl get deployments -n kube-system | grep -E 'heapster|monitoring'

kubectl get pods -n kube-system | grep -E 'heapster|monitoring'

kubectl get service -n kube-system -o wide | grep -E 'heapster|monitoring'        

检查 kubernetes dashboard 界面,看是否显示各 Nodes、Pods 的 CPU、内存、负载等利用率曲线图;

       

9.2配置grafana监控图形

1.上面我们修改了官方提供的 grafana.yaml,将 Service 类型改成了 NodePort,所以访问地址就是

http://NodeIP:NodePort

 

2.在安装好 Grafana 之后我们使用的是默认的 template 配置,页面上的 namespace 选择里只有 default 和 kube-system,并不是说其他的 namespace 里的指标没有得到监控,只是我们没有在 Grafana 中开启它们的显示而已

 

3.将 Templating 中 namespace 的 Data source 设置为 influxdb-datasource,Refresh 设置为 on Dashboard Load,保存设置,刷新浏览器,即可看到其他 namespace 选项。

 

 

 

 

4.配置influxdb数据源

  

5.导入“Kubernetes Node Statistics”dashboard

(模版可以从该网址上下载 https://grafana.com/dashboards/3646/revisions)

 

 

 

 

上传刚刚下载的模版

 

 

 

 

   效果图:

 

6.导入“Kubernetes Pod Statistics”dashboard

(模版可以从该网址上下载https://grafana.com/dashboards/3649/revisions)

 

 

 

 

效果图:

 

 

十、 PV使用(NFS方式)

概念

PV 的全称是:PersistentVolume(持久化卷),是对底层共享存储的一种抽象。PV 由管理员进行创建和配置,它和具体底层共享存储技术的实现方式有关,比如 Ceph、GlusterFS、NFS 等,都是通过插件机制完成与共享存储的对接。

10.1 NFS服务安装和验证

1.安装配置 nfs(master节点)

  yum -y install nfs-utils rpcbind

2.共享目录设置权限

  mkdir -p /data/k8s

  chmod 755 /data/k8s/

3.配置 nfs:nfs 的默认配置文件是 /etc/exports,在该文件中添加下面的配置信息

$ vi /etc/exports

/data/k8s  *(rw,sync,no_root_squash)

4.配置说明

/data/k8s:是共享的数据目录

*:表示任何人都有权限连接,当然也可以是一个网段,一个 IP,也可以是域名

rw:读写的权限

sync:表示文件同时写入硬盘和内存

no_root_squash:当登录 NFS 主机使用共享目录的使用者是 root 时,保留其 root 权限(若设置为 root_squash,则 root 会被映射为匿名使用者,UID 和 GID 变成 nobody 身份)
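如果不想把共享目录对所有主机开放,可以把 * 换成一个网段,例如(网段按实际环境调整,仅为示例):

```
/data/k8s  192.168.0.0/24(rw,sync,no_root_squash)
```

修改 /etc/exports 之后,执行 exportfs -ra 即可重新加载导出配置,无需重启 nfs 服务。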

 

启动服务:nfs 需要向 rpc 注册,rpc 一旦重启,注册信息就会丢失,向它注册的服务都需要重启

注意启动顺序,先启动 rpcbind

$ systemctl start rpcbind.service

$ systemctl enable rpcbind

$ systemctl status rpcbind

rpcbind.service - RPC bind service

   Loaded: loaded (/usr/lib/systemd/system/rpcbind.service; disabled; vendor preset: enabled)

   Active: active (running) since Tue 2018-07-10 20:57:29 CST; 1min 54s ago

  Process: 17696 ExecStart=/sbin/rpcbind -w $RPCBIND_ARGS (code=exited, status=0/SUCCESS)

 Main PID: 17697 (rpcbind)

    Tasks: 1

   Memory: 1.1M

   CGroup: /system.slice/rpcbind.service

           └─17697 /sbin/rpcbind -w

 

Jul 10 20:57:29 master systemd[1]: Starting RPC bind service...

Jul 10 20:57:29 master systemd[1]: Started RPC bind service.

看到上面的 Started 证明启动成功了。

 

然后启动 nfs 服务:

$ systemctl start nfs.service

$ systemctl enable nfs

$ systemctl status nfs

nfs-server.service - NFS server and services

   Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; enabled; vendor preset: disabled)

  Drop-In: /run/systemd/generator/nfs-server.service.d

           └─order-with-mounts.conf

   Active: active (exited) since Tue 2018-07-10 21:35:37 CST; 14s ago

 Main PID: 32067 (code=exited, status=0/SUCCESS)

   CGroup: /system.slice/nfs-server.service

 

Jul 10 21:35:37 master systemd[1]: Starting NFS server and services...

Jul 10 21:35:37 master systemd[1]: Started NFS server and services.

同样看到 Started 则证明 NFS Server 启动成功了。

 

另外还可以通过下面的命令确认下:

$ rpcinfo -p|grep nfs

    100003    3   tcp   2049  nfs

    100003    4   tcp   2049  nfs

    100227    3   tcp   2049  nfs_acl

    100003    3   udp   2049  nfs

    100003    4   udp   2049  nfs

    100227    3   udp   2049  nfs_acl

查看具体目录挂载权限:

$ cat /var/lib/nfs/etab

/data/k8s    *(rw,sync,wdelay,hide,nocrossmnt,secure,no_root_squash,no_all_squash,no_subtree_check,secure_locks,acl,no_pnfs,anonuid=65534,anongid=65534,sec=sys,secure,no_root_squash,no_all_squash)

到这里nfs server就安装成功了,接下来我们在node节点上来安装 nfs 的客户端来验证下 nfs

安装 nfs 当前也需要先关闭防火墙:

$ systemctl stop firewalld.service

$ systemctl disable firewalld.service

然后安装 nfs

yum -y install nfs-utils rpcbind

安装完成后,和上面的方法一样,先启动 rpc、然后启动 nfs

systemctl start rpcbind.service

systemctl enable rpcbind.service

systemctl start nfs.service    

systemctl enable nfs.service

挂载数据目录:客户端启动完成后,我们在客户端来挂载下 nfs 测试一下。

首先检查下 nfs 是否有共享目录:

$ showmount -e k8s-master

Export list for k8s-master:

/data/k8s *

然后我们在客户端上新建目录:

mkdir /data

将 nfs 共享目录挂载到上面的目录:

$ mount -t nfs k8s-master:/data/k8s /data

挂载成功后,在客户端上面的目录中新建一个文件,然后我们观察下 nfs 服务端的共享目录下面是否也会出现该文件:

$ touch /data/test.txt

然后在 nfs 服务端查看:

$ ls -ls /data/k8s/

total 4

4 -rw-r--r--. 1 root root 4 Jul 10 21:50 test.txt

10.2 PV配置和验证

有了上面的 NFS 共享存储,下面我们就可以来使用 PV 和 PVC 了。PV 作为存储资源,主要包括存储能力、访问模式、存储类型、回收策略等关键信息,下面我们来新建一个 PV 对象,使用 nfs 类型的后端存储,1G 的存储空间,访问模式为 ReadWriteOnce,回收策略为 Recycle,对应的 YAML 文件如下:(pv1-demo.yaml)

apiVersion: v1

kind: PersistentVolume

metadata:

  name:  pv1

spec:

  capacity:

    storage: 1Gi

  accessModes:

  - ReadWriteOnce

  persistentVolumeReclaimPolicy: Recycle

  nfs:

    path: /data/k8s

    server: k8s-master

 

Kubernetes 支持的 PV 类型有很多,比如常见的 Ceph、GlusterFS、NFS,甚至 HostPath 也可以,不过 HostPath 之前也说过仅可用于单机测试。更多的支持类型可以前往 Kubernetes PV 官方文档查看,因为每种存储类型都有各自的特点,使用时可以查看相应文档来设置对应的参数。

 

然后同样的,直接使用 kubectl 创建即可:

 

$ kubectl create -f pv1-demo.yaml

persistentvolume "pv1" created

$ kubectl get pv

NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM               STORAGECLASS   REASON    AGE

pv1       1Gi        RWO            Recycle          Available

 

我们可以看到 pv1 已经创建成功了,状态是 Available,表示 pv1 就绪,可以被 PVC 申请。我们来分别对上面的属性进行一些解读。

 

Capacity(存储能力)

一般来说,一个 PV 对象都要指定存储能力,通过 PV 的 capacity 属性来设置,目前只支持存储空间的设置,就是我们这里的 storage=1Gi,不过未来可能会加入 IOPS、吞吐量等指标的配置。

 

AccessModes(访问模式)

AccessModes 是用来对 PV 进行访问模式的设置,用于描述用户应用对存储资源的访问权限,访问权限包括下面几种方式:

 

ReadWriteOnce(RWO):读写权限,但是只能被单个节点挂载

ReadOnlyMany(ROX):只读权限,可以被多个节点挂载

ReadWriteMany(RWX):读写权限,可以被多个节点挂载

注意:一些 PV 可能支持多种访问模式,但是在挂载的时候只能使用一种访问模式,多种访问模式是不会生效的。

 

下图是一些常用的 Volume 插件支持的访问模式:

 

 

persistentVolumeReclaimPolicy(回收策略)

我这里指定的 PV 的回收策略为 Recycle,目前 PV 支持的策略有三种:

 

Retain(保留)- 保留数据,需要管理员手工清理数据

Recycle(回收)- 清除 PV 中的数据,效果相当于执行 rm -rf /thevolume/*

Delete(删除)- 与 PV 相连的后端存储完成 volume 的删除操作,常见于云服务商的存储服务,比如 AWS EBS

不过需要注意的是,目前只有 NFS 和 HostPath 两种类型支持回收策略。当然一般来说还是设置为 Retain 这种策略保险一点。

 

status(状态)

一个 PV 的生命周期中,可能会处于4种不同的阶段:

 

Available(可用):表示可用状态,还未被任何 PVC 绑定

Bound(已绑定):表示 PV 已经被 PVC 绑定

Released(已释放):PVC 被删除,但是资源还未被集群重新声明

Failed(失败): 表示该 PV 的自动回收失败
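有了 Available 状态的 PV,就可以创建一个 PVC 来申请它。下面是与上面 pv1 匹配(访问模式 ReadWriteOnce、申请 1Gi)的最简 PVC sketch(名称 pvc1 为自拟):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc1
spec:
  accessModes:
  - ReadWriteOnce        # 与 pv1 的访问模式一致
  resources:
    requests:
      storage: 1Gi       # 申请量不超过 pv1 的 capacity
```

kubectl create -f 创建之后再执行 kubectl get pv,可以看到 pv1 的 STATUS 从 Available 变为 Bound。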

 

 

参考链接:https://blog.sctux.com/categories/k8s/

                  https://jimmysong.io/kubernetes-handbook/cloud-native/cncf.html

 

 

 

 

 

 

 

 

 

posted @ 2019-11-02 11:56  hzw2019