Deploying a Kubernetes 1.10.1 Cluster from Binaries with Certificates, with HA via HAProxy + Keepalived

Introduction to Kubernetes

Kubernetes is Google's open-source container cluster management system, the open-source successor to Borg, Google's long-standing large-scale container management technology. Its main features include:

  • Container-based application deployment, maintenance, and rolling upgrades
  • Load balancing and service discovery
  • Cluster scheduling across machines and regions
  • Autoscaling
  • Stateless and stateful services
  • Broad volume support
  • A plugin mechanism for extensibility

System Initialization

The tool versions used in this cluster are:

centos   version:        7.4
docker   version:        18.03.1-ce
kubectl  version:        v1.10.1
etcdctl  version:        3.2.18
flannel  version:        v0.10.0

Tool downloads: https://pan.baidu.com/s/1MhFIiFXvkWhrN0CIpizjrw#list/path=%2F

Host layout:

Hostname                IP address      Services deployed
linux-node1(master) 172.16.1.201 etcd/docker/kube-apiserver/kube-controller-manager/kube-scheduler/flannel
linux-node2(node) 172.16.1.202 etcd/docker/kube-proxy/kubelet/flannel
linux-node3(node) 172.16.1.203 etcd/docker/kube-proxy/kubelet/flannel

Disable SELinux and the firewall, and configure time synchronization on all nodes (omitted here).

Set hostnames and configure the hosts file

hostnamectl --static set-hostname  linux-node1.goser.com
hostnamectl --static set-hostname  linux-node2.goser.com
hostnamectl --static set-hostname  linux-node3.goser.com
#hosts file entries on the master and node hosts:
172.16.1.201            linux-node1  linux-node1.goser.com
172.16.1.202            linux-node2  linux-node2.goser.com
172.16.1.203            linux-node3  linux-node3.goser.com

Set up passwordless SSH from the deployment node (linux-node1) to all other nodes, in preparation for distributing certificates and config files with scp later.

[root@linux-node1 ~]# ssh-keygen -t rsa 
[root@linux-node1 ~]# ssh-copy-id linux-node1
[root@linux-node1 ~]# ssh-copy-id linux-node2
[root@linux-node1 ~]# ssh-copy-id linux-node3

Install Docker on all master and node hosts; docker-ce-18.03.1-ce is used here.

#Two ways to install docker-ce:
#Option 1: to install the latest docker-ce, use the docker-ce yum repository directly
[root@linux-node1 yum.repos.d]#  wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@linux-node1 yum.repos.d]# yum install -y docker-ce
#Option 2: download the docker-ce-18.03.1.ce RPM directly, as done here
[root@linux-node1 ~]# wget  https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-18.03.1.ce-1.el7.centos.x86_64.rpm
[root@linux-node1 ~]# yum  install  docker-ce-18.03.1.ce-1.el7.centos.x86_64.rpm -y
#After installing docker-ce on all nodes, configure a registry mirror; Aliyun's mirror service is used here
[root@linux-node1 docker]# vim  /etc/docker/daemon.json
{
  "registry-mirrors": ["https://0wtxe175.mirror.aliyuncs.com"]
}
#Finally, start Docker on all nodes
[root@linux-node1 ~]# systemctl enable docker
[root@linux-node1 ~]# systemctl start docker
[root@linux-node1 ~]# systemctl status docker

Create the Kubernetes deployment directories on all nodes, to hold the Kubernetes config files, certificates, and binaries

mkdir -p /opt/kubernetes/{cfg,bin,ssl,log}

Creating self-signed certificates with the cfssl toolkit

The Kubernetes components encrypt their communication with TLS certificates. This document uses CloudFlare's PKI toolkit, cfssl, to generate the Certificate Authority (CA) and the other certificates.

Download and install CFSSL

[root@linux-node1 ~]# cd /usr/local/src
[root@linux-node1 src]# wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
[root@linux-node1 src]# wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
[root@linux-node1 src]# wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
[root@linux-node1 src]# chmod +x cfssl*
[root@linux-node1 src]# mv cfssl-certinfo_linux-amd64 /opt/kubernetes/bin/cfssl-certinfo
[root@linux-node1 src]# mv cfssljson_linux-amd64  /opt/kubernetes/bin/cfssljson
[root@linux-node1 src]# mv cfssl_linux-amd64  /opt/kubernetes/bin/cfssl
#Distribute the cfssl toolkit to the other nodes (not needed for this document; adjust to your situation)
[root@linux-node1 src]# scp /opt/kubernetes/bin/cfssl*  linux-node2:/opt/kubernetes/bin/
[root@linux-node1 src]# scp /opt/kubernetes/bin/cfssl*  linux-node3:/opt/kubernetes/bin/

On all nodes, add the Kubernetes binary directory to the PATH environment variable

#Configure the environment variable on all nodes
[root@linux-node1 src]# vim  /etc/profile
export PATH=/opt/kubernetes/bin/:$PATH
[root@linux-node1 src]# source  /etc/profile

Create a temporary certificate directory and initialize cfssl (i.e., generate the template files certificates are built from)

[root@linux-node1 src]# mkdir ssl && cd ssl
[root@linux-node1 ssl]# cfssl print-defaults config > config.json
[root@linux-node1 ssl]# cfssl print-defaults csr > csr.json

Create the JSON config file used to generate the CA

[root@linux-node1 ssl]# vim  ca-config.json
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "8760h"
      }
    }
  }
}

Field notes:

  • ca-config.json: multiple profiles can be defined, each with its own expiry, usage scenarios, and other parameters; a specific profile is selected later when signing certificates;
  • signing: the certificate can be used to sign other certificates; the generated ca.pem has CA=TRUE;
  • server auth: a client may use this CA to verify certificates presented by servers;
  • client auth: a server may use this CA to verify certificates presented by clients.
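The "server auth" and "client auth" usages above end up as X.509 Extended Key Usage values in the signed certificates. As a quick local illustration with a throwaway self-signed cert (made with openssl, unrelated to the cluster's real CA; the -addext flag needs OpenSSL 1.1.1+):

```shell
# Create a throwaway self-signed cert carrying the same extended key usages
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-eku-key.pem \
  -out /tmp/demo-eku.pem -days 1 -subj "/CN=demo" \
  -addext "extendedKeyUsage=serverAuth,clientAuth"
# The usages show up under "X509v3 Extended Key Usage" in the cert text
openssl x509 -in /tmp/demo-eku.pem -noout -text | grep -A1 "Extended Key Usage"
```

The same inspection works on the real ca.pem and the other certificates generated below.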

Create the JSON config file for the CA certificate signing request (CSR)

[root@linux-node1 ssl]# vim   ca-csr.json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

Field notes:

  • "CN": Common Name; kube-apiserver extracts this field from a certificate as the requesting User Name, and browsers use it to verify a site's legitimacy;
  • "O": Organization; kube-apiserver extracts this field as the requesting user's Group;
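How these fields look in a certificate's subject can be checked locally. The cert below is a throwaway self-signed one generated with openssl purely for illustration (not part of the deployment), carrying the same subject fields as ca-csr.json:

```shell
# Throwaway key + self-signed cert with the same subject fields as ca-csr.json
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-subj-key.pem \
  -out /tmp/demo-subj.pem -days 1 \
  -subj "/C=CN/ST=BeiJing/L=BeiJing/O=k8s/OU=System/CN=kubernetes"
# kube-apiserver reads CN as the username and O as the group
openssl x509 -in /tmp/demo-subj.pem -noout -subject
```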

Generate the CA certificate (ca.pem) and key (ca-key.pem)

[root@linux-node1 ssl]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
[root@linux-node1 ssl]# ls
ca-config.json  ca-csr.json  ca.pem       csr.json
ca.csr          ca-key.pem   config.json
[root@linux-node1 ssl]# cp ca.csr ca.pem ca-key.pem ca-config.json /opt/kubernetes/ssl
#Distribute to the other nodes
[root@linux-node1 ssl]# scp ca.csr ca.pem ca-key.pem ca-config.json linux-node2:/opt/kubernetes/ssl/
[root@linux-node1 ssl]# scp ca.csr ca.pem ca-key.pem ca-config.json linux-node3:/opt/kubernetes/ssl/

Deploying the etcd Cluster

etcd is an open-source project started by the CoreOS team and implemented in Go. It is a distributed key-value store that achieves reliable distributed coordination through distributed locks, leader election, and write barriers.

Kubernetes stores all of its data in etcd. CoreOS recommends a cluster size of five members; three nodes are used here.
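The five-member recommendation follows from quorum arithmetic: a cluster of n members needs n/2+1 votes to make progress, so it tolerates (n-1)/2 failures, which is why odd sizes are used. A quick illustration:

```shell
# Failure tolerance per cluster size: quorum = n/2 + 1, tolerated failures = (n-1)/2
for n in 1 3 5 7; do
    echo "members=$n quorum=$((n / 2 + 1)) tolerates=$(( (n - 1) / 2 )) failure(s)"
done
```

With three members, as in this deployment, the cluster survives the loss of one node.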

Prepare the etcd package and distribute the etcd binaries

[root@linux-node1 ssl]# cd /usr/local/src
[root@linux-node1 src]# tar xf  etcd-v3.2.18-linux-amd64.tar.gz 
[root@linux-node1 src]# cd  etcd-v3.2.18-linux-amd64
[root@linux-node1 etcd-v3.2.18-linux-amd64]# cp  etcd etcdctl /opt/kubernetes/bin/
#Distribute the etcd binaries to the other hosts that will run etcd
[root@linux-node1 etcd-v3.2.18-linux-amd64]# scp etcd etcdctl  linux-node2:/opt/kubernetes/bin/
[root@linux-node1 etcd-v3.2.18-linux-amd64]# scp etcd etcdctl  linux-node3:/opt/kubernetes/bin/

Create the etcd certificate signing request

[root@linux-node1 etcd-v3.2.18-linux-amd64]# cd /usr/local/src/ssl
[root@linux-node1 ssl]# vim  etcd-csr.json
{
  "CN": "etcd",
  "hosts": [
  "127.0.0.1",
  "172.16.1.201",
  "172.16.1.202",
  "172.16.1.203"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

Generate the etcd certificate and private key

[root@linux-node1 ssl]#  cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem \
-ca-key=/opt/kubernetes/ssl/ca-key.pem \
-config=/opt/kubernetes/ssl/ca-config.json \
-profile=kubernetes etcd-csr.json | cfssljson -bare etcd

[root@linux-node1 ssl]# cp etcd*.pem /opt/kubernetes/ssl
#Distribute the etcd certificates to the other nodes
[root@linux-node1 ssl]# scp etcd*.pem  linux-node2:/opt/kubernetes/ssl/
[root@linux-node1 ssl]# scp etcd*.pem  linux-node3:/opt/kubernetes/ssl/

On the master, create the etcd configuration file, then distribute it to the other nodes, adjusting the values of ETCD_NAME and the node IP on each

[root@linux-node1 ssl]# vim /opt/kubernetes/cfg/etcd.conf
#[member]
ETCD_NAME="linux-node1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_SNAPSHOT_COUNTER="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
ETCD_LISTEN_PEER_URLS="https://172.16.1.201:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.16.1.201:2379,https://127.0.0.1:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.16.1.201:2380"
# if you use different ETCD_NAME (e.g. test),
# set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
ETCD_INITIAL_CLUSTER="linux-node1=https://172.16.1.201:2380,linux-node2=https://172.16.1.202:2380,linux-node3=https://172.16.1.203:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://172.16.1.201:2379"
#[security]
CLIENT_CERT_AUTH="true"
ETCD_CA_FILE="/opt/kubernetes/ssl/ca.pem"
ETCD_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"
ETCD_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"
PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_CA_FILE="/opt/kubernetes/ssl/ca.pem"
ETCD_PEER_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"
ETCD_PEER_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"
linux-node1 --- etcd configuration file
[root@linux-node2 ~]# cat /opt/kubernetes/cfg/etcd.conf
#[member]
ETCD_NAME="linux-node2"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_SNAPSHOT_COUNTER="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
ETCD_LISTEN_PEER_URLS="https://172.16.1.202:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.16.1.202:2379,https://127.0.0.1:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.16.1.202:2380"
# if you use different ETCD_NAME (e.g. test),
# set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
ETCD_INITIAL_CLUSTER="linux-node1=https://172.16.1.201:2380,linux-node2=https://172.16.1.202:2380,linux-node3=https://172.16.1.203:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://172.16.1.202:2379"
#[security]
CLIENT_CERT_AUTH="true"
ETCD_CA_FILE="/opt/kubernetes/ssl/ca.pem"
ETCD_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"
ETCD_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"
PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_CA_FILE="/opt/kubernetes/ssl/ca.pem"
ETCD_PEER_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"
ETCD_PEER_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"
linux-node2 --- etcd configuration file
#[member]
ETCD_NAME="linux-node3"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_SNAPSHOT_COUNTER="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
ETCD_LISTEN_PEER_URLS="https://172.16.1.203:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.16.1.203:2379,https://127.0.0.1:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.16.1.203:2380"
# if you use different ETCD_NAME (e.g. test),
# set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
ETCD_INITIAL_CLUSTER="linux-node1=https://172.16.1.201:2380,linux-node2=https://172.16.1.202:2380,linux-node3=https://172.16.1.203:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://172.16.1.203:2379"
#[security]
CLIENT_CERT_AUTH="true"
ETCD_CA_FILE="/opt/kubernetes/ssl/ca.pem"
ETCD_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"
ETCD_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"
PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_CA_FILE="/opt/kubernetes/ssl/ca.pem"
ETCD_PEER_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"
ETCD_PEER_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"
linux-node3 --- etcd configuration file

A brief explanation of the etcd.conf parameters above:

  • --name: a human-friendly node name, default "default"; must be unique within the cluster; the hostname works well
  • --data-dir: path where runtime data is stored, default ${name}.etcd
  • --snapshot-count: number of committed transactions that triggers a snapshot to disk
  • --heartbeat-interval: how often the leader sends heartbeats to followers; default 100ms
  • --election-timeout: if a follower receives no heartbeat within this interval, it triggers a new election; default 1000ms
  • --listen-peer-urls: address for peer communication, e.g. http://ip:2380; comma-separated if multiple. All nodes must be able to reach it, so do not use localhost!
  • --listen-client-urls: address serving clients, e.g. http://ip:2379,http://127.0.0.1:2379; clients connect here to interact with etcd
  • --advertise-client-urls: the client address this node advertises to the other cluster members
  • --initial-advertise-peer-urls: the peer address this node advertises to the other cluster members
  • --initial-cluster: information about all cluster members, in the form node1=http://ip1:2380,node2=http://ip2:2380,…. Note: node1 is the name given by that node's --name, and ip1:2380 is that node's --initial-advertise-peer-urls value
  • --initial-cluster-state: "new" when creating a cluster; "existing" when joining one that already exists
  • --initial-cluster-token: a token unique to each cluster. With it, recreating a cluster generates fresh cluster and member UUIDs even under identical config; without it, multiple clusters can conflict with each other and cause unpredictable errors

All flags beginning with --initial are only used when bootstrapping the cluster; they are ignored on subsequent restarts.

Create the etcd systemd service

[root@linux-node1 ssl]# vim  /etc/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target

[Service]
WorkingDirectory=/var/lib/etcd
EnvironmentFile=-/opt/kubernetes/cfg/etcd.conf
# set GOMAXPROCS to number of processors
ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /opt/kubernetes/bin/etcd"
Type=notify

[Install]
WantedBy=multi-user.target
#Distribute the etcd service file to the other nodes
[root@linux-node1 ssl]# scp  /etc/systemd/system/etcd.service  linux-node2:/etc/systemd/system/
[root@linux-node1 ssl]# scp  /etc/systemd/system/etcd.service  linux-node3:/etc/systemd/system/

The etcd.service unit above defines /var/lib/etcd as the working directory, so that directory must exist before starting etcd, or etcd will fail to start

Create /var/lib/etcd on every host that will run etcd

[root@linux-node1 ssl]# mkdir  /var/lib/etcd
[root@linux-node2 ~]# mkdir  /var/lib/etcd
[root@linux-node3 ~]# mkdir  /var/lib/etcd

Reload systemd and start the etcd service (start etcd on the node hosts first, then on the master, to avoid timeouts)

#Start etcd on every host in the etcd cluster
[root@linux-node1 ssl]# systemctl daemon-reload
[root@linux-node1 ssl]# systemctl enable etcd
[root@linux-node1 ssl]# systemctl start etcd

Verify the etcd cluster

[root@linux-node1 ~]# etcdctl --endpoints=https://172.16.1.201:2379 \
--ca-file=/opt/kubernetes/ssl/ca.pem \
--cert-file=/opt/kubernetes/ssl/etcd.pem \
--key-file=/opt/kubernetes/ssl/etcd-key.pem cluster-health

member 67cbab95178ad59d is healthy: got healthy result from https://172.16.1.201:2379
member afdf6eb7d886e0be is healthy: got healthy result from https://172.16.1.202:2379
member ded57301415e4af5 is healthy: got healthy result from https://172.16.1.203:2379
cluster is healthy
#The following command shows which member is the etcd leader
[root@linux-node1 ~]# etcdctl --endpoints=https://172.16.1.201:2379 \
--ca-file=/opt/kubernetes/ssl/ca.pem \
--cert-file=/opt/kubernetes/ssl/etcd.pem \
--key-file=/opt/kubernetes/ssl/etcd-key.pem member list

67cbab95178ad59d: name=linux-node1 peerURLs=https://172.16.1.201:2380 clientURLs=https://172.16.1.201:2379 isLeader=true
afdf6eb7d886e0be: name=linux-node2 peerURLs=https://172.16.1.202:2380 clientURLs=https://172.16.1.202:2379 isLeader=false
ded57301415e4af5: name=linux-node3 peerURLs=https://172.16.1.203:2380 clientURLs=https://172.16.1.203:2379 isLeader=false

Deploying the Kubernetes master node

The Kubernetes master needs four components: kube-apiserver, kube-controller-manager, kube-scheduler, and the kubectl command-line tool

Download the client or server tar packages from https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG.md, unpack them, and install the binaries

[root@linux-node1 src]# tar  xf  kubernetes-server-linux-amd64.tar.gz 
[root@linux-node1 src]# tar  xf  kubernetes.tar.gz 
[root@linux-node1 src]# tar  xf  kubernetes-node-linux-amd64.tar.gz 
[root@linux-node1 src]# tar  xf  kubernetes-client-linux-amd64.tar.gz 
[root@linux-node1 src]# cd /usr/local/src/kubernetes
[root@linux-node1 kubernetes]# cp  server/bin/kube-apiserver /opt/kubernetes/bin/
[root@linux-node1 kubernetes]# cp  server/bin/kube-scheduler /opt/kubernetes/bin/
[root@linux-node1 kubernetes]# cp  server/bin/kube-controller-manager  /opt/kubernetes/bin/

Create the JSON config file for the CSR

[root@linux-node1 kubernetes]# cd /usr/local/src/ssl/
[root@linux-node1 ssl]# vim  kubernetes-csr.json
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "172.16.1.201",
    "10.1.0.1",
    "10.254.0.2",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

Generate the kubernetes certificate and private key, and distribute them to all nodes.

[root@linux-node1 ssl]# cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem \
-ca-key=/opt/kubernetes/ssl/ca-key.pem \
-config=/opt/kubernetes/ssl/ca-config.json \
-profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
[root@linux-node1 ssl]# cp kubernetes*.pem /opt/kubernetes/ssl/
[root@linux-node1 ssl]# scp kubernetes*.pem  linux-node2:/opt/kubernetes/ssl/
[root@linux-node1 ssl]# scp kubernetes*.pem  linux-node3:/opt/kubernetes/ssl/

Create the client token file used by kube-apiserver

[root@linux-node1 ssl]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
e17fd3885b0d4fc24267b0fc03ed61dd
[root@linux-node1 ssl]# vim /opt/kubernetes/ssl/bootstrap-token.csv
e17fd3885b0d4fc24267b0fc03ed61dd,kubelet-bootstrap,10001,"system:kubelet-bootstrap"

Create the basic username/password authentication config (optional; if you skip this step, remove --basic-auth-file from the kube-apiserver config)

[root@linux-node1 ssl]# vim /opt/kubernetes/ssl/basic-auth.csv 
admin,admin,1
readonly,readonly,2

Create the Kubernetes API Server service and start kube-apiserver

[root@linux-node1 ssl]# vim  /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
ExecStart=/opt/kubernetes/bin/kube-apiserver \
  --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota,NodeRestriction \
  --bind-address=172.16.1.201 \
  --insecure-bind-address=127.0.0.1 \
  --authorization-mode=Node,RBAC \
  --runtime-config=rbac.authorization.k8s.io/v1 \
  --kubelet-https=true \
  --anonymous-auth=false \
  --basic-auth-file=/opt/kubernetes/ssl/basic-auth.csv \
  --enable-bootstrap-token-auth \
  --token-auth-file=/opt/kubernetes/ssl/bootstrap-token.csv \
  --service-cluster-ip-range=10.1.0.0/16 \
  --service-node-port-range=20000-40000 \
  --tls-cert-file=/opt/kubernetes/ssl/kubernetes.pem \
  --tls-private-key-file=/opt/kubernetes/ssl/kubernetes-key.pem \
  --client-ca-file=/opt/kubernetes/ssl/ca.pem \
  --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
  --etcd-cafile=/opt/kubernetes/ssl/ca.pem \
  --etcd-certfile=/opt/kubernetes/ssl/kubernetes.pem \
  --etcd-keyfile=/opt/kubernetes/ssl/kubernetes-key.pem \
  --etcd-servers=https://172.16.1.201:2379,https://172.16.1.202:2379,https://172.16.1.203:2379 \
  --enable-swagger-ui=true \
  --allow-privileged=true \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/opt/kubernetes/log/api-audit.log \
  --event-ttl=1h \
  --v=2 \
  --logtostderr=false \
  --log-dir=/opt/kubernetes/log
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
#Start the apiserver service
[root@linux-node1 ssl]# systemctl daemon-reload
[root@linux-node1 ssl]# systemctl enable kube-apiserver
[root@linux-node1 ssl]# systemctl start kube-apiserver
[root@linux-node1 ssl]# systemctl status kube-apiserver

[Parameter notes]

  • --authorization-mode=Node,RBAC enables RBAC authorization on the secure port and rejects unauthorized requests;
  • kube-scheduler and kube-controller-manager are usually deployed on the same machine as kube-apiserver and talk to it over the insecure port;
  • kubelet, kube-proxy, and kubectl run on the node hosts; to reach kube-apiserver over the secure port they must first pass TLS certificate authentication and then RBAC authorization;
  • kube-proxy and kubectl obtain RBAC authorization through the User and Group specified in their certificates;
  • if the kubelet TLS bootstrap mechanism is used, do not also set --kubelet-certificate-authority, --kubelet-client-certificate, or --kubelet-client-key, or kube-apiserver will later fail to validate kubelet certificates with "x509: certificate signed by unknown authority";
  • the --admission-control value must include ServiceAccount;
  • --bind-address must not be 127.0.0.1;
  • --runtime-config=rbac.authorization.k8s.io/v1 specifies the runtime apiVersion;
  • --service-cluster-ip-range specifies the Service cluster IP range, which must not be routable;
  • by default Kubernetes objects are stored under the /registry prefix in etcd; this can be changed with --etcd-prefix.

Deploy and start the Controller Manager service

[root@linux-node1 ~]# vim  /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/opt/kubernetes/bin/kube-controller-manager \
  --address=127.0.0.1 \
  --master=http://127.0.0.1:8080 \
  --allocate-node-cidrs=true \
  --service-cluster-ip-range=10.1.0.0/16 \
  --cluster-cidr=10.2.0.0/16 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
  --cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
  --service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \
  --root-ca-file=/opt/kubernetes/ssl/ca.pem \
  --leader-elect=true \
  --v=2 \
  --logtostderr=false \
  --log-dir=/opt/kubernetes/log

Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
#Start the kube-controller-manager service
[root@linux-node1 ~]# systemctl daemon-reload
[root@linux-node1 ~]# systemctl enable kube-controller-manager
[root@linux-node1 ~]# systemctl start kube-controller-manager
[root@linux-node1 ~]# systemctl status kube-controller-manager

[Parameter notes]

  • --service-cluster-ip-range specifies the CIDR range for cluster Services; it must not be routable between nodes and must match the kube-apiserver setting;
  • --cluster-signing-* specify the certificate and key used to sign the certificates created for TLS bootstrap;
  • --root-ca-file is used to validate the kube-apiserver certificate; only when set is this CA certificate placed into Pod containers' ServiceAccounts;
  • --address must be 127.0.0.1, since kube-apiserver currently expects the scheduler and controller-manager on the same machine

Deploy and start the Kubernetes Scheduler service

[root@linux-node1 ~]# vim  /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/opt/kubernetes/bin/kube-scheduler \
  --address=127.0.0.1 \
  --master=http://127.0.0.1:8080 \
  --leader-elect=true \
  --v=2 \
  --logtostderr=false \
  --log-dir=/opt/kubernetes/log

Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
#Start the kube-scheduler service
[root@linux-node1 ~]# systemctl daemon-reload
[root@linux-node1 ~]# systemctl enable kube-scheduler
[root@linux-node1 ~]# systemctl start kube-scheduler
[root@linux-node1 ~]# systemctl status kube-scheduler

[Parameter notes]

  • --address must be 127.0.0.1, since kube-apiserver currently expects the scheduler and controller-manager on the same machine.

Deploying the kubectl command-line tool

#Install the kubectl binary
[root@linux-node1 ssl]# cp  /usr/local/src/kubernetes/server/bin/kubectl /opt/kubernetes/bin/
#Create the admin certificate signing request
[root@linux-node1 ssl]# vim  admin-csr.json
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
#Generate the admin certificate and private key
[root@linux-node1 ssl]# cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem \
-ca-key=/opt/kubernetes/ssl/ca-key.pem \
-config=/opt/kubernetes/ssl/ca-config.json \
-profile=kubernetes admin-csr.json | cfssljson -bare admin
[root@linux-node1 ssl]# cp  admin*.pem /opt/kubernetes/ssl/
#Set the cluster parameters
[root@linux-node1 ssl]# kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=https://172.16.1.201:6443
#Set the client authentication parameters
[root@linux-node1 ssl]# kubectl config set-credentials admin \
--client-certificate=/opt/kubernetes/ssl/admin.pem \
--embed-certs=true \
--client-key=/opt/kubernetes/ssl/admin-key.pem
#Set the context parameters
[root@linux-node1 ssl]#  kubectl config set-context kubernetes \
--cluster=kubernetes \
--user=admin
#Set the default context
[root@linux-node1 ssl]# kubectl config use-context kubernetes
Switched to context "kubernetes".
#Try the kubectl tool
[root@linux-node1 ssl]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok                   
etcd-1               Healthy   {"health": "true"}   
etcd-2               Healthy   {"health": "true"}   
scheduler            Healthy   ok                   
etcd-0               Healthy   {"health": "true"}   

Notes:

  • kube-apiserver later uses RBAC to authorize client requests (from kubelet, kube-proxy, Pods, etc.);
  • kube-apiserver predefines some RoleBindings for RBAC use, e.g. cluster-admin binds the Group system:masters to the Role cluster-admin, which grants permission to call all kube-apiserver APIs;
  • O sets the certificate's Group to system:masters. When this certificate is used against kube-apiserver, authentication succeeds because the certificate is signed by the CA, and because the certificate's group is the pre-authorized system:masters, it is granted access to all APIs.

Deploying the Kubernetes node components

Copy the kubelet and kube-proxy binaries to the appropriate directory on the node hosts

[root@linux-node1 ssl]# cd /usr/local/src/kubernetes/server/bin
[root@linux-node1 bin]# scp kubelet kube-proxy  linux-node2:/opt/kubernetes/bin/
[root@linux-node1 bin]# scp kubelet kube-proxy  linux-node3:/opt/kubernetes/bin/

Create the role binding

[root@linux-node1 bin]# cd /usr/local/src/ssl/
[root@linux-node1 ssl]# kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io "kubelet-bootstrap" created

Create the kubelet bootstrap.kubeconfig file and set the cluster parameters

[root@linux-node1 ssl]# kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=https://172.16.1.201:6443 \
--kubeconfig=bootstrap.kubeconfig
Cluster "kubernetes" set.

Set the client authentication parameters; the token is the one generated earlier

[root@linux-node1 ssl]# kubectl config set-credentials kubelet-bootstrap \
--token=e17fd3885b0d4fc24267b0fc03ed61dd \
--kubeconfig=bootstrap.kubeconfig

User "kubelet-bootstrap" set.

Set the context parameters

[root@linux-node1 ssl]# kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=bootstrap.kubeconfig
Context "default" created.

Select the default context and distribute the bootstrap.kubeconfig generated on the master to the node hosts

At startup, kubelet sends a TLS bootstrapping request to kube-apiserver. The kubelet-bootstrap user from the bootstrap token file must first be bound to the system:node-bootstrapper cluster role; only then does kubelet have permission to create certificate signing requests:

[root@linux-node1 ssl]#  kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
Switched to context "default".
[root@linux-node1 ssl]#  cp bootstrap.kubeconfig /opt/kubernetes/cfg
[root@linux-node1 ssl]# scp bootstrap.kubeconfig  linux-node2:/opt/kubernetes/cfg/
[root@linux-node1 ssl]# scp bootstrap.kubeconfig  linux-node3:/opt/kubernetes/cfg/

Deploy kubelet (node hosts only).

1. Set up CNI support.

[root@linux-node2 ~]# mkdir -p /etc/cni/net.d
[root@linux-node2 ~]# vim  /etc/cni/net.d/10-default.conf
{
        "name": "flannel",
        "type": "flannel",
        "delegate": {
            "bridge": "docker0",
            "isDefaultGateway": true,
            "mtu": 1400
        }
}

2. Create the kubelet working directory referenced by the kubelet service

[root@linux-node2 ~]# mkdir /var/lib/kubelet

3. Create the kubelet service config and start the kubelet service

[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/opt/kubernetes/bin/kubelet \
  --address=172.16.1.202 \
  --hostname-override=172.16.1.202 \
  --pod-infra-container-image=mirrorgooglecontainers/pause-amd64:3.0 \
  --experimental-bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
  --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
  --cert-dir=/opt/kubernetes/ssl \
  --network-plugin=cni \
  --cni-conf-dir=/etc/cni/net.d \
  --cni-bin-dir=/opt/kubernetes/bin/cni \
  --cluster-dns=10.1.0.2 \
  --cluster-domain=cluster.local. \
  --hairpin-mode hairpin-veth \
  --allow-privileged=true \
  --fail-swap-on=false \
  --v=2 \
  --logtostderr=false \
  --log-dir=/opt/kubernetes/log
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
linux-node2 --- kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/opt/kubernetes/bin/kubelet \
  --address=172.16.1.203 \
  --hostname-override=172.16.1.203 \
  --pod-infra-container-image=mirrorgooglecontainers/pause-amd64:3.0 \
  --experimental-bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
  --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
  --cert-dir=/opt/kubernetes/ssl \
  --network-plugin=cni \
  --cni-conf-dir=/etc/cni/net.d \
  --cni-bin-dir=/opt/kubernetes/bin/cni \
  --cluster-dns=10.1.0.2 \
  --cluster-domain=cluster.local. \
  --hairpin-mode hairpin-veth \
  --allow-privileged=true \
  --fail-swap-on=false \
  --v=2 \
  --logtostderr=false \
  --log-dir=/opt/kubernetes/log
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
linux-node3 --- kubelet.service
[root@linux-node2 ~]# systemctl daemon-reload
[root@linux-node2 ~]# systemctl enable kubelet
[root@linux-node2 ~]# systemctl start kubelet
[root@linux-node2 ~]# systemctl status kubelet

4. On the master, check the CSR requests

[root@linux-node1 ssl]# kubectl get csr
NAME                                                   AGE       REQUESTOR           CONDITION
node-csr-DWqlTfZICnEa8tnmDMC40IQqb5Ga79xa4wvTdOBHmJo   54s       kubelet-bootstrap   Pending
node-csr-fLVu9tSx9OGYnmfJxNwlxPTOFPIu_CQtC5xSWXK4PBI   57s       kubelet-bootstrap   Pending

5. Approve the kubelet TLS certificate requests

[root@linux-node1 ssl]#  kubectl get csr|grep 'Pending' | awk 'NR>0{print $1}'| xargs kubectl certificate approve
certificatesigningrequest.certificates.k8s.io "node-csr-DWqlTfZICnEa8tnmDMC40IQqb5Ga79xa4wvTdOBHmJo" approved
certificatesigningrequest.certificates.k8s.io "node-csr-fLVu9tSx9OGYnmfJxNwlxPTOFPIu_CQtC5xSWXK4PBI" approved

Once approved, check the node status; if the nodes show Ready, everything is working

[root@linux-node1 ssl]# kubectl get node
NAME           STATUS    ROLES     AGE       VERSION
172.16.1.202   Ready     <none>    17s       v1.10.1
172.16.1.203   Ready     <none>    17s       v1.10.1

Deploying kube-proxy (node hosts only)

1. Install the tooling kube-proxy needs for LVS

[root@linux-node2 ~]# yum install -y ipvsadm ipset conntrack

2. On the master, create the kube-proxy certificate signing request

[root@linux-node1 ssl]# cd /usr/local/src/ssl/
[root@linux-node1 ssl]# vim  kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

3. Generate the certificate and distribute it to the node hosts

[root@linux-node1 ssl]# cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem \
-ca-key=/opt/kubernetes/ssl/ca-key.pem \
-config=/opt/kubernetes/ssl/ca-config.json \
-profile=kubernetes  kube-proxy-csr.json | cfssljson -bare kube-proxy
[root@linux-node1 ssl]# cp kube-proxy*.pem /opt/kubernetes/ssl/
#Distribute the kube-proxy certificates to the node hosts
[root@linux-node1 ssl]# scp kube-proxy*.pem  linux-node2:/opt/kubernetes/ssl/
[root@linux-node1 ssl]# scp kube-proxy*.pem  linux-node3:/opt/kubernetes/ssl/

4. Create the kube-proxy kubeconfig

[root@linux-node1 ssl]# kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=https://172.16.1.201:6443 \
--kubeconfig=kube-proxy.kubeconfig

Cluster "kubernetes" set.
[root@linux-node1 ssl]# kubectl config set-credentials kube-proxy \
--client-certificate=/opt/kubernetes/ssl/kube-proxy.pem \
--client-key=/opt/kubernetes/ssl/kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig

User "kube-proxy" set.
[root@linux-node1 ssl]# kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig

Context "default" created.
[root@linux-node1 ssl]# kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

Switched to context "default".
[root@linux-node1 ssl]# cp kube-proxy.kubeconfig /opt/kubernetes/cfg/
#Distribute the kube-proxy kubeconfig to each node host
[root@linux-node1 ssl]# scp kube-proxy.kubeconfig linux-node2:/opt/kubernetes/cfg/
[root@linux-node1 ssl]# scp kube-proxy.kubeconfig linux-node3:/opt/kubernetes/cfg/

5. Create the kube-proxy service config

First create the kube-proxy working directory: mkdir /var/lib/kube-proxy

[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy \
  --bind-address=172.16.1.202 \
  --hostname-override=172.16.1.202 \
  --kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig \
  --masquerade-all \
  --feature-gates=SupportIPVSProxyMode=true \
  --proxy-mode=ipvs \
  --ipvs-min-sync-period=5s \
  --ipvs-sync-period=5s \
  --ipvs-scheduler=rr \
  --v=2 \
  --logtostderr=false \
  --log-dir=/opt/kubernetes/log

Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
linux-node2 --- kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy \
  --bind-address=172.16.1.203 \
  --hostname-override=172.16.1.203 \
  --kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig \
  --masquerade-all \
  --feature-gates=SupportIPVSProxyMode=true \
  --proxy-mode=ipvs \
  --ipvs-min-sync-period=5s \
  --ipvs-sync-period=5s \
  --ipvs-scheduler=rr \
  --logtostderr=true \
  --v=2 \
  --logtostderr=false \
  --log-dir=/opt/kubernetes/log

Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
linux-node3---kube-proxy.service

  6. Start Kubernetes Proxy (run on each node)

[root@linux-node2 ~]# systemctl daemon-reload
[root@linux-node2 ~]# systemctl enable kube-proxy
[root@linux-node2 ~]#  systemctl start kube-proxy
[root@linux-node2 ~]# systemctl status kube-proxy

  7. Check the IPVS (LVS) status

[root@linux-node2 ~]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.1.0.1:443 rr persistent 10800
  -> 172.16.1.201:6443            Masq    1      0          0         
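The output above shows the kubernetes service VIP 10.1.0.1:443 being NAT-forwarded to the apiserver backend. As a rough illustration of the `rr` scheduler selected with `--ipvs-scheduler=rr`, here is a conceptual Python sketch (the backend list is hypothetical; the real scheduling happens in the kernel's IPVS module, not in user code):

```python
from itertools import cycle

# Hypothetical backends behind one service VIP; this only illustrates
# round-robin ("rr") selection, which IPVS performs in the kernel.
backends = ["172.16.1.202:80", "172.16.1.203:80"]
rr = cycle(backends)

# Four incoming connections are spread evenly across the backends.
picks = [next(rr) for _ in range(4)]
print(picks)
# → ['172.16.1.202:80', '172.16.1.203:80', '172.16.1.202:80', '172.16.1.203:80']
```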

  8. Check the node status with the following command

[root@linux-node1 ssl]# kubectl get nodes
NAME           STATUS    ROLES     AGE       VERSION
172.16.1.202   Ready     <none>    7m        v1.10.1
172.16.1.203   Ready     <none>    7m        v1.10.1

Deploying the Flannel network for the Kubernetes cluster

 About Flannel

  • Flannel is a network fabric designed by the CoreOS team for Kubernetes. In short, it ensures that Docker containers created on different nodes of the cluster all receive virtual IP addresses that are unique across the whole cluster.
  • In the default Docker configuration, the Docker daemon on each node assigns container IPs on its own, so containers on different nodes may end up with the same IP address.
  • Flannel re-plans how IP addresses are used across all nodes in the cluster, so that containers on different nodes get non-conflicting addresses that "belong to the same internal network", and containers on different nodes can talk to each other directly over those internal IPs.
  • Flannel is essentially an "overlay network": it wraps TCP packets inside another network packet for routing and forwarding. It currently supports udp, vxlan, host-gw, aws-vpc, gce and alloc as data-forwarding backends; the default inter-node transport is UDP forwarding.

 Flannel's GitHub page includes a diagram of the principle (not reproduced here):

  • After a packet leaves the source container, the host's docker0 virtual NIC forwards it to the flannel0 virtual NIC, a P2P device with the flanneld daemon listening on the other end (flannel maintains an inter-node routing table in etcd);
  • The source host's flanneld daemon UDP-encapsulates the original payload and, based on its routing table, delivers it to the flanneld daemon on the destination node, where the packet is unwrapped, passed into the destination's flannel0 virtual NIC and forwarded on to the destination host's docker0 virtual NIC;
  • Finally docker0 routes it to the target container just as for local container traffic, completing delivery of the packet.
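The three steps above can be sketched as a toy encapsulation in Python (the route table, addresses and dictionary "packet" format are purely illustrative — flanneld's real state lives in etcd and the kernel):

```python
# Toy model of flannel's encapsulation: an inner container-to-container
# packet is wrapped in an outer packet addressed between the two node IPs,
# chosen via a routing table (flannel keeps the real one in etcd).
ROUTES = {
    "10.2.10": "172.16.1.202",  # node2's leased /24 -> node2's host IP
    "10.2.13": "172.16.1.203",  # node3's leased /24 -> node3's host IP
}

def node_for(container_ip: str) -> str:
    # Look up which node owns the /24 the container IP falls into.
    return ROUTES[container_ip.rsplit(".", 1)[0]]

def encapsulate(src: str, dst: str, payload: str) -> dict:
    inner = {"src": src, "dst": dst, "data": payload}
    # The outer header carries node addresses; the inner packet is the payload.
    return {"outer_src": node_for(src), "outer_dst": node_for(dst), "inner": inner}

pkt = encapsulate("10.2.10.2", "10.2.13.5", "hello")
print(pkt["outer_dst"])   # the node that will unwrap and deliver the packet
# → 172.16.1.203
```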

 Generate a certificate for Flannel

[root@linux-node1 ssl]# cd /usr/local/src/ssl/
[root@linux-node1 ssl]# vim  flanneld-csr.json
{
  "CN": "flanneld",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

 Generate the flannel certificate and distribute it to each node

[root@linux-node1 ssl]# cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem \
-ca-key=/opt/kubernetes/ssl/ca-key.pem \
-config=/opt/kubernetes/ssl/ca-config.json \
-profile=kubernetes flanneld-csr.json | cfssljson -bare flanneld
[root@linux-node1 ssl]# cp flanneld*.pem /opt/kubernetes/ssl/
#Distribute the flannel certificates to each node
[root@linux-node1 ssl]# scp flanneld*.pem  linux-node2:/opt/kubernetes/ssl/
[root@linux-node1 ssl]# scp flanneld*.pem  linux-node3:/opt/kubernetes/ssl/

 Download the Flannel package and distribute the flannel binaries to each node

[root@linux-node1 ssl]# cd /usr/local/src
[root@linux-node1 src]# wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz
[root@linux-node1 src]# tar  xf  flannel-v0.10.0-linux-amd64.tar.gz 
[root@linux-node1 src]# cp  flanneld  mk-docker-opts.sh   /opt/kubernetes/bin/
#Distribute the flannel binaries to each node
[root@linux-node1 src]# scp flanneld mk-docker-opts.sh  linux-node2:/opt/kubernetes/bin/
[root@linux-node1 src]# scp flanneld mk-docker-opts.sh  linux-node3:/opt/kubernetes/bin/

 Distribute the remove-docker0.sh script to /opt/kubernetes/bin on each host

[root@linux-node1 src]#  cd /usr/local/src/kubernetes/cluster/centos/node/bin/
[root@linux-node1 bin]# cp remove-docker0.sh /opt/kubernetes/bin/
[root@linux-node1 bin]# scp  remove-docker0.sh linux-node2:/opt/kubernetes/bin/
[root@linux-node1 bin]# scp  remove-docker0.sh linux-node3:/opt/kubernetes/bin/

 Create the Flannel configuration file and distribute it to each node

[root@linux-node1 bin]# vim  /opt/kubernetes/cfg/flannel
FLANNEL_ETCD="-etcd-endpoints=https://172.16.1.201:2379,https://172.16.1.202:2379,https://172.16.1.203:2379"
FLANNEL_ETCD_KEY="-etcd-prefix=/kubernetes/network"
FLANNEL_ETCD_CAFILE="--etcd-cafile=/opt/kubernetes/ssl/ca.pem"
FLANNEL_ETCD_CERTFILE="--etcd-certfile=/opt/kubernetes/ssl/flanneld.pem"
FLANNEL_ETCD_KEYFILE="--etcd-keyfile=/opt/kubernetes/ssl/flanneld-key.pem"
[root@linux-node1 bin]# scp /opt/kubernetes/cfg/flannel  linux-node2:/opt/kubernetes/cfg/
[root@linux-node1 bin]# scp /opt/kubernetes/cfg/flannel  linux-node3:/opt/kubernetes/cfg/

 Set up the Flannel systemd service and distribute it to each node

[root@linux-node1 bin]# vim /usr/lib/systemd/system/flannel.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
Before=docker.service

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/flannel
ExecStartPre=/opt/kubernetes/bin/remove-docker0.sh
ExecStart=/opt/kubernetes/bin/flanneld ${FLANNEL_ETCD} ${FLANNEL_ETCD_KEY} ${FLANNEL_ETCD_CAFILE} ${FLANNEL_ETCD_CERTFILE} ${FLANNEL_ETCD_KEYFILE}
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -d /run/flannel/docker

Type=notify

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
#Distribute the flannel service file to each node
[root@linux-node1 bin]# scp /usr/lib/systemd/system/flannel.service  linux-node2:/usr/lib/systemd/system/
[root@linux-node1 bin]# scp /usr/lib/systemd/system/flannel.service  linux-node3:/usr/lib/systemd/system/

 Flannel CNI integration

#Download the CNI plugins
[root@linux-node1 bin]# cd /usr/local/src/
[root@linux-node1 src]# wget https://github.com/containernetworking/plugins/releases/download/v0.7.1/cni-plugins-amd64-v0.7.1.tgz
#Create this directory on all nodes (including linux-node2/linux-node3, before the scp below) to hold the CNI plugin files.
[root@linux-node1 src]# mkdir /opt/kubernetes/bin/cni
[root@linux-node1 src]# tar xf  cni-plugins-amd64-v0.7.1.tgz -C /opt/kubernetes/bin/cni
[root@linux-node1 src]# scp -r /opt/kubernetes/bin/cni/* linux-node2:/opt/kubernetes/bin/cni/
[root@linux-node1 src]# scp -r /opt/kubernetes/bin/cni/* linux-node3:/opt/kubernetes/bin/cni/

 Create the flannel network key in etcd (run this once, on the master only)

[root@linux-node1 src]# /opt/kubernetes/bin/etcdctl --ca-file /opt/kubernetes/ssl/ca.pem --cert-file /opt/kubernetes/ssl/flanneld.pem --key-file /opt/kubernetes/ssl/flanneld-key.pem \
--no-sync -C https://172.16.1.201:2379,https://172.16.1.202:2379,https://172.16.1.203:2379 \
mk /kubernetes/network/config '{ "Network": "10.2.0.0/16", "Backend": { "Type": "vxlan", "VNI": 1 }}'
{ "Network": "10.2.0.0/16", "Backend": { "Type": "vxlan", "VNI": 1 }}
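The `Network` value written above is the pool flanneld leases per-node subnets from: each node receives a /24 out of 10.2.0.0/16 (flannel's default SubnetLen is 24). A quick Python check of that arithmetic — which particular /24 a node receives is decided at lease time, not by this snippet:

```python
import ipaddress
import json

# The same JSON written to /kubernetes/network/config above.
config = json.loads('{ "Network": "10.2.0.0/16", "Backend": { "Type": "vxlan", "VNI": 1 }}')
pool = ipaddress.ip_network(config["Network"])

# Carving the /16 into /24 node subnets gives 256 possible leases.
subnets = list(pool.subnets(new_prefix=24))
print(len(subnets))   # → 256
print(subnets[10])    # → 10.2.10.0/24 — matches docker0's 10.2.10.1 seen later
```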

 Start flannel (run on every node)

[root@linux-node1 src]# systemctl daemon-reload
[root@linux-node1 src]# systemctl enable flannel
[root@linux-node1 src]# systemctl start flannel
[root@linux-node1 src]# systemctl status flannel

 Configure Docker to use Flannel

  1. In the [Unit] section, append flannel.service to the After= line and add Requires=flannel.service below Wants=.
  2. In the [Service] section, add EnvironmentFile=-/run/flannel/docker after Type=, and append $DOCKER_OPTS to the ExecStart= line.

[root@linux-node1 src]# vim  /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service flannel.service
Wants=network-online.target
Requires=flannel.service

[Service]
Type=notify
EnvironmentFile=-/run/flannel/docker
ExecStart=/usr/bin/dockerd $DOCKER_OPTS
...

 Distribute the docker configuration to the other two nodes

[root@linux-node1 src]# rsync -av  /usr/lib/systemd/system/docker.service linux-node2:/usr/lib/systemd/system/docker.service
[root@linux-node1 src]# rsync -av  /usr/lib/systemd/system/docker.service linux-node3:/usr/lib/systemd/system/docker.service

 Restart the Docker service. If docker0 and flannel.1 are in the same subnet, flannel is working correctly.

[root@linux-node1 ~]# systemctl restart  docker
[root@linux-node1 ~]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 10.2.10.1  netmask 255.255.255.0  broadcast 10.2.10.255
        ether 02:42:38:7d:85:db  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.2.10.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::e8f5:7aff:fec1:e252  prefixlen 64  scopeid 0x20<link>
        ether ea:f5:7a:c1:e2:52  txqueuelen 0  (Ethernet)
        RX packets 862  bytes 180110 (175.8 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1628  bytes 402660 (393.2 KiB)
        TX errors 0  dropped 8 overruns 0  carrier 0  collisions 0

 Now you can create the first K8s application to test communication between cluster nodes, for example an nginx service

[root@linux-node1 ~]# kubectl run nginx-service --image=nginx --replicas=3
[root@linux-node1 ~]# kubectl get pod -o wide                             
NAME                             READY     STATUS    RESTARTS   AGE       IP          NODE
nginx-service-75ccfbd5c9-5qp4w   1/1       Running   0          22m       10.2.5.3    172.16.1.202
nginx-service-75ccfbd5c9-hfbmw   1/1       Running   0          22m       10.2.13.5   172.16.1.203
nginx-service-75ccfbd5c9-tpsxv   1/1       Running   0          22m       10.2.13.4   172.16.1.203
[root@linux-node1 ~]# ping  10.2.5.3
[root@linux-node1 ~]# ping  10.2.13.5
[root@linux-node1 ~]# ping  10.2.13.4

Deploying CoreDNS and the Dashboard for the Kubernetes cluster

 Before kube-dns is deployed, pods are reached via podIP and containerPort, which raises several problems:

  • Having to look up the pod IP every time is impractical — you should not need to change the program or its configuration just to reach a service, and there is no way to know a pod IP in advance
  • A pod may be rebuilt while running, and its IP address changes with every restart, so interacting with pods directly by IP is not recommended
  • How do you load-balance across multiple pods?

 A Kubernetes Service solves this. A Service provides a single entry point for a group of pods (selected via labels), plus load balancing and automatic service discovery. After creating a Service you still need to obtain its Cluster-IP and combine it with the port to reach it; from outside the Kubernetes cluster, it can only be reached via http://node-ip:mapped-port.

 Although a Service solves pod service discovery and load balancing, a similar problem remains: the Service IP is not known in advance, so programs or configuration still have to be changed. kube-dns exists to solve exactly this problem.

 kube-dns solves Service discovery: Kubernetes registers each Service's name as a domain name in kube-dns, so a Service can be reached by its name. Other applications can simply use the service name without caring about its actual IP address; the name-to-IP translation happens automatically — exactly what a DNS system does.

 The kube-dns service is not a stand-alone system service but an addon, installed as a plugin. It is not mandatory for a Kubernetes cluster (but strongly recommended). Think of it as an application running on the cluster — just a rather special one.
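The name-to-IP indirection described above can be sketched in a few lines of Python (the records mirror services created in this guide; real resolution is of course performed by the DNS server, not a dict):

```python
# Toy model of cluster DNS: a service name maps to a stable ClusterIP, so
# applications use names and never hard-code pod or service IPs.
RECORDS = {
    "kubernetes.default.svc.cluster.local": "10.1.0.1",
    "nginx-service.default.svc.cluster.local": "10.1.34.17",
}

def resolve(service: str, namespace: str = "default") -> str:
    # Build the fully-qualified in-cluster name and look it up.
    return RECORDS[f"{service}.{namespace}.svc.cluster.local"]

# The app asks for the name; the IP behind it can change freely.
print(resolve("nginx-service"))
# → 10.1.34.17
```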

 The kubernetes package downloaded earlier already contains the CoreDNS manifests, which can be used directly; they can also be downloaded from the Internet

[root@linux-node1 ~]# cd /usr/local/src/kubernetes/cluster/addons/dns
[root@linux-node1 dns]# cp  coredns.yaml.base  coredns.yaml
#In coredns.yaml, change the following two places to your own domain and cluster IP:
#1. change "kubernetes __PILLAR__DNS__DOMAIN__" to "kubernetes cluster.local."
#2. change "clusterIP: __PILLAR__DNS__SERVER__" to "clusterIP: 10.1.0.2"
[root@linux-node1 dns]# vim  coredns.yaml

 Create the coredns service

[root@linux-node1 dns]# kubectl create -f coredns.yaml
serviceaccount "coredns" created
clusterrole.rbac.authorization.k8s.io "system:coredns" created
clusterrolebinding.rbac.authorization.k8s.io "system:coredns" created
configmap "coredns" created
deployment.extensions "coredns" created
service "coredns" created

 Check the service status

[root@linux-node1 dns]# kubectl get pod -n kube-system -o wide
NAME                       READY     STATUS    RESTARTS   AGE       IP          NODE
coredns-77c989547b-7k26g   1/1       Running   0          19s       10.2.5.4    172.16.1.202
coredns-77c989547b-czbjv   1/1       Running   0          19s       10.2.13.6   172.16.1.203

 Test CoreDNS resolution; output like the following shows CoreDNS is resolving correctly

[root@linux-node1 dns]#  kubectl run -i --tty busybox --image=docker.io/busybox /bin/sh
If you don't see a command prompt, try pressing enter.
/ # nslookup www.baidu.com
Server:         10.1.0.2
Address:        10.1.0.2:53

 Next, verify that CoreDNS can resolve the nginx service deployed earlier. Looking up the nginx service shows its cluster IP is 10.1.34.17 (the cluster IP provides the load balancing)

[root@linux-node1 ~]# kubectl get services --all-namespaces |grep  nginx
default       nginx-service          NodePort    10.1.34.17     <none> 

[root@linux-node1 ~]# kubectl get svc -o wide
NAME            TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE       SELECTOR
kubernetes      ClusterIP   10.1.0.1     <none>        443/TCP        1d        <none>
nginx-service   NodePort    10.1.34.17   <none>        80:30081/TCP   1d        name=nginx-pod

 Create another application and check whether the nginx service name nginx-service can be pinged, as a further check that CoreDNS resolves correctly. The output below shows that pinging nginx-service resolves it to the cluster IP, so the CoreDNS service is working

[root@linux-node1 ~]# kubectl run -i --tty mybusybox --image=docker.io/busybox /bin/sh
If you don't see a command prompt, try pressing enter.
/ # ping  nginx-service
PING nginx-service (10.1.34.17): 56 data bytes
64 bytes from 10.1.34.17: seq=0 ttl=64 time=0.139 ms
64 bytes from 10.1.34.17: seq=1 ttl=64 time=0.116 ms
64 bytes from 10.1.34.17: seq=2 ttl=64 time=0.117 ms
64 bytes from 10.1.34.17: seq=3 ttl=64 time=0.119 ms

 Create the Dashboard

  Download the dashboard manifests; the files at the address below have already been modified and can be used directly

[root@linux-node1 ~]# git clone https://github.com/unixhot/salt-kubernetes.git

  Create the dashboard service

[root@linux-node1 addons]# cd dashboard/
[root@linux-node1 dashboard]# kubectl create -f kubernetes-dashboard.yaml 
[root@linux-node1 dashboard]# kubectl  create -f  admin-user-sa-rbac.yaml 
[root@linux-node1 dashboard]# kubectl  create -f  ui-admin-rbac.yaml
[root@linux-node1 dashboard]# kubectl  create -f  ui-read-rbac.yaml 
[root@linux-node1 addons]# kubectl cluster-info
Kubernetes master is running at https://172.16.1.201:6443
CoreDNS is running at https://172.16.1.201:6443/api/v1/namespaces/kube-system/services/coredns:dns/proxy
kubernetes-dashboard is running at https://172.16.1.201:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

  From the kubectl cluster-info output above, the dashboard can be reached through the apiserver at https://172.16.1.201:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy . Authenticate with the basic-auth.csv credentials defined earlier (username admin, password admin), then enter the token value to reach the dashboard UI.

  The dashboard can also be reached through its NodePort mapping, using any node IP plus the mapped port

[root@linux-node1 addons]# kubectl get svc -o wide --all-namespaces
NAMESPACE     NAME                   TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)         AGE       SELECTOR
default       kubernetes             ClusterIP   10.1.0.1      <none>        443/TCP         2h        <none>
kube-system   coredns                ClusterIP   10.1.0.2      <none>        53/UDP,53/TCP   17m       k8s-app=coredns
kube-system   kubernetes-dashboard   NodePort    10.1.52.161   <none>        443:27907/TCP   1m        k8s-app=kubernetes-dashboard

  Open https://nodeip:27907 in Firefox, e.g. https://172.16.1.203:27907/

  Choose token login, then run the following command on the master to generate the authentication token

[root@linux-node1 addons]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
Name:         admin-user-token-2kgbx
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name=admin-user
              kubernetes.io/service-account.uid=d47d62da-944f-11e8-b638-000c29dc96b2

Type:  kubernetes.io/service-account-token

Data
====
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLTJrZ2J4Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJkNDdkNjJkYS05NDRmLTExZTgtYjYzOC0wMDBjMjlkYzk2YjIiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.RlcxTsZUDsWjTvYIxG38wwMAYpwpJkDcTMlS-DlCdTMd6FpISe_0aZduPd90KZaDRWcUS75V7IZQm0b01HwTIc6K681vFGDsVl8L4Cf5Jx8xANSei4A7QKWT4jRSn5zHpvD82DiZZa2gDzx6lguEWejYi3chE_T-u3B4B6adsSgbfPQDcyYfgSSJ8uhuB-9T-6btxacEdiBUUF_P5KoCsfZKUm0wHO8etL2qHT98m9wMxHAC6eqj1iWsmJT8mX1cjNOI3BB_uN1BKexrl805EMxiETHM_6KAsaE6WBWdHeCJe16BMjriA8ipjd9JqIP0f0rQY4jCPNZ9NX1oPTAElQ
ca.crt:     1359 bytes
namespace:  11 bytes

  After logging in, the dashboard is displayed (screenshot omitted).

 Deploy the heapster monitoring plugin

  The dashboard deployed above has no monitoring plugin by default, so there are no charts of CPU and memory usage; the heapster plugin provides them

  Heapster is a collector: it aggregates the cAdvisor data from each node and forwards it to a third-party backend (such as InfluxDB). Heapster obtains the cAdvisor metrics by calling the kubelet HTTP API
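Conceptually, the aggregation Heapster performs looks like the following sketch (the node samples are invented numbers; real metrics come from each kubelet's cAdvisor endpoint):

```python
# Toy aggregation: sum per-node cAdvisor-style samples into cluster totals
# before shipping them to a sink such as InfluxDB.
node_samples = {
    "172.16.1.202": {"cpu_millicores": 250, "memory_mib": 512},
    "172.16.1.203": {"cpu_millicores": 400, "memory_mib": 768},
}

def aggregate(samples: dict) -> dict:
    totals: dict = {}
    for node_metrics in samples.values():
        for name, value in node_metrics.items():
            totals[name] = totals.get(name, 0) + value
    return totals

print(aggregate(node_samples))
# → {'cpu_millicores': 650, 'memory_mib': 1280}
```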

  For a detailed heapster deployment see: https://github.com/opsnull/follow-me-install-kubernetes-cluster/blob/master

  Download heapster v1.5.3

[root@linux-node1 ~]# cd /usr/local/src
[root@linux-node1 src]# wget https://github.com/kubernetes/heapster/archive/v1.5.3.tar.gz
[root@linux-node1 src]# tar xf v1.5.3.tar.gz
[root@linux-node1 src]# cd  heapster-1.5.3/deploy/kube-config/influxdb/

  Modify the configurations as follows:

[root@linux-node1 influxdb]# cp  grafana.yaml grafana.yaml.ori
[root@linux-node1 influxdb]# cp  heapster.yaml heapster.yaml.ori
[root@linux-node1 influxdb]# cp  influxdb.yaml influxdb.yaml.ori
[root@linux-node1 influxdb]# vim  grafana.yaml
#Change the image in grafana.yaml to goser/heapster-grafana-amd64-v4.4.3:v4.4.3
#Uncomment "type: NodePort"

[root@linux-node1 influxdb]# vim heapster.yaml
#Change the image in heapster.yaml to goser/heapster-amd64:v1.5.3
#Set "- --source=kubernetes:https://172.16.1.201:6443"

[root@linux-node1 influxdb]# vim  influxdb.yaml
#Change the image in influxdb.yaml to goser/heapster-influxdb-amd64:v1.3.3

  Apply all the definition files

#influxdb.yaml must be applied first
[root@linux-node1 influxdb]# kubectl create -f  influxdb.yaml
deployment.extensions "monitoring-influxdb" created
service "monitoring-influxdb" created
#Then apply heapster.yaml and grafana.yaml
[root@linux-node1 influxdb]# kubectl create -f  heapster.yaml
serviceaccount "heapster" created
deployment.extensions "heapster" created
service "heapster" created
[root@linux-node1 influxdb]# kubectl create -f  grafana.yaml
deployment.extensions "monitoring-grafana" created
service "monitoring-grafana" created
#Finally apply heapster-rbac.yaml
#It binds the serviceAccount kube-system:heapster to the ClusterRole system:kubelet-api-admin, granting it permission to call the kubelet API
[root@linux-node1 rbac]# vim  heapster-rbac.yaml


kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: heapster-kubelet-api
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kubelet-api-admin
subjects:
- kind: ServiceAccount
  name: heapster
  namespace: kube-system
[root@linux-node1 rbac]# kubectl  create  -f  heapster-rbac.yaml
clusterrolebinding.rbac.authorization.k8s.io "heapster-kubelet-api" created

  Check the results

[root@linux-node1 rbac]# kubectl   get pods -n kube-system | grep -E 'heapster|monitoring'
heapster-7648ffc7c9-btmww               1/1       Running   0          57s
monitoring-grafana-779bd4dd7b-47scf     1/1       Running   0          53s
monitoring-influxdb-f75847d48-6z7hz     1/1       Running   0          1m

  Check the kubernetes dashboard UI: after a short wait it correctly displays CPU, memory and load statistics and charts for the Nodes and Pods (select the kube-system namespace, then the pods view).

 Install the ingress plugin

  An Ingress is essentially an entry point for reaching the cluster from outside: it forwards external requests to different Services inside the cluster. It acts much like a load-balancing reverse proxy such as nginx or apache, plus a set of rule definitions; refreshing the routing information is the job of the Ingress controller.

  The Ingress controller can be thought of as a watcher: it continuously talks to kube-apiserver to sense changes to the backend services and pods in real time. When it learns of such changes, it combines them with the Ingress configuration to update the reverse proxy / load balancer, achieving service discovery. This is very similar to the service-discovery pairing of consul and consul-template.

  The ingress used here is traefik. Traefik is an open-source reverse proxy and load balancer. Its biggest advantage is direct integration with common microservice systems and fully automatic, dynamic configuration. It currently supports backends such as Docker, Swarm, Mesos/Marathon, Mesos, Kubernetes, Consul, Etcd, Zookeeper, BoltDB and a REST API.
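The controller behaviour described above — watch for changes, then rebuild the proxy's routing table from the Ingress rules — can be sketched like this (the rules mirror the ui.yaml and nginx-ingress.yaml manifests in this guide; a real controller watches the apiserver rather than a static list):

```python
# Toy Ingress-controller step: turn Ingress rules into the host/path ->
# backend-service table a reverse proxy needs for Host-header routing.
ingress_rules = [
    {"host": "ui.traefik.kubernetes.local", "path": "/", "service": "traefik-web-ui"},
    {"host": "traefik.nginx.io", "path": "/", "service": "my-nginx"},
]

def build_routes(rules: list) -> dict:
    routes: dict = {}
    for rule in rules:
        # host -> (path -> backend service)
        routes.setdefault(rule["host"], {})[rule["path"]] = rule["service"]
    return routes

routes = build_routes(ingress_rules)
print(routes["traefik.nginx.io"]["/"])
# → my-nginx
```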

  Deploy the ingress service

#Grant permissions
[root@linux-node1 ~]# vim ingress-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ingress
  namespace: kube-system

---

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: ingress
subjects:
  - kind: ServiceAccount
    name: ingress
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
[root@linux-node1 ~]# kubectl create -f ingress-rbac.yaml
#Create the service
[root@linux-node1 ~]# vim traefik.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: traefik-ingress-lb
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
        name: traefik-ingress-lb
    spec:
      terminationGracePeriodSeconds: 60
      hostNetwork: true
      restartPolicy: Always
      serviceAccountName: ingress
      containers:
      - image: traefik:alpine
        name: traefik-ingress-lb
        resources:
          limits:
            cpu: 200m
            memory: 30Mi
          requests:
            cpu: 100m
            memory: 20Mi
        ports:
        - name: http
          containerPort: 80
          hostPort: 80
        - name: admin
          containerPort: 8580
          hostPort: 8580
        args:
        - --web
        - --web.address=:8580
        - --kubernetes
[root@linux-node1 ~]# kubectl create -f traefik.yaml
#Create the UI (the host field can be replaced with any domain you like; ui.traefik.kubernetes.local is used here as an example)
[root@linux-node1 ~]# vim ui.yaml
apiVersion: v1
kind: Service
metadata:
  name: traefik-web-ui
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - name: web
    port: 80
    targetPort: 8580
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-web-ui
  namespace: kube-system
spec:
  rules:
  - host: ui.traefik.kubernetes.local
    http:
      paths:
      - path: /
        backend:
          serviceName: traefik-web-ui
          servicePort: web
[root@linux-node1 ~]# kubectl create -f ui.yaml

  Verify the ingress and check that traefik is working: on your workstation, add the host defined in ui.yaml to the hosts file for resolution, e.g. "172.16.1.203 ui.traefik.kubernetes.local", then open that domain in a browser to see the traefik UI (screenshot omitted).

  Next, use an nginx service to verify that the ingress works end to end. Create the nginx deployment and service from the following yaml file:

[root@linux-node1 ~]# vim  nginx1-8.yaml 
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
spec:
  ports:
    - port: 8888
      targetPort: 80
  selector:
    app: nginx1-8
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nginx1-8-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx1-8
    spec:
      containers:
      - name: nginx
        image: nginx:1.8
        ports:
        - containerPort: 80
[root@linux-node1 ~]# kubectl  create  -f  nginx1-8.yaml 

  Next, expose the nginx service externally through an ingress. Create an ingress manifest for nginx as follows:

[root@linux-node1 ~]# vim  nginx-ingress.yaml 
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-ingress
spec:
  rules:
  - host: traefik.nginx.io
    http:
      paths:
      - path: /
        backend:
          serviceName: my-nginx
          servicePort: 8888
[root@linux-node1 ~]# kubectl  create -f nginx-ingress.yaml 

  Edit the hosts file on your workstation ("172.16.1.203 ui.traefik.kubernetes.local traefik.nginx.io") and verify that the internal nginx service can be reached from outside by domain name.

  The ingress deployment yaml files can also be downloaded from the official ingress project: https://github.com/kubernetes/ingress-nginx

  The ingress-nginx files live in the deploy directory; each file's purpose:

    • configmap.yaml: provides a configmap for updating the nginx configuration online
    • default-backend.yaml: provides a default backend error page (404)
    • namespace.yaml: creates a dedicated namespace, ingress-nginx
    • rbac.yaml: creates the corresponding role and rolebinding for RBAC
    • tcp-services-configmap.yaml: configmap for modifying the L4 (TCP) load-balancing configuration
    • udp-services-configmap.yaml: configmap for modifying the L4 (UDP) load-balancing configuration
    • with-rbac.yaml: the nginx-ingress-controller component with RBAC applied

  HTTPS access can also be implemented via ingress TLS.

 Kubernetes cluster high-availability architecture

The high-availability architecture diagram is as follows (diagram omitted):

 This architecture makes the whole Kubernetes cluster highly available through a stand-alone haproxy + keepalived service. Points to note when deploying it:

  1. The master address given to the kubectl client is the VIP defined by keepalived

  2. On each master, the certificates generated for kube-controller-manager and kube-scheduler specify the VIP as the master address, while the address configured for the apiserver component remains the IP of the interface it listens on

  3. On each node, the apiserver address given to the kubelet and kube-proxy components should also be the VIP
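Points 1-3 can be summarized in a minimal haproxy + keepalived sketch. The VIP 172.16.1.200 and the second master 172.16.1.204 are assumptions for illustration only — neither is defined elsewhere in this guide:

```
# /etc/haproxy/haproxy.cfg (fragment): TCP-mode balancing across the apiservers
listen kube-apiserver
    bind 0.0.0.0:6443
    mode tcp
    balance roundrobin
    server master1 172.16.1.201:6443 check
    server master2 172.16.1.204:6443 check    # hypothetical second master

# /etc/keepalived/keepalived.conf (fragment): floats the VIP between haproxy hosts
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    virtual_ipaddress {
        172.16.1.200        # hypothetical VIP; kubectl, kubelet and kube-proxy point here
    }
}
```

A second haproxy/keepalived host would run the same configuration with state BACKUP and a lower priority, so the VIP moves automatically if the active proxy fails.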

 Reference: http://www.cnblogs.com/netonline/tag/kubernetes/

posted @ 2018-07-31 18:59 by goser