Deploying a Kubernetes Cluster from Binary Packages

(Repost) Original article: 2-Kubernetes Cluster Binary Package Deployment, https://qinzc.me/post-250.html

 

I. Kubernetes Platform Environment Planning

1. Environment

Software      Version
Linux OS      CentOS7.6_x64
Kubernetes    1.15.3
Docker        19.03.1
Etcd          3.x
Flannel       0.10

 

2. Component Allocation Plan

Role                     IP                                    Components
Master01                 192.168.1.244                         etcd, kube-apiserver, kube-controller-manager, kube-scheduler, docker, flannel
Master02                 192.168.1.245                         etcd, kube-apiserver, kube-controller-manager, kube-scheduler, docker, flannel
Node01                   192.168.1.246                         etcd, kubelet, kube-proxy, docker, flannel
Node02                   192.168.1.247                         kubelet, kube-proxy, docker, flannel
Load Balancer (Master)   192.168.1.248, 192.168.1.241 (VIP)    nginx, keepalived
Load Balancer (Backup)   192.168.1.249, 192.168.1.241 (VIP)    nginx, keepalived

 

 

 

  • Single-cluster architecture diagram

 

 

  • Multi-master cluster architecture diagram


II. Three Official Deployment Methods

 

1.minikube

Minikube is a tool that quickly runs a single-node Kubernetes cluster locally. It is intended only for trying out Kubernetes or for day-to-day development. Deployment guide: https://kubernetes.io/docs/setup/minikube/

2.kubeadm

Kubeadm is also a tool; it provides kubeadm init and kubeadm join for standing up a Kubernetes cluster quickly. Deployment guide: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/

3. Binary packages

Recommended: download the release binaries from the official project, deploy each component by hand, and assemble the Kubernetes cluster yourself. Download: https://github.com/kubernetes/kubernetes/releases

 

4. Pre-deployment preparation (important!)

# Disable SELinux (temporary)
setenforce 0

# Stop the firewall
systemctl stop firewalld

# Set the hostname
hostname master01

# Synchronize the clock
yum -y install ntpdate
ntpdate time2.aliyun.com

# (1) Permanently disable the swap partition
sed -ri 's/.*swap.*/#&/' /etc/fstab

# (2) Temporarily disable swap (reverts after reboot)
swapoff -a
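The swap-disable `sed` command above rewrites every fstab line that mentions swap into a comment (`&` re-inserts the matched text after the `#`). A minimal sketch of its effect, run against a throwaway copy rather than the real /etc/fstab:

```shell
# Scratch copy of a typical fstab; only the swap line should be touched.
cat > /tmp/fstab.demo <<'EOF'
/dev/mapper/centos-root /       xfs  defaults 0 0
/dev/mapper/centos-swap swap    swap defaults 0 0
EOF

# Same expression as above.
sed -ri 's/.*swap.*/#&/' /tmp/fstab.demo

cat /tmp/fstab.demo
```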

 

III. Self-Signed SSL Certificates

Component        Certificates used
etcd             ca.pem, server.pem, server-key.pem
flannel          ca.pem, server.pem, server-key.pem
kube-apiserver   ca.pem, server.pem, server-key.pem
kubelet          ca.pem, ca-key.pem
kube-proxy       ca.pem, kube-proxy.pem, kube-proxy-key.pem
kubectl          ca.pem, admin.pem, admin-key.pem

1. Generate the etcd certificates

$ mkdir k8s
$ cd k8s
$ mkdir etcd-cert k8s-cert
$ cd etcd-cert
# Update the etcd node IPs, then run the following two scripts (cfssl.sh and etcd-cert.sh):
$ sh ./cfssl.sh
$ sh ./etcd-cert.sh
cfssl.sh      # installs cfssl, the tool used to generate the certificates
curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo
etcd-cert.sh   # creates the certificates
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

#-----------------------

cat > server-csr.json <<EOF
{
    "CN": "etcd",
    "hosts": [
    "192.168.1.244",
    "192.168.1.245",
    "192.168.1.246"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing"
        }
    ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server

 

 

IV. Etcd Cluster Deployment

 

1. Binary package download

https://github.com/etcd-io/etcd/releases

etcd-v3.3.15-linux-amd64.tar.gz

2. Extract and install

wget https://github.com/etcd-io/etcd/releases/download/v3.3.15/etcd-v3.3.15-linux-amd64.tar.gz
tar -zxvf etcd-v3.3.15-linux-amd64.tar.gz
cd etcd-v3.3.15-linux-amd64
mkdir -p /opt/etcd/{ssl,cfg,bin}
mv etcd etcdctl /opt/etcd/bin/
# Copy the certificates into place
cp /root/k8s/etcd-cert/{ca,server-key,server}.pem /opt/etcd/ssl

 

Configure etcd: the script below creates the configuration file and the systemd unit.

sh ./etcd.sh etcd01 192.168.1.244 etcd02=https://192.168.1.245:2380,etcd03=https://192.168.1.246:2380

etcd.sh

#!/bin/bash
# example: ./etcd.sh etcd01 192.168.1.10 etcd02=https://192.168.1.11:2380,etcd03=https://192.168.1.12:2380

ETCD_NAME=$1
ETCD_IP=$2
ETCD_CLUSTER=$3

WORK_DIR=/opt/etcd

cat <<EOF >$WORK_DIR/cfg/etcd
#[Member]
ETCD_NAME="${ETCD_NAME}"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_LISTEN_CLIENT_URLS="https://${ETCD_IP}:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://${ETCD_IP}:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://${ETCD_IP}:2380,${ETCD_CLUSTER}"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

cat <<EOF >/usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=${WORK_DIR}/cfg/etcd
ExecStart=${WORK_DIR}/bin/etcd \
--name=\${ETCD_NAME} \
--data-dir=\${ETCD_DATA_DIR} \
--listen-peer-urls=\${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=\${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=\${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=\${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=\${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=\${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=${WORK_DIR}/ssl/server.pem \
--key-file=${WORK_DIR}/ssl/server-key.pem \
--peer-cert-file=${WORK_DIR}/ssl/server.pem \
--peer-key-file=${WORK_DIR}/ssl/server-key.pem \
--trusted-ca-file=${WORK_DIR}/ssl/ca.pem \
--peer-trusted-ca-file=${WORK_DIR}/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable etcd
systemctl restart etcd
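etcd.sh stitches the local member and the peer list passed as `$3` into ETCD_INITIAL_CLUSTER. The string assembly can be checked in isolation; note that the script hardcodes the `etcd01=` prefix, which is why the copied configs must be edited by hand on the other nodes:

```shell
# Inputs mirroring: ./etcd.sh etcd01 192.168.1.244 etcd02=...,etcd03=...
ETCD_IP=192.168.1.244
ETCD_CLUSTER="etcd02=https://192.168.1.245:2380,etcd03=https://192.168.1.246:2380"

# Same interpolation the script writes into /opt/etcd/cfg/etcd.
ETCD_INITIAL_CLUSTER="etcd01=https://${ETCD_IP}:2380,${ETCD_CLUSTER}"
echo "$ETCD_INITIAL_CLUSTER"
```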

 

# Copy to the other etcd nodes
scp -r /opt/etcd/ root@192.168.1.245:/opt/
scp -r /opt/etcd/ root@192.168.1.246:/opt/

scp /usr/lib/systemd/system/etcd.service root@192.168.1.245:/usr/lib/systemd/system/etcd.service

scp /usr/lib/systemd/system/etcd.service root@192.168.1.246:/usr/lib/systemd/system/etcd.service

# Edit the member name and IPs on each node
vim /opt/etcd/cfg/etcd

ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.1.245:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.1.245:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.245:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.1.245:2379"

vim /opt/etcd/cfg/etcd

ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.1.246:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.1.246:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.246:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.1.246:2379"

# Start
systemctl start etcd

 

 

 

3. Check cluster health

/opt/etcd/bin/etcdctl \
--ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem \
--endpoints="https://192.168.1.244:2379,https://192.168.1.245:2379,https://192.168.1.246:2379" \
cluster-health

 

V. Install Docker on the Nodes

 


Official docs: https://docs.docker.com

# Step 1: Install the required system tools
sudo yum install -y yum-utils device-mapper-persistent-data lvm2

# Step 2: Add the repository
sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# Step 3: Refresh the cache and install Docker CE
sudo yum makecache fast
sudo yum -y install docker-ce

# Step 4: Configure a registry mirror: https://www.daocloud.io/mirror
curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://f1361db2.m.daocloud.io

# Step 5: Start the Docker service and enable it at boot
sudo systemctl restart docker
sudo systemctl enable docker

# Step 6: Check the Docker version
docker version

 

 

VI. Deploy the Kubernetes Network

Basic requirements of the Kubernetes network model:

  • One IP per Pod

  • Each Pod has its own IP; all containers in a Pod share the network namespace (the same IP)

  • All containers can communicate with all other containers

  • All nodes can communicate with all containers

Container Network Interface (CNI): the standard container network interface, driven by Google and CoreOS.

 

Mainstream implementations:


 


Overlay Network

An overlay network is a virtual network layered on top of the underlying physical network; the hosts in it are connected by virtual links.

Install Flannel

Flannel is one such overlay: it encapsulates the original packet inside another network packet for routing and forwarding. It currently supports UDP, VXLAN (the common choice), Host-GW (no cross-subnet support), AWS VPC, and GCE route-based backends.

1. Write the allocated subnet range into etcd for flanneld to use

/opt/etcd/bin/etcdctl \
--ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem \
--endpoints="https://192.168.1.244:2379,https://192.168.1.245:2379,https://192.168.1.246:2379" \
set /coreos.com/network/config '{ "Network": "10.0.0.0/16", "Backend": {"Type": "vxlan"}}'

2. Download the binary package

https://github.com/coreos/flannel/releases

3. Deploy and configure Flannel

wget https://github.com/coreos/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz

mkdir -p /opt/kubernetes/{bin,cfg,ssl}
tar -zxvf flannel-v0.11.0-linux-amd64.tar.gz
mv flanneld mk-docker-opts.sh /opt/kubernetes/bin/

# Manage Flannel with systemd
# Configure Docker to use the subnet assigned by Flannel
sh ./flannel.sh https://192.168.1.244:2379,https://192.168.1.245:2379,https://192.168.1.246:2379

flannel.sh

#!/bin/bash

ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}

cat <<EOF >/opt/kubernetes/cfg/flanneld

FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} \
-etcd-cafile=/opt/etcd/ssl/ca.pem \
-etcd-certfile=/opt/etcd/ssl/server.pem \
-etcd-keyfile=/opt/etcd/ssl/server-key.pem"

EOF

cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target

EOF

cat <<EOF >/usr/lib/systemd/system/docker.service

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd \$DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP \$MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target

EOF

systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld
systemctl restart docker

4. Start Flannel

systemctl start flanneld.service

# Copy everything to the other node
scp -r /opt/etcd/ root@192.168.1.246:/opt/
scp -r /opt/kubernetes/ root@192.168.1.246:/opt/

scp /usr/lib/systemd/system/{docker,flanneld}.service root@192.168.1.246:/usr/lib/systemd/system/

# Start Flannel on the other node as well
systemctl daemon-reload
systemctl enable flanneld
systemctl start flanneld.service
systemctl restart docker

# Inspect the allocated subnets (run on the master)
/opt/etcd/bin/etcdctl \
--ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem \
--endpoints="https://192.168.1.244:2379,https://192.168.1.245:2379,https://192.168.1.246:2379" \
ls /coreos.com/network/subnets

/opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.1.244:2379,https://192.168.1.245:2379,https://192.168.1.246:2379" get /coreos.com/network/subnets/172.17.19.0-24

ip route

5. Test container-to-container connectivity

docker run -it busybox

 

VII. Deploy the Master Components

Release notes: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.16.md

 

wget https://dl.k8s.io/v1.16.1/kubernetes-server-linux-amd64.tar.gz

 

Generate the apiserver certificates

# Run the certificate-generation script
$ sh k8s-cert.sh

# Copy the certificates into place
$ cp ca-key.pem ca.pem server.pem server-key.pem /opt/kubernetes/ssl

# Create the TLS bootstrapping token
# A random token can be generated with:
#BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
BOOTSTRAP_TOKEN=8440d1ad1c6184d4ca456eb345d0feff

cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

$ mv token.csv /opt/kubernetes/cfg/
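The bootstrap token is just 16 random bytes rendered as 32 hex characters, and token.csv is a plain `token,user,uid,group` line. The commented-out generator above can be checked in isolation:

```shell
# Same pipeline as in the comment above: 16 random bytes -> 32 hex chars.
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
echo "$BOOTSTRAP_TOKEN"

# token.csv format: token,user,uid,group (written to /tmp for this sketch).
cat > /tmp/token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
```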

k8s-cert.sh

# Edit the IPs in this script: the hosts list must contain every address allowed to reach the apiserver
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

#-----------------------

cat > server-csr.json <<EOF
{
    "CN": "kubernetes",
    "hosts": [
      "192.168.1.244",
      "127.0.0.1",
      "10.0.0.1",
      "192.168.1.241",
      "192.168.1.242",
      "192.168.1.243",
      "192.168.1.245",
      "192.168.1.246",
      "192.168.1.247",
      "192.168.1.248",
      "192.168.1.249",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

#-----------------------

cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

#-----------------------

cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

 

1. Install kube-apiserver

tar -zxvf kubernetes-server-linux-amd64.tar.gz
mkdir -p /opt/kubernetes/{bin,cfg,ssl}
cd kubernetes/server/bin/
cp kube-controller-manager kube-apiserver kube-scheduler /opt/kubernetes/bin/
cp kubectl /usr/bin/

cd <script directory>
sh apiserver.sh 192.168.1.244 https://192.168.1.244:2379,https://192.168.1.245:2379,https://192.168.1.246:2379

apiserver.sh

#!/bin/bash

MASTER_ADDRESS=$1
ETCD_SERVERS=$2

cat <<EOF >/opt/kubernetes/cfg/kube-apiserver

KUBE_APISERVER_OPTS="--logtostderr=true \\
--v=4 \\
--etcd-servers=${ETCD_SERVERS} \\
--bind-address=${MASTER_ADDRESS} \\
--secure-port=6443 \\
--advertise-address=${MASTER_ADDRESS} \\
--allow-privileged=true \\
--service-cluster-ip-range=10.0.0.0/24 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--kubelet-https=true \\
--enable-bootstrap-token-auth \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-50000 \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--etcd-cafile=/opt/etcd/ssl/ca.pem \\
--etcd-certfile=/opt/etcd/ssl/server.pem \\
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"

EOF

cat <<EOF >/usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver

 

Change the log path

# Inspect the apiserver configuration
cat /opt/kubernetes/cfg/kube-apiserver

# By default logs go to /var/log/messages; to customize the location:

mkdir /opt/kubernetes/logs
vim /opt/kubernetes/cfg/kube-apiserver

# change
KUBE_APISERVER_OPTS="--logtostderr=true \
# to
KUBE_APISERVER_OPTS="--logtostderr=false \
--log-dir=/opt/kubernetes/logs \
 

 

 

 

2. Install kube-controller-manager

 

sh controller-manager.sh 127.0.0.1

# By default logs go to /var/log/messages; to customize, see the kube-apiserver section

controller-manager.sh

#!/bin/bash

MASTER_ADDRESS=$1

cat <<EOF >/opt/kubernetes/cfg/kube-controller-manager


KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \\
--v=4 \\
--master=${MASTER_ADDRESS}:8080 \\
--leader-elect=true \\
--address=127.0.0.1 \\
--service-cluster-ip-range=10.0.0.0/24 \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--experimental-cluster-signing-duration=87600h0m0s"

EOF

cat <<EOF >/usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl restart kube-controller-manager

 

3. Install kube-scheduler

sh scheduler.sh 127.0.0.1
# By default logs go to /var/log/messages; to customize, see the kube-apiserver section

scheduler.sh

#!/bin/bash

MASTER_ADDRESS=$1

cat <<EOF >/opt/kubernetes/cfg/kube-scheduler

KUBE_SCHEDULER_OPTS="--logtostderr=true \\
--v=4 \\
--master=${MASTER_ADDRESS}:8080 \\
--leader-elect"

EOF

cat <<EOF >/usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl restart kube-scheduler

Pattern: configuration file -> systemd unit -> start the service
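Every install script in this article follows that same three-step skeleton. A stripped-down sketch, writing to /tmp instead of the real systemd directory (the `demo-component` name is made up for illustration):

```shell
# 1. Configuration file: flags packed into one variable, read via EnvironmentFile.
cat > /tmp/demo-component <<'EOF'
DEMO_OPTS="--logtostderr=true --v=4"
EOF

# 2. systemd unit referencing the config (the real scripts write this to
#    /usr/lib/systemd/system/; '-' before the path means "ignore if missing").
cat > /tmp/demo-component.service <<'EOF'
[Unit]
Description=Demo Component

[Service]
EnvironmentFile=-/tmp/demo-component
ExecStart=/opt/kubernetes/bin/demo-component $DEMO_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

# 3. Reload and start (shown as a comment in this sketch):
#    systemctl daemon-reload && systemctl enable demo-component && systemctl restart demo-component
```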

 

# List resources and their short names
kubectl api-resources

4. Add Another Master

 

Copy the files from master01 to the new master:

scp -r /opt/kubernetes root@192.168.1.245:/opt/

scp /usr/lib/systemd/system/{kube-apiserver,kube-scheduler,kube-controller-manager}.service root@192.168.1.245:/usr/lib/systemd/system/

scp /usr/bin/kubectl root@192.168.1.245:/usr/bin/

scp -r /opt/etcd/ssl/ root@192.168.1.245:/opt/etcd/

Edit the configuration:

[root@master02 cfg]# grep 244 *

vim kube-apiserver
# change 192.168.1.244 to this master's own IP

Start:

systemctl daemon-reload
systemctl restart kube-apiserver
systemctl restart kube-scheduler
systemctl restart kube-controller-manager

kubectl get componentstatus

 

VIII. Deploy the Node Components


 

1. Bind the kubelet-bootstrap user to the system cluster role (run on the master)

# Grant the token user in token.csv permission to request certificates

kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap

 

2. Create the kubeconfig files (run on the master)

# Usage: sh kubeconfig.sh <APISERVER> <certificate directory>

$ sh kubeconfig.sh 192.168.1.244 /root/k8s/k8s-cert/

kubeconfig.sh

APISERVER=$1
SSL_DIR=$2
# Use the same random string that went into token.csv
BOOTSTRAP_TOKEN=8440d1ad1c6184d4ca456eb345d0feff
# Create the kubelet bootstrapping kubeconfig
export KUBE_APISERVER="https://$APISERVER:6443"

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=$SSL_DIR/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

# Set client credentials
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

# Set the context
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# Use the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

#----------------------

# Create the kube-proxy kubeconfig

kubectl config set-cluster kubernetes \
  --certificate-authority=$SSL_DIR/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=$SSL_DIR/kube-proxy.pem \
  --client-key=$SSL_DIR/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
# The script produces bootstrap.kubeconfig and kube-proxy.kubeconfig; copy them to the nodes

# Copy to node1:
scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.1.246:/opt/kubernetes/cfg/

# Also copy the kubelet and kube-proxy binaries (from kubernetes-server-linux-amd64.tar.gz)
scp kubelet kube-proxy root@192.168.1.246:/opt/kubernetes/bin/


# Copy to node2:
scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.1.247:/opt/kubernetes/cfg/
# Also copy the kubelet and kube-proxy binaries (from kubernetes-server-linux-amd64.tar.gz)
scp kubelet kube-proxy root@192.168.1.247:/opt/kubernetes/bin/

 

3. Deploy kubelet and kube-proxy (run on node 192.168.1.246 to join the master)

# Scripts: kubelet.sh and proxy.sh
$ sh kubelet.sh 192.168.1.246
$ sh proxy.sh 192.168.1.246

kubelet.sh

#!/bin/bash

NODE_ADDRESS=$1
DNS_SERVER_IP=${2:-"10.0.0.2"}

cat <<EOF >/opt/kubernetes/cfg/kubelet

KUBELET_OPTS="--logtostderr=true \\
--v=4 \\
--hostname-override=${NODE_ADDRESS} \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/opt/kubernetes/cfg/kubelet.config \\
--cert-dir=/opt/kubernetes/ssl \\
--pod-infra-container-image=docker.io/kubernetes/pause:latest"

EOF

cat <<EOF >/opt/kubernetes/cfg/kubelet.config

kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: ${NODE_ADDRESS}
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- ${DNS_SERVER_IP}
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true
EOF

cat <<EOF >/usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet

proxy.sh

#!/bin/bash

NODE_ADDRESS=$1

cat <<EOF >/opt/kubernetes/cfg/kube-proxy

KUBE_PROXY_OPTS="--logtostderr=true \\
--v=4 \\
--hostname-override=${NODE_ADDRESS} \\
--cluster-cidr=10.0.0.0/24 \\
--proxy-mode=ipvs \\
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"

EOF

cat <<EOF >/usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy

 

4. Approve the certificate requests on the master

$ kubectl get csr

$ kubectl certificate approve node-csr-NK3xFo5gaa3-k6gLyytKmUW2sUHZxnouyD9Kn2arJmk


# Verify on the node: the ssl directory now contains four new kubelet certificate files
ll /opt/kubernetes/ssl


# By default logs go to /var/log/messages; to customize:
$ vim /opt/kubernetes/cfg/kubelet
$ vim /opt/kubernetes/cfg/kube-proxy

$ mkdir -p /opt/kubernetes/logs

# change
KUBELET_OPTS="--logtostderr=true \
# to
KUBELET_OPTS="--logtostderr=false \
--log-dir=/opt/kubernetes/logs \

 

Note: to re-join a node to the cluster, first delete its kubelet.kubeconfig and SSL certificates.

IX. Deploy a Test Example

# kubectl run nginx --image=nginx --replicas=3
# kubectl get pod
# kubectl scale deployment nginx --replicas=5
# kubectl expose deployment nginx --port=88 --target-port=80 --type=NodePort
# kubectl get svc nginx


# Authorization: without this, kubectl exec, kubectl logs, etc. against containers will fail
kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous

# Manual kubelet start (for debugging)
/opt/kubernetes/bin/kubelet --logtostderr=false --log-dir=/opt/kubernetes/logs --v=4 --hostname-override=192.168.1.246 --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig --bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig --config=/opt/kubernetes/cfg/kubelet.config --cert-dir=/opt/kubernetes/ssl --pod-infra-container-image=docker.io/kubernetes/pause:latest

Delete a node and re-join it to the master

kubectl delete nodes 192.168.1.246
systemctl stop kubelet kube-proxy

# Remove the SSL certificates
rm -fr /opt/kubernetes/ssl/*

# Regenerate the certificates, re-join, and start
sh kubelet.sh 192.168.1.246
sh proxy.sh 192.168.1.246


# Approve the join on the master
$ kubectl get csr

$ kubectl certificate approve node-csr-NK3xFo5gaa3-k6gLyytKmUW2sUHZxnouyD9Kn2arJmk

 

X. Deploy the In-Cluster DNS Service (CoreDNS)

1. Adjust a few parameters

Three parameters need to be changed: the ip6.arpa zone declaration, the image source (switch to a domestic mirror), and the clusterIP. Specifically:

 

  • ip6.arpa: change the zone line to kubernetes cluster.local. in-addr.arpa ip6.arpa

  • image: change to the mirror coredns/coredns:1.2.6

  • clusterIP: use an address inside your cluster's service range. My cluster uses 10.0.0.0/24, so I set 10.0.0.2 (the default clusterDNS address in the kubelet config written during node deployment; it must not be an IP already in use)

    The full yaml follows (only the clusterIP needs to be changed to an unused address within your own cluster's range):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
      kubernetes.io/cluster-service: "true"
      addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
      addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local. in-addr.arpa ip6.arpa {
            pods insecure
            upstream
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      serviceAccountName: coredns
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      containers:
      - name: coredns
        image: coredns/coredns:1.2.6
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
  133.           httpGet:
  134.             path: /health
  135.             port: 8080
  136.             scheme: HTTP
  137.           initialDelaySeconds: 60
  138.           timeoutSeconds: 5
  139.           successThreshold: 1
  140.           failureThreshold: 5
  141.         securityContext:
  142.           allowPrivilegeEscalation: false
  143.           capabilities:
  144.             add:
  145.             - NET_BIND_SERVICE
  146.             drop:
  147.             - all
  148.           readOnlyRootFilesystem: true
  149.       dnsPolicy: Default
  150.       volumes:
  151.         - name: config-volume
  152.           configMap:
  153.             name: coredns
  154.             items:
  155.             - key: Corefile
  156.               path: Corefile
  157. ---
  158. apiVersion: v1
  159. kind: Service
  160. metadata:
  161.   name: kube-dns
  162.   namespace: kube-system
  163.   annotations:
  164.     prometheus.io/port: "9153"
  165.     prometheus.io/scrape: "true"
  166.   labels:
  167.     k8s-app: kube-dns
  168.     kubernetes.io/cluster-service: "true"
  169.     addonmanager.kubernetes.io/mode: Reconcile
  170.     kubernetes.io/name: "CoreDNS"
  171. spec:
  172.   selector:
  173.     k8s-app: kube-dns
  174.   clusterIP: 10.0.0.2
  175.   ports:
  176.   - name: dns
  177.     port: 53
  178.     protocol: UDP
  179.   - name: dns-tcp
  180.     port: 53
  181.     protocol: TCP
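Before applying the manifest, it is worth checking that the clusterIP above matches the clusterDNS value the kubelet was started with; if they differ, pods will query a DNS address nobody serves. A minimal sketch of that check follows — the config path and file layout here are illustrative stand-ins, not taken from this guide:

```shell
# Hedged sketch: extract the kubelet's clusterDNS value and compare it to the
# Service clusterIP in coredns.yaml. A demo config file stands in for the real
# kubelet config (often under /opt/kubernetes/cfg in this kind of deployment).
cfg=/tmp/kubelet.config.demo
cat > "$cfg" <<'EOF'
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local
EOF
# Grab the first list item after the clusterDNS: key.
dns_ip=$(awk '/clusterDNS:/{getline; sub(/^- */,""); print; exit}' "$cfg")
echo "kubelet clusterDNS: $dns_ip"   # should equal the Service clusterIP (10.0.0.2)
```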

2. Create the DNS

  1. kubectl create -f coredns.yaml

3. Check the pod and svc status

  1. [root@K8S-M1 ~]# kubectl get all -n kube-system
  2. NAME                           READY   STATUS    RESTARTS   AGE
  3. pod/coredns-57b8565df8-nnpcc   1/1     Running   1          9h
  4.  
  5. NAME               TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
  6. service/kube-dns   ClusterIP   10.10.10.2   <none>        53/UDP,53/TCP   9h
  7.  
  8. NAME                      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
  9. deployment.apps/coredns   1         1         1            1           9h
  10.  
  11. NAME                                 DESIRED   CURRENT   READY   AGE
  12. replicaset.apps/coredns-57b8565df8   1         1         1       9h

4. Check the CoreDNS service — it is up and running here

  1. [root@K8S-M1 ~]# kubectl  cluster-info
  2. Kubernetes master is running at http://localhost:8080
  3. Heapster is running at http://localhost:8080/api/v1/namespaces/kube-system/services/heapster/proxy
  4. CoreDNS is running at http://localhost:8080/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
  5. kubernetes-dashboard is running at http://localhost:8080/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
  6. monitoring-influxdb is running at http://localhost:8080/api/v1/namespaces/kube-system/services/monitoring-influxdb/proxy

5. Verification method 1

Create a simple CentOS pod for the test; busybox is a bit of a trap here — its nslookup misbehaved in testing.

  1. cat >centos.yaml<<EOF
  2. apiVersion: v1
  3. kind: Pod
  4. metadata:
  5.   name: centoschao
  6.   namespace: default
  7. spec:
  8.   containers:
  9.   - image: centos
  10.     command:
  11.       - sleep
  12.       - "3600"
  13.     imagePullPolicy: IfNotPresent
  14.     name: centoschao
  15.   restartPolicy: Always
  16. EOF

5.1 Test

  1. kubectl create -f centos.yaml
  2. [root@K8S-M1 ~]# kubectl get pods
  3. NAME             TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
  4. kubernetes       ClusterIP   10.10.10.1     <none>        443/TCP          15d
  5. nginx            ClusterIP   10.10.10.252   <none>        80/TCP           9h
  6.     
  7. [root@master-a yaml]# kubectl get pod
  8. NAME                     READY   STATUS    RESTARTS   AGE
  9. centoschao               1/1     Running   0          76s
  10. nginx-6db489d4b7-cxljn   1/1     Running   0          4h55m
  11.  
  12. [root@K8S-M1 ~]#  kubectl exec -it centoschao sh
  13. sh-4.2# yum install bind-utils -y
  14. sh-4.2# nslookup kubernetes
  15. Server:     10.10.10.2
  16. Address:    10.10.10.2#53
  17.  
  18. Name:   kubernetes.default.svc.cluster.local
  19. Address: 10.10.10.1
  20.  
  21. sh-4.2# nslookup nginx     
  22. Server:     10.10.10.2
  23. Address:    10.10.10.2#53
  24.  
  25. Name:   nginx.default.svc.cluster.local
  26. Address: 10.10.10.252
  27.  
  28. sh-4.2# nslookup nginx.default.svc.cluster.local
  29. Server:     10.10.10.2
  30. Address:    10.10.10.2#53
  31.  
  32. Name:   nginx.default.svc.cluster.local
  33. Address: 10.10.10.252

OK, it works.
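A note on why the short name `nginx` resolved at all: the pod's resolv.conf search domains expand it, and the full in-cluster name is assembled as `<service>.<namespace>.svc.<cluster-domain>`. A trivial sketch of that composition:

```shell
# Compose the FQDN the cluster DNS actually answers for, from its parts.
svc=nginx
ns=default
domain=cluster.local   # the cluster domain configured in the Corefile above
fqdn="${svc}.${ns}.svc.${domain}"
echo "$fqdn"
```

This is exactly the name that `nslookup nginx.default.svc.cluster.local` queried in the test above.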

 

6. Verification method 2

  1. cat >busybox.yaml<<EOF
  2. apiVersion: v1
  3. kind: Pod
  4. metadata:
  5.   name: busybox
  6.   namespace: default
  7. spec:
  8.   containers:
  9.   - name: busybox
  10.     image: busybox:1.28
  11.     command:
  12.       - sleep
  13.       - "3600"
  14.     imagePullPolicy: IfNotPresent
  15.   restartPolicy: Always
  16. EOF

6.1 Create it and test resolving kubernetes.default

  1. kubectl create -f busybox.yaml
  2. kubectl get pods busybox
  3. kubectl exec busybox -- cat /etc/resolv.conf
  4. kubectl exec -ti busybox -- nslookup kubernetes.default

 

11. Deploy the Web UI (Dashboard)

1. Download

https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dashboard

  1. -rw-r--r-- 1 root root  264 Oct  9 15:22 dashboard-configmap.yaml
  2. -rw-r--r-- 1 root root 1784 Oct  9 15:22 dashboard-controller.yaml
  3. -rw-r--r-- 1 root root 1353 Oct  9 15:22 dashboard-rbac.yaml
  4. -rw-r--r-- 1 root root  551 Oct  9 15:22 dashboard-secret.yaml
  5. -rw-r--r-- 1 root root  322 Oct  9 15:22 dashboard-service.yaml

 

  1. 1. vim dashboard-controller.yaml
  2. #The kubernetes-dashboard image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1" cannot be pulled without a proxy, so replace it with this mirror:
  3. registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.0
  4.  
  5. 2. Expose it externally via NodePort
  6. kubectl edit svc -n kube-system kubernetes-dashboard
  7.  
  8.  
  9. 3. Create an admin account for logging in
  10. cat > k8s-admin.yaml <<EOF
  11. apiVersion: v1
  12. kind: ServiceAccount
  13. metadata:
  14.   name: dashboard-admin
  15.   namespace: kube-system
  16. ---
  17. kind: ClusterRoleBinding
  18. apiVersion: rbac.authorization.k8s.io/v1
  19. metadata:
  20.   name: dashboard-admin
  21. subjects:
  22.   - kind: ServiceAccount
  23.     name: dashboard-admin
  24.     namespace: kube-system
  25. roleRef:
  26.   kind: ClusterRole
  27.   name: cluster-admin
  28.   apiGroup: rbac.authorization.k8s.io
  29. EOF
  30.  
  31.   
  32.   
  33. 4. Apply it
  34. kubectl apply -f k8s-admin.yaml

2. View the TOKEN

  1. #List the secrets
  2. kubectl get secrets  -n kube-system 
  3. #Show the account's TOKEN
  4. kubectl describe secrets  -n kube-system  dashboard-admin-token-9g9hp
  5.  
  6. #kubectl get secrets  -n kube-system dashboard-admin-token-9g9hp -o yaml
  7. #echo TOKEN | base64 -d
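The `base64 -d` step above matters because Secret `.data` fields are stored base64-encoded; the token must be decoded before pasting it into the dashboard login. A hedged demo of just the decode step (the encoded string here is a made-up stand-in for the real `.data.token` field):

```shell
# In real use you would feed in the secret's field, e.g.:
#   kubectl get secret -n kube-system dashboard-admin-token-9g9hp \
#     -o jsonpath='{.data.token}' | base64 -d
# Demo with a stand-in value so the round trip is visible:
encoded=$(printf 'demo-bearer-token' | base64)   # what .data.token looks like
token=$(printf '%s' "$encoded" | base64 -d)      # what the login form needs
echo "$token"
```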

 

3. Fixing the dashboard certificate expiry problem

The fix is simple: just replace the default certificate.

  1. #First generate the certificate
  2. vim shengche.sh
  3.  
  4. cat > dashboard-csr.json <<EOF
  5. {
  6.     "CN": "Dashboard",
  7.     "hosts": [],
  8.     "key": {
  9.         "algo": "rsa",
  10.         "size": 2048
  11.     },
  12.     "names": [
  13.         {
  14.             "C": "CN",
  15.             "L": "BeiJing",
  16.             "ST": "BeiJing"
  17.         }
  18.     ]
  19. }
  20. EOF
  21.  
  22. K8S_CA=$1
  23. cfssl gencert -ca=$K8S_CA/ca.pem -ca-key=$K8S_CA/ca-key.pem -config=$K8S_CA/ca-config.json -profile=kubernetes dashboard-csr.json | cfssljson -bare dashboard
  24. kubectl delete secret kubernetes-dashboard-certs -n kube-system
  25. kubectl create secret generic kubernetes-dashboard-certs --from-file=./ -n kube-system
  1. sh shengche.sh {path to the CA certificate directory}
  2.  
  3. This produces two certificates: dashboard-key.pem and dashboard.pem
  1. # Add the two certificate lines below to dashboard-controller.yaml, then apply
  2. #        args:
  3. #          # PLATFORM-SPECIFIC ARGS HERE
  4. #          - --auto-generate-certificates
  5. #          - --tls-key-file=dashboard-key.pem
  6. #          - --tls-cert-file=dashboard.pem

 


 

 

 

12. LB configuration (keepalived + nginx)

1. Install keepalived + nginx

  1. yum -y install keepalived nginx

2. Configure nginx

 

  1. vim /etc/nginx/nginx.conf  
  2.  
  3.  
  4. Add at the top level (alongside, not inside, the http block):
  5. stream {
  6.     log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
  7.     access_log /var/log/nginx/k8s-access.log main;
  8.     
  9.     upstream k8s-apiserver {
  10.         server 192.168.1.244:6443;
  11.         server 192.168.1.245:6443;
  12.     }
  13.     server {
  14.         listen 6443;
  15.         proxy_pass k8s-apiserver;
  16.     }
  17. }   
  18.  
  19.  
  20. Start it
  21. systemctl start nginx

 

3. Configure keepalived

  1. vim /etc/keepalived/keepalived.conf
  2.  
  3.  
  4. ! Configuration File for keepalived 
  5.  
  6. global_defs { 
  7.    notification_email { 
  8.      acassen@firewall.loc 
  9.      failover@firewall.loc 
  10.      sysadmin@firewall.loc 
  11.    } 
  12.    notification_email_from Alexandre.Cassen@firewall.loc  
  13.    smtp_server 127.0.0.1 
  14.    smtp_connect_timeout 30 
  15.    router_id NGINX_MASTER 
  16. } 
  17.  
  18. vrrp_script check_nginx {
  19.     script "/usr/local/nginx/sbin/check_nginx.sh"
  20. }
  21.  
  22. vrrp_instance VI_1 { 
  23.     state MASTER      # set to BACKUP on the standby 
  24.     interface eth0   # NIC name
  25.     virtual_router_id 51 # VRRP router ID; must be unique per instance 
  26.     priority 100    # priority; set to 90 on the backup server 
  27.     advert_int 1    # VRRP advertisement (heartbeat) interval, default 1s 
  28.     authentication { 
  29.         auth_type PASS      
  30.         auth_pass 1111 
  31.     }  
  32.     virtual_ipaddress { 
  33.         192.168.1.241/24   # the VIP from the planning table 
  34.     } 
  35.     track_script {
  36.         check_nginx
  37.     } 
  38. }
  39.  
  40. systemctl start keepalived

4. Create the health-check script

  1. mkdir -p /usr/local/nginx/sbin/
  2. vim /usr/local/nginx/sbin/check_nginx.sh
  3.  
  4. count=$(ps -ef |grep nginx |egrep -cv "grep|$$")
  5.  
  6. if [ "$count" -eq 0 ];then
  7.     systemctl stop keepalived
  8. fi
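The counting trick in check_nginx.sh — `ps -ef | grep nginx | egrep -cv "grep|$$"` — counts matching processes while excluding the grep process itself and the current shell's PID. A deterministic demo of the filter, using simulated `ps` output rather than a live process table:

```shell
# Simulated `ps -ef | grep nginx` output: two real nginx processes plus the
# grep process itself, which must not be counted.
fake_ps='root  101  nginx: master process
root  102  nginx: worker process
root  103  grep nginx'
# -v drops lines matching the pattern, -c counts what is left. The real script
# also ORs in "$$" so the invoking shell's own PID line is excluded.
count=$(printf '%s\n' "$fake_ps" | egrep -cv "grep")
echo "nginx processes: $count"
```

When the count hits 0, the script stops keepalived so the VIP fails over to the backup.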

 

5. Repoint the node configs at the LB VIP

  1. cd /opt/kubernetes/cfg
  2. vi bootstrap.kubeconfig
  3. vi kubelet.kubeconfig
  4. vi kube-proxy.kubeconfig
  5.  
  6. Change the server address in each file to: 192.168.1.241 (the LB VIP)
  7.  
  8. systemctl restart kubelet
  9. systemctl restart kube-proxy
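Instead of editing the three kubeconfigs by hand, the repoint can be scripted with sed. A hedged sketch — the old master IP and VIP are from this guide's planning table, but the demo runs in a throwaway directory; for real, run the sed line in /opt/kubernetes/cfg on each node:

```shell
# Demo setup: a throwaway kubeconfig with the old apiserver address.
cd "$(mktemp -d)"
printf '    server: https://192.168.1.244:6443\n' > bootstrap.kubeconfig
# Repoint at the VIP; repeat for kubelet.kubeconfig and kube-proxy.kubeconfig.
sed -i 's#https://192.168.1.244:6443#https://192.168.1.241:6443#' bootstrap.kubeconfig
grep -o 'https://[0-9.]*:6443' bootstrap.kubeconfig
```

Follow it with the `systemctl restart kubelet` / `systemctl restart kube-proxy` shown above.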

 

13. Connecting to the K8S cluster remotely with kubectl

 

Go to the certificate directory and create kubectl.sh to generate the config file:

  1. kubectl config set-cluster kubernetes \
  2. --server=https://192.168.1.241:6443 \
  3. --embed-certs=true \
  4. --certificate-authority=/root/k8s/k8s-cert/ca.pem \
  5. --kubeconfig=config
  6.  
  7. kubectl config set-credentials cluster-admin \
  8. --certificate-authority=/root/k8s/k8s-cert/ca.pem \
  9. --embed-certs=true \
  10. --client-key=/root/k8s/k8s-cert/admin-key.pem \
  11. --client-certificate=/root/k8s/k8s-cert/admin.pem \
  12. --kubeconfig=config
  13.  
  14. kubectl config set-context default --cluster=kubernetes --user=cluster-admin --kubeconfig=config  
  15.  
  16. kubectl config use-context default --kubeconfig=config

Then, on a host that has kubectl, run:

  1. kubectl --kubeconfig=./config get node
posted @ 2019-12-10 10:04  Noleaf