Deploying a Highly Available Kubernetes Cluster with kubespray

Kubespray is an open-source project, hosted under Kubernetes SIGs, for deploying production-grade Kubernetes clusters; it uses Ansible as its deployment tool.

  • Can be deployed on AWS, GCE, Azure, OpenStack, vSphere, Packet (bare metal), Oracle Cloud Infrastructure (experimental), or bare metal
  • Highly available clusters
  • Composable components (e.g., choice of network plugin)
  • Support for the most popular Linux distributions
  • Continuous-integration tests

Official site: https://kubespray.io

Project repository: https://github.com/kubernetes-sigs/kubespray

Deployment environment

China's restrictive network environment makes running kubespray difficult: some images must be pulled from gcr.io and some binaries downloaded from GitHub. This walkthrough therefore creates three Hong Kong 2C4G spot-mode ECS instances on Alibaba Cloud for the deployment test.

Note: a highly available etcd deployment requires 3 members, so an HA cluster needs at least 3 nodes.

kubespray needs a deployment node, which can be any node of the cluster. Here kubespray is installed on the first master node (192.168.0.137), and all subsequent steps are executed there.
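
The three-node minimum above follows from etcd's quorum rule: a cluster of n members needs floor(n/2) + 1 live members to commit writes, so 3 is the smallest size that tolerates one failure. A quick arithmetic sketch (plain shell, nothing kubespray-specific):

```shell
# etcd quorum math: an n-member cluster needs floor(n/2) + 1 live members;
# failures tolerated = n - quorum. 3 members is the smallest HA-capable size.
for n in 1 2 3 4 5; do
  quorum=$(( n / 2 + 1 ))
  echo "members=$n quorum=$quorum tolerated_failures=$(( n - quorum ))"
done
```

A 2-member cluster still needs both members for quorum, so it tolerates no failures; that is why the minimum here is 3 nodes.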

Download kubespray

# Download the official release tarball
wget https://github.com/kubernetes-sigs/kubespray/archive/v2.13.1.tar.gz
tar -zxvf v2.13.1.tar.gz

Or clone the repository directly:
git clone https://github.com/kubernetes-sigs/kubespray.git -b v2.13.1 --depth=1

Install dependencies

cd kubespray-2.13.1/
yum install -y epel-release python3-pip
pip3 install -r requirements.txt

Update the Ansible inventory file; the IPS values are the internal IPs of the three ECS instances:

cp -rfp inventory/sample inventory/mycluster
declare -a IPS=(192.168.0.137 192.168.0.138 192.168.0.139)
CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
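
A note on the bash used above: `declare -a` creates an indexed array, and the unquoted `${IPS[@]}` expansion hands each IP to inventory.py as a separate argument. A self-contained sketch with the same values:

```shell
# Indexed bash array of node IPs (same values as the inventory step above)
declare -a IPS=(192.168.0.137 192.168.0.138 192.168.0.139)
# ${#IPS[@]} is the element count; ${IPS[@]} expands to one word per element,
# which is exactly what inventory.py receives on its command line
echo "count=${#IPS[@]}"
echo "args: ${IPS[@]}"
```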

Inspect the generated hosts.yaml: kubespray plans node roles automatically from the number of nodes supplied. Here it deploys 2 master nodes, uses all 3 nodes as worker nodes, and runs etcd on all 3 nodes.

[root@node1 kubespray-2.13.1]# cat inventory/mycluster/hosts.yaml 
all:
  hosts:
    node1:
      ansible_host: 192.168.0.137
      ip: 192.168.0.137
      access_ip: 192.168.0.137
    node2:
      ansible_host: 192.168.0.138
      ip: 192.168.0.138
      access_ip: 192.168.0.138
    node3:
      ansible_host: 192.168.0.139
      ip: 192.168.0.139
      access_ip: 192.168.0.139
  children:
    kube-master:
      hosts:
        node1:
        node2:
    kube-node:
      hosts:
        node1:
        node2:
        node3:
    etcd:
      hosts:
        node1:
        node2:
        node3:
    k8s-cluster:
      children:
        kube-master:
        kube-node:
    calico-rr:
      hosts: {}
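
The generated layout can be hand-edited before running the playbook. For instance, to run all three nodes as masters rather than two, add node3 to the kube-master group (an illustrative edit; the group names must match those in the generated file above):

```yaml
# inventory/mycluster/hosts.yaml (excerpt): promote node3 to a third master
    kube-master:
      hosts:
        node1:
        node2:
        node3:
```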

Review the global variables (the defaults are fine):

cat inventory/mycluster/group_vars/all/all.yml

The version installed by default is older; pin the Kubernetes version explicitly:

# vim inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml
kube_version: v1.18.3
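
Other commonly tuned variables live in the same file. An illustrative excerpt; apart from the pinned kube_version, the values shown are believed to be the kubespray defaults for this release, so verify them against your copy of the file:

```yaml
# inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml (excerpt)
kube_version: v1.18.3                   # version pinned above
kube_network_plugin: calico             # default CNI; alternatives include flannel, cilium
kube_proxy_mode: ipvs                   # kube-proxy mode
kube_service_addresses: 10.233.0.0/18   # service CIDR
kube_pods_subnet: 10.233.64.0/18        # pod CIDR
```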

Set up passwordless SSH so the kubespray/Ansible node can reach every node without a password:

ssh-keygen
ssh-copy-id 192.168.0.137
ssh-copy-id 192.168.0.138
ssh-copy-id 192.168.0.139

Run the kubespray playbook to install the cluster

ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml

Inspect the resulting cluster

[root@node1 kubespray-2.13.1]# kubectl get nodes -o wide      
NAME    STATUS   ROLES    AGE     VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
node1   Ready    master   3m30s   v1.18.3   192.168.0.137   <none>        CentOS Linux 7 (Core)   3.10.0-1062.18.1.el7.x86_64   docker://18.9.9
node2   Ready    master   2m53s   v1.18.3   192.168.0.138   <none>        CentOS Linux 7 (Core)   3.10.0-1062.18.1.el7.x86_64   docker://18.9.9
node3   Ready    <none>   109s    v1.18.3   192.168.0.139   <none>        CentOS Linux 7 (Core)   3.10.0-1062.18.1.el7.x86_64   docker://18.9.9

[root@node1 kubespray-2.13.1]# kubectl -n kube-system get pods
NAME                                          READY   STATUS    RESTARTS   AGE
calico-kube-controllers-796b886f7c-ftm7c      1/1     Running   0          75s
calico-node-bvx8m                             1/1     Running   1          97s
calico-node-c88d7                             1/1     Running   1          97s
calico-node-gdccq                             1/1     Running   1          97s
coredns-6489c7bb8b-k7gpd                      1/1     Running   0          61s
coredns-6489c7bb8b-wgmjz                      1/1     Running   0          56s
dns-autoscaler-7594b8c675-zqhv6               1/1     Running   0          58s
kube-apiserver-node1                          1/1     Running   0          3m24s
kube-apiserver-node2                          1/1     Running   0          2m47s
kube-controller-manager-node1                 1/1     Running   0          3m24s
kube-controller-manager-node2                 1/1     Running   0          2m47s
kube-proxy-d8qf8                              1/1     Running   0          111s
kube-proxy-g5f95                              1/1     Running   0          111s
kube-proxy-g5vvw                              1/1     Running   0          111s
kube-scheduler-node1                          1/1     Running   0          3m24s
kube-scheduler-node2                          1/1     Running   0          2m47s
kubernetes-dashboard-7dbcd59666-rt78s         1/1     Running   0          55s
kubernetes-metrics-scraper-6858b8c44d-9ttnd   1/1     Running   0          55s
nginx-proxy-node3                             1/1     Running   0          112s
nodelocaldns-b4thm                            1/1     Running   0          56s
nodelocaldns-rlq4v                            1/1     Running   0          56s
nodelocaldns-vx9cc                            1/1     Running   0          56s

Tear down the cluster

ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root reset.yml
 
 

posted on 2020-10-19 21:29 by ExplorerMan