lummg-DAY

Building a Kubernetes Cluster from Binaries

1. Basic cluster environment setup

1.1: K8s HA cluster environment planning

1.1.1: Multi-master server inventory

Type (count)     IP address               Description
ansible (2)      192.168.134.11/12        Cluster deployment servers, co-located with the masters
K8s master (2)   192.168.134.11/12        K8s control plane, active/standby HA through a VIP
Harbor (1)       192.168.134.16           Image registry server
etcd (3)         192.168.134.11/12/13     Servers holding the k8s cluster data
haproxy (2)      192.168.134.15           HA reverse-proxy servers
node (2)         192.168.134.13/14        Servers that actually run the containers


1.2: Basic environment preparation

Hostname and IP configuration, system parameter tuning, and deployment of the load balancer and Harbor that the cluster depends on

1.2.1: System configuration

Hostname and other system configuration is omitted here

1.2.2: High-availability load balancing

HA reverse proxy for the k8s apiserver

1.2.2.1:keepalived

root@HA:~# cat /etc/keepalived/keepalived.conf 
! Configuration File for keepalived

global_defs {
   notification_email {
     acassen
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    garp_master_delay 10
    smtp_alert
    virtual_router_id 88
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.134.188 eth0 label eth0:1
        192.168.134.189 eth0 label eth0:2
    }
}
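The plan above calls for two HA nodes with an active/standby VIP; the config shown is the MASTER instance. A matching BACKUP instance for the second keepalived node might look like the sketch below (same virtual_router_id, auth, and VIPs, but state BACKUP and a lower priority — this fragment is an assumption, not taken from the original setup):

```
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    garp_master_delay 10
    virtual_router_id 88
    priority 80
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.134.188 eth0 label eth0:1
        192.168.134.189 eth0 label eth0:2
    }
}
```

With equal configs otherwise, keepalived elects the node with the higher priority as MASTER and fails the VIPs over when it stops advertising.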

1.2.2.2:haproxy

listen stats  # status page
 mode http
 bind 0.0.0.0:9999
 stats enable
 log global
 stats uri     /haproxy-status
 stats auth   haadmin:123456

listen k8s-api-6443     # HA for the apiserver
  bind 192.168.134.188:6443
  mode tcp
 # balance roundrobin
  server master1 192.168.134.11:6443 check inter 3s fall 3 rise 5
 # server master2 192.168.134.12:6443 check inter 3s fall 3 rise 5
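Once master2 (192.168.134.12) is deployed, the commented-out lines can be enabled so both control-plane nodes receive traffic; the completed listen block would read:

```
listen k8s-api-6443     # HA for the apiserver
  bind 192.168.134.188:6443
  mode tcp
  balance roundrobin
  server master1 192.168.134.11:6443 check inter 3s fall 3 rise 5
  server master2 192.168.134.12:6443 check inter 3s fall 3 rise 5
```

The `check inter 3s fall 3 rise 5` options health-check each backend every 3 seconds, marking it down after 3 failures and up again after 5 successes.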

1.2.2.3: Harbor over HTTPS:

On the harbor node:

root@harbor:/usr/local/src/harbor# mkdir certs
root@harbor:/usr/local/src/harbor# cd certs/
openssl genrsa -out /usr/local/src/harbor/certs/harbor-ca.key  # generate the private key
openssl req -x509 -new -nodes -key /usr/local/src/harbor/certs/harbor-ca.key -subj "/CN=harbor.linux.com" -days 7120 -out /usr/local/src/harbor/certs/harbor-ca.crt  # harbor.linux.com is the harbor domain
If this fails with:
Can't load /root/.rnd into RNG
then run:
touch /root/.rnd
and re-run the openssl req command above.
Only the modified lines of harbor.cfg are shown here:
hostname = harbor.linux.com
ui_url_protocol = https
ssl_cert = /usr/local/src/harbor/certs/harbor-ca.crt
ssl_cert_key = /usr/local/src/harbor/certs/harbor-ca.key
harbor_admin_password = 123456
Finally run:
./install.sh
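Before running install.sh it is worth confirming the certificate's CN and validity period. The sketch below repeats the generation in a throwaway temp directory purely to illustrate the `openssl x509` inspection (same subject and lifetime as above):

```shell
# sanity-check sketch: generate into a temp dir, then inspect subject and expiry
tmp=$(mktemp -d)
openssl genrsa -out "$tmp/harbor-ca.key" 2048
openssl req -x509 -new -nodes -key "$tmp/harbor-ca.key" \
    -subj "/CN=harbor.linux.com" -days 7120 -out "$tmp/harbor-ca.crt"
# the subject CN must match the harbor domain docker will connect to
openssl x509 -in "$tmp/harbor-ca.crt" -noout -subject -enddate
rm -rf "$tmp"
```

The same `openssl x509` line run against /usr/local/src/harbor/certs/harbor-ca.crt verifies the real certificate.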

On the other nodes:

Sync the crt certificate to each client:

mkdir /etc/docker/certs.d/harbor.linux.com -p
scp 192.168.134.16:/usr/local/src/harbor/certs/harbor-ca.crt /etc/docker/certs.d/harbor.linux.com

1.3: Ansible deployment

1.3.1: Basic environment preparation

Install python2.7 on all master, node, and etcd nodes

apt-get install python2.7 -y
ln -sv /usr/bin/python2.7 /usr/bin/python

1.3.2: Install ansible on the master node

apt install ansible -y
apt-get install sshpass -y
Distribute the SSH public key:
#!/bin/bash
# target hosts
IP="
192.168.134.11
192.168.134.12
192.168.134.13
192.168.134.14
"
for node in ${IP};do
  echo $node
  sshpass -p r00tme ssh-copy-id ${node} -o StrictHostKeyChecking=no
  if [ $? -eq 0 ];then
    echo "${node} copy success"
  else
    echo "${node} copy failure"
  fi
done

1.3.3: Orchestrating the k8s install on the ansible control node

https://github.com/easzlab/kubeasz/blob/master/docs/setup/00-planning_and_overall_intro.md

1.3.3.1: Install docker

#!/bin/bash
# step 1: install the required system tools
sudo apt-get update
sudo apt-get -y install apt-transport-https ca-certificates curl software-properties-common
# step 2: install the GPG key
curl -fsSL https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
# step 3: add the package repository
sudo add-apt-repository "deb [arch=amd64] https://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
# step 4: update and install Docker CE
sudo apt-get -y update
sudo apt-get -y install docker-ce  docker-ce-cli

1.3.3.2: Download the offline images

# download the easzup tool script
export release=2.2.0
curl -C- -fLO --retry 3 https://github.com/easzlab/kubeasz/releases/download/${release}/easzup
chmod +x ./easzup
# download everything with the tool script
./easzup -D

1.3.3.3: Prepare the hosts file

root@master-1:/etc/ansible# grep -v '#' hosts | grep -v '^$'
[etcd]
192.168.134.11 NODE_NAME=master-1
192.168.134.12 NODE_NAME=master-2
192.168.134.13 NODE_NAME=node-1
[kube-master]
192.168.134.11
192.168.134.12
[kube-node]
192.168.134.13
192.168.134.14
[harbor]
[ex-lb]
192.168.134.15 LB_ROLE=backup EX_APISERVER_VIP=192.168.134.188 EX_APISERVER_PORT=6443
[chrony]
[all:vars]
CONTAINER_RUNTIME="docker"
CLUSTER_NETWORK="flannel"
PROXY_MODE="ipvs"
SERVICE_CIDR="10.10.0.0/16"
CLUSTER_CIDR="172.20.0.0/16"
NODE_PORT_RANGE="30000-60000"
CLUSTER_DNS_DOMAIN="linux.local."
bin_dir="/usr/bin"
ca_dir="/etc/kubernetes/ssl"
base_dir="/etc/ansible"
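SERVICE_CIDR and CLUSTER_CIDR above must not overlap, or service and pod routing will collide. A small standalone shell check (a sketch for illustration, not part of kubeasz) applied to the two values in this hosts file:

```shell
#!/bin/bash
# convert a dotted quad to a 32-bit integer
ip2int() { local IFS=. ; set -- $1; echo $(( ($1<<24) + ($2<<16) + ($3<<8) + $4 )); }
# exit 0 if the two CIDR ranges overlap
overlap() {
  local p1=${1#*/} p2=${2#*/}
  local b1=$(ip2int "${1%/*}") b2=$(ip2int "${2%/*}")
  local s1=$(( b1 & (0xFFFFFFFF << (32 - p1)) & 0xFFFFFFFF ))
  local s2=$(( b2 & (0xFFFFFFFF << (32 - p2)) & 0xFFFFFFFF ))
  local e1=$(( s1 + (1 << (32 - p1)) - 1 ))
  local e2=$(( s2 + (1 << (32 - p2)) - 1 ))
  [ "$s1" -le "$e2" ] && [ "$s2" -le "$e1" ]
}
if overlap 10.10.0.0/16 172.20.0.0/16; then
  echo "CIDRs overlap"
else
  echo "CIDRs OK"
fi
```

For the values above (10.10.0.0/16 vs 172.20.0.0/16) the ranges are disjoint, so the check passes.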

1.3.4: Deploy step by step

root@master-1:/etc/ansible# ls
01.prepare.yml     03.docker.yml       06.network.yml        22.upgrade.yml  90.setup.yml  99.clean.yml  dockerfiles  example    pics       tools
02.etcd.yml        04.kube-master.yml  07.cluster-addon.yml  23.backup.yml   91.start.yml  ansible.cfg   docs         hosts      README.md
03.containerd.yml  05.kube-node.yml    11.harbor.yml         24.restore.yml  92.stop.yml   bin           down         manifests  roles

1.3.4.1: Environment initialization

root@master-1:/etc/ansible# apt install python-pip -y
root@master-1:/etc/ansible# ansible-playbook 01.prepare.yml

1.3.4.2: Deploy the etcd cluster

ansible-playbook 02.etcd.yml 

Verify the etcd service on each etcd server:

root@master-1:/etc/ansible# export NODE_IPS="192.168.134.11 192.168.134.12 192.168.134.13"
root@master-1:/etc/ansible# echo $NODE_IPS
192.168.134.11 192.168.134.12 192.168.134.13
root@master-1:/etc/ansible# for ip in ${NODE_IPS}; do ETCDCTL_API=3 /usr/bin/etcdctl --endpoints=https://${ip}:2379 --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem endpoint health; done
https://192.168.134.11:2379 is healthy: successfully committed proposal: took = 17.538879ms
https://192.168.134.12:2379 is healthy: successfully committed proposal: took = 21.189926ms
https://192.168.134.13:2379 is healthy: successfully committed proposal: took = 23.069667ms

1.3.4.3: Deploy docker

root@master-1:/etc/ansible# ansible-playbook 03.docker.yml

If you need to change the Docker version, first extract docker-19.03.8.tar in /etc/ansible/down, cp the extracted binaries over those in /etc/ansible/bin/, and finally run: ansible-playbook 03.docker.yml

Certificate replacement

1.3.4.4: Deploy the masters

root@master-1:/etc/ansible# ansible-playbook 04.kube-master.yml

1.3.4.5: Deploy the nodes

1. The deployment pulls the pause image from an external source; it is best to download it first and push it to the local harbor
root@master-1:/etc/ansible/roles# grep 'SANDBOX_IMAGE' ./* -R
./containerd/defaults/main.yml:SANDBOX_IMAGE: "mirrorgooglecontainers/pause-amd64:3.1"
2. Deploy the nodes
root@master-1:/etc/ansible# ansible-playbook 05.kube-node.yml

1.3.4.6: Deploy the flannel network service

ansible-playbook 06.network.yml

Make sure the network is working:

1.3.4.7: Adding and removing master and node nodes

https://github.com/easzlab/kubeasz/blob/master/docs/op/op-master.md

Run:

easzctl add-master 192.168.134.17  # add a master node
easzctl del-master 192.168.134.17  # remove a master node

Node verification

1.3.4.8: Add a node:

easzctl add-node 192.168.134.18
easzctl del-node 192.168.134.18

1.3.4.9: Cluster upgrade:

Current version

On the master nodes:

root@master-1:/etc/ansible# /usr/bin/kube
kube-apiserver           kube-controller-manager  kubectl                  kubelet                  kube-proxy               kube-scheduler

On the nodes:

root@node-1:~# /usr/bin/kube
kubectl     kubelet     kube-proxy  

Download the binary package of the target version

First, back up the current version

cd /etc/ansible/bin
cp kube-apiserver kube-controller-manager kubectl kubelet kube-proxy kube-scheduler /opt/k8s-1.17.2/ #backup 1.17.2
cp kube-apiserver kube-controller-manager kubectl kube-proxy kube-scheduler kubelet /opt/k8s-1.17.4/ #backup 1.17.4

With only a few nodes:

On a master node, stop the relevant services, then replace the binaries

root@master-3:~# systemctl stop kube-apiserver kube-controller-manager kube-proxy kube-scheduler kubelet
# copy the new binaries over from another machine:
scp kube-apiserver  kube-controller-manager  kubectl  kubelet  kube-proxy  kube-scheduler 192.168.134.17:/usr/bin
# then restart:
systemctl start kube-apiserver kube-controller-manager kube-proxy kube-scheduler kubelet

On the nodes, replace the kubectl, kubelet, and kube-proxy components

The official ansible method:

https://github.com/easzlab/kubeasz/blob/master/docs/op/upgrade.md

Copy the new binaries over the corresponding files in /etc/ansible/bin on the ansible control node

Then on the ansible control node run: ansible-playbook -t upgrade_k8s 22.upgrade.yml

root@master-1:/etc/ansible# easzctl upgrade

Verify:

1.3.5: The dashboard add-on

root@master-1:# cd /etc/ansible/manifests/dashboard
mkdir  dashboard-2.0.0-rc6
cd  dashboard-2.0.0-rc6
Copy the yml files in:
root@master-1:/etc/ansible/manifests/dashboard/dashboard-2.0.0-rc6# ll
total 20
drwxr-xr-x 2 root root 4096 Apr 11 23:45 ./
drwxrwxr-x 5 root root 4096 Apr 11 23:15 ../
-rw-r--r-- 1 root root  374 Mar 28 17:44 admin-user.yml
-rw-r--r-- 1 root root 7661 Apr 11 23:45 dashboard-2.0.0-rc6.yml
Edit the configuration file:
root@master-1:/etc/ansible/manifests/dashboard/dashboard-2.0.0-rc6# grep image dashboard-2.0.0-rc6.yml
          image: harbor.linux.com/dashboard/kubernetesui/dashboard:v2.0.0-rc6
          imagePullPolicy: IfNotPresent
          image: harbor.linux.com/dashboard/kubernetesui/metrics-scraper:v1.0.3
Run:
kubectl apply -f . 
Error from server (NotFound): error when creating "admin-user.yml": namespaces "kubernetes-dashboard" not found
The namespace is created in the same batch, so the first apply can fail on admin-user.yml; run it once more:
kubectl apply -f . 

Verify:

Get the token

root@master-1:/etc/ansible/manifests/dashboard/dashboard-2.0.0-rc6# kubectl get secret -A | grep admin
kubernetes-dashboard   admin-user-token-xx8w6                           kubernetes.io/service-account-token   3      2m46s
 kubectl describe secret admin-user-token-xx8w6 -n kubernetes-dashboard
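Rather than copying the token line out of the describe output by hand, awk can print just the value (the secret name is the one returned by the grep above):

```shell
# print only the token value from the describe output
kubectl describe secret admin-user-token-xx8w6 -n kubernetes-dashboard \
  | awk '/^token:/{print $2}'
```

The awk pattern matches the line beginning with `token:` and prints its second field, which is the bearer token to paste into the dashboard login page.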

Get the port

Set the token login session duration

1.3.6: DNS service

1. Component introduction (all created in one pod)
kube-dns: resolves service-name domains
dns-dnsmasq: provides a DNS cache to lower the kube-dns load and improve performance
dns-sidecar: periodically checks the health of kube-dns and dnsmasq

1.3.6.1: Deploy kube-dns

Extract the k8s binary package downloaded for the upgrade earlier and locate the kube-dns.yaml.base file:

Change everything referencing __PILLAR__DNS__DOMAIN__ to the cluster domain
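The placeholder substitution can be done in one sed pass. Here linux.local matches CLUSTER_DNS_DOMAIN from the hosts file, and 10.10.0.2 is an assumed cluster-DNS service IP inside SERVICE_CIDR (adjust both to the real values):

```shell
# one-pass substitution of the kube-dns template placeholders (sketch)
sed -e 's/__PILLAR__DNS__DOMAIN__/linux.local/g' \
    -e 's/__PILLAR__DNS__SERVER__/10.10.0.2/g' \
    kube-dns.yaml.base > kube-dns.yaml
```

Any remaining placeholders can be found afterwards with `grep __PILLAR__ kube-dns.yaml`.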

Create the CoreDNS and kube-dns directories

cd /etc/ansible/manifests/dns
mkdir CoreDNS kube-dns

Import the images (shown boxed in red in the original screenshot), create the containers, push them to harbor, and replace the image: entries in the config file with the harbor address.

Resources: 2 CPU, 4 GB memory

Finally:

kubectl apply -f kube-dns.yaml 

Verify:

1.3.6.2: Deploy coredns:

https://github.com/coredns/deployment/tree/master/kubernetes

Replace kube-dns with coredns on top of the existing kube-dns deployment. If kube-dns is not installed and you are deploying coredns directly, just change the coredns image address and apply the yml file.

git clone https://github.com/coredns/deployment.git
./deploy.sh > coredns.linux.yml
vim coredns.linux.yml  # edit the config (screenshot of the changes omitted)
docker pull coredns/coredns:1.6.7
kubectl delete -f kube-dns.yaml
kubectl apply  -f  coredns.linux.yml

Verify:

posted @ 2020-04-13 20:35  lummg-DAY