Deploying a Kubernetes Cluster with kubeadm

Preface

    Deploying a Kubernetes cluster with the kubeadm management tool is simpler and more convenient than the binary, offline-package deployment method. The notes below summarize my own study of the process:

    The difference between the two approaches is that with kubeadm most cluster components (kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler, CoreDNS, etc.) run on the servers as containers managed by the cluster itself, whereas a binary deployment runs them as system services. When a component fails or goes down, a kubeadm-deployed cluster pulls it back up automatically through the cluster's own mechanisms, while a binary deployment requires manual intervention;

    When cluster capacity has to grow, it comes down to adding new Node machines to the cluster: kubeadm makes this a single, simple command (the kubeadm join shown below), whereas a binary deployment means repeating the entire Node setup procedure from start to finish;

    Because Google's registries cannot be reached from mainland China, this lab pulls the Kubernetes component images from a mirror repository prepared by the author (kazihuo on Docker Hub). Server resources are limited, so the experiment uses one Master and one Node; the hands-on steps follow!

Set env

Master 10.1.65.131 # kubelet, kubeadm, docker

Node 10.1.65.132 # kubelet, kubeadm, docker

# Close firewalld

systemctl stop firewalld

systemctl disable firewalld

# Close selinux

sed -i 's/enforcing/disabled/' /etc/selinux/config

setenforce 0

# Close swap

swapoff -a # temporary change

vim /etc/fstab # permanent change
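
Rather than editing /etc/fstab by hand, the swap entry can be commented out in place; a minimal sketch (GNU sed assumed, and worth a grep afterwards to confirm):

cp /etc/fstab /etc/fstab.bak              # keep a backup before modifying
sed -i '/\sswap\s/ s/^#\?/#/' /etc/fstab  # comment out any swap entry (idempotent)
grep swap /etc/fstab                      # verify the line now starts with '#'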

# Edit the hosts file on both machines

# cat /etc/hosts

10.1.65.131 master

10.1.65.132 node1
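
The same two entries can be appended non-interactively on both machines (a sketch; adjust the IPs and names to your environment):

cat >> /etc/hosts <<EOF
10.1.65.131 master
10.1.65.132 node1
EOF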

# Time synchronization

yum install ntpdate -y

ntpdate ntp.api.bz
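
A one-shot ntpdate will drift again over time; if you want periodic resync, a hypothetical crontab entry against the same server (add it via crontab -e):

0 * * * * /usr/sbin/ntpdate ntp.api.bz >/dev/null 2>&1   # resync once an hour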

Operation

# Install for all machine

# Install docker

yum install -y yum-utils device-mapper-persistent-data lvm2

yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

yum localinstall -y https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-selinux-17.03.3.ce-1.el7.noarch.rpm

yum install docker-ce-17.03.3.ce -y # 17.03 is the highest Docker version supported by kubeadm at the time of writing;

systemctl start docker && systemctl enable docker
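
Before continuing, it is worth confirming the daemon is up and pinned at the expected version:

docker version --format '{{.Server.Version}}'   # expect 17.03.3-ce
systemctl is-active docker                      # expect "active"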

# Download images

# Run this script on both the Master and the Node to pre-download the images; most of the add-ons that follow run as Docker containers.

# cat down-images.sh

#!/bin/bash
# Pull each Kubernetes component image from the kazihuo mirror on Docker Hub,
# retag it to the k8s.gcr.io name that kubeadm expects, then drop the mirror tag.
images=(
    kube-apiserver:v1.12.0
    kube-controller-manager:v1.12.0
    kube-scheduler:v1.12.0
    kube-proxy:v1.12.0
    pause:3.1
    etcd:3.2.24
    coredns:1.2.2
)

for i in "${images[@]}"
do
    docker pull kazihuo/$i
    docker tag  kazihuo/$i k8s.gcr.io/$i
    docker rmi  -f kazihuo/$i
done
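
After the script finishes on a machine, the retagged images should all be present locally; a quick check:

bash down-images.sh
docker images | grep k8s.gcr.io   # expect the seven images listed above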

# Yum configuration

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Install kubeadm, kubelet and kubectl

yum -y install kubelet-1.12.0 kubeadm-1.12.0 kubectl-1.12.0 --disableexcludes=kubernetes

systemctl enable kubelet
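
At this point the kubelet will restart in a loop until kubeadm init writes its configuration; that is expected. Just confirm the pinned versions installed:

kubeadm version -o short   # expect v1.12.0
kubelet --version          # expect Kubernetes v1.12.0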

# Initialize the Master;

# If the initialization fails, or the machine was initialized before, clean up first:

kubeadm reset

ifconfig cni0 down && ip link delete cni0
ifconfig flannel.1 down && ip link delete flannel.1
rm -rf /var/lib/cni/

[root@master ~]# kubeadm init --kubernetes-version=v1.12.0 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12

[init] using Kubernetes version: v1.12.0

[preflight] running pre-flight checks

[preflight/images] Pulling images required for setting up a Kubernetes cluster

[preflight/images] This might take a minute or two, depending on the speed of your internet connection

[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'

...

...

[addons] Applied essential addon: kube-proxy

 

Your Kubernetes master has initialized successfully!

 

To start using your cluster, you need to run the following as a regular user:

 

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

 

You should now deploy a pod network to the cluster.

Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

https://kubernetes.io/docs/concepts/cluster-administration/addons/

 

You can now join any number of machines by running the following on each node

as root:

 

kubeadm join 10.1.65.131:6443 --token na5io2.rcducfd1bf889rzy --discovery-token-ca-cert-hash sha256:ddd3923e15175c389f92ad52070bd383648afb850c661973463b2fc60c504bd2

[root@master ~]# mkdir -p $HOME/.kube

[root@master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

[root@master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Install the Pod network add-on (flannel);

[root@master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
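
The master reports NotReady until the network add-on is running; progress can be watched with:

kubectl get pods -n kube-system -w   # wait for the kube-flannel-ds-* pod to reach Running
kubectl get nodes                    # master should then turn Ready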

# List tokens (valid for 24h);

[root@master ~]# kubeadm token list

# Recreate the token once it has expired;

[root@master ~]# kubeadm token create

# Compute the discovery-token CA cert hash;

[root@master ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
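
On recent kubeadm releases the token and hash steps can be combined; if your kubeadm build supports the flag (an assumption worth checking with kubeadm token create --help), one command prints the complete join line:

kubeadm token create --print-join-command   # emits the full "kubeadm join ..." command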

# Join the worker node;

# Run on the Node; format: kubeadm join masterip:6443 --token xxxx --discovery-token-ca-cert-hash sha256:xxxx

[root@node1 ~]# kubeadm join 10.1.65.131:6443 --token na5io2.rcducfd1bf889rzy --discovery-token-ca-cert-hash sha256:ddd3923e15175c389f92ad52070bd383648afb850c661973463b2fc60c504bd2

# Check cluster status

[root@master ~]# kubectl get pods -n kube-system
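
Once every kube-system pod reports Running, both machines should show up as Ready:

[root@master ~]# kubectl get nodes -o wide   # expect master and node1 in Ready state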

Skills

# Run kubectl commands from the Node

[root@master ~]# scp /etc/kubernetes/admin.conf node1:/etc/kubernetes/

[root@node1 ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile

[root@node1 ~]# source ~/.bash_profile

[root@node1 ~]# kubectl get pods
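
Note that admin.conf carries full cluster-admin credentials, so copying it to a worker is fine for a lab like this one; for anything production-like, a kubeconfig with narrower permissions should be issued instead.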

Problems

# kubeadm init error

# Symptom

[root@master ~]# kubeadm init --apiserver-advertise-address=10.1.65.131 --pod-network-cidr=10.10.0.0/16

[init] using Kubernetes version: v1.12.2

[preflight] running pre-flight checks

[preflight] Some fatal errors occurred:

    [ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty

[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

# Resolution

    etcd had previously been installed on and removed from this machine, so leftover data remained under /var/lib/etcd; deleting it by hand clears the check.

[root@master ~]# rm -rf /var/lib/etcd/*
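
With the directory emptied, re-running the kubeadm init command should pass the preflight checks.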
