Installing Kubernetes

1.  Official Kubernetes installation methods

    1.  kubeadm

        Simple and easy to get started with.

        The components are deployed entirely as containers; only the kubelet is not containerized (it is managed with systemctl as a systemd service).

    2.  Binary installation (the common method in production)

        Deployment is more complex, but it gives a solid understanding of the overall architecture, which makes later management, maintenance, and troubleshooting easier.

        All components run as host daemon processes.

    3.  minikube

    4.  yum

2.  Installation with kubeadm

    1.  A tool released by the official community for quickly deploying a Kubernetes cluster

    2.  A cluster deployment is completed with two commands:

        1.  Create a master node:

            kubeadm init

        2.  Join a node to the cluster:

            kubeadm join <master IP:port>

    3.  Preparation:

        1.  Disable the swap partition

            swapoff -a  disable temporarily (until reboot)

            sed -ri 's/.*swap.*/#&/' /etc/fstab  disable permanently (comments out the swap entry)
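        The sed one-liner above can be tried safely; a sketch below runs it against a throwaway copy of a typical fstab instead of the real /etc/fstab (the sample device paths are illustrative, and `-r` assumes GNU sed):

```shell
# Build a sample fstab, then comment out every line mentioning swap,
# exactly as the sed command in the step above does.
printf '%s\n' \
  '/dev/mapper/centos-root /    xfs  defaults 0 0' \
  '/dev/mapper/centos-swap swap swap defaults 0 0' > /tmp/fstab.demo
sed -ri 's/.*swap.*/#&/' /tmp/fstab.demo
cat /tmp/fstab.demo   # the swap line now starts with '#'
```

After a reboot, `free -h` should report 0B of swap.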

        2.  Stop and disable firewalld

            systemctl stop firewalld.service

            systemctl disable firewalld.service

        3.  Disable SELinux (setenforce 0 for the running system; set SELINUX=disabled in /etc/selinux/config to make it permanent)

        4.  Add the cluster hostnames to the hosts file on every node:

192.168.1.171 k8s-master
192.168.1.42  k8s-node1
192.168.1.181 k8s-node2
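        The entries above can be appended with a heredoc; a sketch below works on a copy of /etc/hosts so it runs without root (on the real nodes the target would be /etc/hosts itself):

```shell
# Append the cluster name resolution entries to a working copy of /etc/hosts.
cp /etc/hosts /tmp/hosts.demo
cat >> /tmp/hosts.demo <<EOF
192.168.1.171 k8s-master
192.168.1.42  k8s-node1
192.168.1.181 k8s-node2
EOF
grep 'k8s-' /tmp/hosts.demo   # verify all three entries landed
```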

        5.  Pass bridged IPv4 traffic to iptables chains

cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

sysctl -p /etc/sysctl.d/k8s.conf 
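        A quick way (a sketch) to confirm the parameters took effect is to read them back from /proc/sys. Note the bridge-nf-call-* entries only exist once the br_netfilter kernel module is loaded:

```shell
# Read back the three kernel parameters set in k8s.conf.
for key in net/ipv4/ip_forward \
           net/bridge/bridge-nf-call-iptables \
           net/bridge/bridge-nf-call-ip6tables; do
  if [ -f "/proc/sys/$key" ]; then
    echo "$key = $(cat "/proc/sys/$key")"
  else
    echo "$key missing: run 'modprobe br_netfilter' first"
  fi
done
```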

        6.  Synchronize time

            yum install ntpdate -y

            ntpdate time.windows.com

    4.  Install Docker on every node

        yum remove docker*  remove any old versions

        yum install -y yum-utils device-mapper-persistent-data lvm2  install dependency packages

        yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo  add the Docker repo

        yum install docker-ce-<VERSION_STRING> docker-ce-cli-<VERSION_STRING> containerd.io  install a pinned version; without a version string the latest is installed, for example:

        yum install docker-ce-18.09.9-3.el7 docker-ce-cli-18.09.9-3.el7 -y

        systemctl enable docker.service && systemctl start docker.service

        If startup fails with the error:

inotify_add_watch(7, /dev/dm-1, 10) failed: No such file or directory

Fix:
yum update xfsprogs

        Configure the /etc/docker/daemon.json file (registry mirror):

cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF   
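        kubeadm later warns when Docker uses the "cgroupfs" cgroup driver; the recommended driver is "systemd". A sketch of a daemon.json combining the mirror above with that setting, written to a local file here for illustration (the real path is /etc/docker/daemon.json, and docker must be restarted afterwards):

```shell
# Sample daemon.json: registry mirror plus the systemd cgroup driver.
cat > /tmp/daemon.json <<EOF
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
grep -q 'native.cgroupdriver=systemd' /tmp/daemon.json && echo "daemon.json snippet ok"
```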

        Restart the service: systemctl restart docker.service

    5.  Add the Aliyun Kubernetes yum repo on every node

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

    6.  Install kubeadm, kubelet, and kubectl on all nodes

        yum install -y kubelet-1.16.0 kubeadm-1.16.0 kubectl-1.16.0
        systemctl enable kubelet

    7.  Deploy the Kubernetes control plane on the master node

        kubeadm init --apiserver-advertise-address=192.168.1.171 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.16.0 --service-cidr=10.1.0.0/16 --pod-network-cidr=10.244.0.0/16

        --apiserver-advertise-address  the IP address the API server advertises (the master's IP)

        --image-repository  the image registry to pull from; switched from the default internet registry to the Aliyun mirror

        --kubernetes-version  the Kubernetes version to install

        --service-cidr=10.1.0.0/16  the cluster-internal virtual network; the unified access entry in front of pods

        --pod-network-cidr=10.244.0.0/16  the pod network range (must match the CNI plugin's configuration; 10.244.0.0/16 is flannel's default)

        --ignore-preflight-errors=all  ignore preflight check errors (use with care)

        kubeadm reset  wipe this machine's previous initialization state

        On success, kubeadm init ends with output like:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.171:6443 --token fto38d.tlorcnr8nulyo1zj \
--discovery-token-ca-cert-hash sha256:88ed078f3e8898bf7216b6e06801ffe281617e4706a302896f678ac90596ea8e
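        The --discovery-token-ca-cert-hash value is the SHA-256 digest of the cluster CA's DER-encoded public key. On a real master it can be recomputed from /etc/kubernetes/pki/ca.crt; the sketch below generates a throwaway self-signed CA so the same pipeline is runnable anywhere:

```shell
# Create a demo CA certificate (stand-in for /etc/kubernetes/pki/ca.crt).
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key \
  -out /tmp/demo-ca.crt -subj "/CN=demo-ca" -days 1 2>/dev/null
# Extract the public key, convert it to DER, and hash it.
hash=$(openssl x509 -pubkey -noout -in /tmp/demo-ca.crt \
  | openssl pkey -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | awk '{print $NF}')
echo "sha256:$hash"
```

If the original token has expired (the default TTL is 24h), `kubeadm token create --print-join-command` on the master prints a fresh join line with a new token.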

        The kubeadm init workflow

          1.  Preflight checks of the install environment

          2.  Pull the required images (kubeadm config images pull)

          3.  [certs] generate certificates into /etc/kubernetes/pki

          4.  [kubeconfig] generate the kubeconfig files

          5.  [kubelet-start] write the kubelet configuration (/var/lib/kubelet/config.yaml) and start the kubelet

          6.  [control-plane] start the master node components

          7.  Store some configuration files in ConfigMaps, for other nodes to pull during their initialization

          8.  [mark-control-plane] taint the master node

          9.  [bootstrap-token] issue certificates to joining nodes automatically

          10.  [addons] install the CoreDNS and kube-proxy add-ons

          Finally, copy the kubeconfig used by the kubectl tool to its default path:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

          Then it prints the command for the other nodes to join the master:

kubeadm join 192.168.1.184:6443 --token 7cun2a.w9qcghrjup7u3bxy \
--discovery-token-ca-cert-hash sha256:0e467e4b61191a7fce9a2974329b177537b6661544576b015f28d26efa2363b8

    8.  Set up the kubectl tool

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

    9.  Install a pod network add-on (CNI) on the master node

        Until a network plugin is installed, the kubelet logs (journalctl -u kubelet) keep reporting:

Feb 09 17:45:38 k8s-master kubelet[3566]: E0209 17:45:38.989810    3566 kubelet.go:2103] Container runtime network not ready: NetworkReady=false reason:Network
Feb 09 17:45:41 k8s-master kubelet[3566]: W0209 17:45:41.548503    3566 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d

        kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml  (if you already have this file locally, apply it directly without downloading)

        The calico network plugin can be used instead; its manifest is attached to the original blog post.

    10.  Run the join command on both worker nodes

        kubeadm join 192.168.1.171:6443 --token dhgz53.e6x3jiea9dbbn1h5 \

        --discovery-token-ca-cert-hash sha256:1da8ea9db3be2d45d64cead9be94c08447b6331d617dc601a63697a87b5be11c

    11.  Enable and start the kubelet on all nodes

        systemctl enable kubelet && systemctl start kubelet

        If the kubelet fails to start, inspect it with journalctl -xefu kubelet

    12.  Check the nodes

        kubectl get nodes should show something like:

NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   17h   v1.16.0
k8s-node1    Ready    <none>   17h   v1.16.0
k8s-node2    Ready    <none>   17h   v1.16.0

    13.  Check the status of the system pods

        kubectl get pods -n kube-system

NAME                                 READY   STATUS    RESTARTS   AGE
coredns-58cc8c89f4-bkbx9             1/1     Running   0          17h
coredns-58cc8c89f4-pjkb5             1/1     Running   0          17h
etcd-k8s-master                      1/1     Running   0          17h
kube-apiserver-k8s-master            1/1     Running   0          17h
kube-controller-manager-k8s-master   1/1     Running   0          17h
kube-flannel-ds-amd64-drtts          1/1     Running   0          14h
kube-flannel-ds-amd64-hvbp6          1/1     Running   0          14h
kube-flannel-ds-amd64-nvrz5          1/1     Running   0          14h
kube-proxy-59nkm                     1/1     Running   0          17h
kube-proxy-dv4sd                     1/1     Running   0          17h
kube-proxy-plls8                     1/1     Running   0          17h
kube-scheduler-k8s-master            1/1     Running   0          17h

    14.  Create a test deployment

$ kubectl create deployment nginx --image=nginx
$ kubectl expose deployment nginx --port=80 --type=NodePort
$ kubectl get pod,svc

        Access it at http://<NodeIP>:<NodePort>
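        The NodePort is the second number in the PORT(S) column of `kubectl get svc` (e.g. 80:30879/TCP). A sketch of extracting it; the sample line below stands in for real `kubectl get svc` output, and the port value and master IP (192.168.1.171, from this walkthrough) are illustrative:

```shell
# Parse the NodePort out of a `kubectl get svc` output line.
svc_line='nginx   NodePort   10.1.173.56   <none>   80:30879/TCP   2m'
node_port=$(echo "$svc_line" | awk '{print $5}' | sed -E 's/.*:([0-9]+)\/.*/\1/')
echo "http://192.168.1.171:$node_port"
```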

    15.  Install the dashboard on the master node

        kubectl apply -f dashboard.yaml

        kubectl create serviceaccount dashboard-admin -n kube-system  create a service account
        kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin  bind the account to the cluster-admin role
        kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')  print the login token

token:      eyJhbGciOiJSUzI1NiIsImtpZCI6InhzUXp4Ml83ZFk2WTRUNW5iMlRPQVBGRGExT0J2VEZxWFI1YXlfd2xmQzAifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4temg2am4iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiYThjNDIwYzItM2I3ZC00MjQyLWEyMjItOTc5OTY1ZWI3OWRkIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.LJXUlXQy99xseeCGhWJFAy93th19C8GfbV9ugJk4zdJraR56dws2mCIRKgRpUqM8GKsZrs7FxMyzi-1kqqgjq0KfgtdoaWbcc3hjTAK9WKz72WYEEnz_8Ibi4YXlUx1uDvlSD7A6MiWvQ02WQ2TPzYLDY-sHpTe1RZi60jRKf_yUpyLqkMsOXm_lWoj3JPFtR4ZfYovtA1RrAhj7AzkqQ5ZmY37kBQL1YsqX0_mxamop7t6rg7_Hp01j3vw64hKxlx3T_a5AuyErw63BQymNfuYOzgu32lpZYbJS-XTceVGAO3L60F1CbAkQqy7h1ydU-GncoLNAQE0kInbmlkwofA 

        Check the dashboard pod: kubectl get pod -n kubernetes-dashboard

        The Service type in dashboard.yaml must be changed to NodePort so the dashboard is reachable from outside the cluster.
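        A sketch of the Service section that dashboard.yaml needs, written to a local file here for illustration. The namespace and selector match the standard dashboard manifest, but the exact field values (ports 443/8443, nodePort 30001) are assumptions; adjust them to the manifest actually in use:

```shell
# Sample kubernetes-dashboard Service with type NodePort.
cat > /tmp/dashboard-svc.yaml <<EOF
kind: Service
apiVersion: v1
metadata:
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard
EOF
grep -q 'type: NodePort' /tmp/dashboard-svc.yaml && echo "snippet ok"
```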

    16.  Log in to the dashboard

        Open https://<nodeIP>:<nodePort> in Firefox or Chrome (accept the self-signed-certificate warning).

3.  Common errors

    1.  Joining a node to the master fails:

[root@node4 pki]# kubeadm join 192.168.2.190:6443 --token mvkr39.euqsb8smhr51v2yr     --discovery-token-ca-cert-hash sha256:72ff15ba4505228fe4316a10056a6549b8493213147ffec5b704961c6fb3554a
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
        [ERROR Port-10250]: Port 10250 is in use
        [ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

        Fix: run kubeadm reset on the node, then rerun the join command.

posted @ 2022-05-24 17:07  奋斗史