Kubernetes Cluster Installation
Prerequisites (required on all 3 machines)
3 servers:
1 master
2 workers
OS: CentOS 7
Hardware: 2 CPUs, 4 GB RAM
Notes:
Disable swap (swapoff -a)
Add hosts entries
Stop the firewall (systemctl stop firewalld)
Disable SELinux (setenforce 0)
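The "add hosts entries" step means each node should be able to resolve the others by hostname. A hypothetical /etc/hosts fragment (the worker IPs and all hostnames here are placeholders; substitute your own):

```
10.2.246.153  k8s-master
10.2.246.154  k8s-worker-1
10.2.246.155  k8s-worker-2
```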
Letting iptables see bridged traffic: as a requirement for your Linux node's iptables to correctly see bridged traffic, you should ensure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl config, e.g.

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
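On a minimal CentOS 7 install the br_netfilter kernel module may not be loaded, and without it the two bridge sysctl keys above do not exist. A hedged addition (loads the module now and persists it across reboots; the file name k8s.conf is an arbitrary choice):

```shell
modprobe br_netfilter
cat <<EOF > /etc/modules-load.d/k8s.conf
br_netfilter
EOF
```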
Install Docker (on all 3 machines)
wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/containerd.io-1.2.6-3.3.el7.x86_64.rpm
yum -y localinstall containerd.io-1.2.6-3.3.el7.x86_64.rpm
yum -y install docker-ce docker-ce-cli

[root@af1-003 ~]# cat /etc/docker/daemon.json
{
  "registry-mirrors": ["https://yd9m2h33.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}

[root@af1-003 ~]# systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.

[root@af1-003 ~]# systemctl start docker
Install Kubernetes
- Configure the Kubernetes yum repository (on all 3 machines)
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
- Install the Kubernetes components (on all 3 machines)
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable --now kubelet
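The command above installs whatever versions are newest in the repository, which may run ahead of the v1.18.0 used in kubeadm-init.yaml later. If you want the package versions to match the cluster version exactly, they can be pinned (the version number here is illustrative; pick the one you intend to initialize):

```shell
yum install -y kubelet-1.18.0 kubeadm-1.18.0 kubectl-1.18.0 --disableexcludes=kubernetes
```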
- Adjust the configuration
Configure the kubelet cgroup driver so that it matches Docker's cgroup driver. If you are using a different CRI, you have to modify the file /etc/default/kubelet (/etc/sysconfig/kubelet for CentOS, RHEL, Fedora) with your cgroup-driver value, like so: KUBELET_EXTRA_ARGS=--cgroup-driver=<value>

[root@af1-ops-ceph-v-001 ~]# docker info | grep -i cgroup
Cgroup Driver: systemd
[root@af1-ops-ceph-v-001 ~]# cat /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS=--cgroup-driver=systemd

systemctl daemon-reload
systemctl restart kubelet
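Mismatched cgroup drivers are a common cause of kubelet startup failures, so it is worth checking the two values programmatically rather than by eye. A small sketch (the docker invocation and file path assume the setup above; drivers_match is a helper introduced here):

```shell
# return success only when both drivers are non-empty and identical
drivers_match() {
  [ -n "$1" ] && [ "$1" = "$2" ]
}

docker_driver=$(docker info --format '{{.CgroupDriver}}' 2>/dev/null || true)
kubelet_driver=$(sed -n 's/.*--cgroup-driver=\([a-z]*\).*/\1/p' /etc/sysconfig/kubelet 2>/dev/null || true)

if drivers_match "$docker_driver" "$kubelet_driver"; then
  echo "cgroup drivers match: $docker_driver"
else
  echo "cgroup driver mismatch: docker='$docker_driver' kubelet='$kubelet_driver'"
fi
```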
Initialize the Master
- Generate the initialization file

[root@k8s-master ~]$ kubeadm config print init-defaults > kubeadm-init.yaml
Two changes are needed in this file:
- Change advertiseAddress: 1.2.3.4 to this machine's address
- Change imageRepository: k8s.gcr.io to imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.2.246.153
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: af1-ops-ceph-v-003
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.18.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}
- Pull the images

[root@k8s-master ~]$ kubeadm config images pull --config kubeadm-init.yaml
- Run the initialization
[root@af1-ops-ceph-v-003 ~]# kubeadm init --config kubeadm-init.yaml
W0422 11:30:19.128781 3219910 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [af1-ops-ceph-v-003 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.2.246.153]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [af1-ops-ceph-v-003 localhost] and IPs [10.2.246.153 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [af1-ops-ceph-v-003 localhost] and IPs [10.2.246.153 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0422 11:30:23.929499 3219910 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0422 11:30:23.930825 3219910 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 46.504917 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node af1-ops-ceph-v-003 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node af1-ops-ceph-v-003 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.2.246.153:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:5a8f2e7db491f8711cd9752d2c356032b04a28c74253fbc5816f48d765543d9f
Save the last two lines: kubeadm join ... is the command each worker node runs to join the cluster.
Next, set up the environment so the current user can run kubectl:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Test it. The NotReady status here is because the network has not been configured yet.
[root@k8s-master kubernetes]$ kubectl get node
NAME         STATUS     ROLES    AGE     VERSION
k8s-master   NotReady   master   3m25s   v1.15.3
Configure Networking
- Download the manifest
[root@k8s-master ~]$ wget https://docs.projectcalico.org/v3.8/manifests/calico.yaml
[root@k8s-master ~]$ cat kubeadm-init.yaml | grep serviceSubnet:
  serviceSubnet: 10.96.0.0/12
Open calico.yaml and change 192.168.0.0/16 to 10.96.0.0/12.
Note that the CIDR in calico.yaml and the one in kubeadm-init.yaml must stay consistent: either edit kubeadm-init.yaml before initialization, or edit calico.yaml afterwards. (Strictly speaking, the Calico pool is the pod network CIDR; in production it is safer to declare a dedicated podSubnet in kubeadm-init.yaml that does not overlap the service subnet, and point calico.yaml at that.)
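The edit can also be scripted. A sketch with sed (it assumes the pool CIDR appears uncommented in calico.yaml, as it does in the v3.8 manifest; patch_pool_cidr is a helper introduced here):

```shell
# patch_pool_cidr <file> <new-cidr>: replace Calico's default pool CIDR in <file>
patch_pool_cidr() {
  sed -i "s#192\.168\.0\.0/16#$2#" "$1"
}

# usage on the master, with the calico.yaml downloaded above:
# patch_pool_cidr calico.yaml 10.96.0.0/12
```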
Run kubectl apply -f calico.yaml to initialize the network.
Now check the node status again; the master is Ready.
[root@af1-ops-ceph-v-003 ~]# kubectl get node
NAME                 STATUS   ROLES    AGE   VERSION
af1-ops-ceph-v-003   Ready    master   15m   v1.18.2
Deploy the Dashboard
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml
[root@af1-ops-ceph-v-003 ~]# kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

[root@af1-ops-ceph-v-003 ~]# kubectl get pods --all-namespaces | grep dashboard
kubernetes-dashboard   dashboard-metrics-scraper-694557449d-kt5r8   1/1   Running   0   3m50s
kubernetes-dashboard   kubernetes-dashboard-9774cc786-qcplm         1/1   Running   0   3m50s
Create a User
Create a user for logging in to the Dashboard. Create a file dashboard-adminuser.yaml with the following content:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
Run kubectl apply -f dashboard-adminuser.yaml, then retrieve the user's token:
kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
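The describe output contains several fields besides the token. To capture just the token value for scripting, the output can be filtered; a sketch (extract_token is a helper introduced here, not part of kubectl):

```shell
# read `kubectl describe secret` output on stdin and print only the token value
extract_token() {
  awk '/^token:/{print $2}'
}

# usage on the master, reusing the lookup from the command above:
# kubectl -n kubernetes-dashboard describe secret \
#   $(kubectl -n kubernetes-dashboard get secret | awk '/admin-user/{print $1}') | extract_token
```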
Note: the reason is that kube-apiserver uses TLS authentication, while the browser on our physical machine accesses the Dashboard with no usable client certificate, so authorization fails and access is denied. The official workaround is to convert the cluster's client certificates into a format the browser can use, import them into the browser, and restart the browser.
Note: This way of accessing Dashboard is only possible if you choose to install your user certificates in the browser. For example, the certificates used by the kubeconfig file to contact the API Server can be used.
Add Worker Nodes
kubeadm join 10.2.246.153:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:5a8f2e7db491f8711cd9752d2c356032b04a28c74253fbc5816f48d765543d9f
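The bootstrap token above expires after its 24h TTL. On the master, kubeadm token create --print-join-command prints a fresh join command; the CA-cert hash can also be recomputed by hand, since it is just the SHA-256 digest of the cluster CA's public key. A sketch assuming openssl is available and the standard kubeadm PKI path (ca_hash is a helper introduced here):

```shell
# ca_hash <ca.crt>: print the sha256 digest of the certificate's public key,
# i.e. the value that follows --discovery-token-ca-cert-hash sha256:
ca_hash() {
  openssl x509 -pubkey -in "$1" \
    | openssl pkey -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* //'
}

# usage on the master:
# ca_hash /etc/kubernetes/pki/ca.crt
```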
