
1. Set the hostname on each machine

[root@localhost ~]# hostnamectl set-hostname k8s-master 

[root@localhost ~]# hostnamectl set-hostname k8s-node1 

[root@localhost ~]# hostnamectl set-hostname k8s-node2 

 

2. Edit the /etc/hosts file

[root@k8s-master ~]# echo "172.18.8.211 k8s-master 

172.18.8.212 k8s-node1 

172.18.8.210 k8s-node2" >> /etc/hosts

[root@k8s-master ~]# cat /etc/hosts
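The hosts-file step above can be made idempotent, so re-running the setup never duplicates entries. A minimal sketch (add_host is a helper name of my own; it writes to a scratch file here for illustration, so point HOSTS_FILE at /etc/hosts on the real machines):

```shell
# Sketch: add the three cluster entries idempotently. Uses a scratch file
# for demonstration; on a real machine set HOSTS_FILE=/etc/hosts.
HOSTS_FILE=$(mktemp)

add_host() {
  # append "ip name" only if the hostname is not already present
  if ! grep -qw "$2" "$HOSTS_FILE"; then
    echo "$1 $2" >> "$HOSTS_FILE"
  fi
}

add_host 172.18.8.211 k8s-master
add_host 172.18.8.212 k8s-node1
add_host 172.18.8.210 k8s-node2
add_host 172.18.8.211 k8s-master   # re-running adds nothing

cat "$HOSTS_FILE"
```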

 

3. Stop and disable the firewall

[root@k8s-master ~]# systemctl stop firewalld 

[root@k8s-master ~]# systemctl disable firewalld

 

4. Disable SELinux

[root@k8s-master ~]# sed -i 's/enforcing/disabled/' /etc/selinux/config 

[root@k8s-master ~]# cat /etc/selinux/config

 

5. Disable swap

Comment out the swap entry with a #

[root@k8s-master ~]# vi /etc/fstab

Reboot

[root@k8s-master ~]# reboot 

Check that swap is off

[root@k8s-master ~]# free -h

Check the SELinux status

[root@k8s-master ~]# getenforce 

Disabled
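The vi edit above can also be done non-interactively, which is handy when preparing several nodes. A sketch, demonstrated on a scratch copy of fstab (on a real node, also run swapoff -a for the current boot and point the sed at /etc/fstab):

```shell
# Sketch: comment out the swap mount without opening vi. Demonstrated on a
# scratch copy; the sample fstab contents below are illustrative.
FSTAB=$(mktemp)
cat > "$FSTAB" <<'EOF'
/dev/mapper/centos-root /    xfs  defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
EOF

# comment out every uncommented line that mounts a swap device
sed -i 's/^\([^#].*[[:space:]]swap[[:space:]].*\)$/#\1/' "$FSTAB"

cat "$FSTAB"
```

On a real node the sequence would be `swapoff -a` followed by the same sed against /etc/fstab, with no reboot needed just for swap.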

 

6. Configure the Docker yum repository and install docker-ce

Configure the Docker repository

[root@k8s-master ~]# yum -y install yum-utils 

[root@k8s-master ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

(If an error is reported here, run mount -o remount,rw / to remount the root filesystem read-write; that fixes the permission problem on / in one go.)

 

List the available versions

[root@k8s-master ~]# yum list docker-ce --showduplicates|grep "^doc"|sort -r 

Install

[root@k8s-master ~]# yum -y install docker-ce-18.06.1.ce-3.el7

Start and enable

[root@k8s-master ~]# systemctl start docker 

[root@k8s-master ~]# systemctl enable docker

7. Configure the Aliyun Kubernetes yum mirror

[root@k8s-master ~]# echo "[kubernetes]

name=Kubernetes 

baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64 

enabled=1 

gpgcheck=1 

repo_gpgcheck=1 

gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg 

       https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg" > /etc/yum.repos.d/kubernetes.repo

 

8. Install kubeadm, kubelet and kubectl

[root@k8s-master ~]# yum install -y kubelet-1.12.2 kubeadm-1.12.2 kubectl-1.12.2 kubernetes-cni-0.6.0 --disableexcludes=kubernetes

[root@k8s-master ~]# systemctl enable kubelet && systemctl start kubelet

 

All of the commands above must be run on the master and on each node.

 

9. Set up the master

Pull the required container images from the Aliyun mirror

[root@k8s-master ~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.12.2 

[root@k8s-master ~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.12.2 

[root@k8s-master ~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.12.2 

[root@k8s-master ~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.12.2 

[root@k8s-master ~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.2.24 

[root@k8s-master ~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 

[root@k8s-master ~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.2.2 

 

Re-tag the images to the names kubeadm expects

[root@k8s-master ~]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.12.2 k8s.gcr.io/kube-apiserver:v1.12.2 

[root@k8s-master ~]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.12.2 k8s.gcr.io/kube-controller-manager:v1.12.2  

[root@k8s-master ~]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.12.2 k8s.gcr.io/kube-scheduler:v1.12.2 

[root@k8s-master ~]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.12.2 k8s.gcr.io/kube-proxy:v1.12.2 

[root@k8s-master ~]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.2.24 k8s.gcr.io/etcd:3.2.24 

[root@k8s-master ~]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1 

[root@k8s-master ~]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.2.2 k8s.gcr.io/coredns:1.2.2 
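The seven pull/tag pairs above can be condensed into a loop. A sketch; gen_images is a helper name of my own that emits source/target image pairs matching the versions used in this guide, which you can pipe into docker on a node:

```shell
# Sketch: emit "mirror-image target-tag" pairs for every control-plane image.
# On a node, consume them with:
#   gen_images | while read src dst; do docker pull "$src" && docker tag "$src" "$dst"; done
gen_images() {
  mirror=registry.cn-hangzhou.aliyuncs.com/google_containers
  for img in kube-apiserver:v1.12.2 kube-controller-manager:v1.12.2 \
             kube-scheduler:v1.12.2 kube-proxy:v1.12.2 \
             etcd:3.2.24 pause:3.1 coredns:1.2.2; do
    echo "$mirror/$img k8s.gcr.io/$img"
  done
}

gen_images
```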

 

[root@k8s-master ~]# docker images 

 

Network settings

[root@k8s-master ~]# echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables 

[root@k8s-master ~]# echo 1 > /proc/sys/net/ipv4/ip_forward
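Note that the two echoes above only last until the next reboot. A common way to make them persistent is a sysctl.d drop-in file; a sketch using a scratch directory for illustration (use /etc/sysctl.d on a real node, then run sysctl --system to apply immediately):

```shell
# Sketch: persist the two kernel settings via a sysctl.d file instead of
# echoing into /proc on every boot. Scratch dir for demonstration only.
SYSCTL_DIR=$(mktemp -d)    # use /etc/sysctl.d on a real node
cat > "$SYSCTL_DIR/k8s.conf" <<'EOF'
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

# On a real node, apply now with: sysctl --system
cat "$SYSCTL_DIR/k8s.conf"
```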

 

Initialize the master

[root@k8s-master ~]# kubeadm init --kubernetes-version=1.12.2 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=172.18.8.211 --token-ttl 0

Save the line below; it will be needed later to join the nodes

kubeadm join 172.18.8.211:6443 --token lzg2ph.sa20hw6rfbi4t4i7 --discovery-token-ca-cert-hash sha256:c51b76a216ba6b9c9855aef96c0d5b4c31f828165d7d70c1600f468d074d7a0e

Run the following commands as prompted by the installer output

[root@k8s-master ~]# mkdir -p $HOME/.kube 

[root@k8s-master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config 

[root@k8s-master ~]# chown $(id -u):$(id -g) $HOME/.kube/config 

 

Check the kubelet configuration

[root@k8s-master ~]# cat /var/lib/kubelet/kubeadm-flags.env                                    

KUBELET_KUBEADM_ARGS=--cgroup-driver=cgroupfs --network-plugin=cni 

 

Install flannel

[root@k8s-master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml 

[root@k8s-master ~]# systemctl restart docker 

[root@k8s-master ~]# kubectl get nodes

(This takes a little while; retry a few times. If the installation fails, you can reset and start over: [root@k8s-master ~]# kubeadm reset)
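The "wait a while and retry" step above can be automated. A sketch: all_ready (a helper name of my own) succeeds only when every node's STATUS column reads Ready, demonstrated here on canned `kubectl get nodes --no-headers` output:

```shell
# Sketch: decide whether every node is Ready from kubectl's tabular output.
# Succeeds (exit 0) only when the STATUS column of every line is "Ready".
all_ready() {
  ! echo "$1" | awk '{print $2}' | grep -qv '^Ready$'
}

# Canned output standing in for `kubectl get nodes --no-headers`:
NOT_DONE="k8s-master Ready    master 5m v1.12.2
k8s-node1  NotReady <none> 1m v1.12.2"
DONE="k8s-master Ready master 6m v1.12.2
k8s-node1  Ready <none> 2m v1.12.2"

all_ready "$NOT_DONE" && echo ready || echo waiting
all_ready "$DONE"     && echo ready || echo waiting
```

On the real master you could poll with: while ! all_ready "$(kubectl get nodes --no-headers)"; do sleep 5; done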

Additional commands:

yum -y remove kubelet  (remove the installed kubelet)

yum list kubelet --showduplicates|grep "^kub"|sort -r  (list the available versions)

yum list kubeadm --showduplicates|grep "^kub"|sort -r  (list the available versions)

yum list kubectl --showduplicates|grep "^kub"|sort -r  (list the available versions)

 

 

10. Set up the nodes

Pull the required container images from the Aliyun mirror

[root@k8s-node1 ~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.12.2 

[root@k8s-node1 ~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 

Re-tag the images

[root@k8s-node1 ~]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.12.2 k8s.gcr.io/kube-proxy:v1.12.2 

[root@k8s-node1 ~]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1

Network settings

[root@k8s-node1 ~]# echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables 

[root@k8s-node1 ~]# echo 1 > /proc/sys/net/ipv4/ip_forward 

 

Join the node with the command saved earlier

[root@k8s-node1 ~]# kubeadm join 172.18.8.211:6443 --token lzg2ph.sa20hw6rfbi4t4i7 --discovery-token-ca-cert-hash sha256:c51b76a216ba6b9c9855aef96c0d5b4c31f828165d7d70c1600f468d074d7a0e

 

If you lost the command, regenerate it with:

kubeadm token create --print-join-command

 

11. Install the dashboard

Get the dashboard yaml

URL: https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended/kubernetes-dashboard.yaml

(The original URL https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml is not reachable from mainland China, so fetch the config from the address above on a machine that can reach it and upload it to the master.)

In kubernetes-dashboard.yaml, set the image line to image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0

Check the version

[root@k8s-master ~]# grep image kubernetes-dashboard.yaml

        image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0

Pull the image from the Aliyun mirror (both the master and the nodes need this image)

[root@k8s-master ~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.0 

[root@k8s-master ~]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.0 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0

 

Install the dashboard (run on the master)

Official documentation:

https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/

 

[root@k8s-master ~]# kubectl create -f kubernetes-dashboard.yaml

 

Set up access authentication

[root@k8s-master ~]# echo "admin,admin,1" > /etc/kubernetes/pki/basic_auth.csv 

[root@k8s-master ~]# vi /etc/kubernetes/manifests/kube-apiserver.yaml

Add the following flags to the kube-apiserver command:

 

    - --anonymous-auth=false 

    - --basic-auth-file=/etc/kubernetes/pki/basic_auth.csv 

 

[root@k8s-master ~]# kubectl create clusterrolebinding admin --clusterrole=cluster-admin --user=admin

[root@k8s-master ~]# kubectl create clusterrolebinding kubernetes-dashboard --clusterrole=cluster-admin --serviceaccount=kube-system:kubernetes-dashboard

 

Fix the frequent apiserver restarts caused by anonymous-auth=false

[root@k8s-master ~]# vi /etc/kubernetes/manifests/kube-apiserver.yaml

Add the insecure-port flags and point the liveness probe at the insecure port:

 

    - --insecure-bind-address=127.0.0.1

    - --insecure-port=8080

      

    livenessProbe:

      failureThreshold: 8

      httpGet:

        host: 127.0.0.1

        path: /healthz

        port: 8080

        scheme: HTTP

      initialDelaySeconds: 15

      timeoutSeconds: 15

    name: kube-apiserver

 

Access the dashboard at the address below and log in with admin/admin; on the kubeconfig screen, click Skip

https://<public-ip>:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

 

Install jq (JSON support)

yum install epel-release -y

yum install jq -y

 

Common commands

kubectl get node 

 

kubectl get pod --all-namespaces -o wide 

kubectl describe pod kube-apiserver-k8s-master --namespace=kube-system 

 

kubectl get service --namespace=kube-system 

kubectl get service --all-namespaces 

 

kubectl get apiservice 

kubectl get apiservice v2beta1.autoscaling -o yaml 

kubectl get --raw=/apis/autoscaling/v2beta1 | jq
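To show what jq adds to the commands above: it extracts just the fields you care about from kubectl's JSON. The filter below runs on a canned snippet shaped like `kubectl get pod -o json` output (the pod names are made up for illustration); the same filter works against the real command:

```shell
# Sketch: a jq filter that prints "name phase" per pod. PODS_JSON is a
# canned stand-in for `kubectl get pod -n kube-system -o json`.
PODS_JSON='{"items":[
  {"metadata":{"name":"kube-apiserver-k8s-master"},"status":{"phase":"Running"}},
  {"metadata":{"name":"coredns-576cbf47c7-abcde"},"status":{"phase":"Pending"}}]}'

echo "$PODS_JSON" | jq -r '.items[] | "\(.metadata.name) \(.status.phase)"'
```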

 

 

 

journalctl -u kubelet -n100  (check whether the kubelet process started correctly; run on the master)

kubectl get componentstatus  (check component health)

systemctl status kubelet  (check how kubelet is running)

kubectl get pod -n kube-system -o wide  (check pod status)

 

Restart the kubelet service (all nodes)

systemctl daemon-reload 

systemctl enable kubelet

 

 

 

--------------------------------------------------------------------------

 

kubectl cluster-info  (show cluster info)

 

 

--------------------------------------------------------------------------

Get the CA certificate and token from the created ServiceAccount:

kubectl get serviceaccount gitlab-runner -n gitlab -o json | jq -r '.secrets[0].name'

 

kubectl get secret gitlab-runner-token-9rqqp -n gitlab -o json | jq -r '.data["ca.crt"]' | base64 -d

 

kubectl get secret gitlab-runner-token-9rqqp  -n gitlab -o json | jq -r '.data.token' | base64 -d
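The two lookups above can be chained so the secret name (gitlab-runner-token-9rqqp here, which differs per cluster) is discovered rather than hard-coded. A sketch; decode_field is a helper name of my own, demonstrated on a canned secret, with the real-cluster usage shown in comments:

```shell
# Sketch: decode one base64-encoded field out of a secret's JSON.
decode_field() {  # $1 = secret JSON, $2 = jq path to a base64 field
  echo "$1" | jq -r "$2" | base64 -d
}

# On a real cluster (assumes the gitlab-runner ServiceAccount from above):
#   SECRET=$(kubectl get serviceaccount gitlab-runner -n gitlab -o json \
#              | jq -r '.secrets[0].name')
#   decode_field "$(kubectl get secret "$SECRET" -n gitlab -o json)" '.data.token'

# Canned secret standing in for `kubectl get secret ... -o json`:
SECRET_JSON='{"data":{"token":"bXktdG9rZW4="}}'   # "my-token", base64-encoded
decode_field "$SECRET_JSON" '.data.token'
```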

 

 

--------------------------------------------------------------------------

 

 

Scale out:

kubectl scale deployment elkwebdemo --replicas=3

 

Rolling update:

kubectl apply -f ktest3_2.yaml --record

 

The apply command returns the moment it receives the apiserver's response, but the deployment's rolling update is still in progress:

kubectl describe deployment bighome-service

 

 

kubectl get rs

Note that both the deployment create and apply commands carry a --record flag; this tells the apiserver to record the update history. kubectl rollout history shows a deployment's update history:

kubectl rollout history deployment bighome-service

 

Meanwhile, you can see that the old ReplicaSet has not been deleted:

kubectl get rs

 

All of this information is stored on the server side, which makes rollbacks easy.

 

Rolling a Deployment's Pods back is remarkably simple: rollout undo rolls the Deployment back to the previous revision in the record (see the revision column in the rollout history output above):

kubectl rollout undo deployment bighome-service --namespace=default

 

The history here keeps at most two revision records (the number of revisions retained should be configurable).

Pause a rollout

kubectl rollout pause deployment/bighome-service --namespace=default

 

Resume the rollout

kubectl rollout resume deployment/bighome-service --namespace=default

 

Roll back to a specific revision

kubectl rollout undo deployment/bighome-service --to-revision=2 --namespace=default

 

posted on 2019-07-08 17:36 by CanntBelieve