k8s

1. Installing the k8s cluster

1.1 The k8s architecture

Besides the core components, there are some recommended add-ons:
| Component | Description |
| ---- | ---- |
| kube-dns | Provides DNS for the whole cluster |
| Ingress Controller | Provides an external entry point for services |
| Heapster | Provides resource monitoring |
| Dashboard | Provides a GUI |
| Federation | Provides clusters spanning availability zones |
| Fluentd-elasticsearch | Provides cluster log collection, storage, and querying |

1.2 Set the IP addresses, hostnames, and hosts resolution

10.0.0.11  k8s-master
10.0.0.12  k8s-node-1
10.0.0.13  k8s-node-2
# every node needs these hosts entries
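A minimal sketch of scripting this on each node (it writes to a demo file here; point `HOSTS_FILE` at `/etc/hosts` on the real machines):

```shell
# Append the cluster name-resolution entries, but only once (idempotent).
HOSTS_FILE=hosts.demo        # use /etc/hosts on the real nodes
touch "$HOSTS_FILE"
add_k8s_hosts() {
  grep -q 'k8s-master' "$HOSTS_FILE" || cat >> "$HOSTS_FILE" <<'EOF'
10.0.0.11  k8s-master
10.0.0.12  k8s-node-1
10.0.0.13  k8s-node-2
EOF
}
add_k8s_hosts
add_k8s_hosts   # second run changes nothing thanks to the grep guard
```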

1.3 Install etcd on the master node

yum install etcd -y
​
vim /etc/etcd/etcd.conf
line 6:  ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
line 21: ETCD_ADVERTISE_CLIENT_URLS="http://10.0.0.11:2379"
​
systemctl start etcd.service
systemctl enable etcd.service
​
etcdctl set testdir/testkey0 0
etcdctl get testdir/testkey0
​
etcdctl -C http://10.0.0.11:2379 cluster-health

# etcd natively supports clustering

1.4 Install kubernetes on the master node

yum install kubernetes-master.x86_64 -y
​
vim /etc/kubernetes/apiserver 
line 8:  KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
line 11: KUBE_API_PORT="--port=8080"
line 14: KUBELET_PORT="--kubelet-port=10250"
line 17: KUBE_ETCD_SERVERS="--etcd-servers=http://10.0.0.11:2379"
line 23: KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
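These edits can also be applied non-interactively with sed, keyed on the variable names rather than on line numbers; a sketch against a demo copy of the file (run it on the real `/etc/kubernetes/apiserver`; it assumes GNU sed for `-i`):

```shell
# Demo stand-in for /etc/kubernetes/apiserver with the stock values
cat > apiserver.demo <<'EOF'
KUBE_API_ADDRESS="--insecure-bind-address=127.0.0.1"
KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"
EOF
# Rewrite each value in place, matching on the variable name
sed -i \
  -e 's#^KUBE_API_ADDRESS=.*#KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"#' \
  -e 's#^KUBE_ETCD_SERVERS=.*#KUBE_ETCD_SERVERS="--etcd-servers=http://10.0.0.11:2379"#' \
  apiserver.demo
```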
​
vim /etc/kubernetes/config
line 22: KUBE_MASTER="--master=http://10.0.0.11:8080"
​
systemctl enable kube-apiserver.service
systemctl restart kube-apiserver.service
systemctl enable kube-controller-manager.service
systemctl restart kube-controller-manager.service
systemctl enable kube-scheduler.service
systemctl restart kube-scheduler.service

Check that the services came up correctly:

[root@k8s-master ~]# kubectl get componentstatus 
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-0               Healthy   {"health":"true"} 

1.5 Install kubernetes on the worker nodes

yum install kubernetes-node.x86_64 -y
​
vim /etc/kubernetes/config 
line 22: KUBE_MASTER="--master=http://10.0.0.11:8080"
​
vim /etc/kubernetes/kubelet
line 5:  KUBELET_ADDRESS="--address=0.0.0.0"
line 8:  KUBELET_PORT="--port=10250"
line 11: KUBELET_HOSTNAME="--hostname-override=10.0.0.12"   # use 10.0.0.13 on k8s-node-2
line 14: KUBELET_API_SERVER="--api-servers=http://10.0.0.11:8080"
​
systemctl enable kubelet.service
systemctl restart kubelet.service
systemctl enable kube-proxy.service
systemctl restart kube-proxy.service

Check from the master node:

[root@k8s-master ~]# kubectl get nodes
NAME        STATUS    AGE
10.0.0.12   Ready     6m
10.0.0.13   Ready     3s

1.6 Configure the flannel network on all nodes

yum install flannel -y
sed -i 's#http://127.0.0.1:2379#http://10.0.0.11:2379#g' /etc/sysconfig/flanneld
​
## master node:
etcdctl mk /atomic.io/network/config   '{ "Network": "172.18.0.0/16" }'
​
yum install docker -y
systemctl enable flanneld.service 
systemctl restart flanneld.service 
systemctl  restart  docker
systemctl  enable  docker
systemctl restart kube-apiserver.service
systemctl restart kube-controller-manager.service
systemctl restart kube-scheduler.service
​
## worker nodes:
systemctl enable flanneld.service 
systemctl restart flanneld.service 
systemctl  restart  docker
systemctl restart kubelet.service
systemctl restart kube-proxy.service
​
vim /usr/lib/systemd/system/docker.service
# add this line under the [Service] section
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT
systemctl daemon-reload 
systemctl restart docker

Flannel's three modes

1. UDP
Packets are shuttled between the kernel and a user-space process; worst performance.

2. host-gw
Routes between hosts via host routing tables, like a router; best performance.

3. vxlan (most common)
Carries packets between containers through a virtual network "tunnel"; good performance.

1.7 Configure the master as an image registry

# all nodes
​
vi /etc/docker/daemon.json
{
  "registry-mirrors": ["https://registry.docker-cn.com"],
  "insecure-registries": ["10.0.0.11:5000"]
}
​
systemctl restart docker
​
# master node
docker run -d -p 5000:5000 --restart=always --name registry -v /opt/myregistry:/var/lib/registry  registry
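An invalid `/etc/docker/daemon.json` keeps docker from starting at all, so it is worth validating the file before the restart; a sketch against a demo copy (assumes `python3` is available for the JSON check):

```shell
# Demo copy of the daemon.json from above
cat > daemon.demo.json <<'EOF'
{
  "registry-mirrors": ["https://registry.docker-cn.com"],
  "insecure-registries": ["10.0.0.11:5000"]
}
EOF
# json.tool exits non-zero on malformed JSON, so this guards the restart
python3 -m json.tool daemon.demo.json > /dev/null && echo "daemon.json OK"
```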

2. What is k8s, and what can it do?

k8s is a management tool for docker clusters.
k8s is a container orchestration tool.

2.1 Core features of k8s

Self-healing: restarts failed containers, replaces and reschedules containers when a node becomes unavailable, kills containers that fail a user-defined health check, and does not advertise a container to clients until it is ready to serve.

Elastic scaling: monitors container CPU load; if the average rises above 80% it adds containers, and if it falls below 10% it removes them.

Service discovery and load balancing (svc): no need to modify your application to use an unfamiliar service-discovery mechanism; Kubernetes gives containers their own IP addresses and a single DNS name for a set of containers, and load-balances across them.

Rolling upgrades and one-click rollback: Kubernetes rolls out changes to an application or its configuration gradually while monitoring application health, ensuring it never kills all instances at once. If something goes wrong, Kubernetes rolls the change back for you, leveraging a growing ecosystem of deployment solutions.

Secret and configuration management: e.g. the database credentials (a test DB password) used inside a web container.

2.2 History of k8s

2014: the docker container-orchestration project was started.

July 2015: kubernetes 1.0 released; joined the CNCF as an incubating project.

2016: kubernetes saw off its two rivals, Docker Swarm and Mesos Marathon; version 1.2.

2017: versions 1.5 to 1.9.

2018: k8s became a CNCF graduated project; versions 1.10, 1.11, 1.12.

2019: versions 1.13, 1.14, 1.15, 1.16, 1.17.

CNCF: the Cloud Native Computing Foundation, an incubator.

kubernetes (k8s): Greek for helmsman or pilot; the leader in container orchestration.

Google drew on 15 years of container experience with its Borg management platform; kubernetes is a rewrite of Borg in golang.

2.3 Ways to install k8s

yum install: version 1.5; easiest to get working, best for learning.

Compile from source: hardest; gets you the latest version.

Binary install: tedious steps; latest version; automate with shell, ansible, or saltstack.

kubeadm: easiest install; needs network access; latest version.

minikube: for developers to try out k8s; needs network access.

2.4 Use cases for k8s

k8s is best suited to running microservice projects.

3. Common k8s resources

3.1 Creating a pod resource

A pod is the smallest resource unit.

Any k8s resource can be defined by a yml manifest file.

What is a pod: a group of containers that share one IP address. A pod has at least 2 containers: one infrastructure (pause) container plus one business container.

Main parts of a k8s yaml:

apiVersion: v1  # api version
kind: Pod       # resource type
metadata:       # attributes
spec:           # details
# preparation

Upload the nginx image:
docker load -i docker_nginx1.13.tar.gz 
docker images 
docker tag docker.io/nginx:1.13 10.0.0.11:5000/nginx:1.13
docker push 10.0.0.11:5000/nginx:1.13

Upload the pod-infrastructure image:
docker load -i pod-infrastructure-latest.tar.gz 
docker images 
docker tag docker.io/tianyebj/pod-infrastructure:latest 10.0.0.11:5000/pod-infrastructure:latest
docker push 10.0.0.11:5000/pod-infrastructure:latest

cat /etc/kubernetes/kubelet
# edit line 17
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=10.0.0.11:5000/pod-infrastructure:latest"

# restart the kubelet service on the worker nodes
systemctl restart kubelet.service



Without the steps above you will get this error: failed to "StartContainer" for "POD" with ErrImagePull: "image pull failed for registry.access.redhat.com/rhel7/pod-infrastructure:latest, this may be because there are no credentials on this request.  details: (open /etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt: no such file or directory)"
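The line-17 change can be scripted on every node with sed instead of editing by hand; a sketch on a demo copy of the file (assumes GNU sed for `-i`):

```shell
# Demo stand-in for /etc/kubernetes/kubelet with the default pod-infra image
cat > kubelet.demo <<'EOF'
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
EOF
# Point the pause/infra image at the private registry instead
sed -i 's#^KUBELET_POD_INFRA_CONTAINER=.*#KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=10.0.0.11:5000/pod-infrastructure:latest"#' kubelet.demo
```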

Example pod yaml:

apiVersion: v1
kind: Pod
metadata:
  name: test
  labels:
    app: web
spec:
  containers:
    - name: nginx
      image: 10.0.0.11:5000/nginx:1.13
      ports:
        - containerPort: 80
    - name: alpine
      image: 10.0.0.11:5000/alpine:latest
      command: ["sleep","1000"]

3.2 The ReplicationController (rc) resource

rc: keeps the specified number of pods alive at all times; an rc is associated with its pods through a label selector.

Common operations on k8s resources:
kubectl   create  -f   xxx.yaml
kubectl   get  pod  |   kubectl   get  pod -o wide  |  kubectl get pod --show-labels 
kubectl  describe  pod  nginx
kubectl  delete   pod  nginx   # or: kubectl delete -f xxx.yaml
kubectl  edit  pod   nginx

Create an rc:

apiVersion: v1    # version
kind: ReplicationController    # resource type
metadata:      # metadata
  name: nginx
spec:      # details
  replicas: 5  # 5 replicas
  selector:    # label selector
    app: myweb
  template:  # template
    metadata:
      labels:
        app: myweb
    spec:
      containers:    # containers
      - name: myweb
        image: 10.0.0.11:5000/nginx:1.13
        ports:
        - containerPort: 80
The same rc with the new image version (save it as, e.g., nginx-rc1.15.yaml):

apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx2
spec:
  replicas: 5  # 5 replicas
  selector:
    app: myweb2
  template:  # template
    metadata:
      labels:
        app: myweb2
    spec:
      containers:
      - name: myweb2
        image: 10.0.0.11:5000/nginx:1.15
        ports:
        - containerPort: 80

Upgrade:   kubectl rolling-update nginx -f nginx-rc1.15.yaml --update-period=3s
Roll back: kubectl rolling-update nginx2 -f nginx-rc.yaml --update-period=1s

3.3 The service (svc) resource

A service exposes a pod's ports.

Create a service:

apiVersion: v1
kind: Service   # svc for short
metadata:
  name: myweb
spec:
  type: NodePort  # defaults to ClusterIP
  ports:
    - port: 80          # cluster IP port
      nodePort: 30000   # node port
      targetPort: 80    # pod port
  selector:
    app: myweb2

# nodePort      the port mapped on the host
# targetPort    the port exposed by the pod

# type:
NodePort: lets external users reach a service inside the cluster
ClusterIP: the default; unreachable from outside, only for access within the cluster
kubectl scale rc pod_name --replicas=2   # set the rc's replica count to 2
kubectl exec -it pod_name /bin/bash      # enter a pod's container

Change the nodePort range:
vim  /etc/kubernetes/apiserver
KUBE_API_ARGS="--service-node-port-range=3000-50000"
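The nodePorts used later in this document (30000, 30008, 30009) must fall inside this range; a small sketch of the bounds check:

```shell
# Bounds from --service-node-port-range=3000-50000 above
RANGE_MIN=3000
RANGE_MAX=50000
check_nodeport() {
  if [ "$1" -ge "$RANGE_MIN" ] && [ "$1" -le "$RANGE_MAX" ]; then
    echo "nodePort $1: ok"
  else
    echo "nodePort $1: outside the range"
  fi
}
check_nodeport 30000
check_nodeport 2999
```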

View rcs:
kubectl get rc -n namespace   # without -n, the namespace defaults to default

Create a service resource from the command line:
kubectl expose rc rc_name --type=NodePort --port=80

View svcs:
kubectl get svc

View an svc in detail:
kubectl describe svc svc_name

By default a service load-balances with iptables; from the newer k8s 1.8 on, LVS (layer-4 load balancing of TCP/UDP at the transport layer) is the recommended replacement.

# service discovery and load balancing
# an svc is associated with pods through its label selector

3.4 The deployment resource

Because a rolling upgrade with an rc interrupts access to the service, k8s introduced the deployment resource.
Create a deployment:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  strategy:   
    rollingUpdate:
      maxSurge: 1  
      maxUnavailable: 1 
    type: RollingUpdate
  minReadySeconds: 30
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: 10.0.0.11:5000/nginx:1.13
        ports:
        - containerPort: 80
        resources:  
          limits:
            cpu: 100m
          requests:
            cpu: 100m
Create a deployment from the command line:
kubectl run nginx --image=10.0.0.11:5000/nginx:1.13 --replicas=3 --record
PS: without --record, the deployment's history shows <none>, which makes revisions hard to review.

Upgrade the image from the command line (the same command can also roll back to an earlier image):
kubectl set image deployment nginx nginx=10.0.0.11:5000/nginx:1.15

View all historical revisions of the deployment:
kubectl rollout history deployment nginx

Roll the deployment back to the previous revision:
kubectl rollout undo deployment nginx   # not recommended: after the rollback, that revision's history shows <none>

Roll the deployment back to a specific revision:
kubectl rollout undo deployment nginx --to-revision=2   # same caveat: the rolled-back revision's history shows <none>

What is the difference between deployment and rc?
An rc's rolling upgrade depends on a config file.
A change to an rc's pod template only takes effect the next time its pods are started.
A change to a deployment's pod template takes effect immediately.
A deployment controls its pods through an rs (ReplicaSet, the upgraded rc: an rs supports multiple labels, an rc only one).
PS: when an rc-managed pod is upgraded by editing the pod config and restarting the pods, the label selector changes and the service briefly stops matching the pods; to make the service reachable again you must update the svc's label selector. A deployment supports multiple labels, and template edits take effect immediately.

3.5 Exercises

tomcat+mysql

[root@k8s-master tomcat_demo]# cat mysql-rc.yml 
apiVersion: v1
kind: ReplicationController
metadata:
  namespace: tomcat
  name: mysql
spec:
  replicas: 1
  selector:
    app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: 10.0.0.11:5000/mysql:5.7
          ports:
          - containerPort: 3306
          env:
          - name: MYSQL_ROOT_PASSWORD
            value: '123456'


[root@k8s-master tomcat_demo]# cat mysql-svc.yml 
apiVersion: v1
kind: Service
metadata:
  namespace: tomcat
  name: mysql
spec:
  ports:
    - port: 3306
      targetPort: 3306
  selector:
    app: mysql


[root@k8s-master tomcat_demo]# cat tomcat-rc.yml 
apiVersion: v1
kind: ReplicationController
metadata:
  namespace: tomcat
  name: myweb
spec:
  replicas: 1
  selector:
    app: myweb
  template:
    metadata:
      labels:
        app: myweb
    spec:
      containers:
        - name: myweb
          image: 10.0.0.11:5000/tomcat-app:v2
          ports:
          - containerPort: 8080
          env:
          - name: MYSQL_SERVICE_HOST
            value: 'mysql'
          - name: MYSQL_SERVICE_PORT
            value: '3306'



[root@k8s-master tomcat_demo]# cat tomcat-svc.yml 
apiVersion: v1
kind: Service
metadata:
  namespace: tomcat
  name: myweb
spec:
  type: NodePort
  ports:
    - port: 8080
      nodePort: 30008
  selector:
    app: myweb

wordpress+mysql

[root@k8s-master wp]# cat mysql-rc.yml 
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  namespace: wordpress
  name: mysql-wp
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mysql-wp
    spec:
      containers:
        - name: mysql-wp
          image: 10.0.0.11:5000/mysql:5.7
          ports:
          - containerPort: 3306
          env:
          - name: MYSQL_ROOT_PASSWORD
            value: somewordpress
          - name: MYSQL_DATABASE
            value: wordpress
          - name: MYSQL_USER
            value: wordpress
          - name: MYSQL_PASSWORD
            value: wordpress


[root@k8s-master wp]# cat mysql-svc.yml 
apiVersion: v1
kind: Service
metadata:
  namespace: wordpress
  name: mysql-wp
spec:
  ports:
    - port: 3306
      targetPort: 3306
  selector:
    app: mysql-wp




[root@k8s-master wp]# cat wp-rc.yaml 
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  namespace: wordpress
  name: myweb-wp
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: myweb-wp
    spec:
      containers:
        - name: myweb-wp
          image: 10.0.0.11:5000/wordpress:latest
          ports:
          - containerPort: 80
          env:
          - name: WORDPRESS_DB_HOST
            value: 10.254.106.250
          - name: WORDPRESS_DB_USER
            value: wordpress
          - name: WORDPRESS_DB_PASSWORD
            value: wordpress


[root@k8s-master wp]# cat wp-svc.yaml 
apiVersion: v1
kind: Service
metadata:
  namespace: wordpress
  name: myweb-wp
spec:
  type: NodePort
  ports:
    - port: 80
      nodePort: 30009
  selector:
    app: myweb-wp

PS: services can also be created non-interactively from the command line.

4. k8s add-on components

The DNS service in a k8s cluster resolves svc names into their VIP addresses.

4.1 The DNS service

How DNS works: first point each node's kubelet config at the DNS service's VIP and restart kubelet. When a pod on a node needs a name resolved, it queries the DNS service; the DNS service asks the master's api-server for the svc matching the requested name, gets that svc's information back, and returns the VIP to the pod.
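Inside the cluster an svc name expands to `<svc>.<namespace>.svc.<cluster-domain>`; a sketch of the name a pod actually resolves (the cluster domain matches the `--domain=cluster.local.` argument in the skydns manifest below):

```shell
CLUSTER_DOMAIN=cluster.local    # must match the kube-dns --domain argument
svc_fqdn() {
  # $1 = service name, $2 = namespace
  echo "$1.$2.svc.${CLUSTER_DOMAIN}"
}
# The tomcat pod's MYSQL_SERVICE_HOST=mysql ultimately resolves this name:
svc_fqdn mysql tomcat
```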

1. Download the images

2. Load the dns_docker image bundle (on node2, 10.0.0.13)

3. Create the DNS service
[root@k8s-master dns]# cat skydns-svc.yaml 
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# TODO - At some point, we need to rename all skydns-*.yaml.* files to kubedns-*.yaml.*

# Warning: This is a file generated from the base underscore template file: skydns-svc.yaml.base

apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.254.230.254
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP

[root@k8s-master dns]# cat skydns.yaml 
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# TODO - At some point, we need to rename all skydns-*.yaml.* files to kubedns-*.yaml.*
# Should keep target in cluster/addons/dns-horizontal-autoscaler/dns-horizontal-autoscaler.yaml
# in sync with this file.

# __MACHINE_GENERATED_WARNING__

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
spec:
  replicas: 1
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    rollingUpdate:
      maxSurge: 10%
      maxUnavailable: 0
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
    spec:
      nodeName: 10.0.0.13
      containers:
      - name: kubedns
        image: gcr.io/google_containers/kubedns-amd64:1.9
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        livenessProbe:
          httpGet:
            path: /healthz-kubedns
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          # we poll on pod startup for the Kubernetes master service and
          # only setup the /readiness HTTP server once that's available.
          initialDelaySeconds: 3
          timeoutSeconds: 5
        args:
        - --domain=cluster.local.
        - --dns-port=10053
        - --config-map=kube-dns
        - --kube-master-url=http://10.0.0.11:8080
        # This should be set to v=2 only after the new image (cut from 1.5) has
        # been released, otherwise we will flood the logs.
        - --v=0
        #__PILLAR__FEDERATIONS__DOMAIN__MAP__
        env:
        - name: PROMETHEUS_PORT
          value: "10055"
        ports:
        - containerPort: 10053
          name: dns-local
          protocol: UDP
        - containerPort: 10053
          name: dns-tcp-local
          protocol: TCP
        - containerPort: 10055
          name: metrics
          protocol: TCP
      - name: dnsmasq
        image: gcr.io/google_containers/kube-dnsmasq-amd64:1.4
        livenessProbe:
          httpGet:
            path: /healthz-dnsmasq
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - --cache-size=1000
        - --no-resolv
        - --server=127.0.0.1#10053
        #- --log-facility=-
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        # see: https://github.com/kubernetes/kubernetes/issues/29055 for details
        resources:
          requests:
            cpu: 150m
            memory: 10Mi
      - name: dnsmasq-metrics
        image: gcr.io/google_containers/dnsmasq-metrics-amd64:1.0
        livenessProbe:
          httpGet:
            path: /metrics
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - --v=2
        - --logtostderr
        ports:
        - containerPort: 10054
          name: metrics
          protocol: TCP
        resources:
          requests:
            memory: 10Mi
      - name: healthz
        image: gcr.io/google_containers/exechealthz-amd64:1.2
        resources:
          limits:
            memory: 50Mi
          requests:
            cpu: 10m
            # Note that this container shouldn't really need 50Mi of memory. The
            # limits are set higher than expected pending investigation on #29688.
            # The extra memory was stolen from the kubedns container to keep the
            # net memory requested by the pod constant.
            memory: 50Mi
        args:
        - --cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null
        - --url=/healthz-dnsmasq
        - --cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1:10053 >/dev/null
        - --url=/healthz-kubedns
        - --port=8080
        - --quiet
        ports:
        - containerPort: 8080
          protocol: TCP
      dnsPolicy: Default  # Don't use cluster DNS.

# Adjust nodeName and the api-server IP in the --kube-master-url=http://10.0.0.11:8080 line for your environment
​
kubectl  create  -f   skydns-rc.yaml
kubectl create -f skydns-svc.yaml


4. Verify:
kubectl get all --namespace=kube-system

5. Edit the kubelet config on every worker node:
vim  /etc/kubernetes/kubelet
KUBELET_ARGS="--cluster_dns=10.254.230.254 --cluster_domain=cluster.local"
​
systemctl   restart kubelet

6. Edit tomcat-rc.yml and wordpress.yaml:
          env:
          - name: MYSQL_SERVICE_HOST
            value: 'mysql'   # this value used to be the VIP


kubectl delete -f .
kubectl create -f .

7. Verify

4.2 Namespaces

# Namespaces prevent name clashes and speed up queries (resource names may repeat across namespaces; label selectors cannot cross namespaces)

Create a namespace:
kubectl create namespace  namespace_name

List namespaces:
kubectl get namespace

List the pods in a namespace:
kubectl get pod -n namespace_name  |  kubectl get pod --namespace=tomcat

4.3 Health checks and availability

4.3.1 Probe types

livenessProbe: health check; periodically checks whether the service is alive, and restarts the container on failure.

readinessProbe: availability check; periodically checks whether the service is usable, and removes the pod from the service's endpoints when it is not.

4.3.2 Probe methods

exec: run a command; exit status 0 passes, non-zero fails
httpGet: check an http request's status code; 2xx/3xx pass, 4xx/5xx fail   # most common
tcpSocket: test whether a port accepts connections
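For the exec method, "pass" simply means the command exited 0, which is easy to reproduce outside the cluster; a sketch mirroring the `cat /tmp/healthy` liveness example in 4.3.3:

```shell
# A probe passes iff the command's exit status is 0 (the livenessProbe exec rule).
probe() { "$@" > /dev/null 2>&1 && echo healthy || echo unhealthy; }

touch healthy.flag
probe cat healthy.flag      # file exists, cat exits 0 -> healthy
rm -f healthy.flag
probe cat healthy.flag      # file gone, cat exits non-zero -> unhealthy
```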

4.3.3 A liveness probe using exec

vi  nginx_pod_exec.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: exec
spec:
  containers:
    - name: nginx
      image: 10.0.0.11:5000/nginx:1.13
      ports:
        - containerPort: 80
      args:
        - /bin/sh
        - -c
        - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
      livenessProbe:
        exec:
          command:
            - cat
            - /tmp/healthy
        initialDelaySeconds: 5   
        periodSeconds: 5
        timeoutSeconds: 5
        successThreshold: 1
        failureThreshold: 1

initialDelaySeconds: 5   # delay before the first probe; e.g. tomcat needs startup time, so probing waits this long after the pod starts
periodSeconds: 5         # probe interval

4.3.4 A liveness probe using httpGet

vi   nginx_pod_httpGet.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: httpget
spec:
  containers:
    - name: nginx
      image: 10.0.0.11:5000/nginx:1.13
      ports:
        - containerPort: 80
      livenessProbe:
        httpGet:
          path: /index.html
          port: 80
        initialDelaySeconds: 3
        periodSeconds: 3

4.3.5 A liveness probe using tcpSocket

vi   nginx_pod_tcpSocket.yaml
apiVersion: v1
kind: Pod
metadata:
  name: tcpsocket   # pod names must be lowercase
spec:
  containers:
    - name: nginx
      image: 10.0.0.11:5000/nginx:1.13
      ports:
        - containerPort: 80
      args:
        - /bin/sh
        - -c
        - tail -f /etc/hosts
      livenessProbe:
        tcpSocket:
          port: 80
        initialDelaySeconds: 10
        periodSeconds: 3

4.3.6 A readiness probe using httpGet

vi   nginx-rc-httpGet.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: readiness
spec:
  replicas: 2
  selector:
    app: readiness
  template:
    metadata:
      labels:
        app: readiness
    spec:
      containers:
      - name: readiness
        image: 10.0.0.11:5000/nginx:1.13
        ports:
        - containerPort: 80
        readinessProbe:
          httpGet:
            path: /qiangge.html
            port: 80
          initialDelaySeconds: 3
          periodSeconds: 3

4.4 The dashboard service

1: Upload and load the images, then tag them

2: Create the dashboard deployment and service

3: Browse to http://10.0.0.11:8080/ui/


[root@k8s-master dashbord]# cat dashboard.yaml 
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
# Keep the name in sync with image version and
# gce/coreos/kube-manifests/addons/dashboard counterparts
  name: kubernetes-dashboard-latest
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
        version: latest
        kubernetes.io/cluster-service: "true"
    spec:
      nodeName: 10.0.0.13
      containers:
      - name: kubernetes-dashboard
        image: 10.0.0.11:5000/kubernetes-dashboard-amd64:v1.4.1
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
            memory: 50Mi
          requests:
            cpu: 100m
            memory: 50Mi
        ports:
        - containerPort: 9090
        args:
         -  --apiserver-host=http://10.0.0.11:8080
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30


[root@k8s-master dashbord]# cat dashboard-svc.yaml 
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 80
    targetPort: 9090

4.5 Accessing a service through the api-server reverse proxy

# rarely used, effectively never

First: the NodePort type
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30008
​
Second: the ClusterIP type
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 80
      
http://10.0.0.11:8080/api/v1/proxy/namespaces/<namespace>/services/<service name>/
# example:
http://10.0.0.11:8080/api/v1/proxy/namespaces/qiangge/services/wordpress

Addendum: daemon set

A daemonset is suited to monitoring-type pods: it runs exactly one pod on every node. A deployment is unsuitable for that job because its pods land on arbitrary nodes.


imagePullPolicy:
Always:       always pull the image from the registry (this is the default when the tag is latest)
Never:        never pull from the registry; only look for the image locally
IfNotPresent: pull from the registry only if the image is not present locally

5. k8s autoscaling

Autoscaling in k8s requires the heapster monitoring add-on (third-party monitoring software such as Prometheus can be used instead).

Diagram: (not included)

How it works: heapster collects the data by sending requests to the api-server; the api-server sends instructions to the kubelet on each node, the kubelets return their data to the api-server, which hands it back to heapster; heapster stores the data in an influxdb database, and grafana graphs it.

5.1 Installing heapster monitoring

1: Upload and load the images, then tag them

ls *.tar.gz
for n in `ls *.tar.gz`;do docker load -i $n ;done
docker tag docker.io/kubernetes/heapster_grafana:v2.6.0 10.0.0.11:5000/heapster_grafana:v2.6.0
docker tag  docker.io/kubernetes/heapster_influxdb:v0.5 10.0.0.11:5000/heapster_influxdb:v0.5
docker tag docker.io/kubernetes/heapster:canary 10.0.0.11:5000/heapster:canary
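All three tag commands follow one pattern: keep `name:tag`, swap in the private registry prefix. A sketch that generates them (it only prints the commands):

```shell
REGISTRY=10.0.0.11:5000
for src in docker.io/kubernetes/heapster_grafana:v2.6.0 \
           docker.io/kubernetes/heapster_influxdb:v0.5 \
           docker.io/kubernetes/heapster:canary; do
  dst="${REGISTRY}/${src##*/}"     # ${src##*/} strips everything up to the last '/'
  echo "docker tag ${src} ${dst}"
done > tag_cmds.txt
cat tag_cmds.txt
```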

2: Upload the config files and run kubectl create -f .

Edit the config files first:
#heapster-controller.yaml
    spec:
      nodeName: 10.0.0.13
      containers:
      - name: heapster
        image: 10.0.0.11:5000/heapster:canary
        imagePullPolicy: IfNotPresent
#influxdb-grafana-controller.yaml
    spec:
      nodeName: 10.0.0.13
      containers:

3: Open the dashboard to verify

5.2 Autoscaling

1: Edit the rc's config file

  containers:
  - name: myweb
    image: 10.0.0.11:5000/nginx:1.13
    ports:
    - containerPort: 80
    resources:
      limits:
        cpu: 100m
      requests:
        cpu: 100m

2: Create the autoscaling rule

kubectl  autoscale  deploy  nginx-deployment  --max=8  --min=1 --cpu-percent=5

3: Test

 ab -n 1000000 -c 40  http://10.0.0.12:33218/index.html
 Watch in the dashboard: the pods scale out under load and scale back in afterwards.

6. Persistent storage

6.1 emptyDir:

    spec:
      nodeName: 10.0.0.13
      volumes:
      - name: mysql
        emptyDir: {}
      containers:
        - name: wp-mysql
          image: 10.0.0.11:5000/mysql:5.7
          imagePullPolicy: IfNotPresent
          ports:
          - containerPort: 3306
          volumeMounts:
          - mountPath: /var/lib/mysql
            name: mysql

6.2 HostPath:

    spec:
      nodeName: 10.0.0.12
      volumes:
      - name: mysql
        hostPath:
          path: /data/wp_mysql
      containers:
        - name: wp-mysql
          image: 10.0.0.11:5000/mysql:5.7
          imagePullPolicy: IfNotPresent
          ports:
          - containerPort: 3306
          volumeMounts:
          - mountPath: /var/lib/mysql
            name: mysql

6.3 nfs:

      volumes:
      - name: mysql
        nfs:
          path: /data/wp_mysql
          server: 10.0.0.11

6.4 pv and pvc:

pv: persistent volume, a global resource scoped to the whole k8s cluster

pvc: persistent volume claim, a local resource that belongs to one namespace

Implementation:
6.4.1: Install the NFS server (10.0.0.11)

yum install nfs-utils.x86_64 -y
mkdir /data
vim /etc/exports
/data  10.0.0.0/24(rw,async,no_root_squash,no_all_squash)
systemctl start rpcbind
systemctl start nfs

6.4.2: Install the NFS client on the worker nodes

yum install nfs-utils.x86_64 -y
showmount -e 10.0.0.11

6.4.3: Create the pv and pvc

Upload the yaml manifests and create the pv and pvc

6.4.4: Create mysql-rc, using the volume in the pod template

      volumes:
      - name: mysql
        persistentVolumeClaim:
          claimName: tomcat-mysql

6.4.5: Verify persistence

Method 1: delete the mysql pod; the database survives

kubectl delete pod mysql-gt054

Method 2: check the NFS server for mysql's data files

7. Continuous deployment to k8s with jenkins

| IP | Service | Memory |
| ---- | ---- | ---- |
| 10.0.0.11 | kube-apiserver :8080 | 1G |
| 10.0.0.12 | kube-apiserver :8080 | 1G |
| 10.0.0.13 | jenkins (tomcat + jdk) :8080 | 3G |

The code repository is hosted with gitee (the steps below self-host gitlab instead).

7.1: Install gitlab and push the code

# a: install
wget https://mirrors.tuna.tsinghua.edu.cn/gitlab-ce/yum/el7/gitlab-ce-11.9.11-ce.0.el7.x86_64.rpm
yum localinstall gitlab-ce-11.9.11-ce.0.el7.x86_64.rpm -y
# b: configure
vim /etc/gitlab/gitlab.rb
external_url 'http://10.0.0.13'
prometheus_monitoring['enable'] = false
# c: apply the config and start the service
gitlab-ctl reconfigure
​
# Browse to http://10.0.0.13, set the root password, and create a project
​
# Push the code to the git repository
cd /srv/
rz -E
unzip xiaoniaofeifei.zip 
rm -fr xiaoniaofeifei.zip 
​
git config --global user.name "Administrator"
git config --global user.email "admin@example.com"
git init
git remote add origin http://10.0.0.13/root/xiaoniao.git
git add .
git commit -m "Initial commit"
git push -u origin master

7.2 Install jenkins and build docker images automatically

1: Install jenkins

cd /opt/
wget   http://192.168.12.201/191216/apache-tomcat-8.0.27.tar.gz 
wget   http://192.168.12.201/191216/jdk-8u102-linux-x64.rpm     
wget   http://192.168.12.201/191216/jenkin-data.tar.gz       
wget   http://192.168.12.201/191216/jenkins.war                       
rpm -ivh jdk-8u102-linux-x64.rpm 
mkdir /app -p
tar xf apache-tomcat-8.0.27.tar.gz -C /app
rm -fr /app/apache-tomcat-8.0.27/webapps/*
mv jenkins.war /app/apache-tomcat-8.0.27/webapps/ROOT.war
tar xf jenkin-data.tar.gz -C /root
/app/apache-tomcat-8.0.27/bin/startup.sh 
netstat -lntup

2: Access jenkins

Browse to http://10.0.0.12:8080/; the default credentials are admin:123456

3: Set up a jenkins credential for pulling code from gitlab

a: generate a key pair on the jenkins host

ssh-keygen -t rsa

b: paste the public key into gitlab

c: create a global credential in jenkins

4: Pull the code as a test

5: Write a dockerfile and test it

# vim dockerfile
FROM 10.0.0.11:5000/nginx:1.13
ADD .  /usr/share/nginx/html

List the files docker build should not ADD:

vim  .dockerignore
dockerfile

docker build -t xiaoniao:v1 .
docker run -d -p 88:80 xiaoniao:v1

Open a browser and check that the xiaoniaofeifei project is served.

6: Push the dockerfile and .dockerignore to the private repository

git add dockerfile .dockerignore
git commit -m "first commit"
git push -u origin master

7: Click 'Build Now' in jenkins; it builds the docker image and pushes it to the private registry automatically

docker  build  -t  10.0.0.11:5000/test:v$BUILD_ID  .
docker  push 10.0.0.11:5000/test:v$BUILD_ID
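Jenkins injects `BUILD_ID` into each build's environment, which is what versions the tag; a sketch with a hypothetical build number:

```shell
BUILD_ID=7                      # hypothetical value; jenkins sets this per build
IMAGE="10.0.0.11:5000/test:v${BUILD_ID}"
echo "docker build -t ${IMAGE} ."
echo "docker push ${IMAGE}"
```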

**jenkins auto-deploys the application to k8s**
kubectl -s 10.0.0.11:8080 get nodes

if [ -f /tmp/xiaoniao.lock ];then
    docker  build  -t  10.0.0.11:5000/xiaoniao:v$BUILD_ID  .
    docker  push 10.0.0.11:5000/xiaoniao:v$BUILD_ID
    kubectl -s 10.0.0.11:8080 set image  -n xiaoniao deploy xiaoniao xiaoniao=10.0.0.11:5000/xiaoniao:v$BUILD_ID
    port=`kubectl -s 10.0.0.11:8080  get svc -n xiaoniao|grep -oP '(?<=80:)\d+'`
    echo "Your project is available at http://10.0.0.13:$port"
    echo "Update succeeded"
else
    docker  build  -t  10.0.0.11:5000/xiaoniao:v$BUILD_ID  .
    docker  push 10.0.0.11:5000/xiaoniao:v$BUILD_ID
    kubectl  -s 10.0.0.11:8080  create  namespace  xiaoniao
    kubectl  -s 10.0.0.11:8080  run   xiaoniao  -n xiaoniao  --image=10.0.0.11:5000/xiaoniao:v$BUILD_ID --replicas=3 --record
    kubectl  -s 10.0.0.11:8080   expose -n xiaoniao deployment xiaoniao --port=80 --type=NodePort
    port=`kubectl -s 10.0.0.11:8080  get svc -n xiaoniao|grep -oP '(?<=80:)\d+'`
    echo "Your project is available at http://10.0.0.13:$port"
    echo "Deployed successfully"
    touch /tmp/xiaoniao.lock
    chattr +i /tmp/xiaoniao.lock
fi
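The `grep -oP '(?<=80:)\d+'` in the script digs the random NodePort out of the `PORT(S)` column of `kubectl get svc`; here is the extraction in isolation, on a made-up sample row (assumes GNU grep, which the script already relies on for `-P`):

```shell
# Sample kubectl-get-svc row; "80:30812/TCP" maps port 80 to NodePort 30812
svc_line='xiaoniao   10.254.61.12   <nodes>   80:30812/TCP   5m'
port=$(echo "$svc_line" | grep -oP '(?<=80:)\d+')
echo "$port"
```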

jenkins one-click rollback

kubectl  -s 10.0.0.11:8080 rollout undo -n xiaoniao  deployment xiaoniao

8. k8s high availability

8.1: Install and configure an etcd HA cluster

# install etcd on all nodes
yum install  etcd  -y
vim /etc/etcd/etcd.conf
line 3:  ETCD_DATA_DIR="/var/lib/etcd/"
line 5:  ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
line 6:  ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
line 9:  ETCD_NAME="node1"  # this node's name (node2/node3 on the other members)
line 20: ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.0.0.11:2380"   # this member's peer URL for data replication
line 21: ETCD_ADVERTISE_CLIENT_URLS="http://10.0.0.11:2379"         # this member's client-facing URL
line 26: ETCD_INITIAL_CLUSTER="node1=http://10.0.0.11:2380,node2=http://10.0.0.12:2380,node3=http://10.0.0.13:2380"
line 27: ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
line 28: ETCD_INITIAL_CLUSTER_STATE="new"
​
systemctl enable  etcd
systemctl restart  etcd
​
[root@k8s-master tomcat_demo]# etcdctl cluster-health
member 9e80988e833ccb43 is healthy: got healthy result from http://10.0.0.11:2379
member a10d8f7920cc71c7 is healthy: got healthy result from http://10.0.0.13:2379
member abdc532bc0516b2d is healthy: got healthy result from http://10.0.0.12:2379
cluster is healthy
​
# update flannel
vim  /etc/sysconfig/flanneld
FLANNEL_ETCD_ENDPOINTS="http://10.0.0.11:2379,http://10.0.0.12:2379,http://10.0.0.13:2379"
etcdctl mk /atomic.io/network/config   '{ "Network": "172.18.0.0/16" }'
systemctl  restart flanneld
systemctl  restart docker

8.2 Install and configure master01's api-server, controller-manager, and scheduler (against 127.0.0.1:8080)

vim /etc/kubernetes/apiserver 
KUBE_ETCD_SERVERS="--etcd-servers=http://10.0.0.11:2379,http://10.0.0.12:2379,http://10.0.0.13:2379"
​
vim /etc/kubernetes/config 
KUBE_MASTER="--master=http://127.0.0.1:8080"
​
systemctl restart kube-apiserver.service 
systemctl restart kube-controller-manager.service kube-scheduler.service 

8.3 Install and configure master02's api-server, controller-manager, and scheduler (against 127.0.0.1:8080)

yum install kubernetes-master.x86_64 -y
scp -rp 10.0.0.11:/etc/kubernetes/apiserver /etc/kubernetes/apiserver
scp -rp 10.0.0.11:/etc/kubernetes/config /etc/kubernetes/config 
systemctl stop kubelet.service 
systemctl disable kubelet.service
systemctl stop kube-proxy.service 
systemctl disable kube-proxy.service
​
systemctl enable kube-apiserver.service
systemctl restart kube-apiserver.service
systemctl enable kube-controller-manager.service
systemctl restart kube-controller-manager.service
systemctl enable kube-scheduler.service
systemctl restart kube-scheduler.service

8.4 Install and configure Keepalived on master01 and master02

yum install keepalived.x86_64 -y
​
# master01 config:
! Configuration File for keepalived
​
global_defs {
   router_id LVS_DEVEL_11
}
​
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.0.0.10
    }
}
​
# master02 config
! Configuration File for keepalived
​
global_defs {
   router_id LVS_DEVEL_12
}
​
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 80
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.0.0.10
    }
}
​
systemctl  enable  keepalived
systemctl  start   keepalived

8.5: Point kubelet and kube-proxy on all worker nodes at the api-server VIP

vim /etc/kubernetes/kubelet
KUBELET_API_SERVER="--api-servers=http://10.0.0.10:8080"
​
vim /etc/kubernetes/config 
KUBE_MASTER="--master=http://10.0.0.10:8080"
​
systemctl restart kubelet.service kube-proxy.service