Kubernetes Study Notes (Continued)

1. K8s Cluster Security Mechanism

    As a management tool for distributed clusters, Kubernetes must keep the cluster secure. The API Server is the intermediary among the cluster's internal components and the entry point for external control, so Kubernetes' security mechanism is essentially designed around protecting the API Server. Kubernetes secures the API Server in three steps: authentication (Authentication), authorization (Authorization), and admission control (Admission Control).
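For the authorization step, RBAC is the most common mode. As a minimal sketch (the Role name pod-reader, the user jane, and the default namespace are all illustrative, not taken from this cluster), a Role plus RoleBinding granting read-only access to Pods might look like:

```yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: pod-reader
rules:
  - apiGroups: [""]          # "" means the core API group
    resources: ["pods"]
    verbs: ["get", "watch", "list"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: read-pods
  namespace: default
subjects:
  - kind: User
    name: jane               # illustrative user name
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```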

2. Common Commands

View cluster status information

[root@master01 docker]# kubectl cluster-info
Kubernetes master is running at https://192.168.43.90:6443
KubeDNS is running at https://192.168.43.90:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Check the health of the cluster components

[root@master01 docker]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-0               Healthy   {"health":"true"}  

View node status

[root@master01 docker]# kubectl get nodes
NAME       STATUS   ROLES    AGE   VERSION
master01   Ready    master   55d   v1.18.0
work01     Ready    <none>   55d   v1.18.0
work02     Ready    <none>   55d   v1.18.0

List namespaces

[root@master01 docker]# kubectl get namespace
NAME              STATUS   AGE
default           Active   55d
ingress-nginx     Active   42d
kube-node-lease   Active   55d
kube-public       Active   55d
kube-system       Active   55d
roledemo          Active   42d

3. Resource Manifests

apiVersion: group/version — if no group is given, the core group is assumed

kind: the resource type

metadata: # resource metadata

      name

      namespace

      labels

spec:     # desired state

status: # current state; this field is maintained by Kubernetes itself and cannot be set by the user
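Putting the fields above together, a minimal complete manifest might look like this (the names are illustrative):

```yaml
apiVersion: v1            # core group, so only the version is given
kind: Pod
metadata:
  name: demo
  namespace: default
  labels:
    app: demo
spec:
  containers:
    - name: demo
      image: busybox:1.32.0
      command: ['sh', '-c', 'sleep 3600']
# status is filled in by Kubernetes after creation; it is not written by the user
```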

4. Pod Lifecycle

Characteristics of init containers (initC):

     An initC always runs until it completes successfully;

     Each initC must complete successfully before the next one starts;

     If an initC fails, the K8S cluster restarts the Pod repeatedly until the initC succeeds;

     However, if the Pod's restartPolicy is Never, it will not be restarted.

Init container experiment

First create a Pod; since neither Service exists yet, it blocks at the first init container

apiVersion: v1
kind: Pod
metadata:
  name: initcpod
  labels:
    app: initcpod-test
spec:
  containers:
    - name: initpod
      image: busybox:1.32.0
      imagePullPolicy: IfNotPresent
      command: ['sh','-c','echo The app is running && sleep 3600']
  initContainers:
    - name: init1
      image: busybox:1.32.0
      imagePullPolicy: IfNotPresent
      command: ['sh','-c','until nslookup myservice; do echo waiting for myservice; sleep 2; done;']
    - name: init2
      image: busybox:1.32.0
      imagePullPolicy: IfNotPresent
      command: ['sh','-c','until nslookup mydb; do echo waiting for mydb; sleep 2; done;']
  restartPolicy: Always

Then create the first Service and observe: one init container completes

apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  selector:
    app: myservice
  ports:
    - port: 80
      targetPort: 9376
      protocol: TCP

Then create the second Service and observe the startup

apiVersion: v1
kind: Service
metadata:
  name: mydb
spec:
  selector:
    app: mydb
  ports:
    - port: 8080
      targetPort: 9999
      protocol: TCP

After both Services are up, the Pod runs normally.

5. Readiness Probes

Create a test Pod

apiVersion: v1
kind: Pod
metadata:
  name: readinesstest
  labels:
    app: readinesstest
spec:
  containers:
    - name: readinesstest
      image: nginx:1.17.10-alpine
      imagePullPolicy: IfNotPresent
      readinessProbe:
        httpGet:
          port: 80
          path: /index1.html
        initialDelaySeconds: 2
        periodSeconds: 3
  restartPolicy: Always 
  

View the Pod

[root@master01 ~]# kubectl get pod
NAME                                              READY   STATUS    RESTARTS   AGE
readinesstest                                     0/1     Running   0          13m

Exec into the Pod and create index1.html

[root@master01 ~]# kubectl exec -it readinesstest sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl kubectl exec [POD] -- [COMMAND] instead.

/ # cd /usr/share/nginx/html/
/usr/share/nginx/html # ls
50x.html index.html
/usr/share/nginx/html # echo "Hello world" >> index1.html

The Pod now passes the readiness check

6. Liveness Probes

apiVersion: v1
kind: Pod
metadata:
  name: readtest
  labels:
    apps: readtest
spec:
  containers:
    - name: readtest
      image: busybox:1.32.0
      imagePullPolicy: IfNotPresent
      command: ['/bin/sh','-c','touch /tmp/liveness;sleep 30; rm -f /tmp/liveness; sleep 3600']
      livenessProbe:
        exec:
          command: ['test','-e','/tmp/liveness']
        initialDelaySeconds: 1
        periodSeconds: 3
  restartPolicy: Always

Observe the Pod restarting repeatedly

[root@master01 ~]# kubectl get pod -w
NAME                                              READY   STATUS    RESTARTS   AGE
readtest                                          1/1     Running   102        30h
weave-scope-cluster-agent-myui-75859b9bc4-2k2r6   1/1     Running   5          8d
weave-scope-frontend-myui-559664fbf6-l47gg        1/1     Running   5          8d
readtest                                          0/1     CrashLoopBackOff   102        30h
readtest                                          1/1     Running            103        30h
readtest                                          1/1     Running            104        30h

Example 2:

apiVersion: v1
kind: Pod
metadata:
  name: liveness
  labels:
    app: liveness
spec:
  containers:
    - name: liveness
      image: nginx:1.17.10-alpine
      imagePullPolicy: IfNotPresent
      livenessProbe:
        tcpSocket:
          port: 8080
        initialDelaySeconds: 10
        periodSeconds: 2
        timeoutSeconds: 5
  restartPolicy: Always

 

7. postStart

The postStart hook runs immediately after the container is created. While the hook command is running:

       the Pod is in the Pending state

       the container is in the Waiting state

When the hook finishes:

      on success: the Pod becomes Running and is assigned an IP

      on failure: the container is restarted

command/args and postStart have no defined ordering relative to each other; they are executed concurrently

apiVersion: v1
kind: Pod
metadata:
  name: poststart
  labels:
    app: poststart
spec:
  containers:
    - name: poststart
      image: busybox:1.32.0
      imagePullPolicy: IfNotPresent
      command: ['sh','-c','sleep 5000']
      lifecycle:
        postStart:
          exec: 
            command: ['mkdir','-p','/xuexi/html']
  restartPolicy: Always

8. Controller Manager

The Controller Manager consists of kube-controller-manager and cloud-controller-manager. It is the brain of Kubernetes: through the apiserver it watches the state of the entire cluster and makes sure the cluster stays in the desired working state.

kube-controller-manager is composed of a series of controllers:

Replication  controller

Node Controller

CronJob Controller

DaemonSet Controller

Deployment Controller

Endpoint Controller

Garbage Collector

Namespace Controller

Job Controller

Pod AutoScaler

ReplicaSet

Service Controller

Volume Controller

8.1 Common Pod Controllers

1. ReplicaSet: for stateless services

    Creates a specified number of pod replicas, ensures the replica count matches the desired state, and supports automatic scaling out and in.

    A ReplicaSet has three main components:

          (1) the number of pod replicas the user desires

          (2) a label selector that decides which Pods it manages

          (3) a pod template used to create new Pods when the current count falls short

It helps users manage stateless pod resources and precisely maintains the user-defined target count, but a ReplicaSet is not meant to be used directly; use a Deployment instead.

2. Deployment: for stateless services

        Works on top of ReplicaSet to manage stateless applications. Currently the best general-purpose controller: it supports rolling updates and rollback, and offers declarative configuration.

3. StatefulSet: for stateful services

4. DaemonSet: deploy once, and every node runs a copy.

5. Job: a one-off task that exits as soon as it completes; no restart or rebuild needed.

6. CronJob: periodic tasks; no need to keep running in the background.

8.2 ReplicaSet

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: replic
  labels:
    app: replic
spec:
  replicas: 1
  template:
    metadata:
      name: replic
      labels:
        app: replic
    spec:
      containers:
        - name: replic
          image: nginx:1.17.10-alpine
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
      restartPolicy: Always  
  selector:
    matchLabels:
      app: replic

Scale out

[root@master01 ~]# kubectl scale replicaset replic --replicas=2
replicaset.apps/replic scaled
[root@master01 ~]# kubectl get rs
NAME                                        DESIRED   CURRENT   READY   AGE
replic                                      2         2         2       17h

8.3 Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: dep
  labels:
    app: dep
spec:
  replicas: 3
  template:
    metadata: 
      name: dep
      labels:
        app: dep
    spec:
      containers:
        - name: dep
          image: nginx:1.17.10-alpine
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
      restartPolicy: Always
  selector:
     matchLabels:
       app: dep 

Update

[root@master01 ~]# kubectl edit deployments.apps dep

Change the replica count to 6

spec:
  progressDeadlineSeconds: 600
  replicas: 6
  revisionHistoryLimit: 10
  selector:

View the Deployment's replica count

[root@master01 ~]# kubectl get deploy
NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
dep                              6/6     6            6           29h

8.4 Rolling Updates

Release strategies:

1. Blue-green deployment

2. Rolling release

3. Gray release

Canary release:

[root@master01 ~]# kubectl set image deployment dep dep=nginx:1.18.0-alpine && kubectl rollout pause deployment dep
deployment.apps/dep image updated
deployment.apps/dep paused

Check the rollout status

[root@master01 ~]# kubectl rollout status deployment dep
Waiting for deployment "dep" rollout to finish: 1 out of 3 new replicas have been updated...

Resume the rolling update

[root@master01 ~]# kubectl rollout resume deployment  dep
deployment.apps/dep resumed

Common rollout subcommands: kubectl rollout status / history / pause / resume / undo

Roll back to the previous revision

[root@master01 ~]# kubectl rollout undo deployment dep
deployment.apps/dep rolled back

8.5 DaemonSet

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemonsetdemo
  labels:
    app: daemonsetdemo
spec:
  template:
    metadata:
      name: daemonsetdemo
      labels:
        app: daemonsetdemo
    spec:
      containers:
        - name: daemonsetdemo
          image: nginx:1.17.10-alpine
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
      restartPolicy: Always
  selector:
    matchLabels:
      app: daemonsetdemo

Create

[root@master01 ~]# kubectl create -f daemon.yaml
daemonset.apps/daemonsetdemo created
[root@master01 ~]# kubectl get pod -owide
NAME                                              READY   STATUS    RESTARTS   AGE    IP               NODE     NOMINATED NODE   READINESS GATES
daemonsetdemo-llz6v                               1/1     Running   0          27s    10.224.75.86     work02   <none>           <none>
daemonsetdemo-nl7hl                               1/1     Running   0          27s    10.224.205.240   work01   <none>           <none>

8.6 Job

apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
        - name: pi
          image: perl
          command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
  backoffLimit: 4

View the log

[root@master01 ~]# kubectl logs   pi-mc8tr
3.1415926535897932384626433832795028841971693993751058209749445923078164062862...
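The Job above runs once; the CronJob controller listed in section 8 runs Jobs on a schedule. A minimal sketch (on v1.18 the API group is batch/v1beta1; the name and schedule are illustrative):

```yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"      # run once per minute
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: hello
              image: busybox:1.32.0
              command: ['sh', '-c', 'date; echo Hello']
          restartPolicy: OnFailure
```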

8.7 Service

The four Service types in K8s

1. ClusterIP: the default type; allocates a virtual IP that is reachable only inside the cluster

          A ClusterIP Service gets a Cluster-IP, which is really a VIP. It is implemented by the kube-proxy component, via iptables or IPVS;

2. NodePort: on top of ClusterIP, binds a port on every node for the Service, so it can be reached via NodeIP:NodePort

3. LoadBalancer: on top of NodePort, uses the cloud provider to create an external load balancer that forwards requests to the NodePort;

4. ExternalName: brings a service outside the cluster into the cluster for direct internal use; no proxy of any kind is created.

kube-proxy supports three proxy modes: userspace, iptables, and IPVS; each behaves slightly differently.
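Of the four types, ExternalName is the only one not demonstrated below; a minimal sketch (db.example.com is an illustrative external DNS name) maps a cluster-internal Service name to an external one via a CNAME record:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  type: ExternalName
  externalName: db.example.com   # DNS inside the cluster resolves external-db to this name
```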

ClusterIP example:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: clusteripdemo
  labels:
    app: clus
spec:
  replicas: 3
  template:
    metadata:
      name: clusteripdemo
      labels:
        app: clus
    spec:
      containers:
        - name: clusteripdemo
          image: tomcat:9.0.20-jre8-alpine 
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
      restartPolicy: Always
  selector:
    matchLabels:
      app: clus
      
---

apiVersion: v1
kind: Service
metadata:
  name: cluster-ser
spec:
  selector: 
    app: clus
  ports:
    - port: 8080
  type: ClusterIP

View

[root@master01 ~]# kubectl get service
NAME               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
cluster-ser        ClusterIP   10.110.240.35   <none>        8080/TCP       4m43s
[root@master01 ~]# kubectl get endpoints
NAME               ENDPOINTS                                                    AGE
cluster-ser        10.224.205.248:8080,10.224.205.251:8080,10.224.75.105:8080   4m53s

Test

[root@master01 ~]# curl 10.110.240.35:8080

NodePort example:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodeportdemo
  labels:
    app: clus
spec:
  replicas: 3
  template:
    metadata:
      name: nodeportdemo
      labels:
        app: clus
    spec:
      containers:
        - name: nodeportdemo
          image: tomcat:9.0.20-jre8-alpine 
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
      restartPolicy: Always
  selector:
    matchLabels:
      app: clus
      
---

apiVersion: v1
kind: Service
metadata:
  name: nodeport-ser
spec:
  selector: 
    app: clus
  ports:
    - port: 8080
      targetPort: 8080
      nodePort: 30088
  type: NodePort

View:

[root@master01 ~]# kubectl get service
nodeport-ser       NodePort    10.99.167.249   <none>        8080:30088/TCP   3m52s

8.8 Ingress

A k8s cluster currently has only three ways to expose services externally: LoadBalancer, NodePort, and Ingress;

Ingress consists of two parts: the Ingress controller and the Ingress resource;

There are two common Ingress controllers: the nginx-based ingress controller and the traefik-based ingress controller.

Ingress itself is a collection of routing rules for requests entering the cluster; put simply, it is the entry point for external access, forwarding external HTTP or HTTPS requests to Services inside the cluster;

Ingress-nginx generally consists of three components:

        The reverse-proxy load balancer: usually exposed via a Service port; it receives requests and forwards them according to the rules defined by the Ingress. Common choices are nginx, HAProxy, Traefik, etc.

        The Ingress Controller: watches the APIServer and, based on the Ingress rules the user writes, dynamically rewrites the nginx configuration file and reloads it so the changes take effect; this process is automated;

        Ingress: abstracts the nginx configuration into an Ingress object; whenever the user adds a new service, they only need to write a new Ingress yaml file.
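The Ingress object itself is just the rule collection. A minimal sketch routing to the cluster-ser Service defined earlier (the host name is illustrative; on v1.18 the API group is networking.k8s.io/v1beta1):

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: demo-ingress
spec:
  rules:
    - host: demo.example.com      # illustrative host
      http:
        paths:
          - path: /
            backend:
              serviceName: cluster-ser
              servicePort: 8080
```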

9. Volume

Deploy mariadb

apiVersion: apps/v1
kind: Deployment
metadata:
  name: maria
  labels:
    app: maria
spec:
  replicas: 1
  template:
    metadata:
      name: maria
      labels:
        app: maria
    spec:
      containers:
        - name: maria
          image: mariadb:10.5.2
          imagePullPolicy: IfNotPresent
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: admin
            - name: TZ
              value: Asia/Shanghai
          ports: 
            - containerPort: 3306
      restartPolicy: Always
  selector:
    matchLabels:
      app: maria

---

apiVersion: v1
kind: Service
metadata:
  name: mariadb-service
spec:
  selector:
    app: maria
  ports:
    - port: 3306
      targetPort: 3306
      nodePort: 30036
  type: NodePort

10. Secret

[root@master01 mariadb]# echo -n admin| base64
YWRtaW4=
[root@master01 mariadb]# echo -n "YWRtaW4="| base64 -d
admin

Create a Secret

apiVersion: v1
kind: Secret
metadata:
  name: mariadbsecret
type: Opaque
data: 
  password: YWRtaW4=

Consume the Secret

apiVersion: apps/v1
kind: Deployment
metadata:
  name: maria
  labels:
    app: maria
spec:
  replicas: 1
  template:
    metadata:
      name: maria
      labels:
        app: maria
    spec:
      containers:
        - name: maria
          image: mariadb:10.5.2
          imagePullPolicy: IfNotPresent
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  key: password
                  name: mariadbsecret
            - name: TZ
              value: Asia/Shanghai
          ports: 
            - containerPort: 3306
      restartPolicy: Always
  selector:
    matchLabels:
      app: maria

---

apiVersion: v1
kind: Service
metadata:
  name: mariadb-service
spec:
  selector:
    app: maria
  ports:
    - port: 3306
      targetPort: 3306
      nodePort: 30036
  type: NodePort

View

[root@master01 secret]# kubectl get secret
NAME                                         TYPE                                  DATA   AGE
default-token-mbnxd                          kubernetes.io/service-account-token   3      77d
mariadbsecret                                Opaque                                1      109s

11. ConfigMap

As the name suggests, a ConfigMap stores configuration data as key-value pairs; it can hold a single property or an entire configuration file.

11.1 Create from a literal

[root@master01 ~]# kubectl create configmap helloconfigmap --from-literal=yun.hello=world
configmap/helloconfigmap created

View

[root@master01 ~]# kubectl get configmap
NAME                     DATA   AGE
helloconfigmap           1      39s

Delete

[root@master01 ~]# kubectl delete configmap helloconfigmap
configmap "helloconfigmap" deleted

Creating a ConfigMap from a file

11.2 Create the file

vi jdbc.properties
jdbc.driverclass=com.mysql.jdbc.Driver
jdbc.url=jdbc:mysql://localhost:3306/test
jdbc.username=root
jdbc.password=admin

Create from the file

[root@master01 configmap]# kubectl create configmap myjdbcmap --from-file=jdbc.properties
configmap/myjdbcmap created

Describe the ConfigMap

[root@master01 configmap]# kubectl describe configmaps myjdbcmap
Name:         myjdbcmap
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
jdbc.properties:
----
jdbc.driverclass=com.mysql.jdbc.Driver
jdbc.url=jdbc:mysql://localhost:3306/test
jdbc.username=root
jdbc.password=admin

Events:  <none>

Delete

[root@master01 configmap]# kubectl delete configmap myjdbcmap
configmap "myjdbcmap" deleted

11.3 Create from an absolute path

[root@master01 configmap]# kubectl create configmap myjdbcconfigmap --from-file=/root/configmap/jdbc.properties
configmap/myjdbcconfigmap created

View

[root@master01 configmap]# kubectl describe configmap myjdbcconfigmap
Name:         myjdbcconfigmap
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
jdbc.properties:
----
jdbc.driverclass=com.mysql.jdbc.Driver
jdbc.url=jdbc:mysql://localhost:3306/test
jdbc.username=root
jdbc.password=admin

Events:  <none>

11.4 Create from a YAML file

apiVersion: v1
kind: ConfigMap
metadata:
  name: mariadbconfigmap
data:
  mysql-driver: com.mysql.jdbc.Driver
  mysql-url: jdbc:mysql://localhost:3306/test
  mysql-user: root
  mysql-password: admin

Create

[root@master01 configmap]# kubectl create -f myconfigmap.yaml 
configmap/mariadbconfigmap created

11.5 ConfigMap in practice

Run a mariadb container first so we can copy out its config file

[root@work01 ~]# docker run --name some-mariadb -e MYSQL_ROOT_PASSWORD=admin -d mariadb:10.5.2
2ac9e1f5bdbb5db064602785bd64ecae2fd4cf1ab300698d1a460d0fd97baa20

Copy the config file out

docker cp some-mariadb:/etc/mysql/my.cnf .

For the experiment, change port 3306 to 3307 in my.cnf
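The port edit can be scripted rather than done by hand; a minimal sketch with sed, assuming the copied my.cnf is in the current directory:

```shell
# replace every occurrence of 3306 with 3307 in the copied my.cnf
sed -i 's/3306/3307/g' my.cnf

# verify: show the lines that now reference 3307
grep -n '3307' my.cnf
```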

Create a ConfigMap from the config file

[root@master01 data]# kubectl create configmap mysqllini --from-file=my.cnf
configmap/mysqllini created

You can also dump a ConfigMap to a yaml file so it can be recreated from yaml later: delete the ConfigMap that was created from the file, then recreate it the yaml way

[root@master01 data]# kubectl get configmaps mysqllini -o yaml>mariadbconfigmap.yaml

Mount it in the mariadb Deployment created earlier

apiVersion: apps/v1
kind: Deployment
metadata:
  name: maria
  labels:
    app: maria
spec:
  replicas: 1
  template:
    metadata:
      name: maria
      labels:
        app: maria
    spec:
      containers:
        - name: maria
          image: mariadb:10.5.2
          imagePullPolicy: IfNotPresent
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  key: password
                  name: mariadbsecret
            - name: TZ
              value: Asia/Shanghai
          volumeMounts:
            - mountPath: /etc/mysql/mariadb.conf.d/   # mount directory inside the container
              name: yunmariadb
          ports: 
            - containerPort: 3307
      restartPolicy: Always
      volumes:
        - name: yunmariadb
          configMap:
            name: mysqllini     # name of the ConfigMap
  selector:
    matchLabels:
      app: maria

---

apiVersion: v1
kind: Service
metadata:
  name: mariadb-service
spec:
  selector:
    app: maria
  ports:
    - port: 3307
      targetPort: 3307
      nodePort: 30037
  type: NodePort

Create the corresponding Secret

apiVersion: v1
kind: Secret
metadata:
  name: mariadbsecret
type: Opaque
data: 
  password: YWRtaW4=

Connection test

12. Labels

Label a node

[root@master01 data]# kubectl label node work01 mariadb=mariadb10.5.2
node/work01 labeled

View labels

[root@master01 data]# kubectl get nodes --show-labels
NAME       STATUS   ROLES    AGE   VERSION   LABELS
master01   Ready    master   78d   v1.18.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master01,kubernetes.io/os=linux,node-role.kubernetes.io/master=
work01     Ready    <none>   78d   v1.18.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,env_role=dev,kubernetes.io/arch=amd64,kubernetes.io/hostname=work01,kubernetes.io/os=linux,mariadb=mariadb10.5.2
work02     Ready    <none>   78d   v1.18.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=work02,kubernetes.io/os=linux

Remove the label

[root@master01 data]# kubectl label node work01 mariadb-
node/work01 labeled

Usage example

apiVersion: apps/v1
kind: Deployment
metadata:
  name: maria
  labels:
    app: maria
spec:
  replicas: 2
  template:
    metadata:
      name: maria
      labels:
        app: maria
    spec:
      containers:
        - name: maria
          image: mariadb:10.5.2
          imagePullPolicy: IfNotPresent
          ports: 
            - containerPort: 3306
      restartPolicy: Always
      nodeSelector:
        mariadb: mariadb
  selector:
    matchLabels:
      app: maria

Result: both Pods were scheduled to work02

13. HostPath

A hostPath volume mounts a file or directory from a worker node's filesystem into a Pod, i.e. it exposes a host directory to the container. Since you cannot be sure which node a container will be scheduled to, the directory has to exist on every node. It is a way to bring host storage into the k8s cluster, but it has many limitations: for example it is bound to a single node, and it only supports the "ReadWriteOnce" mode.

Practical example:

[root@master01 hostpath]# cat mariadb.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: maria
  labels:
    app: maria
spec:
  replicas: 1
  template:
    metadata:
      name: maria
      labels:
        app: maria
    spec:
      containers:
        - name: maria
          image: mariadb:10.5.2
          imagePullPolicy: IfNotPresent
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  key: password
                  name: mariadbsecret
            - name: TZ
              value: Asia/Shanghai
          volumeMounts:
            - mountPath: /etc/mysql/mariadb.conf.d/
              name: yunmariadb
            - mountPath: /var/lib/mysql
              name: hostpathvolume
          ports: 
            - containerPort: 3307
      restartPolicy: Always
      nodeSelector: 
        mariadb: mariadb
      volumes:
        - name: yunmariadb
          configMap:
            name: mysqllini
        - name: hostpathvolume
          hostPath:
            path: /hostpath/data
            type: Directory
  selector:
    matchLabels:
      app: maria

---

apiVersion: v1
kind: Service
metadata:
  name: mariadb-service
spec:
  selector:
    app: maria
  ports:
    - port: 3307
      targetPort: 3307
      nodePort: 30037
  type: NodePort

View the directory on the host

[root@work02 data]# ls
aria_log.00000001  ib_buffer_pool  ib_logfile0  multi-master.info  performance_schema
aria_log_control   ibdata1         ibtmp1       mysql

14. emptyDir

An emptyDir volume is a temporary directory scoped to the Pod's lifecycle: it is deleted when the pod object is removed. It is used less often, for example for file sharing between containers in the same pod, or as temporary storage for container data such as a user-data cache.
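emptyDir is not demonstrated elsewhere in these notes; a minimal sketch (the names are illustrative) of two containers in one Pod sharing a scratch volume:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  containers:
    - name: writer
      image: busybox:1.32.0
      command: ['sh', '-c', 'echo hello > /cache/msg; sleep 3600']
      volumeMounts:
        - mountPath: /cache
          name: cache-volume
    - name: reader            # sees the same /cache contents as writer
      image: busybox:1.32.0
      command: ['sh', '-c', 'sleep 3600']
      volumeMounts:
        - mountPath: /cache
          name: cache-volume
  volumes:
    - name: cache-volume
      emptyDir: {}
```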

15. PV and PVC

PersistentVolume: storage provisioned by an administrator. It is part of the cluster; just as a node is a cluster resource, a PV is a cluster resource too. A PV is cluster-scoped and does not belong to any Namespace.

Binding between a PV and a PVC is one-to-one: a PVC binds exactly one PV, and a PV can be bound by only one PVC. To share storage, multiple Pods can mount the same PVC (subject to the volume's access mode), not multiple PVCs binding one PV.

     The four PV states:

              Available — not yet bound to any PVC;

              Bound — bound to a PVC;

              Released — the bound PVC has been deleted, but the resource has not yet been reclaimed by the cluster;

              Failed — automatic reclamation failed after the PVC was deleted, so the volume is in a failed state.

PersistentVolumeClaim is a user's request for storage. It is analogous to a pod: pods consume node resources, PVCs consume PV resources. It is a declaration of the storage a user needs, similar to an application for storage resources; it is a namespaced resource and is used to request storage from PVs.

Experiment:

Create a PV

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mariadb-pv
  labels:
    app: mariadb-pv
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 5Gi
  hostPath:
    path: /data/mariadb
    type: DirectoryOrCreate
  persistentVolumeReclaimPolicy: Retain
  storageClassName: standard
  volumeMode: Filesystem

Create a PVC:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mariadb-pvc
  labels:
    app: mariadb-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  resources:
    requests:
      storage: 4Gi

Create a workload that uses the PVC

apiVersion: apps/v1
kind: Deployment
metadata:
  name: maria
  labels:
    app: maria
spec:
  replicas: 1
  template:
    metadata:
      name: maria
      labels:
        app: maria
    spec:
      containers:
        - name: maria
          image: mariadb:10.5.2
          imagePullPolicy: IfNotPresent
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  key: password
                  name: mariadbsecret
            - name: TZ
              value: Asia/Shanghai
          volumeMounts:
            - mountPath: /etc/mysql/mariadb.conf.d/
              name: yunmariadb
            - mountPath: /var/lib/mysql
              name: hostpathvolume
          ports: 
            - containerPort: 3307
      restartPolicy: Always
      nodeSelector: 
        mariadb: mariadb
      volumes:
        - name: yunmariadb
          configMap:
            name: mysqllini
        - name: hostpathvolume
          persistentVolumeClaim:
            claimName: mariadb-pvc
  selector:
    matchLabels:
      app: maria

---

apiVersion: v1
kind: Service
metadata:
  name: mariadb-service
spec:
  selector:
    app: maria
  ports:
    - port: 3307
      targetPort: 3307
      nodePort: 30037
  type: NodePort

The ConfigMap is the one generated earlier:

apiVersion: v1
data:
  my.cnf: "# MariaDB database server configuration file.\n#\n# You can copy this file
    to one of:\n# - \"/etc/mysql/my.cnf\" to set global options,\n# - \"~/.my.cnf\"
    to set user-specific options.\n# \n# One can use all long options that the program
    supports.\n# Run program with --help to get a list of available options and with\n#
    --print-defaults to see which it would actually understand and use.\n#\n# For
    explanations see\n# http://dev.mysql.com/doc/mysql/en/server-system-variables.html\n\n#
    This will be passed to all mysql clients\n# It has been reported that passwords
    should be enclosed with ticks/quotes\n# escpecially if they contain \"#\" chars...\n#
    Remember to edit /etc/mysql/debian.cnf when changing the socket location.\n[client]\nport\t\t=
    3307\nsocket\t\t= /var/run/mysqld/mysqld.sock\n\n# Here is entries for some specific
    programs\n# The following values assume you have at least 32M ram\n\n# This was
    formally known as [safe_mysqld]. Both versions are currently parsed.\n[mysqld_safe]\nsocket\t\t=
    /var/run/mysqld/mysqld.sock\nnice\t\t= 0\n\n[mysqld]\n#\n# * Basic Settings\n#\n#user\t\t=
    mysql\npid-file\t= /var/run/mysqld/mysqld.pid\nsocket\t\t= /var/run/mysqld/mysqld.sock\nport\t\t=
    3307\nbasedir\t\t= /usr\ndatadir\t\t= /var/lib/mysql\ntmpdir\t\t= /tmp\nlc_messages_dir\t=
    /usr/share/mysql\nlc_messages\t= en_US\nskip-external-locking\n#\n# Instead of
    skip-networking the default is now to listen only on\n# localhost which is more
    compatible and is not less secure.\n#bind-address\t\t= 127.0.0.1\n#\n# * Fine
    Tuning\n#\nmax_connections\t\t= 100\nconnect_timeout\t\t= 5\nwait_timeout\t\t=
    600\nmax_allowed_packet\t= 16M\nthread_cache_size       = 128\nsort_buffer_size\t=
    4M\nbulk_insert_buffer_size\t= 16M\ntmp_table_size\t\t= 32M\nmax_heap_table_size\t=
    32M\n#\n# * MyISAM\n#\n# This replaces the startup script and checks MyISAM tables
    if needed\n# the first time they are touched. On error, make copy and try a repair.\nmyisam_recover_options
    = BACKUP\nkey_buffer_size\t\t= 128M\n#open-files-limit\t= 2000\ntable_open_cache\t=
    400\nmyisam_sort_buffer_size\t= 512M\nconcurrent_insert\t= 2\nread_buffer_size\t=
    2M\nread_rnd_buffer_size\t= 1M\n#\n# * Query Cache Configuration\n#\n# Cache only
    tiny result sets, so we can fit more in the query cache.\nquery_cache_limit\t\t=
    128K\nquery_cache_size\t\t= 64M\n# for more write intensive setups, set to DEMAND
    or OFF\n#query_cache_type\t\t= DEMAND\n#\n# * Logging and Replication\n#\n# Both
    location gets rotated by the cronjob.\n# Be aware that this log type is a performance
    killer.\n# As of 5.1 you can enable the log at runtime!\n#general_log_file        =
    /var/log/mysql/mysql.log\n#general_log             = 1\n#\n# Error logging goes
    to syslog due to /etc/mysql/conf.d/mysqld_safe_syslog.cnf.\n#\n# we do want to
    know about network errors and such\n#log_warnings\t\t= 2\n#\n# Enable the slow
    query log to see queries with especially long duration\n#slow_query_log[={0|1}]\nslow_query_log_file\t=
    /var/log/mysql/mariadb-slow.log\nlong_query_time = 10\n#log_slow_rate_limit\t=
    1000\n#log_slow_verbosity\t= query_plan\n\n#log-queries-not-using-indexes\n#log_slow_admin_statements\n#\n#
    The following can be used as easy to replay backup logs or for replication.\n#
    note: if you are setting up a replication slave, see README.Debian about\n#       other
    settings you may need to change.\n#server-id\t\t= 1\n#report_host\t\t= master1\n#auto_increment_increment
    = 2\n#auto_increment_offset\t= 1\n#log_bin\t\t\t= /var/log/mysql/mariadb-bin\n#log_bin_index\t\t=
    /var/log/mysql/mariadb-bin.index\n# not fab for performance, but safer\n#sync_binlog\t\t=
    1\nexpire_logs_days\t= 10\nmax_binlog_size         = 100M\n# slaves\n#relay_log\t\t=
    /var/log/mysql/relay-bin\n#relay_log_index\t= /var/log/mysql/relay-bin.index\n#relay_log_info_file\t=
    /var/log/mysql/relay-bin.info\n#log_slave_updates\n#read_only\n#\n# If applications
    support it, this stricter sql_mode prevents some\n# mistakes like inserting invalid
    dates etc.\n#sql_mode\t\t= NO_ENGINE_SUBSTITUTION,TRADITIONAL\n#\n# * InnoDB\n#\n#
    InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.\n# Read
    the manual for more InnoDB related options. There are many!\ndefault_storage_engine\t=
    InnoDB\ninnodb_buffer_pool_size\t= 256M\ninnodb_log_buffer_size\t= 8M\ninnodb_file_per_table\t=
    1\ninnodb_open_files\t= 400\ninnodb_io_capacity\t= 400\ninnodb_flush_method\t=
    O_DIRECT\n#\n# * Security Features\n#\n# Read the manual, too, if you want chroot!\n#
    chroot = /var/lib/mysql/\n#\n# For generating SSL certificates I recommend the
    OpenSSL GUI \"tinyca\".\n#\n# ssl-ca=/etc/mysql/cacert.pem\n# ssl-cert=/etc/mysql/server-cert.pem\n#
    ssl-key=/etc/mysql/server-key.pem\n\n#\n# * Galera-related settings\n#\n[galera]\n#
    Mandatory settings\n#wsrep_on=ON\n#wsrep_provider=\n#wsrep_cluster_address=\n#binlog_format=row\n#default_storage_engine=InnoDB\n#innodb_autoinc_lock_mode=2\n#\n#
    Allow server to accept connections on all interfaces.\n#\n#bind-address=0.0.0.0\n#\n#
    Optional setting\n#wsrep_slave_threads=1\n#innodb_flush_log_at_trx_commit=0\n\n[mysqldump]\nquick\nquote-names\nmax_allowed_packet\t=
    16M\n\n[mysql]\n#no-auto-rehash\t# faster start of mysql but no tab completion\n\n[isamchk]\nkey_buffer\t\t=
    16M\n\n#\n# * IMPORTANT: Additional settings that can override those from this
    file!\n#   The files must end with '.cnf', otherwise they'll be ignored.\n#\n!include
    /etc/mysql/mariadb.cnf\n!includedir /etc/mysql/conf.d/\n"
kind: ConfigMap
metadata:
  creationTimestamp: "2023-08-11T08:56:22Z"
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:data:
        .: {}
        f:my.cnf: {}
    manager: kubectl
    operation: Update
    time: "2023-08-11T08:56:22Z"
  name: mysqllini
  namespace: default
  resourceVersion: "1627156"
  selfLink: /api/v1/namespaces/default/configmaps/mysqllini
  uid: 4515e18b-5125-457f-9736-6c84707602db

View the created PV

[root@master01 pv2]# kubectl get pv
NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                 STORAGECLASS   REASON   AGE
mariadb-pv   5Gi        RWO            Retain           Bound    default/mariadb-pvc   standard                17m

View the PVC

[root@master01 pv2]# kubectl get pvc
NAME          STATUS   VOLUME       CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mariadb-pvc   Bound    mariadb-pv   5Gi        RWO            standard       17m

16. Scheduling

16.1 Pin to a specific node (nodeName)

    spec:
      containers:
        - name: maria
          image: mariadb:10.5.2
          imagePullPolicy: IfNotPresent
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  key: password
                  name: mariadbsecret
            - name: TZ
              value: Asia/Shanghai
          volumeMounts:
            - mountPath: /etc/mysql/mariadb.conf.d/
              name: yunmariadb
            - mountPath: /var/lib/mysql
              name: hostpathvolume
          ports: 
            - containerPort: 3307
      restartPolicy: Always
      nodeName: work02
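Note that nodeName pins the Pod to one node and bypasses the scheduler entirely. A gentler, commonly used alternative is nodeSelector, which still goes through the scheduler but restricts candidates to nodes carrying a given label. A sketch, assuming an illustrative disktype=ssd label:

```yaml
# Sketch: nodeSelector instead of nodeName (label is illustrative).
# Apply the label first:  kubectl label node work02 disktype=ssd
    spec:
      containers:
        - name: maria
          image: mariadb:10.5.2
      nodeSelector:
        disktype: ssd     # only nodes with this label are eligible
```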

16.2 Affinity: hard node affinity

apiVersion: apps/v1
kind: Deployment
metadata:
  name: maria
  labels:
    app: maria
spec:
  replicas: 1
  template:
    metadata:
      name: maria
      labels:
        app: maria
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/hostname
                    operator: In
                    values: 
                      - work01
      containers:
        - name: maria
          image: mariadb:10.5.2
          imagePullPolicy: IfNotPresent
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  key: password
                  name: mariadbsecret
            - name: TZ
              value: Asia/Shanghai
          volumeMounts:
            - mountPath: /etc/mysql/mariadb.conf.d/
              name: yunmariadb
            - mountPath: /var/lib/mysql
              name: hostpathvolume
          ports: 
            - containerPort: 3307
      restartPolicy: Always
      volumes:
        - name: yunmariadb
          configMap:
            name: mysqllini
        - name: hostpathvolume
          hostPath:
            path: /hostpath/data
            type: Directory
  selector:
    matchLabels:
      app: maria

---

apiVersion: v1
kind: Service
metadata:
  name: mariadb-service
spec:
  selector:
    app: maria
  ports:
    - port: 3307
      targetPort: 3307
      nodePort: 30037
  type: NodePort

16.3 Soft node affinity

    spec:
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - preference:
                matchExpressions:
                  - key: kubernetes.io/hostname
                    operator: In
                    values: 
                      - work01
              weight: 20

17. Taints and Tolerations

17.1 Taints

Taints are key-value attributes defined on a node that make the node repel Pods from being scheduled onto it, unless a Pod carries a toleration that accepts the node's taints. Tolerations are key-value attributes defined on a Pod that declare which taints it can tolerate; the scheduler places a Pod only on nodes whose taints it tolerates, or on nodes with no taints at all.

 

Apply a taint to a node

[root@master01 taints]# kubectl taint node work01 offline=testtaint:NoExecute
node/work01 tainted

Remove the taint

[root@master01 taints]# kubectl taint node work01 offline-
node/work01 untainted

17.2 Tolerations

Taint a node

[root@master01 taints]# kubectl taint node work02 offline=testtaint:NoSchedule
node/work02 tainted

Add a toleration to the Pods

apiVersion: apps/v1
kind: Deployment
metadata:
  name: dep
  labels:
    app: dep
spec:
  replicas: 5
  template:
    metadata: 
      name: dep
      labels:
        app: dep
    spec:
      tolerations:
        - key: "offline"
          value: "testtaint"
          effect: "NoSchedule"
          operator: "Equal" 
      containers:
        - name: dep
          image: nginx:1.17.10-alpine
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
      restartPolicy: Always
  selector:
     matchLabels:
       app: dep 
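Besides operator Equal, tolerations also support operator Exists, which matches the taint key regardless of its value (and, with the key omitted entirely, tolerates every taint). A minimal sketch of the tolerations fragment, reusing the offline taint from above:

```yaml
# Sketch: an Exists toleration matches the "offline" taint
# whatever its value, so no "value" field is given.
      tolerations:
        - key: "offline"
          operator: "Exists"
          effect: "NoSchedule"
```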

18. Installing Common Software

18.1 RBAC

Role-Based Access Control (RBAC) is a method of regulating access to computer or network resources based on the roles of users within an enterprise.

A Role can only grant access to resources within a single namespace.

A ClusterRole can grant the same permissions as a Role, but because a ClusterRole is cluster-scoped, it can additionally grant access to cluster-wide resources and to namespaced resources across all namespaces.

18.2 RoleBinding and ClusterRoleBinding

A role binding (RoleBinding) grants the permissions defined in a role to a user or a group of users.
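As a minimal sketch of the two objects working together (the object names, the user "jane", and the pod-reader permission set are all illustrative):

```yaml
# Sketch: a Role granting read access to Pods in "default",
# plus a RoleBinding granting it to a user. Names are illustrative.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
  - apiGroups: [""]          # "" means the core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
  - kind: User
    name: jane               # illustrative user name
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```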

https://github.com/kubernetes/dashboard/blob/v2.0.3/aio/deploy/recommended.yaml

18.3 Installing the Dashboard

Copy the YAML file and change the Service to type NodePort

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30300
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard

Delete the old resources (the output below is from kubectl delete -f .; to create the application, run kubectl apply -f . afterwards)

[root@master01 dashboard]# kubectl delete -f .
namespace "kubernetes-dashboard" deleted
serviceaccount "kubernetes-dashboard" deleted
service "kubernetes-dashboard" deleted
secret "kubernetes-dashboard-certs" deleted
secret "kubernetes-dashboard-csrf" deleted
secret "kubernetes-dashboard-key-holder" deleted
configmap "kubernetes-dashboard-settings" deleted
role.rbac.authorization.k8s.io "kubernetes-dashboard" deleted
clusterrole.rbac.authorization.k8s.io "kubernetes-dashboard" deleted
rolebinding.rbac.authorization.k8s.io "kubernetes-dashboard" deleted
clusterrolebinding.rbac.authorization.k8s.io "kubernetes-dashboard" deleted
deployment.apps "kubernetes-dashboard" deleted
service "dashboard-metrics-scraper" deleted
deployment.apps "dashboard-metrics-scraper" deleted

Access the Dashboard

https://192.168.43.90:30300

Create a service account from the command line

[root@master01 ~]# kubectl create serviceaccount dashboard-admin -n kube-system
serviceaccount/dashboard-admin created

Create the cluster role binding

[root@master01 ~]# kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created

Get the token

[root@master01 ~]# kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
Name:         dashboard-admin-token-r888s
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: 60209fe7-f1e4-4e96-9db2-b388c7d53442

Type:  kubernetes.io/service-account-token

Data
====
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6ImV1Qnp6Uk5LYUVfc1NGZVJwdGJySWJNamdPeUdOVXV6T1BHajZJVDVDWlkifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tcjg4OHMiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNjAyMDlmZTctZjFlNC00ZTk2LTlkYjItYjM4OGM3ZDUzNDQyIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.rNdRJ-ulev3Z5N0XRMDhtyAHXVyfCmzxGxRC3UEmZ_xTQS9ERqaMRnuKE7GZzlGXyS01ifieu8fSLgNz0WNLm0OqO4xlZ9N-yz4rjnetKlzgwtO9ZCBYWYGMxbD_LiYGf-9kksWmTKBU0uVKCHYhm2o4H7r_qiKFyHb_vtKn8tZjpYVTdnA3DcaJ22p92tzA8SBsMcBhRgggzH-Qbu0zPDkxWTh92-WcgPTxXjOrNvKZBjqyrh7aCI61vhIXbCd_5KRRXKMw9gfsVqFhFsKy_hLB77rj_Gv56ziA-ytBb4Z0m80VxYutZsTfz_RfvbvUquV0dn6jK5JFJRFBzHS48g
ca.crt:     1025 bytes

Copy the token into the browser and click Sign in

19. StatefulSet

StatefulSet Pod names are stable: a Pod gets the same name every time it is re-created.

Headless Service: much like a regular Service; the only difference is that clusterIP is set to None, so no cluster IP is allocated.

A StatefulSet references its Headless Service through the serviceName field.

Pods are deployed in order and terminated in order.

Pods have unique, stable network identities: each Pod has a unique name that stays the same after a restart, and through the Headless Service each Pod gets its own hostname-based network address.

Pods have stable persistent storage: each Pod in a StatefulSet can have its own PersistentVolumeClaim, and even if the Pod is rescheduled to another node, the original persistent disk is re-mounted to it.
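The properties above can be sketched as a minimal StatefulSet paired with a headless Service (the names below are illustrative):

```yaml
# Sketch: headless Service + StatefulSet. serviceName ties them together,
# giving each Pod a stable DNS name such as web-0.nginx-headless.
apiVersion: v1
kind: Service
metadata:
  name: nginx-headless
spec:
  clusterIP: None               # headless: no cluster IP is allocated
  selector:
    app: web
  ports:
    - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: nginx-headless   # must reference the headless Service
  replicas: 2                   # Pods created in order: web-0, then web-1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.17.10-alpine
          ports:
            - containerPort: 80
  volumeClaimTemplates:         # each Pod gets its own PVC (web-data-web-0, ...)
    - metadata:
        name: web-data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```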

 

posted @ 2023-07-07 10:56  中仕