
 

Part 1

Day 1: Introduction to container basics

Installation

apt-get install docker-engine
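Note: docker-engine is the legacy package name; on current Debian/Ubuntu releases Docker is packaged as docker-ce. A minimal sketch, assuming Docker's apt repository has already been configured:

apt-get update
apt-get install docker-ce docker-ce-cli containerd.io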


[root@cce-7day-fudonghai-24106 01CNL]# docker -v
Docker version 18.09.0, build f897bb1

[root@cce-7day-fudonghai-24106 01CNL]# docker images
REPOSITORY                                                  TAG                   IMAGE ID            CREATED             SIZE
100.125.17.64:20202/hwofficial/storage-driver-linux-amd64   1.0.13                9b1a762c647a        3 weeks ago         749MB
100.125.17.64:20202/op_svc_apm/icagent                      5.11.27               797b45c7e959        5 weeks ago         340MB
canal-agent                                                 1.0.RC8.SPC300.B010   4e31d812d31d        2 months ago        505MB
canal-agent                                                 latest                4e31d812d31d        2 months ago        505MB
100.125.17.64:20202/hwofficial/cce-coredns-linux-amd64      1.2.6.1               614e71c360a5        3 months ago        328MB
swr.cn-east-2.myhuaweicloud.com/fudonghai/tank              v1.0                  77c91a2d6c53        5 months ago        112MB
swr.cn-north-1.myhuaweicloud.com/hwstaff_m00402518/tank     v1.0                  77c91a2d6c53        5 months ago        112MB
redis                                                       latest                415381a6cb81        8 months ago        94.9MB
busybox                                                     latest                59788edf1f3e        9 months ago        1.15MB
nginx                                                       latest                06144b287844        10 months ago       109MB
euleros                                                     2.2.5                 b0f6bcd0a2a0        20 months ago       289MB
mirrorgooglecontainers/fluentd-elasticsearch                1.20                  c264dff3420b        2 years ago         301MB
k8s.gcr.io/fluentd-elasticsearch                            1.20                  c264dff3420b        2 years ago         301MB
nginx                                                       1.11.1-alpine         5ad9802b809e        3 years ago         69.3MB
cce-pause                                                   2.0                   2b58359142b0        3 years ago         350kB


[root@cce-7day-fudonghai-24106 01CNL]# docker pull tomcat
Using default tag: latest
latest: Pulling from library/tomcat
55cbf04beb70: Pull complete 
1607093a898c: Pull complete 
9a8ea045c926: Pull complete 
1290813abd9d: Pull complete 
8a6b982ad6d7: Pull complete 
abb029e68402: Pull complete 
d068d0a738e5: Pull complete 
42ee47bb0c52: Pull complete 
ae9c861aed25: Pull complete 
60bba9d0dc8d: Pull complete 
15222e409530: Pull complete 
2dcc81b69024: Pull complete 
Digest: sha256:c0f20412acb98efb1af63911d38edca97df76fbf3c0f34de10cc2c56a9f57471
Status: Downloaded newer image for tomcat:latest


[root@cce-7day-fudonghai-24106 01CNL]# docker run -it -d -p 8888:8080 tomcat:latest
ee355280967236ab6eace5f98e5aa53edcb4026dece869f5366a829523beb464


[root@cce-7day-fudonghai-24106 01CNL]# docker ps
CONTAINER ID        IMAGE                COMMAND                  CREATED             STATUS              PORTS                    NAMES
ee3552809672        tomcat:latest        "catalina.sh run"        6 seconds ago       Up 5 seconds        0.0.0.0:8888->8080/tcp   vibrant_burnell

 

Enter this in the browser:

http://122.112.252.69:8888/

It cannot be accessed. Go to the ECS console (the corresponding setting could not be found in the CCE console) --> Access Control --> Security Groups; there are three security groups, presumably all created by the CCE engine:

cce-7day-fudonghai-cce-node-e77y
Sys-default
cce-7day-fudonghai-cce-control-e77y

Click into the cce-7day-fudonghai-cce-node-e77y entry and add access control rules:

TCP : 8000-9000    IPv4    0.0.0.0/0  --
UDP : 8000-9000    IPv4    0.0.0.0/0  --

Refresh the browser again and the tomcat page appears. Finally, stop the container:

[root@cce-7day-fudonghai-24106 01CNL]# docker stop ee3552809672
ee3552809672

[root@cce-7day-fudonghai-24106 01CNL]# docker ps
CONTAINER ID        IMAGE                COMMAND                  CREATED             STATUS              PORTS               NAMES
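The stopped container still exists in docker ps -a; as an optional cleanup it can be removed as well, e.g.:

docker rm ee3552809672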

 

Lab

1. Creating, configuring, and purchasing the CCE cluster: see the assignment document

2. Using the container image service (SWR), push a specified public image into your own private image repository

 0) Force-delete the previously used image

[root@cce-7day-fudonghai-24106 01CNL]# docker  rmi -f 77c91a2d6c53
Untagged: swr.cn-east-2.myhuaweicloud.com/fudonghai/tank:v1.0
Untagged: swr.cn-east-2.myhuaweicloud.com/fudonghai/tank@sha256:c4ecb266f091fdf5ed37e78837f358d650be5d2d160aff00941569a0ac148aad
Untagged: swr.cn-north-1.myhuaweicloud.com/hwstaff_m00402518/tank:v1.0
Untagged: swr.cn-north-1.myhuaweicloud.com/hwstaff_m00402518/tank@sha256:c4ecb266f091fdf5ed37e78837f358d650be5d2d160aff00941569a0ac148aad
Deleted: sha256:77c91a2d6c53a8134c6838e5e25973eefaacf62548139043dada9ebe5bce4ef0
Deleted: sha256:17ce620a1c4218b5f6bd02e8b399a2501a9822e99c2efc9b1c54ca849a15f5d5

 

 1) Pull the image

[root@cce-7day-fudonghai-24106 01CNL]#  docker pull swr.cn-north-1.myhuaweicloud.com/hwstaff_m00402518/tank:v1.0
v1.0: Pulling from hwstaff_m00402518/tank
802b00ed6f79: Already exists 
e9d0e0ea682b: Already exists 
d8b7092b9221: Already exists 
d9bf1d47fd56: Pull complete 
Digest: sha256:c4ecb266f091fdf5ed37e78837f358d650be5d2d160aff00941569a0ac148aad
Status: Downloaded newer image for swr.cn-north-1.myhuaweicloud.com/hwstaff_m00402518/tank:v1.0

Note that the re-pulled image has the same IMAGE ID as before.

 

2) Tag it for your own repository:

docker   tag  swr.cn-north-1.myhuaweicloud.com/hwstaff_m00402518/tank:v1.0 swr.cn-east-2.myhuaweicloud.com/fudonghai/tank:v1.0

It is actually one copy of data with two tags:

swr.cn-east-2.myhuaweicloud.com/fudonghai/tank              v1.0                  77c91a2d6c53        5 months ago        112MB
swr.cn-north-1.myhuaweicloud.com/hwstaff_m00402518/tank     v1.0                  77c91a2d6c53        5 months ago        112MB

 

3) Log in to your own docker registry. On the CCE management page, click Image Repository to open the container image service (SWR), click Overview, copy the login command shown there and run it; the prompt "Login Succeeded" means it worked.

docker login -u cn-east-2@AsFCPS2h0jL1JQQ17AHk -p bfd23e9955d00ba3f9953c81d45d4c089b9f88f43f732f36966c7ecd939e0ecf swr.cn-east-2.myhuaweicloud.com    

It follows the format below:

docker login -u <account (obtained from SWR)> -p <password (obtained from SWR)> swr.cn-east-2.myhuaweicloud.com
(the account and password here are generated automatically by Huawei Cloud)

 

4) Push the image to your own repository; afterwards, a new image named tank appears on the My Images page.

[root@cce-7day-fudonghai-24106 01CNL]# docker push swr.cn-east-2.myhuaweicloud.com/fudonghai/tank
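On this Docker version, pushing without a tag pushes the repository's locally present tags; to push only the v1.0 tag explicitly, the equivalent would be:

docker push swr.cn-east-2.myhuaweicloud.com/fudonghai/tank:v1.0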

 

Part 2

Day 2: Introduction to Kubernetes basics

Lesson 1: The CKA syllabus and basic K8S concepts

The slides and content of the two courses are almost identical.

 

In-class lab

[root@cce-7day-fudonghai-24106 01CNL]# docker pull swr.cn-south-1.myhuaweicloud.com/kevin-wangzefeng/cce-kubectl:v1
v1: Pulling from kevin-wangzefeng/cce-kubectl
6c3eb4525275: Pull complete 
5f70bf18a086: Pull complete 
07195e1407cb: Pull complete 
91f80218be79: Pull complete 
c16157d8ae47: Pull complete 
9192f9d33ba2: Pull complete 
8cb6c9ac22d1: Pull complete 
aa78cd0bc75c: Pull complete 
8f7c2c7f8d57: Pull complete 
5358690ca7c4: Pull complete 
bc72688d8ec4: Pull complete 
e5ba68ed6b9e: Pull complete 
9a0122677c09: Pull complete 
b2faa4a30ed2: Pull complete 
2e107f3c2172: Pull complete 
ccd7ca8624d3: Pull complete 
Digest: sha256:934fe455e44ef10e850979b9f150867db391b2c9a95a9f1ea4a23c3af1922ba7
Status: Downloaded newer image for swr.cn-south-1.myhuaweicloud.com/kevin-wangzefeng/cce-kubectl:v1

On the CCE page, configure a stateless (Deployment) workload. When creating the service, expose node + port 3000; then go to Workloads > Deployments to view it and click Access.

 

Create a stateless Deployment workload with the kubectl command line:

[root@cce-7day-fudonghai-24106 01CNL]# kubectl run nginx --image nginx --port 80
deployment.apps/nginx created
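Note: on newer kubectl versions, kubectl run creates a bare Pod rather than a Deployment; there the rough equivalent (a sketch, exact flags vary by version) would be:

kubectl create deployment nginx --image=nginx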

Check:

[root@cce-7day-fudonghai-24106 01CNL]# kubectl get deploy
NAME          READY     UP-TO-DATE   AVAILABLE   AGE
cka-kubectl   1/1       1            1           15m
nginx         1/1       1            1           6m8s

Or:

[root@cce-7day-fudonghai-24106 01CNL]# kubectl get deploy/nginx
NAME      READY     UP-TO-DATE   AVAILABLE   AGE
nginx     1/1       1            1           6m54s

Add -owide to show more supplementary information:

[root@cce-7day-fudonghai-24106 01CNL]# kubectl get deploy/nginx -owide
NAME      READY     UP-TO-DATE   AVAILABLE   AGE       CONTAINERS   IMAGES    SELECTOR
nginx     1/1       1            1           7m59s     nginx        nginx     run=nginx
[root@cce-7day-fudonghai-24106 01CNL]# kubectl get pod -owide
NAME                           READY     STATUS    RESTARTS   AGE       IP            NODE            NOMINATED NODE   READINESS GATES
cka-kubectl-6b4cc7f476-bmt64   1/1       Running   0          18m       172.16.0.24   192.168.0.184   <none>           <none>
nginx-57867cc648-4qv6k         1/1       Running   0          8m45s     172.16.0.25   192.168.0.184   <none>           <none>
[root@cce-7day-fudonghai-24106 01CNL]# kubectl describe deployment nginx
Name:                   nginx
Namespace:              default
CreationTimestamp:      Wed, 24 Jul 2019 16:31:15 +0800
Labels:                 run=nginx
Annotations:            deployment.kubernetes.io/revision=1
Selector:               run=nginx
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  run=nginx
  Containers:
   nginx:
    Image:        nginx
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   nginx-57867cc648 (1/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  10m   deployment-controller  Scaled up replica set nginx-57867cc648 to 1

 

Scale out to 2 pods:

[root@cce-7day-fudonghai-24106 01CNL]# kubectl scale deployment nginx --replicas=2
deployment.extensions/nginx scaled

Scale-out succeeded:

[root@cce-7day-fudonghai-24106 01CNL]# kubectl get deploy/nginx -owide
NAME      READY     UP-TO-DATE   AVAILABLE   AGE       CONTAINERS   IMAGES    SELECTOR
nginx     2/2       2            2           14m       nginx        nginx     run=nginx

 

Tip

Use the run command to generate a yaml file instead of actually creating the object:

[root@cce-7day-fudonghai-24106 ~]# kubectl run --image=nginx my-deploy -o yaml --dry-run > my-deploy.yaml
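The generated file can then be edited as needed and used to actually create the object:

kubectl create -f my-deploy.yaml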

 

Auto-completion can be enabled with the following commands:

source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc

If this reports an error, install/upgrade the bash-completion package:

yum install bash-completion

 

Look up resource definitions:

[root@cce-7day-fudonghai-24106 027day]# kubectl explain pod.spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution

 

 

Homework lab

Day 2 homework: log in to the node with a tool (putty, xshell) and configure kubectl. The commands needed:

wget https://cce-storage.obs.cn-north-1.myhwclouds.com/kubectl.zip
unzip kubectl.zip 
chmod 750 kubectl/kubectl 
mv kubectl/kubectl /usr/local/bin/

mkdir -p $HOME/.kube
mv -f kubeconfig.json $HOME/.kube/config
kubectl config use-context internal

Be aware that kubeconfig.json differs between accounts and between newly created clusters; with the wrong one, kubectl will not work properly.
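To confirm the right kubeconfig is in effect, a quick sanity check (a sketch):

kubectl config current-context
kubectl config view --minify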

Check the cluster status with the following commands:

[root@cce-7day-fudonghai-24106 ~]# kubectl cluster-info
Kubernetes master is running at https://192.168.0.252:5443
CoreDNS is running at https://192.168.0.252:5443/api/v1/namespaces/kube-system/services/coredns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
You have new mail in /var/spool/mail/root
[root@cce-7day-fudonghai-24106 ~]# kubectl get nodes
NAME            STATUS    ROLES     AGE       VERSION
192.168.0.184   Ready     <none>    24d       v1.13.7-r0-CCE2.0.24.B001
[root@cce-7day-fudonghai-24106 ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                       READY     STATUS    RESTARTS   AGE
kube-system   coredns-646fc859df-29tkg   0/1       Pending   0          24d
kube-system   coredns-646fc859df-zc6w7   1/1       Running   2          24d
kube-system   icagent-mlzw2              1/1       Running   46         24d
kube-system   storage-driver-q9jcz       1/1       Running   2          24d
[root@cce-7day-fudonghai-24106 ~]# kubectl get componentstatus
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok                   
scheduler            Healthy   ok                   
etcd-4-events        Healthy   {"health": "true"}   
etcd-1               Healthy   {"health": "true"}   
etcd-2-events        Healthy   {"health": "true"}   
etcd-0               Healthy   {"health": "true"}   
etcd-3-events        Healthy   {"health": "true"}   

 

Part 3

Day 3: Analysis of Kubernetes pod scheduling

Lesson 2: Scheduling management hands-on

 

In-class lab

Node definition: pay attention to the allocatable field, which is the amount of system resources that can actually be allocated (capacity minus the resources reserved by k8s and other system components).

[root@cce-7day-fudonghai-24106 ~]# kubectl get node
NAME            STATUS    ROLES     AGE       VERSION
192.168.0.184   Ready     <none>    24d       v1.13.7-r0-CCE2.0.24.B001
[root@cce-7day-fudonghai-24106 ~]# kubectl get node 192.168.0.184 -o yaml
apiVersion: v1
kind: Node
metadata:
  annotations:
    huawei.com/gpu-status: '[]'
status:
  addresses:
  - address: 192.168.0.184
    type: InternalIP
  - address: 192.168.0.184
    type: Hostname
  allocatable:
    attachable-volumes-hc-all-mode-disk: "22"
    attachable-volumes-hc-scsi-mode-disk: "22"
    attachable-volumes-hc-vbd-mode-disk: "22"
    cce/eni: "10"
    cpu: 1930m
    ephemeral-storage: "9387421271"
    hugepages-1Gi: "0"
    hugepages-2Mi: "0"
    memory: 2151520Ki
    pods: "110"
  capacity:
    attachable-volumes-hc-all-mode-disk: "22"
    attachable-volumes-hc-scsi-mode-disk: "22"
    attachable-volumes-hc-vbd-mode-disk: "22"
    cce/eni: "10"
    cpu: "2"
    ephemeral-storage: 10186004Ki
    hugepages-1Gi: "0"
    hugepages-2Mi: "0"
    memory: 3880032Ki
    pods: "110"
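To read just the allocatable values without dumping the full yaml, a jsonpath query works (a sketch, using the node name above):

kubectl get node 192.168.0.184 -o jsonpath='{.status.allocatable.cpu}{"\n"}{.status.allocatable.memory}{"\n"}'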

 

Pod definition:

apiVersion: v1
kind: Pod
metadata:
  name: day3-pod-fudonghai
  labels:
    app:  day3-pod-app
spec:
  containers:
  - image: nginx
    imagePullPolicy: Always
    name: my-container
    ports:
    - containerPort: 80
      protocol: TCP
    resources:
      requests:
        memory: "100Mi"
        cpu: "100m"
      limits:
        memory: "200Mi"
        cpu: "200m"
  #nodeName: 192.168.0.184           #scheduling result, filled in by the system after scheduling
  schedulerName: default-scheduler   #the scheduler that performs the scheduling
  restartPolicy: Always
  nodeSelector:               #must match node labels; these were copied from the system's node, so they all match and the pod runs normally
    #disktype: ssd            #would not match, leaving the pod Pending
    #node-flavor: s3.large.2  #would not match, leaving the pod Pending
    beta.kubernetes.io/arch: amd64
    beta.kubernetes.io/os: linux
    failure-domain.beta.kubernetes.io/is-baremetal: "false"
    failure-domain.beta.kubernetes.io/region: cn-east-2
    failure-domain.beta.kubernetes.io/zone: cn-east-2a
    kubernetes.io/availablezone: cn-east-2a
    kubernetes.io/hostname: 192.168.0.184
    os.architecture: amd64
    os.name: CentOS_Linux_7_Core
    os.version: 3.10.0-957.5.1.el7.x86_64
  affinity:
    nodeAffinity:                                     #introduces operators, so nodes lacking a given label can be excluded
      requiredDuringSchedulingIgnoredDuringExecution: #hard filter
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In        #operator
            values:
            - 192.168.0.184
      preferredDuringSchedulingIgnoredDuringExecution: #soft scoring: nodes without the specified label score lower, reducing their chance of being selected
      - weight: 1
        preference:
          matchExpressions:
          - key: node-flavor
            operator: In
            values:
            - s3.large.2
    #podAffinity:     #selects nodes based on pods already running in the cluster
    #podAntiAffinity: #keeps certain pods off the same group of nodes; the inverse of podAffinity
  tolerations:                      #advanced scheduling policy
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
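If you wanted the commented-out selectors above (disktype / node-flavor) to actually match, you could label the node accordingly, e.g. (a sketch):

kubectl label node 192.168.0.184 disktype=ssd node-flavor=s3.large.2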

 

node-affinity

apiVersion: v1
kind: Pod
metadata:
  name: node-affinity
  labels:
    run: node-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: has-eip
            operator: In
            values:
            - "yes"
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: container-0

Check the node labels with the command below; at this point there is no "has-eip" label yet.

[root@cce-7day-fudonghai-24106 027day]# kubectl get nodes --show-labels
NAME            STATUS    ROLES     AGE       VERSION                     LABELS
192.168.0.184   Ready     <none>    25d       v1.13.7-r0-CCE2.0.24.B001   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/is-baremetal=false,failure-domain.beta.kubernetes.io/region=cn-east-2,failure-domain.beta.kubernetes.io/zone=cn-east-2a,has-eip=yes,kubernetes.io/availablezone=cn-east-2a,kubernetes.io/hostname=192.168.0.184,os.architecture=amd64,os.name=CentOS_Linux_7_Core,os.version=3.10.0-957.5.1.el7.x86_64

At this point the pod is Pending:

[root@cce-7day-fudonghai-24106 027day]# kubectl get pod
NAME            READY     STATUS    RESTARTS   AGE
node-affinity   0/1       Pending   0          5s

On the CCE management page, go to "Node Management", find the only node, click "Label Management", and manually add a "has-eip" label with the value yes. Check the pod status again: it is now Running.

[root@cce-7day-fudonghai-24106 027day]# kubectl get pod
NAME            READY     STATUS    RESTARTS   AGE
node-affinity   1/1       Running   0          7m55s
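The same label could have been added from the command line instead of the console (a sketch):

kubectl label node 192.168.0.184 has-eip=yes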

 

pod-affinity: affinity, at the "node" level, with the pod labeled run: node-affinity (the pod created above).

apiVersion: v1
kind: Pod
metadata:
  name: pod-affinity
  labels:
    run: pod-affinity
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: run
            operator: In
            values:
            - "node-affinity"
        topologyKey: kubernetes.io/hostname
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: container-0

This pod can only go Running after the previous pod is up. The typical use case is co-locating with a specific group of pods, e.g. frontend + backend, so that traffic stays within the same AZ (free of charge, and faster).

[root@cce-7day-fudonghai-24106 027day]# kubectl get pod
NAME            READY     STATUS    RESTARTS   AGE
node-affinity   0/1       Pending   0          9s
pod-affinity    0/1       Pending   0          41s
[root@cce-7day-fudonghai-24106 027day]# kubectl get pod
NAME            READY     STATUS              RESTARTS   AGE
node-affinity   0/1       ContainerCreating   0          53s
pod-affinity    1/1       Running             0          85s
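As mentioned above, the co-location scope is controlled by topologyKey; for AZ-level rather than node-level affinity, it could be switched to the zone label seen on the node (a sketch):

        topologyKey: failure-domain.beta.kubernetes.io/zone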

 

pod-anti-affinity: anti-affinity with the previous pod, keeping certain Pods off the same group of Nodes. Differences from podAffinity: 1) the matching process is the same; 2) the scheduling result is inverted at the end.

apiVersion: v1
kind: Pod
metadata:
  name: pod-anti-affinity
  labels:
    run: pod-anti-affinity
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: run
            operator: In
            values:
            - "node-affinity"
        topologyKey: kubernetes.io/hostname
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: container-0

Because there is only one node, scheduling fails:

[root@cce-7day-fudonghai-24106 027day]# kubectl get pod
NAME                READY     STATUS    RESTARTS   AGE
node-affinity       1/1       Running   0          18m
pod-affinity        1/1       Running   0          19m
pod-anti-affinity   0/1       Pending   0          7s

Check the failure reason:

[root@cce-7day-fudonghai-24106 027day]# kubectl describe pod pod-anti-affinity
Name:               pod-anti-affinity
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               <none>
Labels:             run=pod-anti-affinity
Annotations:        <none>
Status:             Pending
IP:                 

Conditions:
  Type           Status
  PodScheduled   False 
Volumes:

Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  48s (x18 over 12m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod affinity/anti-affinity, 1 node(s) didn't match pod anti-affinity rules.

 

pod-tolerations

apiVersion: v1
kind: Pod
metadata:
  name: pod-tolerations
  labels:
    run: pod-tolerations
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: container-0
  tolerations:
  - key: gpu
    operator: Equal
    value: "yes"
    effect: NoSchedule

First taint the node so that scheduling fails. Taints keep Pods off specific Nodes: they are special labels carrying an effect that repels Pods.

[root@cce-7day-fudonghai-24106 027day]# kubectl taint node 192.168.0.184 gpu=no:NoSchedule
node/192.168.0.184 tainted
[root@cce-7day-fudonghai-24106 027day]# kubectl get pod -owide
NAME                READY     STATUS    RESTARTS   AGE       IP            NODE            NOMINATED NODE   READINESS GATES
node-affinity       1/1       Running   0          41m       172.16.0.22   192.168.0.184   <none>           <none>
pod-affinity        1/1       Running   0          42m       172.16.0.21   192.168.0.184   <none>           <none>
pod-anti-affinity   0/1       Pending   0          22m       <none>        <none>          <none>           <none>
pod-tolerations     0/1       Pending   0          5s        <none>        <none>          <none>           <none>

Without this taint, the pod would be scheduled immediately; with it, the pod stays Pending because its toleration is for gpu=yes while the node's taint is gpu=no, so they do not match.
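For the toleration above to match, the node's taint would need to carry the value yes instead, e.g. (a sketch, replacing the existing taint):

kubectl taint node 192.168.0.184 gpu=yes:NoSchedule --overwrite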

 

Homework lab

1. From the command line, create a pod from the nginx image and manually schedule it onto one node in the cluster.

apiVersion: v1
kind: Pod
metadata:
  name: cce7days-fudonghai
  labels:
    app: nginx
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
            - 192.168.0.184  #private IP address of the node
  containers:
  - image: swr.cn-east-2.myhuaweicloud.com/fudonghai/tank:v1.0    #container image address
    imagePullPolicy: IfNotPresent
    name: container-0
    resources: {}
  dnsPolicy: ClusterFirst
  imagePullSecrets:
  - name: default-secret
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}

At first the pod did not start; checking the pod status shows why:

[root@cce-7day-fudonghai-24106 027day]# kubectl describe pod cce7days-fudonghai 
Name:               cce7days-fudonghai
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               <none>
Labels:             app=nginx
Annotations:        <none>
Status:             Pending
IP:                 
Containers:
  container-0:
    Image:        swr.cn-east-2.myhuaweicloud.com/fudonghai/tank:v1.0
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-9rk4h (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  default-token-9rk4h:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-9rk4h
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  52s (x2 over 52s)  default-scheduler  0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.

Manually removing the node's taint lets it run:

[root@cce-7day-fudonghai-24106 027day]# kubectl taint node 192.168.0.184 gpu:NoSchedule-
node/192.168.0.184 untainted
[root@cce-7day-fudonghai-24106 027day]# kubectl get pod -owide
NAME                 READY     STATUS    RESTARTS   AGE       IP            NODE            NOMINATED NODE   READINESS GATES
cce7days-fudonghai   1/1       Running   0          62m       172.16.0.24   192.168.0.184   <none>           <none>

 

2. From the command line, create a deployment with 2 pods whose own pods are anti-affine to each other at the node level.

Before starting this lab, buy one more node; pick the cheapest pay-per-use option at 0.42/hour, and allow a few minutes for it to be created. The two nodes are not in the same availability zone, but they are in the same cluster.

[root@cce-7day-fudonghai-24106 027day]# kubectl get node -owide
NAME            STATUS    ROLES     AGE       VERSION                     INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION              CONTAINER-RUNTIME
192.168.0.184   Ready     <none>    26d       v1.13.7-r0-CCE2.0.24.B001   192.168.0.184   <none>        CentOS Linux 7 (Core)   3.10.0-957.5.1.el7.x86_64   docker://18.9.0
192.168.0.187   Ready     <none>    93s       v1.13.4-r0-CCE2.0.23.B001   192.168.0.187   <none>        CentOS Linux 7 (Core)   3.10.0-957.5.1.el7.x86_64   docker://18.9.0

The deployment yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: cce7days-app1-fudonghai
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: cce7days-app1-fudonghai
  template:
    metadata:
      labels:
        app: cce7days-app1-fudonghai
    spec:
      containers:
       - image: nginx
         name: container-0
         imagePullPolicy: IfNotPresent
      restartPolicy: Always
      dnsPolicy: ClusterFirst
      imagePullSecrets:
        - name: default-secret
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - cce7days-app1-fudonghai
            topologyKey: kubernetes.io/hostname
      schedulerName: default-scheduler

Create and check:

[root@cce-7day-fudonghai-24106 027day]# kubectl create -f anti-affinity-deployment.yaml 
deployment.apps/cce7days-app1-fudonghai created
[root@cce-7day-fudonghai-24106 027day]# kubectl get pod -owide
NAME                                       READY     STATUS    RESTARTS   AGE       IP            NODE            NOMINATED NODE   READINESS GATES
cce7days-app1-fudonghai-54d59459b9-8lwzb   1/1       Running   0          10s       172.16.0.26   192.168.0.184   <none>           <none>
cce7days-app1-fudonghai-54d59459b9-9485l   1/1       Running   0          10s       172.16.0.36   192.168.0.187   <none>           <none>

 

 

3. From the command line, create a deployment with 2 pods, and configure its pods to be affine, at the node level, to the pods of the first deployment.

The deployment yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: cce7days-app2-fudonghai
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: cce7days-app2-fudonghai
  template:
    metadata:
      labels:
        app: cce7days-app2-fudonghai
    spec:
      containers:
       - image: nginx
         name: container-0
      imagePullSecrets:
        - name: default-secret
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - cce7days-app1-fudonghai
              topologyKey: kubernetes.io/hostname

Create and check:

[root@cce-7day-fudonghai-24106 027day]# kubectl create -f affinity-deployment.yaml 
deployment.apps/cce7days-app2-fudonghai created
[root@cce-7day-fudonghai-24106 027day]# kubectl get pod -owide
NAME                                       READY     STATUS    RESTARTS   AGE       IP            NODE            NOMINATED NODE   READINESS GATES
cce7days-app1-fudonghai-54d59459b9-8lwzb   1/1       Running   0          4m44s     172.16.0.26   192.168.0.184   <none>           <none>
cce7days-app1-fudonghai-54d59459b9-9485l   1/1       Running   0          4m44s     172.16.0.36   192.168.0.187   <none>           <none>
cce7days-app2-fudonghai-8fbd78c48-5gr7m    1/1       Running   0          4s        172.16.0.37   192.168.0.187   <none>           <none>
cce7days-app2-fudonghai-8fbd78c48-645z4    1/1       Running   0          4s        172.16.0.27   192.168.0.184   <none>           <none>

 

 

Part 4

Day 4: Analysis of the Kubernetes application lifecycle

Lesson 3: K8S logging, monitoring and application management hands-on

 

In-class lab

 

[root@cce-7day-fudonghai-24106 027day]# kubectl cluster-info
Kubernetes master is running at https://192.168.0.252:5443
CoreDNS is running at https://192.168.0.252:5443/api/v1/namespaces/kube-system/services/coredns:dns/proxy
[root@cce-7day-fudonghai-24106 027day]# kubectl cluster-info dump > a.txt
[root@cce-7day-fudonghai-24106 027day]# kubectl get pod -n kube-system
NAME                       READY     STATUS    RESTARTS   AGE
coredns-646fc859df-6m5cf   0/1       Pending   0          20h
coredns-646fc859df-zc6w7   1/1       Running   2          27d
icagent-mlzw2              1/1       Running   52         27d
storage-driver-q9jcz       1/1       Running   2          27d
[root@cce-7day-fudonghai-24106 027day]# kubectl describe pod coredns-646fc859df-6m5cf -n kube-system
Name:               coredns-646fc859df-6m5cf
Namespace:          kube-system
Priority:           0
PriorityClassName:  <none>
Node:               <none>
Labels:             app=coredns
                    k8s-app=coredns
                    kubernetes.io/evictcritical=
                    pod-template-hash=646fc859df
                    release=cceaddon-coredns
Annotations:        checksum/config=3095a9b4028195e7e0b8b22c550bf183d0b7a8a7eba20808b36081d0b39f8b81
                    scheduler.alpha.kubernetes.io/critical-pod=
                    scheduler.alpha.kubernetes.io/tolerations=[{"key":"CriticalAddonsOnly", "operator":"Exists"}]
Status:             Pending
IP:                 
Controlled By:      ReplicaSet/coredns-646fc859df
Containers:
  coredns:
    Image:      100.125.17.64:20202/hwofficial/cce-coredns-linux-amd64:1.2.6.1
    Port:       5353/UDP
    Host Port:  0/UDP
    Args:
      -conf
      /etc/coredns/Corefile
      -rmem
      udp#8388608
      -wmem
      udp#1048576
    Limits:
      cpu:     500m
      memory:  512Mi
    Requests:
      cpu:        500m
      memory:     512Mi
    Liveness:     http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
    Readiness:    http-get http://:8080/health delay=3s timeout=3s period=5s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /etc/coredns from config-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from coredns-token-vrbw4 (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      coredns
    Optional:  false
  coredns-token-vrbw4:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  coredns-token-vrbw4
    Optional:    false
QoS Class:       Guaranteed
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 60s
                 node.kubernetes.io/unreachable:NoExecute for 60s
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  4m (x1567 over 20h)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod affinity/anti-affinity, 1 node(s) didn't satisfy existing pods anti-affinity rules.
[root@cce-7day-fudonghai-24106 027day]# kubectl run redis --image=redis
deployment.apps/redis created
[root@cce-7day-fudonghai-24106 027day]# kubectl get pod
NAME                     READY     STATUS    RESTARTS   AGE
redis-785f9d6bfb-sp5n8   1/1       Running   0          11s

 

View the redis container logs:

[root@cce-7day-fudonghai-24106 027day]# kubectl logs -f redis-785f9d6bfb-sp5n8 -c redis
1:C 28 Jul 2019 03:21:39.454 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 28 Jul 2019 03:21:39.454 # Redis version=5.0.1, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 28 Jul 2019 03:21:39.454 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
1:M 28 Jul 2019 03:21:39.463 * Running mode=standalone, port=6379.
1:M 28 Jul 2019 03:21:39.463 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
1:M 28 Jul 2019 03:21:39.463 # Server initialized
1:M 28 Jul 2019 03:21:39.463 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
1:M 28 Jul 2019 03:21:39.463 * Ready to accept connections

Run a deployment with 2 replicas:

[root@cce-7day-fudonghai-24106 027day]# kubectl run nginx --image=nginx --replicas=2
deployment.apps/nginx created
[root@cce-7day-fudonghai-24106 027day]# kubectl get deploy
NAME      READY     UP-TO-DATE   AVAILABLE   AGE
nginx     2/2       2            2           10s
[root@cce-7day-fudonghai-24106 027day]# kubectl get pod
NAME                     READY     STATUS    RESTARTS   AGE
nginx-7cdbd8cdc9-6jpwd   1/1       Running   0          21s
nginx-7cdbd8cdc9-q5x6z   1/1       Running   0          21s

Exec into the pod's container:

[root@cce-7day-fudonghai-24106 027day]# kubectl exec -it nginx-7cdbd8cdc9-q5x6z /bin/sh
# ls
bin  boot  dev    etc  home  lib    lib64  media  mnt  opt    proc  root  run  sbin  srv  sys  tmp  usr  var
# exit

 

Upgrade the container image:

[root@cce-7day-fudonghai-24106 027day]# kubectl set image deploy nginx nginx=nginx:1.9.1
deployment.extensions/nginx image updated
[root@cce-7day-fudonghai-24106 027day]# kubectl get pod 
NAME                     READY     STATUS              RESTARTS   AGE
nginx-7cdbd8cdc9-6jpwd   1/1       Running             0          8h
nginx-7cdbd8cdc9-q5x6z   1/1       Running             0          8h
nginx-8676fdbb6d-ljwsz   0/1       ContainerCreating   0          19s
[root@cce-7day-fudonghai-24106 027day]# kubectl get pod 
NAME                     READY     STATUS              RESTARTS   AGE
nginx-7cdbd8cdc9-6jpwd   1/1       Running             0          8h
nginx-7cdbd8cdc9-q5x6z   1/1       Running             0          8h
nginx-8676fdbb6d-ljwsz   0/1       ContainerCreating   0          28s
[root@cce-7day-fudonghai-24106 027day]# kubectl get pod 
NAME                     READY     STATUS        RESTARTS   AGE
nginx-7cdbd8cdc9-q5x6z   0/1       Terminating   0          8h
nginx-8676fdbb6d-ljwsz   1/1       Running       0          42s
nginx-8676fdbb6d-zbbx8   1/1       Running       0          10s
[root@cce-7day-fudonghai-24106 027day]# kubectl get pod 
NAME                     READY     STATUS    RESTARTS   AGE
nginx-8676fdbb6d-ljwsz   1/1       Running   0          2m51s
nginx-8676fdbb6d-zbbx8   1/1       Running   0          2m19s
[root@cce-7day-fudonghai-24106 027day]# kubectl rollout status deploy nginx
deployment "nginx" successfully rolled out

View the rollout history:

[root@cce-7day-fudonghai-24106 027day]# kubectl rollout history deploy nginx
deployments "nginx"
REVISION  CHANGE-CAUSE
1         <none>
2         <none>

[root@cce-7day-fudonghai-24106 027day]# kubectl rollout history deploy nginx --revision=2
deployments "nginx" with revision #2
Pod Template:
  Labels:    pod-template-hash=8676fdbb6d
    run=nginx
  Containers:
   nginx:
    Image:    nginx:1.9.1
    Port:    <none>
    Host Port:    <none>
    Environment:    <none>
    Mounts:    <none>
  Volumes:    <none>

[root@cce-7day-fudonghai-24106 027day]# kubectl rollout history deploy nginx --revision=1
deployments "nginx" with revision #1
Pod Template:
  Labels:    pod-template-hash=7cdbd8cdc9
    run=nginx
  Containers:
   nginx:
    Image:    nginx
    Port:    <none>
    Host Port:    <none>
    Environment:    <none>
    Mounts:    <none>
  Volumes:    <none>

Change both maxSurge and maxUnavailable to 2:

[root@cce-7day-fudonghai-24106 027day]# kubectl edit deploy nginx
deployment.extensions/nginx edited
  strategy:
    rollingUpdate:
      maxSurge: 2
      maxUnavailable: 2
    type: RollingUpdate
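The same change can also be made non-interactively with kubectl patch (a sketch):

kubectl patch deploy nginx -p '{"spec":{"strategy":{"rollingUpdate":{"maxSurge":2,"maxUnavailable":2}}}}'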

Trigger another rollout by changing resources:

[root@cce-7day-fudonghai-24106 027day]# kubectl set resources deploy nginx -c=nginx --limits=cpu=200m,memory=256Mi
deployment.extensions/nginx resource requirements updated
[root@cce-7day-fudonghai-24106 027day]# kubectl get pod
NAME                     READY     STATUS    RESTARTS   AGE
nginx-6bc5c66cdd-dqwdm   1/1       Running   0          14s
nginx-6bc5c66cdd-xz85d   1/1       Running   0          14s
[root@cce-7day-fudonghai-24106 027day]# kubectl get pod
NAME                     READY     STATUS    RESTARTS   AGE
nginx-6bc5c66cdd-dqwdm   1/1       Running   0          19s
nginx-6bc5c66cdd-xz85d   1/1       Running   0          19s
[root@cce-7day-fudonghai-24106 027day]# kubectl rollout history deploy nginx --revision=3
deployments "nginx" with revision #3
Pod Template:
  Labels:    pod-template-hash=6bc5c66cdd
    run=nginx
  Containers:
   nginx:
    Image:    nginx:1.9.1
    Port:    <none>
    Host Port:    <none>
    Limits:
      cpu:    200m
      memory:    256Mi
    Environment:    <none>
    Mounts:    <none>
  Volumes:    <none>

Roll back to revision 2:

[root@cce-7day-fudonghai-24106 027day]# kubectl  rollout undo deploy nginx --to-revision=2
deployment.extensions/nginx

Scale out horizontally:

[root@cce-7day-fudonghai-24106 027day]# kubectl scale deploy nginx --replicas=4
deployment.extensions/nginx scaled
[root@cce-7day-fudonghai-24106 027day]# kubectl get pod
NAME                     READY     STATUS              RESTARTS   AGE
nginx-8676fdbb6d-2g4np   1/1       Running             0          3m39s
nginx-8676fdbb6d-c9k78   0/1       ContainerCreating   0          7s
nginx-8676fdbb6d-rj4np   1/1       Running             0          3m39s
nginx-8676fdbb6d-vjtln   0/1       ContainerCreating   0          7s
[root@cce-7day-fudonghai-24106 027day]# kubectl get pod
NAME                     READY     STATUS              RESTARTS   AGE
nginx-8676fdbb6d-2g4np   1/1       Running             0          3m45s
nginx-8676fdbb6d-c9k78   0/1       ContainerCreating   0          13s
nginx-8676fdbb6d-rj4np   1/1       Running             0          3m45s
nginx-8676fdbb6d-vjtln   1/1       Running             0          13s

[root@cce-7day-fudonghai-24106 027day]# kubectl rollout history deploy nginx
deployments "nginx"
REVISION  CHANGE-CAUSE
1         <none>
3         <none>
4         <none>

 

 

Homework lab

1. Using a Deployment, create 1 Pod from the redis image, then retrieve the redis startup log with kubectl. Check-in: upload screenshots of the commands used and the complete Deployment yaml.

The yaml file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: cce7days-app3-fudonghai
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cce7days-app3-fudonghai
  template:
    metadata:
      labels:
        app: cce7days-app3-fudonghai
    spec:
      containers:
       - image: 'redis:latest'
         name: container-0
      imagePullSecrets:
        - name: default-secret
    # this affinity setting schedules the pod onto the node that has an EIP, so that external images can be pulled
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                - 192.168.0.184

Commands used:

[root@cce-7day-fudonghai-24106 027day]# kubectl create -f day4-redis-deployment.yaml 
deployment.apps/cce7days-app3-fudonghai created
[root@cce-7day-fudonghai-24106 027day]# kubectl get pod -owide
NAME                                      READY     STATUS    RESTARTS   AGE       IP            NODE            NOMINATED NODE   READINESS GATES
cce7days-app3-fudonghai-b4f5bf6d6-b5xrv   1/1       Running   0          20s       172.16.0.26   192.168.0.184   <none>           <none>
[root@cce-7day-fudonghai-24106 027day]# kubectl logs -f cce7days-app3-fudonghai-b4f5bf6d6-b5xrv -c container-0
1:C 29 Jul 2019 00:51:04.808 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 29 Jul 2019 00:51:04.808 # Redis version=5.0.1, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 29 Jul 2019 00:51:04.808 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
1:M 29 Jul 2019 00:51:04.811 * Running mode=standalone, port=6379.
1:M 29 Jul 2019 00:51:04.811 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
1:M 29 Jul 2019 00:51:04.811 # Server initialized
1:M 29 Jul 2019 00:51:04.811 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
1:M 29 Jul 2019 00:51:04.811 * Ready to accept connections
^C

 

 

2. From the command line, create a deployment with 3 replicas using the nginx:latest image, then perform a rolling upgrade to nginx:1.9.1.

The yaml file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: cce7days-app4-fudonghai
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: cce7days-app4-fudonghai
  template:
    metadata:
      labels:
        app: cce7days-app4-fudonghai
    spec:
      containers:
       - image: 'nginx:latest'
         name: container-0
      imagePullSecrets:
        - name: default-secret
    # this affinity setting schedules the pod onto the node that has an EIP, so that external images can be pulled
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                - 192.168.0.184

Commands used:

[root@cce-7day-fudonghai-24106 027day]# kubectl create -f day4-nginx-deployment.yaml 
deployment.apps/cce7days-app4-fudonghai created
[root@cce-7day-fudonghai-24106 027day]# kubectl get pod -owide
NAME                                      READY     STATUS    RESTARTS   AGE       IP            NODE            NOMINATED NODE   READINESS GATES
cce7days-app4-fudonghai-6c5fb9794-5wl6r   1/1       Running   0          14s       172.16.0.29   192.168.0.184   <none>           <none>
cce7days-app4-fudonghai-6c5fb9794-9mqfm   1/1       Running   0          14s       172.16.0.28   192.168.0.184   <none>           <none>
cce7days-app4-fudonghai-6c5fb9794-zgjsk   1/1       Running   0          14s       172.16.0.27   192.168.0.184   <none>           <none>
[root@cce-7day-fudonghai-24106 027day]# kubectl set image deploy cce7days-app4-fudonghai container-0=nginx:1.9.1
deployment.extensions/cce7days-app4-fudonghai image updated
[root@cce-7day-fudonghai-24106 027day]# kubectl get pod -o yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: Pod
  metadata:
    creationTimestamp: 2019-07-29T01:10:31Z
    generateName: cce7days-app4-fudonghai-7b4bd886f4-
    labels:
      app: cce7days-app4-fudonghai
      pod-template-hash: 7b4bd886f4
    name: cce7days-app4-fudonghai-7b4bd886f4-5j2pn
    namespace: default
    ownerReferences:
    - apiVersion: apps/v1
      blockOwnerDeletion: true
      controller: true
      kind: ReplicaSet
      name: cce7days-app4-fudonghai-7b4bd886f4
      uid: a13fbe25-b19d-11e9-8bbd-fa163eb87d99
    resourceVersion: "6554233"
    selfLink: /api/v1/namespaces/default/pods/cce7days-app4-fudonghai-7b4bd886f4-5j2pn
    uid: a8cc08d2-b19d-11e9-8bbd-fa163eb87d99
  spec:
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
              - 192.168.0.184
    containers:
    - image: nginx:1.9.1
      imagePullPolicy: Always
      name: container-0
      resources: {}
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      volumeMounts:
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: default-token-9rk4h
        readOnly: true
    dnsConfig:
      options:
      - name: single-request-reopen
        value: ""
      - name: timeout
        value: "2"
    dnsPolicy: ClusterFirst
    enableServiceLinks: true
    imagePullSecrets:
    - name: default-secret
    nodeName: 192.168.0.184
    priority: 0
    restartPolicy: Always
    schedulerName: default-scheduler
    securityContext: {}
    serviceAccount: default
    serviceAccountName: default
    terminationGracePeriodSeconds: 30
    tolerations:
    - effect: NoExecute
      key: node.kubernetes.io/not-ready
      operator: Exists
      tolerationSeconds: 300
    - effect: NoExecute
      key: node.kubernetes.io/unreachable
      operator: Exists
      tolerationSeconds: 300
    volumes:
    - name: default-token-9rk4h
      secret:
        defaultMode: 420
        secretName: default-token-9rk4h
  status:
    conditions:
    - lastProbeTime: null
      lastTransitionTime: 2019-07-29T01:10:31Z
      status: "True"
      type: Initialized
    - lastProbeTime: null
      lastTransitionTime: 2019-07-29T01:10:36Z
      status: "True"
      type: Ready
    - lastProbeTime: null
      lastTransitionTime: 2019-07-29T01:10:36Z
      status: "True"
      type: ContainersReady
    - lastProbeTime: null
      lastTransitionTime: 2019-07-29T01:10:31Z
      status: "True"
      type: PodScheduled
    containerStatuses:
    - containerID: docker://41faea96f702417e1b76fd1b6462c40eb8399564c7da1e15844222946d5877fd
      image: nginx:1.9.1
      imageID: docker-pullable://nginx@sha256:2f68b99bc0d6d25d0c56876b924ec20418544ff28e1fb89a4c27679a40da811b
      lastState: {}
      name: container-0
      ready: true
      restartCount: 0
      state:
        running:
          startedAt: 2019-07-29T01:10:36Z
    hostIP: 192.168.0.184
    phase: Running
    podIP: 172.16.0.19
    qosClass: BestEffort
    startTime: 2019-07-29T01:10:31Z
- apiVersion: v1
  kind: Pod
  metadata:
    creationTimestamp: 2019-07-29T01:10:18Z
    generateName: cce7days-app4-fudonghai-7b4bd886f4-
    labels:
      app: cce7days-app4-fudonghai
      pod-template-hash: 7b4bd886f4
    name: cce7days-app4-fudonghai-7b4bd886f4-k62fv
    namespace: default
    ownerReferences:
    - apiVersion: apps/v1
      blockOwnerDeletion: true
      controller: true
      kind: ReplicaSet
      name: cce7days-app4-fudonghai-7b4bd886f4
      uid: a13fbe25-b19d-11e9-8bbd-fa163eb87d99
    resourceVersion: "6554165"
    selfLink: /api/v1/namespaces/default/pods/cce7days-app4-fudonghai-7b4bd886f4-k62fv
    uid: a1411f99-b19d-11e9-8bbd-fa163eb87d99
  spec:
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
              - 192.168.0.184
    containers:
    - image: nginx:1.9.1
      imagePullPolicy: Always
      name: container-0
      resources: {}
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      volumeMounts:
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: default-token-9rk4h
        readOnly: true
    dnsConfig:
      options:
      - name: single-request-reopen
        value: ""
      - name: timeout
        value: "2"
    dnsPolicy: ClusterFirst
    enableServiceLinks: true
    imagePullSecrets:
    - name: default-secret
    nodeName: 192.168.0.184
    priority: 0
    restartPolicy: Always
    schedulerName: default-scheduler
    securityContext: {}
    serviceAccount: default
    serviceAccountName: default
    terminationGracePeriodSeconds: 30
    tolerations:
    - effect: NoExecute
      key: node.kubernetes.io/not-ready
      operator: Exists
      tolerationSeconds: 300
    - effect: NoExecute
      key: node.kubernetes.io/unreachable
      operator: Exists
      tolerationSeconds: 300
    volumes:
    - name: default-token-9rk4h
      secret:
        defaultMode: 420
        secretName: default-token-9rk4h
  status:
    conditions:
    - lastProbeTime: null
      lastTransitionTime: 2019-07-29T01:10:18Z
      status: "True"
      type: Initialized
    - lastProbeTime: null
      lastTransitionTime: 2019-07-29T01:10:24Z
      status: "True"
      type: Ready
    - lastProbeTime: null
      lastTransitionTime: 2019-07-29T01:10:24Z
      status: "True"
      type: ContainersReady
    - lastProbeTime: null
      lastTransitionTime: 2019-07-29T01:10:18Z
      status: "True"
      type: PodScheduled
    containerStatuses:
    - containerID: docker://c7fa6320cfb4ad5bc6e0a31392bfd2ded5b5168647478a09386b63c69ef96b45
      image: nginx:1.9.1
      imageID: docker-pullable://nginx@sha256:2f68b99bc0d6d25d0c56876b924ec20418544ff28e1fb89a4c27679a40da811b
      lastState: {}
      name: container-0
      ready: true
      restartCount: 0
      state:
        running:
          startedAt: 2019-07-29T01:10:23Z
    hostIP: 192.168.0.184
    phase: Running
    podIP: 172.16.0.30
    qosClass: BestEffort
    startTime: 2019-07-29T01:10:18Z
- apiVersion: v1
  kind: Pod
  metadata:
    creationTimestamp: 2019-07-29T01:10:24Z
    generateName: cce7days-app4-fudonghai-7b4bd886f4-
    labels:
      app: cce7days-app4-fudonghai
      pod-template-hash: 7b4bd886f4
    name: cce7days-app4-fudonghai-7b4bd886f4-rqfh4
    namespace: default
    ownerReferences:
    - apiVersion: apps/v1
      blockOwnerDeletion: true
      controller: true
      kind: ReplicaSet
      name: cce7days-app4-fudonghai-7b4bd886f4
      uid: a13fbe25-b19d-11e9-8bbd-fa163eb87d99
    resourceVersion: "6554200"
    selfLink: /api/v1/namespaces/default/pods/cce7days-app4-fudonghai-7b4bd886f4-rqfh4
    uid: a509133c-b19d-11e9-8bbd-fa163eb87d99
  spec:
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
              - 192.168.0.184
    containers:
    - image: nginx:1.9.1
      imagePullPolicy: Always
      name: container-0
      resources: {}
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      volumeMounts:
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: default-token-9rk4h
        readOnly: true
    dnsConfig:
      options:
      - name: single-request-reopen
        value: ""
      - name: timeout
        value: "2"
    dnsPolicy: ClusterFirst
    enableServiceLinks: true
    imagePullSecrets:
    - name: default-secret
    nodeName: 192.168.0.184
    priority: 0
    restartPolicy: Always
    schedulerName: default-scheduler
    securityContext: {}
    serviceAccount: default
    serviceAccountName: default
    terminationGracePeriodSeconds: 30
    tolerations:
    - effect: NoExecute
      key: node.kubernetes.io/not-ready
      operator: Exists
      tolerationSeconds: 300
    - effect: NoExecute
      key: node.kubernetes.io/unreachable
      operator: Exists
      tolerationSeconds: 300
    volumes:
    - name: default-token-9rk4h
      secret:
        defaultMode: 420
        secretName: default-token-9rk4h
  status:
    conditions:
    - lastProbeTime: null
      lastTransitionTime: 2019-07-29T01:10:24Z
      status: "True"
      type: Initialized
    - lastProbeTime: null
      lastTransitionTime: 2019-07-29T01:10:31Z
      status: "True"
      type: Ready
    - lastProbeTime: null
      lastTransitionTime: 2019-07-29T01:10:31Z
      status: "True"
      type: ContainersReady
    - lastProbeTime: null
      lastTransitionTime: 2019-07-29T01:10:24Z
      status: "True"
      type: PodScheduled
    containerStatuses:
    - containerID: docker://4bd5450bd4516da0710ddcf4e1be29823c535f88fb3149c266d2240ceb475fc4
      image: nginx:1.9.1
      imageID: docker-pullable://nginx@sha256:2f68b99bc0d6d25d0c56876b924ec20418544ff28e1fb89a4c27679a40da811b
      lastState: {}
      name: container-0
      ready: true
      restartCount: 0
      state:
        running:
          startedAt: 2019-07-29T01:10:30Z
    hostIP: 192.168.0.184
    phase: Running
    podIP: 172.16.0.31
    qosClass: BestEffort
    startTime: 2019-07-29T01:10:24Z
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
[root@cce-7day-fudonghai-24106 027day]# kubectl rollout history deploy cce7days-app4-fudonghai 
deployments "cce7days-app4-fudonghai"
REVISION  CHANGE-CAUSE
1         <none>
2         <none>

[root@cce-7day-fudonghai-24106 027day]# kubectl rollout history deployment cce7days-app4-fudonghai --revision=2
deployments "cce7days-app4-fudonghai" with revision #2
Pod Template:
  Labels:    app=cce7days-app4-fudonghai
    pod-template-hash=7b4bd886f4
  Containers:
   container-0:
    Image:    nginx:1.9.1
    Port:    <none>
    Host Port:    <none>
    Environment:    <none>
    Mounts:    <none>
  Volumes:    <none>

 

 

 

 

Part 5

Day 5: Analysis of Kubernetes network management

Lesson 4: K8S network management hands-on

 

In-class lab

(1) svc

Create a ClusterIP service named my-svc-cp:

[root@cce-7day-fudonghai-24106 027day]# kubectl create service clusterip my-svc-cp --tcp=80:8080
service/my-svc-cp created
[root@cce-7day-fudonghai-24106 027day]# kubectl get svc
NAME          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
cka-kubectl   NodePort    10.247.1.217     <none>        3000:30078/TCP   4d22h
kubernetes    ClusterIP   10.247.0.1       <none>        443/TCP          28d
my-svc-cp     ClusterIP   10.247.100.116   <none>        80/TCP           31s
[root@cce-7day-fudonghai-24106 027day]# curl 10.247.100.116:80
curl: (7) Failed connect to 10.247.100.116:80; Connection refused

k8s creates an endpoint object with the same name; at this point no IP has been assigned to it, so it shows as none:

[root@cce-7day-fudonghai-24106 027day]# kubectl get endpoints
NAME          ENDPOINTS            AGE
cka-kubectl   <none>               4d22h
kubernetes    192.168.0.252:5444   28d
my-svc-cp     <none>               6m2s
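The service selects backends by label (app=my-svc-cp, see the describe output below); since no pod carries that label yet, the endpoint list stays empty, which can be confirmed with (a sketch):

kubectl get pod -l app=my-svc-cp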

Inspect this service:

[root@cce-7day-fudonghai-24106 027day]# kubectl describe service my-svc-cp 
Name:              my-svc-cp
Namespace:         default
Labels:            app=my-svc-cp
Annotations:       <none>
Selector:          app=my-svc-cp
Type:              ClusterIP
IP:                10.247.100.116
Port:              80-8080  80/TCP
TargetPort:        8080/TCP
Endpoints:         <none>
Session Affinity:  None
Events:            <none>

 

Create a NodePort service named my-svc-np:

[root@cce-7day-fudonghai-24106 027day]# kubectl  create service nodeport my-svc-np --tcp=1234:80
service/my-svc-np created
[root@cce-7day-fudonghai-24106 027day]# kubectl describe service my-svc-np
Name:                     my-svc-np
Namespace:                default
Labels:                   app=my-svc-np
Annotations:              <none>
Selector:                 app=my-svc-np
Type:                     NodePort
IP:                       10.247.109.121
Port:                     1234-80  1234/TCP
TargetPort:               80/TCP
NodePort:                 1234-80  31263/TCP
Endpoints:                <none>
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

A NodePort service is the same as a ClusterIP service in that it also has a cluster IP; the difference is that it additionally exposes a port on the host (the node port, reachable from outside the cluster), here 31263.

In other words, NodePort is a superset of ClusterIP. (Both curl attempts below fail because this service still has no backend endpoints.)

[root@cce-7day-fudonghai-24106 027day]# curl 10.247.109.121:1234
curl: (7) Failed connect to 10.247.109.121:1234; Connection refused
You have new mail in /var/spool/mail/root
[root@cce-7day-fudonghai-24106 027day]# curl http://122.112.252.69:31263
curl: (7) Failed connect to 122.112.252.69:31263; Connection refused

 

Create a headless service by setting clusterip to None:

[root@cce-7day-fudonghai-24106 027day]# kubectl create svc clusterip my-svc-headless --clusterip="None"
service/my-svc-headless created
[root@cce-7day-fudonghai-24106 027day]# kubectl describe svc my-svc-headless 
Name:              my-svc-headless
Namespace:         default
Labels:            app=my-svc-headless
Annotations:       <none>
Selector:          app=my-svc-headless
Type:              ClusterIP
IP:                None
Session Affinity:  None
Events:            <none>

 

The 3 services created above have no backends. Now create a backend named hello-nginx to back a service:

[root@cce-7day-fudonghai-24106 027day]# kubectl run hello-nginx --image=nginx
deployment.apps/hello-nginx created
You have new mail in /var/spool/mail/root
[root@cce-7day-fudonghai-24106 027day]# kubectl get pod
NAME                           READY     STATUS    RESTARTS   AGE
hello-nginx-79c6778c6f-pdqzl   1/1       Running   0          6s

Use the expose command to create a service and wire this deploy to it; service type ClusterIP, port 8090:

[root@cce-7day-fudonghai-24106 027day]# kubectl expose deploy hello-nginx --type=ClusterIP --name=my-svc-nginx --port=8090 --target-port=80
service/my-svc-nginx exposed
You have new mail in /var/spool/mail/root
[root@cce-7day-fudonghai-24106 027day]# kubectl get svc
NAME              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
cka-kubectl       NodePort    10.247.1.217     <none>        3000:30078/TCP   4d23h
kubernetes        ClusterIP   10.247.0.1       <none>        443/TCP          28d
my-svc-cp         ClusterIP   10.247.100.116   <none>        80/TCP           80m
my-svc-headless   ClusterIP   None             <none>        <none>           36m
my-svc-nginx      ClusterIP   10.247.205.107   <none>        8090/TCP         7s
my-svc-np         NodePort    10.247.109.121   <none>        1234:31263/TCP   68m

Test it: success.

[root@cce-7day-fudonghai-24106 027day]# curl 10.247.205.107:8090
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

k8s creates an Endpoints object with the same name; its IP is the nginx pod's IP, 172.16.0.20

[root@cce-7day-fudonghai-24106 027day]# kubectl get endpoints 
NAME              ENDPOINTS            AGE
cka-kubectl       <none>               4d23h
kubernetes        192.168.0.252:5444   28d
my-svc-cp         <none>               84m
my-svc-headless   <none>               41m
my-svc-nginx      172.16.0.20:80       4m41s
my-svc-np         <none>               72m

[root@cce-7day-fudonghai-24106 027day]# kubectl describe pod hello-nginx-79c6778c6f-pdqzl 
Name:               hello-nginx-79c6778c6f-pdqzl
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               192.168.0.184/192.168.0.184
Start Time:         Mon, 29 Jul 2019 15:47:31 +0800
Labels:             pod-template-hash=79c6778c6f
                    run=hello-nginx
Annotations:        <none>
Status:             Running
IP:                 172.16.0.20
Controlled By:      ReplicaSet/hello-nginx-79c6778c6f
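
The Service finds this pod purely through its label selector; a sketch of an equivalent hand-written manifest for my-svc-nginx (the selector run: hello-nginx is assumed from the pod label shown above, which kubectl expose copies from the Deployment):

apiVersion: v1
kind: Service
metadata:
  name: my-svc-nginx
spec:
  type: ClusterIP
  selector:
    run: hello-nginx    # matches the pod label run=hello-nginx
  ports:
  - port: 8090          # Service port
    targetPort: 80      # pod/container port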

The pod IP can also be tested directly

[root@cce-7day-fudonghai-24106 027day]# curl 172.16.0.20
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

 

(2) DNS lookups

DNS lookups require the nslookup tool, which the actual exam environment does not have, so we need to find a Docker image that includes it and run it

[root@cce-7day-fudonghai-24106 027day]# wget https://kubernetes.io/examples/admin/dns/busybox.yaml
--2019-07-29 16:17:12--  https://kubernetes.io/examples/admin/dns/busybox.yaml
Resolving kubernetes.io (kubernetes.io)... 45.54.44.102
Connecting to kubernetes.io (kubernetes.io)|45.54.44.102|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 234 [application/x-yaml]
Saving to: ‘busybox.yaml’

100%[========================================================================================>] 234         --.-K/s   in 0s      

2019-07-29 16:17:14 (7.26 MB/s) - ‘busybox.yaml’ saved [234/234]

The busybox 1.28 image includes nslookup

apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    command:
      - sleep      # keep the container sleeping, otherwise it would exit immediately
      - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always

Run this busybox image

[root@cce-7day-fudonghai-24106 027day]# kubectl create -f busybox.yaml 
pod/busybox created
[root@cce-7day-fudonghai-24106 027day]# kubectl get pod
NAME                           READY     STATUS    RESTARTS   AGE
busybox                        1/1       Running   0          22s
hello-nginx-79c6778c6f-pdqzl   1/1       Running   0          46m

Use the nslookup command to look up the IP of the kubernetes Service

[root@cce-7day-fudonghai-24106 027day]# kubectl exec -it busybox -- nslookup kubernetes.default
Server:    10.247.3.10
Address 1: 10.247.3.10 coredns.kube-system.svc.cluster.local

Name:      kubernetes.default
Address 1: 10.247.0.1 kubernetes.default.svc.cluster.local
10.247.3.10 is the IP of the DNS server, which lives in the kube-system namespace
[root@cce-7day-fudonghai-24106 027day]# kubectl get svc -n kube-system
NAME      TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)                  AGE
coredns   ClusterIP   10.247.3.10   <none>        53/UDP,53/TCP,8080/TCP   28d
The IP of the kubernetes Service is 10.247.0.1; the backend behind it is the API Server
[root@cce-7day-fudonghai-24106 027day]# kubectl get svc
NAME              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
cka-kubectl       NodePort    10.247.1.217     <none>        3000:30078/TCP   5d
kubernetes        ClusterIP   10.247.0.1       <none>        443/TCP          28d

 

Resolve the my-svc-nginx domain name

[root@cce-7day-fudonghai-24106 027day]# kubectl exec -it busybox -- nslookup my-svc-nginx
Server:    10.247.3.10
Address 1: 10.247.3.10 coredns.kube-system.svc.cluster.local

Name:      my-svc-nginx
Address 1: 10.247.205.107 my-svc-nginx.default.svc.cluster.local
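
Service names expand to <service>.<namespace>.svc.cluster.local, so the fully qualified name should resolve to the same address; a sketch (not run here):

kubectl exec -it busybox -- nslookup my-svc-nginx.default.svc.cluster.local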

 

Resolving the pod name hello-nginx-79c6778c6f-pdqzl fails; this is expected, since cluster DNS registers records for Services rather than for bare pod names.

[root@cce-7day-fudonghai-24106 027day]# kubectl exec -it busybox -- nslookup hello-nginx-79c6778c6f-pdqzl
Server:    10.247.3.10
Address 1: 10.247.3.10 coredns.kube-system.svc.cluster.local

nslookup: can't resolve 'hello-nginx-79c6778c6f-pdqzl'
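
Depending on the CoreDNS configuration, pods may instead be reachable under a dashed-IP name of the form <ip-with-dashes>.<namespace>.pod.cluster.local. A hypothetical lookup for the nginx pod (IP 172.16.0.20), not verified here:

kubectl exec -it busybox -- nslookup 172-16-0-20.default.pod.cluster.local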

Trying the busybox pod itself succeeds (most likely resolved from the pod's own /etc/hosts entry, since the answer carries no cluster.local suffix)

[root@cce-7day-fudonghai-24106 027day]# kubectl exec -it busybox -- nslookup busybox 
Server:    10.247.3.10
Address 1: 10.247.3.10 coredns.kube-system.svc.cluster.local

Name:      busybox
Address 1: 172.16.0.21 busybox

 

After-class labs

 1. Create a Service and one Pod as its backend. Use kubectl describe to obtain the Service and the corresponding Endpoints information.

 The pod's YAML

apiVersion: v1
kind: Pod
metadata:
  name: cce7days-app5-pod-fudonghai
  labels:
    app: cce7days-app5-pod-fudonghai
spec:
  # the affinity rule below schedules the pod onto the node with an EIP, so images can be pulled from the public network
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
            - 192.168.0.184
  containers:
  - image: nginx:latest
    imagePullPolicy: IfNotPresent
    name: container-0
  restartPolicy: Always
  schedulerName: default-scheduler

The Service's YAML

apiVersion: v1
kind: Service
metadata:
  labels:
    app: cce7days-app5-svc-fudonghai
  name: cce7days-app5-svc-fudonghai
spec:
  ports:
  - name: service0
    port: 80
    protocol: TCP
    targetPort: 80
  selector:   # selects the matching pod
    app: cce7days-app5-pod-fudonghai
  type: NodePort

Create them

[root@cce-7day-fudonghai-24106 027day]# kubectl create -f day5-pod.yaml 
pod/cce7days-app5-pod-fudonghai created
[root@cce-7day-fudonghai-24106 027day]# kubectl create -f day5-service.yaml 
service/cce7days-app5-svc-fudonghai created
[root@cce-7day-fudonghai-24106 027day]# kubectl get pod
NAME                          READY     STATUS    RESTARTS   AGE
cce7days-app5-pod-fudonghai   1/1       Running   0          32s

Check

[root@cce-7day-fudonghai-24106 027day]# kubectl get svc
NAME                          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
cce7days-app5-svc-fudonghai   NodePort    10.247.103.223   <none>        80:31367/TCP   6m19s
kubernetes                    ClusterIP   10.247.0.1       <none>        443/TCP        29d
[root@cce-7day-fudonghai-24106 027day]# kubectl get endpoints
NAME                          ENDPOINTS            AGE
cce7days-app5-svc-fudonghai   172.16.0.22:80       6s
kubernetes                    192.168.0.252:5444   29d
[root@cce-7day-fudonghai-24106 027day]# kubectl describe svc cce7days-app5-svc-fudonghai
Name:                     cce7days-app5-svc-fudonghai
Namespace:                default
Labels:                   app=cce7days-app5-svc-fudonghai
Annotations:              <none>
Selector:                 app=cce7days-app5-pod-fudonghai
Type:                     NodePort
IP:                       10.247.103.223
Port:                     service0  80/TCP
TargetPort:               80/TCP
NodePort:                 service0  31367/TCP
Endpoints:                172.16.0.22:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
[root@cce-7day-fudonghai-24106 027day]# kubectl describe endpoints cce7days-app5-svc-fudonghai
Name:         cce7days-app5-svc-fudonghai
Namespace:    default
Labels:       app=cce7days-app5-svc-fudonghai
Annotations:  <none>
Subsets:
  Addresses:          172.16.0.22
  NotReadyAddresses:  <none>
  Ports:
    Name      Port  Protocol
    ----      ----  --------
    service0  80    TCP

Events:  <none>

 Because the Service is of type NodePort, it can be accessed from the public network at http://122.112.252.69:31367/
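
A sketch of verifying that from outside the cluster (assuming the node's EIP and security group allow port 31367):

curl http://122.112.252.69:31367/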

 

Part 6

Day 6: Kubernetes Storage Management Principles

 Lesson 5: K8S Storage Management Lab

 

In-class labs

 Mount a ConfigMap into a specified directory inside the container

The ConfigMap YAML

apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
  namespace: default
data:
  special.how: very
  special.type: charm

The Deployment YAML

apiVersion: apps/v1
kind: Deployment
metadata:
  name: cce7days-configmap-fudonghai
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cce7days-configmap-fudonghai
  template:
    metadata:
      labels:
        app: cce7days-configmap-fudonghai
    spec:
      containers:
       - image: 'nginx:latest'
         name: container-0
         volumeMounts:
         - name: test
           mountPath: /tmp
      volumes:
       - name: test
         configMap:
          name: special-config
          defaultMode: 420
          items:
          - key: special.how
            path: welcome/how
    # the affinity rule schedules the pod onto the node with an EIP, so images can be pulled from the public network
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                - 192.168.0.184

If the pod cannot find the referenced ConfigMap, it stays in the ContainerCreating state; only after the ConfigMap has been created does the pod go to Running
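
The cause of the ContainerCreating state can usually be read from the pod's events; a sketch (pod name taken from the output below):

kubectl describe pod cce7days-configmap-fudonghai-79c5584d67-cmd9f   # the Events section should show the failed ConfigMap volume mount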

[root@cce-7day-fudonghai-24106 027day]# kubectl create -f day6-configmap-deployment.yaml 
deployment.apps/cce7days-configmap-fudonghai created
[root@cce-7day-fudonghai-24106 027day]# kubectl get pod
NAME                                            READY     STATUS              RESTARTS   AGE
cce7days-configmap-fudonghai-79c5584d67-cmd9f   0/1       ContainerCreating   0          2s
[root@cce-7day-fudonghai-24106 027day]# kubectl create -f day6-configmap.yaml 
configmap/special-config created
[root@cce-7day-fudonghai-24106 027day]# kubectl get pod
NAME                                            READY     STATUS              RESTARTS   AGE
cce7days-configmap-fudonghai-79c5584d67-cmd9f   0/1       ContainerCreating   0          33s
[root@cce-7day-fudonghai-24106 027day]# kubectl get pod
NAME                                            READY     STATUS              RESTARTS   AGE
cce7days-configmap-fudonghai-79c5584d67-cmd9f   0/1       ContainerCreating   0          34s
[root@cce-7day-fudonghai-24106 027day]# kubectl get pod
NAME                                            READY     STATUS    RESTARTS   AGE
cce7days-configmap-fudonghai-79c5584d67-cmd9f   1/1       Running   0          41s

Enter the container to check: the key's value becomes the file content and path becomes the file name; path can include multiple levels, e.g. welcome/how, where welcome is a directory and how is the final file

[root@cce-7day-fudonghai-24106 027day]# kubectl exec -it cce7days-configmap-fudonghai-57bf99985d-bjd4w /bin/sh
# cd /tmp
# ls
welcome
# cd welcome
# ls
how
# cat how
very# 
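
If items were omitted from the volume definition, every key in the ConfigMap would be mounted as its own file; a sketch of that variant of the volumes section (same ConfigMap name):

      volumes:
       - name: test
         configMap:
          name: special-config   # without items, both special.how and special.type appear as files under /tmp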

 

For the PVC-based test, see the after-class lab

Cloud disk types (storage classes)

[root@cce-7day-fudonghai-24106 027day]# kubectl get storageclass
NAME              PROVISIONER                     AGE
efs-performance   flexvolume-huawei.com/fuxiefs   29d
efs-standard      flexvolume-huawei.com/fuxiefs   29d
nfs-rw            flexvolume-huawei.com/fuxinfs   29d
obs-standard      flexvolume-huawei.com/fuxiobs   29d
obs-standard-ia   flexvolume-huawei.com/fuxiobs   29d
sas               flexvolume-huawei.com/fuxivol   29d
sata              flexvolume-huawei.com/fuxivol   29d
ssd               flexvolume-huawei.com/fuxivol   29d

 

 

After-class labs

1. Deploy a StatefulSet application that uses a persistent volume, declaring the required storage size of 1Gi and access mode RWX via a PVC.

Create the PVC

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-evs-auto
  namespace: default
  annotations:
    volume.beta.kubernetes.io/storage-class: sata    # cloud disk type
    volume.beta.kubernetes.io/storage-provisioner: flexvolume-huawei.com/fuxivol
  labels:
    failure-domain.beta.kubernetes.io/region: cn-east-2
    failure-domain.beta.kubernetes.io/zone: cn-east-2a
spec:
  accessModes:    # access mode
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi

Create it and check

[root@cce-7day-fudonghai-24106 027day]# kubectl create -f day6-pvc.yaml 
persistentvolumeclaim/pvc-evs-auto created
[root@cce-7day-fudonghai-24106 027day]# kubectl get pvc
NAME           STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-evs-auto   Bound     pvc-927bc7a9-b29c-11e9-8bbd-fa163eb87d99   1Gi        RWX            sata           15s

The StatefulSet YAML

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cce7days-app11-fudonghai
  namespace: default
spec:
  podManagementPolicy: OrderedReady
  serviceName: cce7days-app11-fudonghai-headless
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: cce7days-app11-fudonghai
      failure-domain.beta.kubernetes.io/region: cn-east-2
      failure-domain.beta.kubernetes.io/zone: cn-east-2a
  template:
    metadata:
      labels:
        app: cce7days-app11-fudonghai
        failure-domain.beta.kubernetes.io/region: cn-east-2
        failure-domain.beta.kubernetes.io/zone: cn-east-2a
    spec:
      affinity: {}
      containers:
      - image: swr.cn-east-2.myhuaweicloud.com/fudonghai/tank:v1.0
        imagePullPolicy: IfNotPresent
        name: container-0
        resources: {}
        volumeMounts:
        - mountPath: /tmp
          name: pvc-evs-example
      dnsPolicy: ClusterFirst
      imagePullSecrets:
      - name: default-secret  
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
        - name: pvc-evs-example
          persistentVolumeClaim:
            claimName: pvc-evs-auto  # name of the PVC created above
  updateStrategy:
    type: RollingUpdate
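
Note that referencing a pre-created PVC like this means all replicas would share one volume; StatefulSets more commonly use volumeClaimTemplates so each replica gets its own PVC. A sketch of that alternative (same storage-class annotation assumed; it would replace the volumes: section while keeping the volumeMount name):

  volumeClaimTemplates:
  - metadata:
      name: pvc-evs-example
      annotations:
        volume.beta.kubernetes.io/storage-class: sata
    spec:
      accessModes:
      - ReadWriteMany
      resources:
        requests:
          storage: 1Gi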

Create it and check

[root@cce-7day-fudonghai-24106 027day]# kubectl create -f day6-statefulset.yaml 
statefulset.apps/cce7days-app11-fudonghai created
[root@cce-7day-fudonghai-24106 027day]# kubectl get pod -owide
NAME                                            READY     STATUS    RESTARTS   AGE       IP            NODE            NOMINATED NODE   READINESS GATES
cce7days-app11-fudonghai-0                      1/1       Running   0          43s       172.16.0.26   192.168.0.184   <none>           <none>

Work inside the container

[root@cce-7day-fudonghai-24106 027day]# kubectl exec -ti cce7days-app11-fudonghai-0 /bin/sh
# df
Filesystem                                                                                       1K-blocks    Used Available Use% Mounted on
/dev/mapper/docker-253:1-788010-e5d376529d38bb6e9fa8e5a2fc33894198c53f52d559787ae67cdf0259fad6de  10475520  152016  10323504   2% /
tmpfs                                                                                                65536       0     65536   0% /dev
tmpfs                                                                                              1940016       0   1940016   0% /sys/fs/cgroup
/dev/sda                                                                                            999320    2564    927944   1% /tmp

# echo "this is a test" > /tmp/test.txt
# cat /tmp/test.txt
this is a test
# exit

Exit the container and delete the pod

[root@cce-7day-fudonghai-24106 027day]# kubectl delete pod cce7days-app11-fudonghai-0
pod "cce7days-app11-fudonghai-0" deleted
[root@cce-7day-fudonghai-24106 027day]# kubectl get pod
NAME                         READY     STATUS              RESTARTS   AGE
cce7days-app11-fudonghai-0   0/1       ContainerCreating   0          3s
[root@cce-7day-fudonghai-24106 027day]# kubectl get pod
NAME                         READY     STATUS    RESTARTS   AGE
cce7days-app11-fudonghai-0   1/1       Running   0          64s

Enter the pod again to see whether the file created earlier is still there

[root@cce-7day-fudonghai-24106 027day]# kubectl exec -ti cce7days-app11-fudonghai-0 /bin/sh
# cat /tmp/test.txt
this is a test

Delete the PVC and StatefulSet promptly after finishing the task

[root@cce-7day-fudonghai-24106 027day]# kubectl delete -f day6-statefulset.yaml 
statefulset.apps "cce7days-app11-fudonghai" deleted
[root@cce-7day-fudonghai-24106 027day]# kubectl delete -f day6-pvc.yaml 
persistentvolumeclaim "pvc-evs-auto" deleted

 

 

Part 7

Day 7: Kubernetes Security Principles

 Lesson 6: K8S Security Management Lab

 

In-class labs

1. NetworkPolicy

The NetworkPolicy YAML

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: wangbo
  namespace: default
spec:
  podSelector:
    matchLabels:       # the nginx pod is selected by this NetworkPolicy via the role: db label
      role: db
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:     # only accept access from pods carrying the role: frontend label
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 80   # nginx listens on port 80

 Create a stateless workload, nginx, through the CCE console, then check

[root@cce-7day-fudonghai-24106 027day]# kubectl get deploy
NAME      READY     UP-TO-DATE   AVAILABLE   AGE
nginx     1/1       1            1           6s
[root@cce-7day-fudonghai-24106 027day]# kubectl get pod
NAME                     READY     STATUS    RESTARTS   AGE
nginx-656f6bf4f8-zmf52   1/1       Running   0          13s
[root@cce-7day-fudonghai-24106 027day]# kubectl get pod -owide
NAME                     READY     STATUS    RESTARTS   AGE       IP            NODE            NOMINATED NODE   READINESS GATES
nginx-656f6bf4f8-zmf52   1/1       Running   0          37s       172.16.0.28   192.168.0.184   <none>           <none>

Create a pod named fudonghai with no special labels; from inside this pod, nginx can be accessed normally

apiVersion: v1
kind: Pod
metadata:
  name: fudonghai
  labels:
    run: normal
spec:
  containers:
  - name: euleros
    image: euleros:2.2.5
    command:
    - "/bin/sh"
    - "-c"
    - "sleep 3600"

[root@cce-7day-fudonghai-24106 027day]# kubectl create -f normal-pod.yaml 
pod/fudonghai created
[root@cce-7day-fudonghai-24106 027day]# kubectl exec -it fudonghai /bin/sh
sh-4.2# curl 172.16.0.28:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Label the nginx pod so that it is matched by the NetworkPolicy

[root@cce-7day-fudonghai-24106 027day]# kubectl edit pod nginx-656f6bf4f8-zmf52
pod/nginx-656f6bf4f8-zmf52 edited

Add the label
  labels:
    role: db
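
Instead of kubectl edit, the label can also be applied with kubectl label; a sketch (pod name copied from above; the same approach works later for adding role: frontend to the fudonghai pod):

kubectl label pod nginx-656f6bf4f8-zmf52 role=db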

Now accessing nginx from the pod fudonghai again fails

[root@cce-7day-fudonghai-24106 027day]# kubectl exec -it fudonghai sh
sh-4.2# curl 172.16.0.28:80

^C

 

Next, edit the labels of the pod fudonghai; after adding role: frontend, access works again

[root@cce-7day-fudonghai-24106 027day]# kubectl edit pod fudonghai
pod/fudonghai edited
[root@cce-7day-fudonghai-24106 027day]# kubectl exec -it fudonghai sh
sh-4.2# curl 172.16.0.28:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
sh-4.2# exit

 

 

After-class labs

 1. ServiceAccount authentication: use kubectl to create a read-only pod user for the CCE cluster; the user may only query pods in a specified namespace.

 Create the namespace cce

[root@cce-7day-fudonghai-24106 027day]# kubectl create namespace cce
namespace/cce created
[root@cce-7day-fudonghai-24106 027day]# kubectl get ns
NAME          STATUS    AGE
cce           Active    4s
default       Active    30d
kube-public   Active    30d
kube-system   Active    30d

Create a serviceAccount (sa) in the cce namespace and get the token from its corresponding secret

[root@cce-7day-fudonghai-24106 027day]# kubectl create sa cce-service-account -ncce
serviceaccount/cce-service-account created
[root@cce-7day-fudonghai-24106 027day]# kubectl get sa -ncce
NAME                  SECRETS   AGE
cce-service-account   1         14s
default               1         5m1s

Get the name of the secret associated with the sa:

[root@cce-7day-fudonghai-24106 027day]# kubectl get sa cce-service-account -ncce -oyaml
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: 2019-07-31T06:58:21Z
  name: cce-service-account
  namespace: cce
  resourceVersion: "7070120"
  selfLink: /api/v1/namespaces/cce/serviceaccounts/cce-service-account
  uid: 950cea18-b360-11e9-8bbd-fa163eb87d99
secrets:
- name: cce-service-account-token-8hgjd

Fetch the token from the secret and base64-decode it to get the plaintext token:

[root@cce-7day-fudonghai-24106 027day]# token=`kubectl get secret cce-service-account-token-8hgjd -ncce -oyaml |grep token: | awk '{print $2}' | xargs echo -n | base64 -d`
[root@cce-7day-fudonghai-24106 027day]# echo $token
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJjY2UiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoiY2NlLXNlcnZpY2UtYWNjb3VudC10b2tlbi04aGdqZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJjY2Utc2VydmljZS1hY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiOTUwY2VhMTgtYjM2MC0xMWU5LThiYmQtZmExNjNlYjg3ZDk5Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OmNjZTpjY2Utc2VydmljZS1hY2NvdW50In0.H_6utsk_IqTzutOmjUCDvaIMUQ0F4W7PlyH41cO7JC2M6H4-ZS5AnRIwbP4E-5-b5lvof4d6UZv6MsvqgHWUUQwDRe5Ju21eBsED_pz7nltKNWgJhDKx50gxMytdelf0mDfznccSs66Sly_JJ48yF8636Q4XuNwaO1qPfnOvqWRUCDlDnJkza73l8qYUotQMZ9zjLXjOMnAo1DM7RPI2zGlv3c8KCSqqj6UYU2Jg_u8dz9nzOnWjZCkg5dRs5DBtKdkhnlCNnlC7fH8I7OkS-URq_rvxrAqWkPIWD2H3qKzZks6Q4Ydp-zdCnD_f4Va-ICxqjHBPGThKX_Xt1wSwYQ
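
For reference, the same token can be extracted a bit more directly with jsonpath (same secret name):

token=$(kubectl get secret cce-service-account-token-8hgjd -ncce -o jsonpath='{.data.token}' | base64 -d)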

Add a user cce-user

[root@cce-7day-fudonghai-24106 027day]# kubectl config set-cluster cce-viewer --server=https://192.168.0.252:5443 --certificate-authority=/var/paas/srv/kubernetes/ca.crt 
Cluster "cce-viewer" set.
[root@cce-7day-fudonghai-24106 027day]# kubectl config set-context cce-viewer --cluster=cce-7days-fudonghai
Context "cce-viewer" created.
[root@cce-7day-fudonghai-24106 027day]# kubectl set-credentials cce-user --token=$token
Error: unknown command "set-credentials" for "kubectl"
Run 'kubectl --help' for usage.
unknown command "set-credentials" for "kubectl"
[root@cce-7day-fudonghai-24106 027day]# kubectl config set-credentials cce-user --token=$token
User "cce-user" set.
[root@cce-7day-fudonghai-24106 027day]# kubectl config set-context cce-viewer --user=cce-user
Context "cce-viewer" modified.

The following command shows that the new context now exists:

[root@cce-7day-fudonghai-24106 027day]# kubectl config get-contexts
CURRENT   NAME         CLUSTER               AUTHINFO   NAMESPACE
          cce-viewer   cce-7days-fudonghai   cce-user   
*         internal     internalCluster       user       

Grant read-only permissions via a Role and bind it to the serviceAccount with a RoleBinding

[root@cce-7day-fudonghai-24106 027day]# kubectl create -f day7-role.yaml 
role.rbac.authorization.k8s.io/pod-reader created
[root@cce-7day-fudonghai-24106 027day]# kubectl create -f day7-rolebinding.yaml 
rolebinding.rbac.authorization.k8s.io/pod-reader-binding created

role.yaml: the Role specifies the resources and the permissions allowed on them. Authorization mode: RBAC (role-based access control)

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: cce
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]

rolebinding.yaml: the RoleBinding binds the Role to the serviceAccount

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: pod-reader-binding
  namespace: cce
subjects:
- kind: ServiceAccount
  name: cce-service-account # the serviceAccount created in the earlier step
  namespace: cce
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
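
Before switching contexts, the binding can be sanity-checked from the admin context with kubectl auth can-i, impersonating the serviceAccount; a sketch (not run in this session; requires impersonation rights):

kubectl auth can-i list pods -ncce --as=system:serviceaccount:cce:cce-service-account       # should be yes
kubectl auth can-i list pods -ndefault --as=system:serviceaccount:cce:cce-service-account   # should be no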

Switch the context to cce-viewer and verify the permission settings:

[root@cce-7day-fudonghai-24106 027day]# kubectl config use-context cce-viewer
Switched to context "cce-viewer".

Listing pods in the default namespace should return a 403 Forbidden (no permission) error

[root@cce-7day-fudonghai-24106 027day]# kubectl get pod

But this error appears instead:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Work around it with the following command:

export KUBERNETES_MASTER=https://192.168.0.252:5443

Running kubectl get pods again produces a new error:
No resources found.
Unable to connect to the server: x509: certificate signed by unknown authority

Finally, the following command skips the x509 verification, and at last the expected no-permission error appears

[root@cce-7day-fudonghai-24106 027day]# kubectl get pods --insecure-skip-tls-verify=true
No resources found.
Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:cce:cce-service-account" cannot list resource "pods" in API group "" in the namespace "default"
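
Both connection errors most likely stem from the cce-viewer context pointing at a cluster name (cce-7days-fudonghai) that was never defined with set-cluster, so neither the server nor the CA configured above is picked up. A sketch of repointing the context at the cce-viewer cluster entry, which should make the KUBERNETES_MASTER and --insecure-skip-tls-verify workarounds unnecessary:

kubectl config set-context cce-viewer --cluster=cce-viewer --user=cce-user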

Listing pods in the cce namespace works, confirming the permissions

[root@cce-7day-fudonghai-24106 027day]# kubectl get pods -ncce --insecure-skip-tls-verify=true
No resources found.

Use the following command to switch back to the admin context:

[root@cce-7day-fudonghai-24106 027day]# kubectl config use-context internal
Switched to context "internal".
[root@cce-7day-fudonghai-24106 027day]# kubectl get pod
No resources found.
[root@cce-7day-fudonghai-24106 027day]# kubectl get pod -ncce
No resources found.

 View the configuration produced by all the previous operations

[root@cce-7day-fudonghai-24106 027day]# kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /var/paas/srv/kubernetes/ca.crt
    server: https://192.168.0.252:5443
  name: cce-viewer
- cluster:
    certificate-authority-data: REDACTED
    server: https://192.168.0.252:5443
  name: internalCluster
contexts:
- context:
    cluster: cce-7days-fudonghai
    user: cce-user
  name: cce-viewer
- context:
    cluster: internalCluster
    user: user
  name: internal
current-context: internal
kind: Config
preferences: {}
users:
- name: cce-user
  user:
    token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJjY2UiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoiY2NlLXNlcnZpY2UtYWNjb3VudC10b2tlbi04aGdqZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJjY2Utc2VydmljZS1hY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiOTUwY2VhMTgtYjM2MC0xMWU5LThiYmQtZmExNjNlYjg3ZDk5Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OmNjZTpjY2Utc2VydmljZS1hY2NvdW50In0.H_6utsk_IqTzutOmjUCDvaIMUQ0F4W7PlyH41cO7JC2M6H4-ZS5AnRIwbP4E-5-b5lvof4d6UZv6MsvqgHWUUQwDRe5Ju21eBsED_pz7nltKNWgJhDKx50gxMytdelf0mDfznccSs66Sly_JJ48yF8636Q4XuNwaO1qPfnOvqWRUCDlDnJkza73l8qYUotQMZ9zjLXjOMnAo1DM7RPI2zGlv3c8KCSqqj6UYU2Jg_u8dz9nzOnWjZCkg5dRs5DBtKdkhnlCNnlC7fH8I7OkS-URq_rvxrAqWkPIWD2H3qKzZks6Q4Ydp-zdCnD_f4Va-ICxqjHBPGThKX_Xt1wSwYQ
- name: user
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

 

 

Part 8

 Lesson 7: K8S Cluster Operations, Installation, and Configuration Lab

 Lesson 8: K8S Troubleshooting Lab

 

In-class labs

 

After-class labs

 

posted on 2019-07-29 22:23  fudonghai