Kubernetes Ingress Usage Summary - xianchao

 

1.1 Ingress and Ingress Controller In Depth


Discussion: why use a Kubernetes-native Ingress controller for layer-7 load balancing?


1.1.1 Introduction to Ingress


Official definition: an Ingress forwards requests entering the cluster to Services inside it, thereby exposing those Services outside the cluster. Ingress can give in-cluster Services externally reachable URLs, load-balance traffic, and provide name-based virtual hosting. A simple way to think about it: previously you had to edit the Nginx configuration by hand to map each domain to its Service; Ingress abstracts that action into an API object you create from YAML, so instead of touching Nginx every time, you edit the YAML and create/update it. Which raises the question: "who actually updates Nginx?"

That is exactly what the Ingress Controller solves. The Ingress Controller talks to the Kubernetes API, dynamically watches for changes to the Ingress rules in the cluster, reads them, renders an Nginx configuration from its own template, writes it into the controller's embedded Nginx, and finally reloads it. The workflow is shown in the figure below:

Ingress is in fact one of the standard Kubernetes API resource types: a set of rules that forward requests to specified Service resources based on DNS name (host) or URL path, used to publish in-cluster services to traffic arriving from outside the cluster. Note that an Ingress resource cannot forward traffic by itself; it is only a collection of rules. Something else has to listen on a socket, match requests against those rules, and route them accordingly. The component that listens and forwards on behalf of Ingress resources is the Ingress Controller.
Note: unlike the Deployment controller, the Ingress controller does not run under kube-controller-manager. It is merely an add-on to the Kubernetes cluster, similar to CoreDNS, and must be deployed on the cluster separately.
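To make the "just a set of rules" point concrete, here is a minimal hedged sketch of an Ingress object (myapp.example.com and my-service are illustrative names, not part of this lab):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp  #illustrative name
spec:
  rules:
  - host: myapp.example.com      #requests for this host...
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service     #...are forwarded to this Service
            port:
              number: 80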


1.1.2 Introduction to Ingress Controller


An Ingress Controller is a layer-7 load balancer. Client requests first reach this layer-7 load balancer, which then reverse-proxies them to the backend pods. Common layer-7 load balancers include nginx and traefik. Taking the familiar nginx as an example: when a request reaches nginx, it is reverse-proxied to a backend pod application through an upstream. However, backend pod IP addresses change constantly, so a Service is placed in front of the pods. The Service only serves to group the pods, and the upstream then only needs to reference the Service.
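For intuition, here is a simplified, hand-written sketch of the kind of nginx configuration the controller renders from such rules (illustrative only, not the controller's actual generated config; the pod IPs match the Tomcat lab later in this document):

#nginx.conf fragment (sketch)
upstream default-tomcat-8080 {
    #ingress-nginx resolves the Service to its endpoint (pod) IPs
    server 10.244.121.50:8080;
    server 10.244.121.9:8080;
}
server {
    listen 80;
    server_name tomcat.lucky.com;
    location / {
        proxy_pass http://default-tomcat-8080;
    }
}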

1.1.3 Summary of Ingress and Ingress Controller


Ingress Controller


The Ingress Controller can be understood as a controller: it continuously talks to the Kubernetes API to learn about changes to backend Services and Pods in real time (additions, deletions, and so on), combines them with the rules defined in Ingress resources to generate configuration, then dynamically updates the Nginx or Traefik load balancer above and reloads it so the configuration takes effect, achieving automatic service discovery.
Ingress, by contrast, defines the rules: it specifies that requests for a given domain are forwarded to a designated Service in the cluster. It can be defined in a YAML file, and one or more Ingress rules can be defined for one or more Services.


1.1.4 Workflow for proxying in-cluster applications with an Ingress Controller


(1) Deploy the Ingress controller; here we use the nginx ingress controller
(2) Create the Pod application; the pods can be created through a workload controller
(3) Create a Service to group the pods
(4) Create an HTTP Ingress and test access to the application over HTTP
(5) Create an HTTPS Ingress and test access to the application over HTTPS
When a client accesses an application inside the Kubernetes cluster through the layer-7 load balancer, i.e. the ingress controller, the packets take the path shown in the figure below:

1.1.5 Installing the Nginx Ingress Controller


The ingress-nginx controller only gained canary support in version 0.21.0 and later:
the ingress-nginx-controller we studied earlier was version 0.20.0, which cannot do canary releases;
the ingress-nginx-controller used in this lesson is version 0.46.0, which can.
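If you are unsure which version a cluster is running, one way to check is to ask the controller binary itself once it is deployed (replace the pod name with whatever kubectl get pods -n ingress-nginx shows on your cluster):

[root@xianchaomaster1 Ingress]# kubectl -n ingress-nginx exec ingress-nginx-controller-9c746979d-jv67t -- /nginx-ingress-controller --version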
#Upload the ingress-nginx-controller_0_46_0.tar.gz and kube-webhook-certgen-v_1_5_1.tar.gz images to the xianchaonode1 node and load them manually:

 

Reference for installing the Ingress controller:
https://github.com/kubernetes/ingress-nginx

 

#load the images on both node1 and node2
[root@xianchaonode1 Ingress]# docker load -i ingress-nginx-controller_0_46_0.tar.gz
[root@xianchaonode1 Ingress]# docker load -i kube-webhook-certgen-v_1_5_1.tar.gz

[root@xianchaomaster1 Ingress]# kubectl apply -f deploy.yaml
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
configmap/ingress-nginx-controller created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
service/ingress-nginx-controller-admission created
service/ingress-nginx-controller created
deployment.apps/ingress-nginx-controller created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created
serviceaccount/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created
[root@xianchaomaster1 Ingress]# kubectl get pods -n ingress-nginx
NAME                                       READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-z2zc2       0/1     Completed   0          13s
ingress-nginx-admission-patch-crw6g        0/1     Completed   0          13s
ingress-nginx-controller-9c746979d-jv67t   0/1     Running     0          13s
[root@xianchaomaster1 Ingress]# kubectl get pods -n ingress-nginx
NAME                                       READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-z2zc2       0/1     Completed   0          28s
ingress-nginx-admission-patch-crw6g        0/1     Completed   0          28s
ingress-nginx-controller-9c746979d-jv67t   1/1     Running     0          28s

# note the NodePort mapping 80:32480; port 32480 is how we will reach the controller over HTTP
[root@xianchaomaster1 Ingress]# kubectl get svc -n ingress-nginx
NAME                                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.100.187.98   <none>        80:32480/TCP,443:31975/TCP   2m33s
ingress-nginx-controller-admission   ClusterIP   10.98.6.42      <none>        443/TCP                      2m33s
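As a quick sanity check, the NodePort should answer even before any Ingress exists (a hedged expectation; the exact 404 body varies by controller version):

[root@xianchaomaster1 Ingress]# curl http://192.168.40.180:32480
#with no matching Ingress rule yet, the controller answers with HTTP 404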

1.1.6 Testing Ingress HTTP proxying to Tomcat

1. Deploy the backend Tomcat service


#Upload tomcat-8-5.tar.gz to the xianchaonode1 and xianchaonode2 machines and load it manually:

[root@xianchaonode1 Ingress]# docker load -i tomcat-8-5.tar.gz
[root@xianchaonode2 Ingress]# docker load -i tomcat-8-5.tar.gz

#ingress-demo.yaml
[root@xianchaomaster1 Ingress]# cat ingress-demo.yaml
apiVersion: v1
kind: Service
metadata:
  name: tomcat
  namespace: default
spec:
  selector:
    app: tomcat
    release: canary
  ports:
  - name: http
    targetPort: 8080
    port: 8080
  - name: ajp
    targetPort: 8009
    port: 8009
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat-deploy
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: tomcat
      release: canary
  template:
    metadata:
      labels:
        app: tomcat
        release: canary
    spec:
      containers:
      - name: tomcat
        image: tomcat:8.5.34-jre8-alpine
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          containerPort: 8080
        - name: ajp
          containerPort: 8009
          
[root@xianchaomaster1 Ingress]# kubectl apply -f ingress-demo.yaml

[root@xianchaomaster1 Ingress]# kubectl get pods | grep tomcat
tomcat-deploy-66b67fcf7b-v7br5    1/1     Running    0          15s
tomcat-deploy-66b67fcf7b-xhn9b    1/1     Running    0          15s
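Optionally, confirm that the Service has picked up both pods as endpoints (pod IPs will differ on your cluster):

[root@xianchaomaster1 Ingress]# kubectl get endpoints tomcat
#expect both pod IPs listed on ports 8080 and 8009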

2. Deploy the Ingress


(1) Write the Ingress manifest

[root@xianchaomaster1 Ingress]# cat ingress-myapp.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-myapp
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: tomcat.lucky.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: tomcat
            port:
              number: 8080

[root@xianchaomaster1 Ingress]# kubectl apply -f ingress-myapp.yaml
ingress.networking.k8s.io/ingress-myapp created
[root@xianchaomaster1 Ingress]# kubectl describe ingress ingress-myapp
Name:             ingress-myapp
Namespace:        default
Address:
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
  Host              Path  Backends
  ----              ----  --------
  tomcat.lucky.com
                    /   tomcat:8080 (10.244.121.50:8080,10.244.121.9:8080)
Annotations:        kubernetes.io/ingress.class: nginx
Events:
  Type    Reason  Age   From                      Message
  ----    ------  ----  ----                      -------
  Normal  Sync    7s    nginx-ingress-controller  Scheduled for sync

 

Edit your local hosts file and add the following line, where the IP is that of the k8s node xianchaonode1:
192.168.40.181 tomcat.lucky.com
Open http://tomcat.lucky.com:32480 in a browser; you should see the following:
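If you would rather not edit the hosts file, the same check works from the command line by passing the Host header explicitly, the same pattern the canary tests below use:

[root@xianchaomaster1 Ingress]# curl -H "Host: tomcat.lucky.com" http://192.168.40.181:32480
#expect the Tomcat welcome page HTML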

 

1.2 Canary Releases with Ingress-nginx

#Scenario 1: expose the new version to a subset of users

Suppose a Service A is running in production, serving layer-7 traffic externally. A new version, Service A', has been developed and needs to go live, but you do not want to replace Service A outright. Instead you want to canary it to a small subset of users first, and once it has run stably for a while, gradually ramp it up to full traffic and finally retire the old version smoothly. This is where Nginx Ingress traffic splitting based on a Header or Cookie comes in: the business uses a Header or Cookie to tag different classes of users, and we configure the Ingress so that requests carrying the designated Header or Cookie are forwarded to the new version while everything else still goes to the old version, thereby exposing the new version to only some users:

#Scenario 2: shift a fixed percentage of traffic to the new version

Suppose a Service B is running in production, serving layer-7 traffic externally. Some issues have been fixed and a new version, Service B', needs a canary rollout, but you do not want to replace Service B outright. Instead, shift 10% of the traffic to the new version first; after observing it for a while and confirming stability, gradually increase the new version's share until it fully replaces the old one, and finally take the old version offline smoothly. That is what shifting a fixed percentage of traffic to the new version achieves:

Canary rules in Ingress Nginx

#Explanation 1 and Explanation 2 cover the same ground; reading either one is enough
#Explanation 1
Canary rules defined by the Ingress Nginx annotations:
    #nginx.ingress.kubernetes.io/canary-by-header: split traffic based on the request header named in this annotation; suitable for canary releases and A/B testing
        If the header is present in the request with the value always, the request is sent to the canary version
        If the header is present with the value never, the request is not sent to the canary version
        For any other value, the header named by this annotation is ignored and the request is weighed against the remaining canary rules in priority order

    #nginx.ingress.kubernetes.io/canary-by-header-value: split traffic based on the value of the request header, whose name is set by the previous annotation (nginx.ingress.kubernetes.io/canary-by-header)
        If the named header is present in the request and its value matches this annotation's value, the request is routed to the canary version
        For any other value, this annotation is ignored

    #nginx.ingress.kubernetes.io/canary-by-header-pattern
        Similar to canary-by-header-value, except that this annotation matches the request header's value against a regular expression
        If this annotation is set together with canary-by-header-value, it is ignored

    #nginx.ingress.kubernetes.io/canary-weight: split traffic by service weight; suitable for blue-green deployments. The weight ranges from 0 to 100 and routes that percentage of requests to the service referenced by the Canary Ingress
        A weight of 0 means this canary rule sends no requests to the canary service
        A weight of 100 means all requests are sent to the canary

    #nginx.ingress.kubernetes.io/canary-by-cookie: cookie-based traffic splitting; suitable for canary releases and A/B testing
        When the cookie's value is set to always, the request is routed to the canary
        When the cookie's value is set to never, the request is not sent to the canary
        For any other value, the cookie is ignored and the request is weighed against the remaining canary rules in priority order

Order in which the rules are applied
    #Canary rules are evaluated in a fixed order:
        canary-by-header -> canary-by-cookie -> canary-weight
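As a hedged illustration of that ordering (the header name X-Canary and the values below are examples, not taken from the lab that follows): when a Canary Ingress carries both a header rule and a weight, the header is consulted first, and the weight only applies to requests whose header is absent or carries some other value:

#canary-priority-sketch.yaml (illustrative only)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-canary
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-by-header: "X-Canary"  #checked first: always/never decide outright
    nginx.ingress.kubernetes.io/canary-weight: "10"           #falls back to a 10% split otherwise
spec:
  rules:
  - host: canary.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-v2
            port:
              number: 80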
#Explanation 2
Ingress-Nginx is a Kubernetes ingress tool that supports configuring Ingress annotations to implement canary releases and testing for different scenarios. The Nginx annotations support the following canary rules.
Suppose we have two versions of the service deployed: the old version and the canary version.

#nginx.ingress.kubernetes.io/canary-by-header: request-header-based traffic splitting, suitable for canary releases and A/B testing.
    When the request header is set to always, the request is always sent to the canary version;
    when the request header is set to never, the request is never sent to the canary.

#nginx.ingress.kubernetes.io/canary-by-header-value: the request header value to match, telling the Ingress to route the request to the service specified in the Canary Ingress.
    When the request header is set to this value, the request is routed to the canary.

#nginx.ingress.kubernetes.io/canary-weight: service-weight-based traffic splitting, suitable for blue-green deployments; the weight ranges from 0 to 100 and routes that percentage of requests to the service specified in the Canary Ingress.
    A weight of 0 means this canary rule sends no requests to the canary service. A weight of 60 means 60% of traffic goes to the canary.
    A weight of 100 means all requests are sent to the canary.

#nginx.ingress.kubernetes.io/canary-by-cookie: cookie-based traffic splitting, suitable for canary releases and A/B testing; the cookie tells the Ingress to route the request to the service specified in the Canary Ingress.
    When the cookie value is set to always, the request is routed to the canary;
    when the cookie value is set to never, the request is never sent to the canary.

Hands-on

#Deploy two versions of the service
#Using a simple nginx as the example, first deploy a v1 version:

[root@xianchaonode1 ~]# docker load -i openresty.tar.gz
[root@xianchaonode2 ~]# docker load -i openresty.tar.gz

[root@xianchaomaster1 v1-v2]# vim v1.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
      version: v1
  template:
    metadata:
      labels:
        app: nginx
        version: v1
    spec:
      containers:
      - name: nginx
        image: "openresty/openresty:centos"
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          protocol: TCP
          containerPort: 80
        volumeMounts:
        - mountPath: /usr/local/openresty/nginx/conf/nginx.conf
          name: config
          subPath: nginx.conf
      volumes:
      - name: config
        configMap:
          name: nginx-v1
---
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app: nginx
    version: v1
  name: nginx-v1
data:
  nginx.conf: |-
    worker_processes  1;
    events {
        accept_mutex on;
        multi_accept on;
        use epoll;
        worker_connections  1024;
    }
    http {
        ignore_invalid_headers off;
        server {
            listen 80;
            location / {
                access_by_lua '
                    local header_str = ngx.say("nginx-v1")
                ';
            }
        }
    }
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-v1
spec:
  type: ClusterIP
  ports:
  - port: 80
    protocol: TCP
    name: http
  selector:
    app: nginx
    version: v1
    
#Then deploy a v2 version:
[root@xianchaomaster1 v1-v2]# vim v2.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
      version: v2
  template:
    metadata:
      labels:
        app: nginx
        version: v2
    spec:
      containers:
      - name: nginx
        image: "openresty/openresty:centos"
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          protocol: TCP
          containerPort: 80
        volumeMounts:
        - mountPath: /usr/local/openresty/nginx/conf/nginx.conf
          name: config
          subPath: nginx.conf
      volumes:
      - name: config
        configMap:
          name: nginx-v2
---
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app: nginx
    version: v2
  name: nginx-v2
data:
  nginx.conf: |-
    worker_processes  1;
    events {
        accept_mutex on;
        multi_accept on;
        use epoll;
        worker_connections  1024;
    }
    http {
        ignore_invalid_headers off;
        server {
            listen 80;
            location / {
                access_by_lua '
                    local header_str = ngx.say("nginx-v2")
                ';
            }
        }
    }
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-v2
spec:
  type: ClusterIP
  ports:
  - port: 80
    protocol: TCP
    name: http
  selector:
    app: nginx
    version: v2
    
[root@xianchaomaster1 v1-v2]# kubectl apply -f v1.yaml
[root@xianchaomaster1 v1-v2]# kubectl apply -f v2.yaml

[root@xianchaomaster1 Ingress]# kubectl get pods | grep nginx
nginx-v1-79bc94ff97-h54gs         1/1     Running    0          3m1s
nginx-v2-5f885975d5-q2jl2         1/1     Running    0          14s


#Apply the v1 Ingress: canary.example.com -> nginx-v1:80
#cat v1-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: canary.example.com
    http:
      paths:
      - backend:
          serviceName: nginx-v1
          servicePort: 80
        path: /


[root@xianchaomaster1 Ingress]# kubectl apply -f v1-ingress.yaml
Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
ingress.extensions/nginx created
[root@xianchaomaster1 Ingress]# kubectl get ingress
NAME    CLASS    HOSTS                ADDRESS   PORTS   AGE
nginx   <none>   canary.example.com             80      14s
[root@xianchaomaster1 Ingress]# kubectl describe ingress nginx
Name:             nginx
Namespace:        default
Address:
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
  Host                Path  Backends
  ----                ----  --------
  canary.example.com
                      /   nginx-v1:80 (10.244.121.18:80)
Annotations:          kubernetes.io/ingress.class: nginx
Events:
  Type    Reason  Age   From                      Message
  ----    ------  ----  ----                      -------
  Normal  Sync    26s   nginx-ingress-controller  Scheduled for sync
  
#Verify access:
[root@xianchaomaster1 Ingress]# kubectl get svc -n ingress-nginx
NAME                                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.100.187.98   <none>        80:32480/TCP,443:31975/TCP   3h8m
ingress-nginx-controller-admission   ClusterIP   10.98.6.42      <none>        443/TCP                      3h8m
[root@xianchaomaster1 Ingress]# curl -H "Host: canary.example.com" http://192.168.40.180:32480
nginx-v1

Header-based traffic splitting

#Create a Canary Ingress that points at the v2 backend service, with annotations that
#forward only requests carrying a header named Region with the value cd or sz to this Canary Ingress, simulating a canary of the new version for users in the Chengdu and Shenzhen regions:
[root@xianchaomaster1 v1-v2]# vim v2-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-by-header: "Region"
    nginx.ingress.kubernetes.io/canary-by-header-pattern: "cd|sz"
  name: nginx-canary
spec:
  rules:
  - host: canary.example.com
    http:
      paths:
      - backend:
          serviceName: nginx-v2
          servicePort: 80
        path: /
        
[root@xianchaomaster1 v1-v2]# kubectl apply -f v2-ingress.yaml
Test access:
[root@xianchaomaster1 v1-v2]# curl -H "Host: canary.example.com" -H "Region: cd" http://192.168.40.180:32480
nginx-v2
[root@xianchaomaster1 v1-v2]# curl -H "Host: canary.example.com" -H "Region: bj" http://192.168.40.180:32480
nginx-v1
[root@xianchaomaster1 v1-v2]# curl -H "Host: canary.example.com" -H "Region: cd" http://192.168.40.180:32480
nginx-v2
[root@xianchaomaster1 v1-v2]# curl -H "Host: canary.example.com" http://192.168.40.180:32480
nginx-v1
As you can see, only requests whose Region header is cd or sz are answered by the v2 service.

Cookie-based traffic splitting:

Similar to the header-based approach above, except that with a cookie you cannot customize the value. Here we again simulate canarying to Chengdu-region users: only requests carrying a cookie named user_from_cd are forwarded to this Canary Ingress. First delete the previous header-based Canary Ingress, then create the new Canary Ingress below:

[root@xianchaomaster1 v1-v2]# kubectl delete -f v2-ingress.yaml
[root@xianchaomaster1 v1-v2]# vim v1-cookie.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-by-cookie: "user_from_cd"
  name: nginx-canary
spec:
  rules:
  - host: canary.example.com
    http:
      paths:
      - backend:
          serviceName: nginx-v2
          servicePort: 80
        path: /

[root@xianchaomaster1 v1-v2]# kubectl apply -f v1-cookie.yaml
#Test access: only always returns v2; any other value (=never, =n, and so on) returns v1
[root@xianchaomaster1 v1-v2]# curl -s -H "Host: canary.example.com" --cookie "user_from_cd=always" http://192.168.40.180:32480
nginx-v2
[root@xianchaomaster1 v1-v2]# curl -s -H "Host: canary.example.com" --cookie "user_from_bj=always" http://192.168.40.180:32480
nginx-v1
[root@xianchaomaster1 v1-v2]# curl -s -H "Host: canary.example.com" http://192.168.40.180:32480
nginx-v1

As you can see, only requests whose user_from_cd cookie is set to always are answered by the v2 service.

Weight-based traffic splitting

A weight-based Canary Ingress is simpler: just declare the percentage of traffic to divert. Here we route 10% of traffic to v2 as an example (delete any previous Canary Ingress first):

[root@xianchaomaster1 v1-v2]# kubectl delete -f v1-cookie.yaml
[root@xianchaomaster1 v1-v2]# vim v1-weight.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "10"
  name: nginx-canary
spec:
  rules:
  - host: canary.example.com
    http:
      paths:
      - backend:
          serviceName: nginx-v2
          servicePort: 80
        path: /

#With no weight applied yet, everything is v1
[root@xianchaomaster1 v1-v2]# for i in {1..10}; do curl -H "Host: canary.example.com" http://192.168.40.180:32480; done;
Returns:
nginx-v1
nginx-v1
nginx-v1
nginx-v1
nginx-v1
nginx-v1
nginx-v1
nginx-v1
nginx-v1
nginx-v1

[root@xianchaomaster1 v1-v2]# kubectl apply -f v1-weight.yaml

#Roughly 1 in 10 responses is v2
#As you can see, only about one tenth of requests are answered by the v2 service, matching the 10% weight setting.
[root@xianchaomaster1 v1-v2]# for i in {1..10}; do curl -H "Host: canary.example.com" http://192.168.40.180:32480; done;
nginx-v1
nginx-v1
nginx-v1
nginx-v1
nginx-v1
nginx-v1
nginx-v2
nginx-v1
nginx-v1
nginx-v1
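To ramp the rollout further, one hedged option is to raise the weight in place with kubectl annotate instead of re-editing the file (the annotation key is the real canary-weight key; 50 and 100 are example ramp steps):

#gradually increase the canary's share, then cut over completely
[root@xianchaomaster1 v1-v2]# kubectl annotate ingress nginx-canary nginx.ingress.kubernetes.io/canary-weight="50" --overwrite
[root@xianchaomaster1 v1-v2]# kubectl annotate ingress nginx-canary nginx.ingress.kubernetes.io/canary-weight="100" --overwrite
#at weight 100 all traffic goes to nginx-v2; the old Ingress and nginx-v1 can then be retired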

Other Ingress annotation settings

#ingress_single-host.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-web
  namespace: magedu
  annotations:
    kubernetes.io/ingress.class: "nginx" ##指定Ingress Controller的类型
    nginx.ingress.kubernetes.io/use-regex: "true" ##指定后面rules定义的path可以使用正则表达式
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "600" ##连接超时时间,默认为5s
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600" ##后端服务器回转数据超时时间,默认为60s
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600" ##后端服务器响应超时时间,默认为60s
    nginx.ingress.kubernetes.io/proxy-body-size: "50m" ##客户端上传文件,最大大小,默认为20m
    #nginx.ingress.kubernetes.io/rewrite-target: / ##URL重写
    nginx.ingress.kubernetes.io/app-root: /index.html

spec:
  rules:
  - host: www.jiege.com
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: magedu-tomcat-app1-service 
            port:
              number: 80


#ingress_multi-host.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-web
  namespace: magedu
  annotations:
    kubernetes.io/ingress.class: "nginx" ##指定Ingress Controller的类型
    nginx.ingress.kubernetes.io/use-regex: "true" ##指定后面rules定义的path可以使用正则表达式
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "600" ##连接超时时间,默认为5s
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600" ##后端服务器回转数据超时时间,默认为60s
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600" ##后端服务器响应超时时间,默认为60s
    nginx.ingress.kubernetes.io/proxy-body-size: "10m" ##客户端上传文件,最大大小,默认为20m
    #nginx.ingress.kubernetes.io/rewrite-target: / ##URL重写
    nginx.ingress.kubernetes.io/app-root: /index.html

spec:
  rules:
  - host: www.jiege.com
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: magedu-tomcat-app1-service
            port:
              number: 80


  - host: mobile.jiege.com
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: magedu-tomcat-app2-service
            port:
              number: 80



#ingress-url.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-web
  namespace: magedu
  annotations:
    kubernetes.io/ingress.class: "nginx" ##指定Ingress Controller的类型
    nginx.ingress.kubernetes.io/use-regex: "true" ##指定后面rules定义的path可以使用正则表达式
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "600" ##连接超时时间,默认为5s
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600" ##后端服务器回转数据超时时间,默认为60s
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600" ##后端服务器响应超时时间,默认为60s
    nginx.ingress.kubernetes.io/proxy-body-size: "10m" ##客户端上传文件,最大大小,默认为20m
    #nginx.ingress.kubernetes.io/rewrite-target: / ##URL重写
    nginx.ingress.kubernetes.io/app-root: /index.html
spec:
  rules:
  - host: www.jiege.com
    http:
      paths:
      - pathType: Prefix
        path: "/app1"
        backend:
          service:
            name: magedu-tomcat-app1-service
            port:
              number: 80

      - pathType: Prefix
        path: "/app2"
        backend:
          service:
            name: magedu-tomcat-app2-service
            port:
              number: 80
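The rewrite-target annotation commented out above matters when the backend expects paths rooted at / while the Ingress matches a prefix such as /app1. A hedged sketch using ingress-nginx's documented capture-group rewrite (host and service names reuse the ones above; the manifest name is illustrative):

#ingress-rewrite-sketch.yaml (illustrative)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-web-rewrite
  namespace: magedu
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2  ##$2 is the part captured after /app1
spec:
  rules:
  - host: www.jiege.com
    http:
      paths:
      - pathType: ImplementationSpecific
        path: "/app1(/|$)(.*)"
        backend:
          service:
            name: magedu-tomcat-app1-service
            port:
              number: 80

With this rule, a request for /app1/index.jsp reaches the backend as /index.jsp.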


#ingress-https-magedu_single-host.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-web
  namespace: magedu
  annotations:
    kubernetes.io/ingress.class: "nginx" ##指定Ingress Controller的类型
    nginx.ingress.kubernetes.io/ssl-redirect: 'true' #SSL重定向,即将http请求强制重定向至https,等于nginx中的全站https
spec:
  tls:
  - hosts:
    - www.jiege.com
    secretName: tls-secret-www
  rules:
  - host: www.jiege.com
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: magedu-tomcat-app1-service
            port:
              number: 80
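The tls-secret-www referenced above must exist in the magedu namespace before this Ingress can serve HTTPS. A hedged way to create it for testing, using a self-signed certificate (file names are illustrative):

#generate a self-signed certificate for www.jiege.com and store it as a TLS secret
[root@xianchaomaster1 Ingress]# openssl req -x509 -nodes -newkey rsa:2048 -keyout tls.key -out tls.crt -days 365 -subj "/CN=www.jiege.com"
[root@xianchaomaster1 Ingress]# kubectl create secret tls tls-secret-www --cert=tls.crt --key=tls.key -n magedu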

 
