Envoy Load Balancing Strategies (Part 7)

Envoy provides several different load balancing strategies, which fall broadly into two categories: distributed load balancing and global load balancing.
  Distributed load balancing: Envoy itself decides how to distribute load across endpoints, based on the location (zone awareness) and health of the upstream hosts.
      Active health checking
      Zone-aware routing
      Load balancing algorithms
  Global load balancing: load decisions are made centrally by a single component with global authority; Envoy's control plane is one such component. It adjusts the load applied to each endpoint by setting various parameters:
      Priority
      Locality weight
      Endpoint weight
      Endpoint health status

Complex deployments can mix the two: global load balancing defines high-level routing priorities and weights to steer traffic at a coarse level, while distributed load balancing reacts to fine-grained changes in the system.

clusters:
- name: ...
  ...
  load_assignment: {...}
    cluster_name: ...
    endpoints: [] # list of LocalityLbEndpoints; each item consists mainly of four parts: locality, endpoint list, weight, and priority;
    - locality: {...} # locality definition
        region: ...
        zone: ...
        sub_zone: ...
      lb_endpoints: [] # endpoint list
      - endpoint: {...} # endpoint definition
          address: {...} # endpoint address
          health_check_config: {...} # health-check-related settings for this endpoint;
        load_balancing_weight: ... # load balancing weight of this endpoint; optional;
        metadata: {...} # metadata providing extra information to filters based on the matched listener, filter chain, route, endpoint, etc.; commonly used to carry service configuration or to assist load balancing;
        health_status: ... # when the endpoint is discovered via EDS, this field administratively sets its health; valid values are UNKNOWN, HEALTHY, UNHEALTHY, DRAINING, TIMEOUT, and DEGRADED;
      load_balancing_weight: {...} # weight of this locality
      priority: ... # priority of this locality
    policy: {...} # load balancing policy settings
      drop_overloads: [] # overload protection: a mechanism for dropping excess traffic;
      overprovisioning_factor: ... # integer defining the overprovisioning factor as a percentage; defaults to 140, i.e. 1.4;
      endpoint_stale_after: ... # staleness window; an endpoint that receives no new load assignment before it expires is considered stale and marked unhealthy; the default 0 means endpoints never go stale;
  lb_subset_config: {...} # load balancer subsets
  ring_hash_lb_config: {...} # ring hash algorithm settings
  original_dst_lb_config: {...} # original destination settings
  least_request_lb_config: {...} # least request settings
  common_lb_config: {...} # settings common to all policies
    healthy_panic_threshold: ... # panic threshold; defaults to 50%;
    zone_aware_lb_config: {...} # zone-aware routing settings;
    locality_weighted_lb_config: {...} # locality-weighted load balancing settings;
    ignore_new_hosts_until_first_hc: ... # whether to exclude newly added hosts from load balancing until they have passed their first health check;
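To make the overprovisioning_factor entry above concrete, here is a minimal sketch (plain Python; the function name and shape are assumptions made for this example, not Envoy source) of how a priority level's effective traffic share follows from its healthy-host ratio:

```python
# Illustrative sketch: how the overprovisioning factor (default 140,
# i.e. 1.4) scales a priority level's effective health.

def priority_load(healthy_ratio: float, overprovisioning_factor: int = 140) -> float:
    """Fraction of traffic (0.0 to 1.0) kept at this priority level.

    healthy_ratio: fraction of the level's hosts that are healthy.
    Health is multiplied by factor/100 and capped at 100%, so a level
    can keep absorbing full load while some of its hosts are down.
    """
    return min(1.0, healthy_ratio * overprovisioning_factor / 100)

print(priority_load(0.8))  # 1.0 (0.8 * 1.4 = 1.12, capped at 100%)
print(priority_load(0.5))  # 0.7 (the remaining 30% spills to the next priority)
```

With the default 1.4 factor, traffic only starts to spill to the next priority level once fewer than about 72% of the level's hosts are healthy (1 / 1.4 ≈ 0.71).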
  
Envoy load balancing algorithms
  Weighted round robin: ROUND_ROBIN
  Weighted least request: LEAST_REQUEST
  Ring hash: RING_HASH; works like a consistent-hashing algorithm
  Maglev: MAGLEV; similar to ring hash, but the table size is fixed at 65537 and host-mapped entries must fill the entire table; regardless of the configured host and locality weights, the algorithm tries to ensure that every host is mapped at least once
  Random: RANDOM; when no health checking policy is configured, random load balancing generally performs better than round robin
  Original destination: ORIGINAL_DST_LB
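The fill-every-slot behavior described for MAGLEV can be sketched as follows. This is a simplified toy after Google's Maglev paper, not Envoy's implementation; the small prime table size and md5 hash are stand-ins chosen so the table is easy to inspect (Envoy fixes the size at 65537):

```python
# Simplified sketch of Maglev-style table population.
import hashlib

def _h(name: str, seed: int) -> int:
    # Stable per-host hash; md5 stands in for the real hash function.
    return int(hashlib.md5(f"{seed}:{name}".encode()).hexdigest(), 16)

def build_table(hosts, size=13):
    # Each host derives an (offset, skip) permutation of table slots;
    # because size is prime, every permutation visits every slot.
    offset = {h: _h(h, 0) % size for h in hosts}
    skip = {h: _h(h, 1) % (size - 1) + 1 for h in hosts}
    nxt = {h: 0 for h in hosts}
    table = [None] * size
    filled = 0
    while filled < size:
        for h in hosts:  # round-robin over hosts gives near-equal shares
            while True:
                slot = (offset[h] + nxt[h] * skip[h]) % size
                nxt[h] += 1
                if table[slot] is None:
                    table[slot] = h
                    filled += 1
                    break
            if filled == size:
                break
    return table

table = build_table(["red", "blue", "green"])
# Every host appears at least once, as the text above describes:
assert set(table) == {"red", "blue", "green"}
```

Because lookup is a single array index into the finished table, pick cost is constant once the table is built.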

Weighted least request
    All hosts have identical weights (more meaningful for long-lived connections):
      An O(1) scheduling algorithm: it randomly samples N available hosts (default 2, tunable) and picks the one with the fewest active requests.
      This P2C (power of two choices) algorithm performs nearly as well as an O(N) full scan. It guarantees that the endpoint with the most active requests in the cluster never receives a new request until its count drops to or below another host's.
    Hosts have different weights (more meaningful for short-lived connections):
      Scheduling uses weighted round robin, with weights dynamically adjusted by the load at pick time: each weight is divided by the host's current active request count.
      This algorithm balances well in the steady state but may adapt slowly to strongly imbalanced load.
      Unlike under P2C, a host is never fully drained, although over time it will receive fewer and fewer requests.
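The P2C pick for equally weighted hosts can be sketched like this (illustrative Python, not Envoy's C++; names such as p2c_pick and the active-request map are assumptions for this example):

```python
# Sketch of the power-of-two-choices (P2C) pick described above.
import random

def p2c_pick(hosts, active, choices=2):
    """Sample `choices` hosts at random and return the one with the fewest
    active requests: an O(1) pick instead of an O(N) full scan."""
    sampled = random.sample(hosts, min(choices, len(hosts)))
    return min(sampled, key=lambda h: active[h])

hosts = ["red", "blue", "green"]
active = {"red": 9, "blue": 2, "green": 5}
# "red" holds the most in-flight requests, so any sampled pair containing it
# prefers the other host; "red" gets no new requests until its count drops.
for _ in range(100):
    assert p2c_pick(hosts, active) != "red"
```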

Load balancing policies

A quick tour of the load-balancing-related configuration parameters in a Cluster

Overview of Envoy's load balancing algorithms

Load balancing algorithm: weighted least request

Load balancing algorithm: ring hash

Route hash policies

Configuring a route hash policy

Ring hash configuration example

Load balancing algorithm: Maglev

Summary of Envoy's load balancing algorithms

Load balancing algorithms:
        Request distribution for stateless applications:
            ROUND_ROBIN
            LEAST_REQUEST
            RANDOM
        Session persistence for stateful applications:
            RING_HASH
            MAGLEV

            Both must be combined with a route configuration that specifies what to hash on (the stickiness criterion);

    With the ring_hash algorithm:
        Failover
        Failback

        either of which remaps some keys and therefore loses session state

Deployment examples for ring hash and Maglev:

Ring hash takes the hash modulo 2^32; its drawbacks are a relatively heavy computation and possible load skew when there are few nodes.
Maglev is a refinement of ring hash: it takes the hash modulo a fixed table size of 65537 and builds weighted entries so that all nodes fill the entire table. This reduces computation, since once the table is built a lookup immediately yields the chosen node with no need to search for the next point on the ring. Maglev costs less to compute than ring hash, but its stability under membership changes is slightly worse.
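The two ring hash properties above, a stable key-to-host mapping plus limited remapping when a node leaves, can be demonstrated with a toy consistent-hash ring (md5 and a handful of virtual nodes per host stand in for Envoy's xxHash-based ring of 512 to 1048576 entries):

```python
# Minimal consistent-hash ring illustrating the behavior described above.
import bisect
import hashlib

def _hash(s: str) -> int:
    return int(hashlib.md5(s.encode()).hexdigest(), 16) % (2 ** 32)

def build_ring(hosts, vnodes=50):
    # Each host is hashed onto the ring at several points (virtual nodes).
    return sorted((_hash(f"{h}#{i}"), h) for h in hosts for i in range(vnodes))

def pick(ring, key):
    # Walk clockwise from the key's hash to the first host point.
    points = [p for p, _ in ring]
    return ring[bisect.bisect(points, _hash(key)) % len(ring)][1]

full = build_ring(["red", "blue", "green"])
# The same key maps to the same host every time (session persistence):
assert pick(full, "User-Agent: Browser_1") == pick(full, "User-Agent: Browser_1")

# Removing a host only remaps the keys that hashed to it; every other key
# keeps its original backend, which is what limits state loss on failover:
without_green = build_ring(["red", "blue"])
for key in (f"ua-{i}" for i in range(100)):
    if pick(full, key) != "green":
        assert pick(full, key) == pick(without_green, key)
```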

1.Readme

Ring Hash LB Demo
Environment
Seven services:

envoy: Front Proxy, address 172.31.25.2
webserver01: first backend service
webserver01-sidecar: Sidecar Proxy for the first backend service, address 172.31.25.11
webserver02: second backend service
webserver02-sidecar: Sidecar Proxy for the second backend service, address 172.31.25.12
webserver03: third backend service
webserver03-sidecar: Sidecar Proxy for the third backend service, address 172.31.25.13
Run and test
Create
docker-compose up
Test
# The route hash policy hashes the user's browser type (User-Agent), so with the command below, continuous requests are always directed to the same backend endpoint, because the browser type never changes.
while true; do curl 172.31.25.2; sleep .3; done

# Simulate requests from a different browser; they may be scheduled to another node, or still to the same node as before, depending on the result of the hash computation.
while true; do curl -H "User-Agent: Hello" 172.31.25.2; sleep .3; done

# The script below verifies that requests from the same browser always go to the same backend endpoint, while different browsers may be rescheduled.
while true; do index=$[$RANDOM%10]; curl -H "User-Agent: Browser_${index}" 172.31.25.2/user-agent && curl -H "User-Agent: Browser_${index}" 172.31.25.2/hostname && echo ; sleep .1; done

# You can also set one backend endpoint's health check to failing, dynamically changing the endpoint set, and then re-examine the scheduling to verify that requests previously sent to that node are redistributed to the others.
curl -X POST -d 'livez=FAIL' http://172.31.25.11/livez
Clean up after stopping
docker-compose down

2.docker-compose.yaml

[root@xksmaster1 ring-hash]# cat docker-compose.yaml
# Author: MageEdu <mage@magedu.com>
# Version: v1.0.1
# Site: www.magedu.com
#
version: '3.3'

services:
  envoy:
    image: envoyproxy/envoy-alpine:v1.21-latest
    environment:
      - ENVOY_UID=0
      - ENVOY_GID=0
    volumes:
    - ./front-envoy.yaml:/etc/envoy/envoy.yaml
    networks:
      envoymesh:
        ipv4_address: 172.31.25.2
        aliases:
        - front-proxy
    depends_on:
    - webserver01-sidecar
    - webserver02-sidecar
    - webserver03-sidecar

  webserver01-sidecar:
    image: envoyproxy/envoy-alpine:v1.21-latest
    environment:
      - ENVOY_UID=0
      - ENVOY_GID=0
    volumes:
    - ./envoy-sidecar-proxy.yaml:/etc/envoy/envoy.yaml
    hostname: red
    networks:
      envoymesh:
        ipv4_address: 172.31.25.11
        aliases:
        - myservice
        - red

  webserver01:
    image: ikubernetes/demoapp:v1.0
    environment:
      - PORT=8080
      - HOST=127.0.0.1
    network_mode: "service:webserver01-sidecar"
    depends_on:
    - webserver01-sidecar

  webserver02-sidecar:
    image: envoyproxy/envoy-alpine:v1.21-latest
    environment:
      - ENVOY_UID=0
      - ENVOY_GID=0
    volumes:
    - ./envoy-sidecar-proxy.yaml:/etc/envoy/envoy.yaml
    hostname: blue
    networks:
      envoymesh:
        ipv4_address: 172.31.25.12
        aliases:
        - myservice
        - blue

  webserver02:
    image: ikubernetes/demoapp:v1.0
    environment:
      - PORT=8080
      - HOST=127.0.0.1
    network_mode: "service:webserver02-sidecar"
    depends_on:
    - webserver02-sidecar

  webserver03-sidecar:
    image: envoyproxy/envoy-alpine:v1.21-latest
    environment:
      - ENVOY_UID=0
      - ENVOY_GID=0
    volumes:
    - ./envoy-sidecar-proxy.yaml:/etc/envoy/envoy.yaml
    hostname: green
    networks:
      envoymesh:
        ipv4_address: 172.31.25.13
        aliases:
        - myservice
        - green

  webserver03:
    image: ikubernetes/demoapp:v1.0
    environment:
      - PORT=8080
      - HOST=127.0.0.1
    network_mode: "service:webserver03-sidecar"
    depends_on:
    - webserver03-sidecar

networks:
  envoymesh:
    driver: bridge
    ipam:
      config:
        - subnet: 172.31.25.0/24

3.front-envoy.yaml

lb_policy: RING_HASH

The policy is RING_HASH, with a maximum ring size of 2^20 (1048576) entries and a minimum of 2^9 (512).
What gets hashed is set by the hash_policy parameter under route_config; common choices are the source address, the URI, or a browser header such as header_name: User-Agent.
Health checking probes /livez and treats response codes in the 200-399 range as healthy.
# Key configuration
hash_policy:
                  # - connection_properties:
                  #     source_ip: true
                  - header:
                      header_name: User-Agent
 clusters:
  - name: web_cluster_01
    connect_timeout: 0.5s
    type: STRICT_DNS
    lb_policy: RING_HASH
    ring_hash_lb_config:
      maximum_ring_size: 1048576
      minimum_ring_size: 512
admin:
  profile_path: /tmp/envoy.prof
  access_log_path: /tmp/admin_access.log
  address:
    socket_address: { address: 0.0.0.0, port_value: 9901 }

static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 80 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          codec_type: AUTO
          route_config:
            name: local_route
            virtual_hosts:
            - name: webservice
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route:
                  cluster: web_cluster_01
                  hash_policy:
                  # - connection_properties:
                  #     source_ip: true
                  - header:
                      header_name: User-Agent
          http_filters:
          - name: envoy.filters.http.router

  clusters:
  - name: web_cluster_01
    connect_timeout: 0.5s
    type: STRICT_DNS
    lb_policy: RING_HASH
    ring_hash_lb_config:
      maximum_ring_size: 1048576
      minimum_ring_size: 512
    load_assignment:
      cluster_name: web_cluster_01
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: myservice
                port_value: 80
    health_checks:
    - timeout: 5s
      interval: 10s
      unhealthy_threshold: 2
      healthy_threshold: 2
      http_health_check:
        path: /livez
        expected_statuses:
          start: 200
          end: 399

4.envoy-sidecar-proxy.yaml

admin:
  profile_path: /tmp/envoy.prof
  access_log_path: /tmp/admin_access.log
  address:
    socket_address:
       address: 0.0.0.0
       port_value: 9901

static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 80 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          codec_type: AUTO
          route_config:
            name: local_route
            virtual_hosts:
            - name: local_service 
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: local_cluster }
          http_filters:
          - name: envoy.filters.http.router

  clusters:
  - name: local_cluster
    connect_timeout: 0.25s
    type: STATIC
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: local_cluster
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: 127.0.0.1, port_value: 8080 }

5.Deployment test

After the containers start, as long as the User-Agent does not change, requests are always scheduled to the same server.
Once the User-Agent changes, the computed hash value schedules the request to one of the three servers, and it then stays there, even across long intervals between visits.
# docker-compose up

## The route hash policy hashes the user's browser type, so the continuous requests below are always directed to the same backend endpoint, because the User-Agent never changes.
# while true;do curl 172.31.25.2;sleep 2;done
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: blue, ServerIP: 172.31.25.12!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: blue, ServerIP: 172.31.25.12!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: blue, ServerIP: 172.31.25.12!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: blue, ServerIP: 172.31.25.12!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: blue, ServerIP: 172.31.25.12!

# Simulate requests from a different browser; they may be scheduled to another node, or still to the same node as before, depending on the hash result.
root@k8s-node-1:~# while true;do curl -H "User-Agent: Chrome" 172.31.25.2;sleep 2;done
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: red, ServerIP: 172.31.25.11!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: red, ServerIP: 172.31.25.11!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: red, ServerIP: 172.31.25.11!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: red, ServerIP: 172.31.25.11!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: red, ServerIP: 172.31.25.11!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: red, ServerIP: 172.31.25.11!

root@k8s-node-1:~# while true;do curl -H "User-Agent: IE6.0" 172.31.25.2;sleep 2;done
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: blue, ServerIP: 172.31.25.12!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: blue, ServerIP: 172.31.25.12!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: blue, ServerIP: 172.31.25.12!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: blue, ServerIP: 172.31.25.12!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: blue, ServerIP: 172.31.25.12!
^C
root@k8s-node-1:~# while true;do curl -H "User-Agent: IE4.0" 172.31.25.2;sleep 2;done
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: red, ServerIP: 172.31.25.11!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: red, ServerIP: 172.31.25.11!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: red, ServerIP: 172.31.25.11!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: red, ServerIP: 172.31.25.11!
^C
root@k8s-node-1:~# while true;do curl -H "User-Agent: IE7.0" 172.31.25.2;sleep 2;done
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: green, ServerIP: 172.31.25.13!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: green, ServerIP: 172.31.25.13!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: green, ServerIP: 172.31.25.13!

# curl 172.31.25.2
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: blue, ServerIP: 172.31.25.12!

[root@xksmaster1 ring-hash]# # The script below verifies that requests from the same browser always go to the same backend endpoint, while different browsers may be rescheduled.
[root@xksmaster1 ring-hash]# while true; do index=$[$RANDOM%10]; curl -H "User-Agent: Browser_${index}" 172.31.25.2/user-agent && curl -H "User-Agent: Browser_${index}" 172.31.25.2/hostname && echo ; sleep .1; done
User-Agent: Browser_7
ServerName: blue

User-Agent: Browser_2
ServerName: red

User-Agent: Browser_8
ServerName: blue

User-Agent: Browser_4
ServerName: green

User-Agent: Browser_3
ServerName: green

User-Agent: Browser_9
ServerName: red

User-Agent: Browser_5
ServerName: blue

User-Agent: Browser_9
ServerName: red

User-Agent: Browser_0
ServerName: green

User-Agent: Browser_1
ServerName: blue

User-Agent: Browser_6
ServerName: green


# You can also set one backend endpoint's health check to failing and re-examine the scheduling to verify that requests previously sent to that node are redistributed to the others.
# With the red service removed, only blue and green can serve requests.
# Once red recovers, its requests return to their original backend.

[root@xksmaster1 ring-hash]# curl -X POST -d 'livez=FAIL' http://172.31.25.11/livez

[root@xksmaster1 ring-hash]# while true; do index=$[$RANDOM%10]; curl -H "User-Agent: Browser_${index}" 172.31.25.2/user-agent && curl -H "User-Agent: Browser_${index}" 172.31.25.2/hostname && echo ; sleep .1; done
User-Agent: Browser_2
ServerName: blue

User-Agent: Browser_3
ServerName: green

User-Agent: Browser_2
ServerName: blue

User-Agent: Browser_7
ServerName: green

User-Agent: Browser_9
ServerName: green

User-Agent: Browser_1
ServerName: blue

User-Agent: Browser_4
ServerName: green

User-Agent: Browser_3
ServerName: green

User-Agent: Browser_2
ServerName: blue

User-Agent: Browser_0
ServerName: green

User-Agent: Browser_7
ServerName: green

User-Agent: Browser_0
ServerName: green

 

posted @ 2023-05-11 16:07 しみずよしだ