Envoy Advanced Traffic Management (Traffic Shifting/Migration, Traffic Splitting, HTTP Traffic Mirroring) [Part 12]

Traffic shifting (migration) lab deployment example:

1.Readme

[root@k8s-master01 http-traffic-shifting]# cat README.md
# HTTP Traffic Shifting Demo

### Environment
#### Six services:

- envoy: the front proxy, at 172.31.55.10
- 5 backend services
  - demoapp-v1.0-1, demoapp-v1.0-2, and demoapp-v1.0-3: map to the demoappv10 cluster in Envoy
  - demoapp-v1.1-1 and demoapp-v1.1-2: map to the demoappv11 cluster in Envoy

#### Route configuration used

```
            virtual_hosts:
            - name: demoapp
              domains: ["*"]
              routes:
              - match:
                  prefix: "/"
                  runtime_fraction:
                    default_value:
                      numerator: 100
                      denominator: HUNDRED
                    runtime_key: routing.traffic_shift.demoapp
                route:
                  cluster: demoappv10
              - match:
                  prefix: "/"
                route:
                  cluster: demoappv11
```
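The semantics of `runtime_fraction` on the first route can be sketched with a small simulation (hypothetical Python, not Envoy source): a request takes the gated route with probability numerator/denominator, and otherwise falls through to the next route in the list.

```python
import random

def pick_cluster(numerator, denominator=100, rng=random):
    """Sketch of runtime_fraction route matching (not Envoy source).

    The first route matches with probability numerator/denominator;
    otherwise matching falls through to the catch-all second route.
    """
    if rng.randrange(denominator) < numerator:
        return "demoappv10"  # gated route
    return "demoappv11"      # fall-through route

random.seed(1)
picks = [pick_cluster(90) for _ in range(10_000)]
share = picks.count("demoappv10") / len(picks)
print(f"demoappv10 share at numerator=90: {share:.2f}")  # ~0.90
```

With the default numerator of 100, every request matches the first route, which is why the demo initially sends all traffic to demoappv10.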

### Run and test
1. Create and start the containers
```
docker-compose up
```

2. In a separate terminal, run the test script send-request.sh

   ```
   # The script takes the front-envoy address as its argument and sends an HTTP request every 0.2 seconds
   ./send-request.sh 172.31.55.10
   demoapp-v1.0:demoapp-v1.1 = 1:0
   demoapp-v1.0:demoapp-v1.1 = 2:0
   demoapp-v1.0:demoapp-v1.1 = 3:0
   demoapp-v1.0:demoapp-v1.1 = 4:0
   ……
   # At this point all traffic is served by the demoappv10 cluster: the default configuration reserves 100% of it for that cluster
   ```

3. In another terminal, dynamically adjust the traffic split

   ```
   # Lower the share reserved for the demoappv10 cluster to 90% by setting the runtime key to the new numerator
   curl -XPOST http://172.31.55.10:9901/runtime_modify?routing.traffic_shift.demoapp=90
   ```

4. Rerun the test script; the results will look similar to the following

   ```
   ./send-request.sh 172.31.55.10
   demoapp-v1.0:demoapp-v1.1 = 1:0
   demoapp-v1.0:demoapp-v1.1 = 2:0
   demoapp-v1.0:demoapp-v1.1 = 3:0
   demoapp-v1.0:demoapp-v1.1 = 4:0
   demoapp-v1.0:demoapp-v1.1 = 5:0
   demoapp-v1.0:demoapp-v1.1 = 6:0
   demoapp-v1.0:demoapp-v1.1 = 6:1
   demoapp-v1.0:demoapp-v1.1 = 7:1
   demoapp-v1.0:demoapp-v1.1 = 8:1
   demoapp-v1.0:demoapp-v1.1 = 9:1
   demoapp-v1.0:demoapp-v1.1 = 10:1
   demoapp-v1.0:demoapp-v1.1 = 11:1
   demoapp-v1.0:demoapp-v1.1 = 12:1
   demoapp-v1.0:demoapp-v1.1 = 13:1
   demoapp-v1.0:demoapp-v1.1 = 14:1
   demoapp-v1.0:demoapp-v1.1 = 15:1
   demoapp-v1.0:demoapp-v1.1 = 16:1
   demoapp-v1.0:demoapp-v1.1 = 17:1
   demoapp-v1.0:demoapp-v1.1 = 18:1
   demoapp-v1.0:demoapp-v1.1 = 18:2
   demoapp-v1.0:demoapp-v1.1 = 19:2
   demoapp-v1.0:demoapp-v1.1 = 20:2
   demoapp-v1.0:demoapp-v1.1 = 21:2
   demoapp-v1.0:demoapp-v1.1 = 22:2
   demoapp-v1.0:demoapp-v1.1 = 23:2
   demoapp-v1.0:demoapp-v1.1 = 24:2
   demoapp-v1.0:demoapp-v1.1 = 25:2
   demoapp-v1.0:demoapp-v1.1 = 25:3
   demoapp-v1.0:demoapp-v1.1 = 26:3
   demoapp-v1.0:demoapp-v1.1 = 27:3
   demoapp-v1.0:demoapp-v1.1 = 28:3
   demoapp-v1.0:demoapp-v1.1 = 29:3
   demoapp-v1.0:demoapp-v1.1 = 30:3
   demoapp-v1.0:demoapp-v1.1 = 31:3
   demoapp-v1.0:demoapp-v1.1 = 32:3
   ……
   # The longer the test runs, the larger the sample and the closer the observed ratio comes to the configured one;
   # In practice, the ratio can be fine-tuned repeatedly in small stages;
   ```

5. Clean up after stopping

```
docker-compose down
```

2.docker-compose.yaml

[root@k8s-master01 http-traffic-shifting]# cat docker-compose.yaml
version: '3'

services:
  front-envoy:
    image: envoyproxy/envoy-alpine:v1.21-latest
    environment:
      - ENVOY_UID=0
      - ENVOY_GID=0
    volumes:
      - ./front-envoy.yaml:/etc/envoy/envoy.yaml
    networks:
      envoymesh:
        ipv4_address: 172.31.55.10
    expose:
      # Expose ports 80 (for general traffic) and 9901 (for the admin server)
      - "80"
      - "9901"

  demoapp-v1.0-1:
    image: ikubernetes/demoapp:v1.0
    hostname: demoapp-v1.0-1
    networks:
      envoymesh:
        aliases:
          - demoappv10
    expose:
      - "80"

  demoapp-v1.0-2:
    image: ikubernetes/demoapp:v1.0
    hostname: demoapp-v1.0-2
    networks:
      envoymesh:
        aliases:
          - demoappv10
    expose:
      - "80"

  demoapp-v1.0-3:
    image: ikubernetes/demoapp:v1.0
    hostname: demoapp-v1.0-3
    networks:
      envoymesh:
        aliases:
          - demoappv10
    expose:
      - "80"

  demoapp-v1.1-1:
    image: ikubernetes/demoapp:v1.1
    hostname: demoapp-v1.1-1
    networks:
      envoymesh:
        aliases:
          - demoappv11
    expose:
      - "80"

  demoapp-v1.1-2:
    image: ikubernetes/demoapp:v1.1
    hostname: demoapp-v1.1-2
    networks:
      envoymesh:
        aliases:
          - demoappv11
    expose:
      - "80"

networks:
  envoymesh:
    driver: bridge
    ipam:
      config:
        - subnet: 172.31.55.0/24

3.front-envoy.yaml

[root@k8s-master01 http-traffic-shifting]# cat front-envoy.yaml
admin:
  profile_path: /tmp/envoy.prof
  access_log_path: /tmp/admin_access.log
  address:
    socket_address:
       address: 0.0.0.0
       port_value: 9901

layered_runtime:
  layers:
  - name: admin
    admin_layer: {}

static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 80 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          codec_type: AUTO
          route_config:
            name: local_route
            virtual_hosts:
            - name: demoapp
              domains: ["*"]
              routes:
              - match:
                  prefix: "/"
                  runtime_fraction:
                    default_value:
                      numerator: 100
                      denominator: HUNDRED
                    runtime_key: routing.traffic_shift.demoapp
                route:
                  cluster: demoappv10
              - match:
                  prefix: "/"
                route:
                  cluster: demoappv11
          http_filters:
          - name: envoy.filters.http.router

  clusters:
  - name: demoappv10
    connect_timeout: 0.25s
    type: STRICT_DNS
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: demoappv10
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: demoappv10
                port_value: 80

  - name: demoappv11
    connect_timeout: 0.25s
    type: STRICT_DNS
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: demoappv11
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: demoappv11
                port_value: 80

4.send-request.sh

[root@k8s-master01 http-traffic-shifting]# cat send-request.sh
#!/bin/bash
declare -i ver10=0
declare -i ver11=0

interval="0.2"

while true; do
        if curl -s "http://${1}/hostname" | grep -q "demoapp-v1.0"; then
                # $1 is the host address of the front-envoy.
                ver10=$((ver10+1))
        else
                ver11=$((ver11+1))
        fi
        echo "demoapp-v1.0:demoapp-v1.1 = $ver10:$ver11"
        sleep "$interval"
done

5. Test

# In a separate terminal, run the test script send-request.sh
# The script takes the front-envoy address as its argument and sends an HTTP request every 0.2 seconds
# At this point all traffic is served by the demoappv10 cluster: the default configuration reserves 100% of it for that cluster
[root@k8s-master01 http-traffic-shifting]# ./send-request.sh 172.31.55.10
demoapp-v1.0:demoapp-v1.1 = 98:0
demoapp-v1.0:demoapp-v1.1 = 99:0
demoapp-v1.0:demoapp-v1.1 = 100:0

# In another terminal, dynamically adjust the traffic split
# Lower the share reserved for the demoappv10 cluster to 90% by setting the runtime key to the new numerator
[root@k8s-master01 http-traffic-shifting]#    curl -XPOST http://172.31.55.10:9901/runtime_modify?routing.traffic_shift.demoapp=90
OK
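To confirm the override took effect, Envoy's admin interface also exposes `GET /runtime`, which returns all active runtime entries as JSON (admin-layer overrides made via `/runtime_modify` show up there). A small helper can extract the effective value of one key; the admin address below is this demo's front-envoy:

```python
import json
import urllib.request

def final_runtime_value(runtime_json, key):
    # Parse the JSON body returned by Envoy's admin GET /runtime endpoint
    # and return the effective value of one runtime key, or None if unset.
    return runtime_json.get("entries", {}).get(key, {}).get("final_value")

def fetch_runtime(admin="http://172.31.55.10:9901"):
    # The admin address comes from this demo's front-envoy configuration.
    with urllib.request.urlopen(f"{admin}/runtime") as resp:
        return json.load(resp)
```

On the live setup, `final_runtime_value(fetch_runtime(), "routing.traffic_shift.demoapp")` should report the value posted above.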
# Rerun the test script; the results will look similar to the following
[root@k8s-master01 http-traffic-shifting]# ./send-request.sh 172.31.55.10
demoapp-v1.0:demoapp-v1.1 = 98:13
demoapp-v1.0:demoapp-v1.1 = 99:13
demoapp-v1.0:demoapp-v1.1 = 100:13

# The longer the test runs, the larger the sample and the closer the observed ratio comes to the configured one;
# In practice, the ratio can be fine-tuned repeatedly in small stages;

Traffic splitting lab deployment example:

1.Readme

Six services:
envoy: the front proxy, at 172.31.57.10
5 backend services
demoapp-v1.0-1, demoapp-v1.0-2, and demoapp-v1.0-3: map to the demoappv10 cluster in Envoy
demoapp-v1.1-1 and demoapp-v1.1-2: map to the demoappv11 cluster in Envoy
Route configuration used
            virtual_hosts:
            - name: demoapp
              domains: ["*"]
              routes:
              - match:
                  prefix: "/"
                route:
                  weighted_clusters:
                    clusters:
                    - name: demoappv10
                      weight: 100
                    - name: demoappv11
                      weight: 0
                    total_weight: 100
                    runtime_key_prefix: routing.traffic_split.demoapp
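How `weighted_clusters` distributes requests can be sketched as follows (hypothetical Python, not Envoy source). Note that the weights must sum to `total_weight`, mirroring the constraint in the config above:

```python
import random

def pick_weighted(weights, rng=random):
    """Sketch of weighted_clusters selection (not Envoy source).

    Each request lands on a cluster with probability weight/total_weight.
    Runtime keys of the form <runtime_key_prefix>.<cluster> override these.
    """
    total = sum(weights.values())  # must equal total_weight (100 here)
    roll = rng.randrange(total)
    for name, weight in weights.items():
        if roll < weight:
            return name
        roll -= weight

# With the demo's default weights, every request goes to demoappv10.
default = {"demoappv10": 100, "demoappv11": 0}
assert all(pick_weighted(default) == "demoappv10" for _ in range(1000))
```

Unlike the two-route `runtime_fraction` approach in the previous demo, a single route carries any number of weighted clusters, which makes it the natural fit for splitting across more than two versions.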
1. Run and test
# Create and start the containers
docker-compose up
===================================================================
2. In a separate terminal, run the test script send-request.sh
# The script takes the front-envoy address as its argument and sends an HTTP request every 0.2 seconds
./send-request.sh 172.31.57.10
demoapp-v1.0:demoapp-v1.1 = 1:0
demoapp-v1.0:demoapp-v1.1 = 2:0
demoapp-v1.0:demoapp-v1.1 = 3:0
demoapp-v1.0:demoapp-v1.1 = 4:0
demoapp-v1.0:demoapp-v1.1 = 5:0
demoapp-v1.0:demoapp-v1.1 = 6:0
demoapp-v1.0:demoapp-v1.1 = 7:0
demoapp-v1.0:demoapp-v1.1 = 8:0
demoapp-v1.0:demoapp-v1.1 = 9:0
demoapp-v1.0:demoapp-v1.1 = 10:0
demoapp-v1.0:demoapp-v1.1 = 11:0
demoapp-v1.0:demoapp-v1.1 = 12:0
demoapp-v1.0:demoapp-v1.1 = 13:0
demoapp-v1.0:demoapp-v1.1 = 14:0
demoapp-v1.0:demoapp-v1.1 = 15:0
demoapp-v1.0:demoapp-v1.1 = 16:0
demoapp-v1.0:demoapp-v1.1 = 17:0
demoapp-v1.0:demoapp-v1.1 = 18:0
demoapp-v1.0:demoapp-v1.1 = 19:0
demoapp-v1.0:demoapp-v1.1 = 20:0
demoapp-v1.0:demoapp-v1.1 = 21:0
……
# At this point all traffic is served by the demoappv10 cluster: the default weights for demoappv10 and demoappv11 are 100:0;
===================================================================
3. In another terminal, dynamically adjust the traffic split
# Swap the cluster weights to simulate a blue-green deployment: append each cluster name, dot-separated, to the configured runtime key prefix, and set each resulting key to its new weight;
curl -XPOST 'http://172.31.57.10:9901/runtime_modify?routing.traffic_split.demoapp.demoappv10=0&routing.traffic_split.demoapp.demoappv11=100'

# Note: the cluster weights must sum to the value of total_weight;
===================================================================
4. Rerun the test script; the results will look similar to the following

./send-request.sh 172.31.57.10
demoapp-v1.0:demoapp-v1.1 = 0:1
demoapp-v1.0:demoapp-v1.1 = 0:2
demoapp-v1.0:demoapp-v1.1 = 0:3
demoapp-v1.0:demoapp-v1.1 = 0:4
demoapp-v1.0:demoapp-v1.1 = 0:5
demoapp-v1.0:demoapp-v1.1 = 0:6
demoapp-v1.0:demoapp-v1.1 = 0:7
demoapp-v1.0:demoapp-v1.1 = 0:8
demoapp-v1.0:demoapp-v1.1 = 0:9
demoapp-v1.0:demoapp-v1.1 = 0:10
demoapp-v1.0:demoapp-v1.1 = 0:11
demoapp-v1.0:demoapp-v1.1 = 0:12
demoapp-v1.0:demoapp-v1.1 = 0:13
demoapp-v1.0:demoapp-v1.1 = 0:14
demoapp-v1.0:demoapp-v1.1 = 0:15
demoapp-v1.0:demoapp-v1.1 = 0:16
demoapp-v1.0:demoapp-v1.1 = 0:17
demoapp-v1.0:demoapp-v1.1 = 0:18
demoapp-v1.0:demoapp-v1.1 = 0:19
demoapp-v1.0:demoapp-v1.1 = 0:20
demoapp-v1.0:demoapp-v1.1 = 0:21
demoapp-v1.0:demoapp-v1.1 = 0:22
demoapp-v1.0:demoapp-v1.1 = 0:23
demoapp-v1.0:demoapp-v1.1 = 0:24
demoapp-v1.0:demoapp-v1.1 = 0:25
……
# The longer the test runs, the larger the sample and the closer the observed ratio comes to the configured one;
# In practice, the ratio can be fine-tuned repeatedly in small stages;
===================================================================
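The staged fine-tuning suggested above can be scripted. The sketch below (hypothetical helper names; the admin endpoint and key prefix come from this demo) builds the `runtime_modify` query so that the two weights always sum to `total_weight`:

```python
import urllib.parse
import urllib.request

PREFIX = "routing.traffic_split.demoapp"  # runtime_key_prefix from the route config

def split_params(v10_weight, total=100):
    # Keep the two weights summing to total_weight, as Envoy requires.
    return urllib.parse.urlencode({
        f"{PREFIX}.demoappv10": v10_weight,
        f"{PREFIX}.demoappv11": total - v10_weight,
    })

def shift(v10_weight, admin="http://172.31.57.10:9901"):
    # POST to the admin runtime_modify endpoint; it answers "OK" on success.
    req = urllib.request.Request(
        f"{admin}/runtime_modify?{split_params(v10_weight)}", method="POST")
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()
```

Walking `v10_weight` from 100 down to 0 in steps (e.g. `for w in range(100, -1, -10): shift(w)`, pausing between steps) gives a gradual cutover instead of the one-shot swap shown above.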
5. Clean up after stopping

docker-compose down

2.docker-compose.yaml

version: '3'

services:
  front-envoy:
    image: envoyproxy/envoy-alpine:v1.21-latest
    environment:
      - ENVOY_UID=0
      - ENVOY_GID=0
    volumes:
      - ./front-envoy.yaml:/etc/envoy/envoy.yaml
    networks:
      envoymesh:
        ipv4_address: 172.31.57.10
    expose:
      # Expose ports 80 (for general traffic) and 9901 (for the admin server)
      - "80"
      - "9901"

  demoapp-v1.0-1:
    image: ikubernetes/demoapp:v1.0
    hostname: demoapp-v1.0-1
    networks:
      envoymesh:
        aliases:
          - demoappv10
    expose:
      - "80"      
      
  demoapp-v1.0-2:
    image: ikubernetes/demoapp:v1.0
    hostname: demoapp-v1.0-2
    networks:
      envoymesh:
        aliases:
          - demoappv10
    expose:
      - "80"  
      
  demoapp-v1.0-3:
    image: ikubernetes/demoapp:v1.0
    hostname: demoapp-v1.0-3
    networks:
      envoymesh:
        aliases:
          - demoappv10
    expose:
      - "80" 
      
  demoapp-v1.1-1:
    image: ikubernetes/demoapp:v1.1
    hostname: demoapp-v1.1-1
    networks:
      envoymesh:
        aliases:
          - demoappv11
    expose:
      - "80"      
      
  demoapp-v1.1-2:
    image: ikubernetes/demoapp:v1.1
    hostname: demoapp-v1.1-2
    networks:
      envoymesh:
        aliases:
          - demoappv11
    expose:
      - "80"  

networks:
  envoymesh:
    driver: bridge
    ipam:
      config:
        - subnet: 172.31.57.0/24

3.front-envoy.yaml

admin:
  profile_path: /tmp/envoy.prof
  access_log_path: /tmp/admin_access.log
  address:
    socket_address:
       address: 0.0.0.0
       port_value: 9901

layered_runtime:
  layers:
  - name: admin
    admin_layer: {}
       
static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 80 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          codec_type: AUTO
          route_config:
            name: local_route
            virtual_hosts:
            - name: demoapp
              domains: ["*"]
              routes:
              - match:
                  prefix: "/"
                route:
                  weighted_clusters:
                    clusters:
                    - name: demoappv10
                      weight: 100
                    - name: demoappv11
                      weight: 0
                    total_weight: 100
                    runtime_key_prefix: routing.traffic_split.demoapp
          http_filters:
          - name: envoy.filters.http.router

  clusters:
  - name: demoappv10
    connect_timeout: 0.25s
    type: STRICT_DNS
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: demoappv10
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: demoappv10
                port_value: 80

  - name: demoappv11
    connect_timeout: 0.25s
    type: STRICT_DNS
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: demoappv11
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: demoappv11
                port_value: 80

4.send-request.sh

#!/bin/bash
declare -i ver10=0
declare -i ver11=0

interval="0.2"

while true; do
	if curl -s "http://${1}/hostname" | grep -q "demoapp-v1.0"; then
		# $1 is the host address of the front-envoy.
		ver10=$((ver10+1))
	else
		ver11=$((ver11+1))
	fi
	echo "demoapp-v1.0:demoapp-v1.1 = $ver10:$ver11"
	sleep "$interval"
done


HTTP Traffic Mirroring

 

Traffic mirroring lab deployment example:

1.Readme

Six services:
envoy: the front proxy, at 172.31.60.10
5 backend services
demoapp-v1.0-1, demoapp-v1.0-2, and demoapp-v1.0-3: map to the demoappv10 cluster in Envoy
demoapp-v1.1-1 and demoapp-v1.1-2: map to the demoappv11 cluster in Envoy
Route configuration used
            virtual_hosts:
            - name: demoapp
              domains: ["*"]
              routes:
              - match:
                  prefix: "/"
                route:
                  cluster: demoappv10
                  request_mirror_policies:
                  - cluster: demoappv11
                    runtime_fraction:
                      default_value:
                        numerator: 20    # by default, mirror only 20% of demoappv10's traffic to this cluster
                        denominator: HUNDRED
                      runtime_key: routing.request_mirror.demoapp
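The mirroring semantics can be sketched like this (hypothetical Python, not Envoy source): for a runtime-controlled fraction of requests, a copy is fired at the shadow cluster in the background and its response is discarded; the client only ever sees the primary cluster's answer.

```python
import random
import threading

def handle(request, send, mirror_numerator=20, denominator=100, rng=random):
    """Sketch of request_mirror_policies semantics (not Envoy source)."""
    if rng.randrange(denominator) < mirror_numerator:
        # Shadow copy: fire-and-forget; errors or latency here never
        # affect the response returned to the client.
        threading.Thread(
            target=send, args=("demoappv11", request), daemon=True).start()
    return send("demoappv10", request)  # client sees only the primary answer
```

With `mirror_numerator=20`, roughly one request in five also reaches demoappv11, matching the default `numerator: 20` above.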
                      
Run and test
1. Create and start the containers
docker-compose up
====================================================================================================
2. In a separate terminal, run the test script send-request.sh

# The script takes the front-envoy address as its argument and sends an HTTP request every 0.5 seconds
./send-request.sh 172.31.60.10
ServerName: demoapp-v1.0-1
ServerName: demoapp-v1.0-2
ServerName: demoapp-v1.0-3
ServerName: demoapp-v1.0-1
ServerName: demoapp-v1.0-3
ServerName: demoapp-v1.0-1
……

Client requests are answered only by the demoappv10 cluster. The mirrored traffic shows up in the log output on the docker-compose console; without that console, you can also check the consoles of the demoappv11 containers to confirm whether mirrored requests arrive.

demoapp-v1.0-1_1  | 172.31.60.10 - - [29/Oct/2021 07:08:50] "GET /hostname HTTP/1.1" 200 -
demoapp-v1.0-2_1  | 172.31.60.10 - - [29/Oct/2021 07:08:51] "GET /hostname HTTP/1.1" 200 -
demoapp-v1.1-2_1  | 172.31.60.10 - - [29/Oct/2021 07:08:51] "GET /hostname HTTP/1.1" 200 -
demoapp-v1.0-3_1  | 172.31.60.10 - - [29/Oct/2021 07:08:52] "GET /hostname HTTP/1.1" 200 -
demoapp-v1.0-1_1  | 172.31.60.10 - - [29/Oct/2021 07:08:53] "GET /hostname HTTP/1.1" 200 -
demoapp-v1.0-3_1  | 172.31.60.10 - - [29/Oct/2021 07:08:54] "GET /hostname HTTP/1.1" 200 -
demoapp-v1.1-1_1  | 172.31.60.10 - - [29/Oct/2021 07:08:54] "GET /hostname HTTP/1.1" 200 -
demoapp-v1.0-1_1  | 172.31.60.10 - - [29/Oct/2021 07:08:55] "GET /hostname HTTP/1.1" 200 -
demoapp-v1.1-2_1  | 172.31.60.10 - - [29/Oct/2021 07:08:56] "GET /hostname HTTP/1.1" 200 -
demoapp-v1.0-1_1  | 172.31.60.10 - - [29/Oct/2021 07:08:56] "GET /hostname HTTP/1.1" 200 -
demoapp-v1.0-2_1  | 172.31.60.10 - - [29/Oct/2021 07:08:57] "GET /hostname HTTP/1.1" 200 -
……
====================================================================================================
3. Dynamically adjust the mirrored traffic ratio

# The routing.request_mirror.demoapp key in the runtime layer controls the fraction of mirrored traffic; for example, set it to 100 to mirror all traffic:
curl -XPOST 'http://172.31.60.10:9901/runtime_modify?routing.request_mirror.demoapp=100'
After the change, send requests with the test script again:

./send-request.sh 172.31.60.10
The docker-compose console then shows logs like the following, confirming that 100% of the traffic is mirrored.

demoapp-v1.0-1_1  | 172.31.60.10 - - [29/Oct/2021 07:16:03] "GET /hostname HTTP/1.1" 200 -
demoapp-v1.1-2_1  | 172.31.60.10 - - [29/Oct/2021 07:16:03] "GET /hostname HTTP/1.1" 200 -
demoapp-v1.0-3_1  | 172.31.60.10 - - [29/Oct/2021 07:16:03] "GET /hostname HTTP/1.1" 200 -
demoapp-v1.1-1_1  | 172.31.60.10 - - [29/Oct/2021 07:16:03] "GET /hostname HTTP/1.1" 200 -
demoapp-v1.1-2_1  | 172.31.60.10 - - [29/Oct/2021 07:16:04] "GET /hostname HTTP/1.1" 200 -
demoapp-v1.0-1_1  | 172.31.60.10 - - [29/Oct/2021 07:16:04] "GET /hostname HTTP/1.1" 200 -
demoapp-v1.0-2_1  | 172.31.60.10 - - [29/Oct/2021 07:16:05] "GET /hostname HTTP/1.1" 200 -
demoapp-v1.1-1_1  | 172.31.60.10 - - [29/Oct/2021 07:16:05] "GET /hostname HTTP/1.1" 200 -
demoapp-v1.1-2_1  | 172.31.60.10 - - [29/Oct/2021 07:16:05] "GET /hostname HTTP/1.1" 200 -
demoapp-v1.0-2_1  | 172.31.60.10 - - [29/Oct/2021 07:16:05] "GET /hostname HTTP/1.1" 200 -
demoapp-v1.0-3_1  | 172.31.60.10 - - [29/Oct/2021 07:16:06] "GET /hostname HTTP/1.1" 200 -
demoapp-v1.1-1_1  | 172.31.60.10 - - [29/Oct/2021 07:16:06] "GET /hostname HTTP/1.1" 200 -
demoapp-v1.0-1_1  | 172.31.60.10 - - [29/Oct/2021 07:16:06] "GET /hostname HTTP/1.1" 200 -
……
====================================================================================================
4. Clean up after stopping

docker-compose down

2.docker-compose.yaml

[root@xksmaster1 http-request-mirror]# cat docker-compose.yaml
version: '3'

services:
  front-envoy:
    image: envoyproxy/envoy-alpine:v1.21-latest
    environment:
      - ENVOY_UID=0
      - ENVOY_GID=0
    volumes:
      - ./front-envoy.yaml:/etc/envoy/envoy.yaml
    networks:
      envoymesh:
        ipv4_address: 172.31.60.10
    expose:
      # Expose ports 80 (for general traffic) and 9901 (for the admin server)
      - "80"
      - "9901"

  demoapp-v1.0-1:
    image: ikubernetes/demoapp:v1.0
    hostname: demoapp-v1.0-1
    networks:
      envoymesh:
        aliases:
          - demoappv10
    expose:
      - "80"

  demoapp-v1.0-2:
    image: ikubernetes/demoapp:v1.0
    hostname: demoapp-v1.0-2
    networks:
      envoymesh:
        aliases:
          - demoappv10
    expose:
      - "80"

  demoapp-v1.0-3:
    image: ikubernetes/demoapp:v1.0
    hostname: demoapp-v1.0-3
    networks:
      envoymesh:
        aliases:
          - demoappv10
    expose:
      - "80"

  demoapp-v1.1-1:
    image: ikubernetes/demoapp:v1.1
    hostname: demoapp-v1.1-1
    networks:
      envoymesh:
        aliases:
          - demoappv11
    expose:
      - "80"

  demoapp-v1.1-2:
    image: ikubernetes/demoapp:v1.1
    hostname: demoapp-v1.1-2
    networks:
      envoymesh:
        aliases:
          - demoappv11
    expose:
      - "80"

networks:
  envoymesh:
    driver: bridge
    ipam:
      config:
        - subnet: 172.31.60.0/24

3.front-envoy.yaml

admin:
  profile_path: /tmp/envoy.prof
  access_log_path: /tmp/admin_access.log
  address:
    socket_address:
       address: 0.0.0.0
       port_value: 9901

layered_runtime:
  layers:
  - name: admin
    admin_layer: {}
       
static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 80 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          codec_type: AUTO
          route_config:
            name: local_route
            virtual_hosts:
            - name: demoapp
              domains: ["*"]
              routes:
              - match:
                  prefix: "/"
                route:
                  cluster: demoappv10
                  request_mirror_policies:
                  - cluster: demoappv11
                    runtime_fraction:
                      default_value:
                        numerator: 20
                        denominator: HUNDRED
                      runtime_key: routing.request_mirror.demoapp
          http_filters:
          - name: envoy.filters.http.router

  clusters:
  - name: demoappv10
    connect_timeout: 0.25s
    type: STRICT_DNS
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: demoappv10
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: demoappv10
                port_value: 80

  - name: demoappv11
    connect_timeout: 0.25s
    type: STRICT_DNS
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: demoappv11
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: demoappv11
                port_value: 80

 4.send-request.sh

#!/bin/bash
interval="0.5"

while true; do
	curl -s http://$1/hostname
		# $1 is the host address of the front-envoy.
	sleep $interval
done


 

posted @ 2023-05-22 16:44  しみずよしだ