Envoy: Managing Service Subscriptions over gRPC (LDS, CDS, ADS) [Part 5]
I. Dynamic configuration over gRPC (LDS and CDS)



dynamic_resources:
  lds_config:
    resource_api_version: ...    # v3
    api_config_source:
      api_type: ...              # one of REST, GRPC, or DELTA_GRPC; must be set explicitly
      transport_api_version: ... # API version used by the xDS transport protocol, v3
      rate_limit_settings: {...} # rate limiting
      grpc_services:             # one or more sources providing the gRPC service
      - envoy_grpc:              # Envoy's built-in gRPC client; choose either envoy_grpc or google_grpc
          cluster_name: ...      # name of the gRPC cluster
        google_grpc: {...}       # Google's C++ gRPC client
        timeout: ...             # timeout
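Filled in, a minimal LDS source matching the skeleton above looks like this (the same shape appears in the front-envoy.yaml files below; `xds_cluster` is the static cluster pointing at the management server):

```yaml
lds_config:
  resource_api_version: V3
  api_config_source:
    api_type: GRPC
    transport_api_version: V3
    grpc_services:
    - envoy_grpc:
        cluster_name: xds_cluster
```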
1.Readme
Environment
Six services:
envoy: Front Proxy, at 172.31.15.2
webserver01: the first backend service
webserver01-sidecar: Sidecar Proxy for the first backend service, at 172.31.15.11
webserver02: the second backend service
webserver02-sidecar: Sidecar Proxy for the second backend service, at 172.31.15.12
xdsserver: xDS management server, at 172.31.15.5
Run and test
Create
docker-compose up
Test
# View Cluster and Endpoint information
curl 172.31.15.2:9901/clusters
# Or view the dynamically discovered clusters
curl -s 172.31.15.2:9901/config_dump | jq '.configs[1].dynamic_active_clusters'
# View the Listener list
curl 172.31.15.2:9901/listeners
# Or view the dynamically discovered listeners
curl -s 172.31.15.2:9901/config_dump?resource=dynamic_listeners | jq '.configs[0].active_state.listener.address'
# Open an interactive shell in the xdsserver container and edit config.yaml to add the other endpoint (or make any other change)
docker-compose exec xdsserver /bin/sh
cd /etc/envoy-xds-server/config
cat config.yaml-v2 > config.yaml
# Tip: the same edits can also be made directly in the volume directory on the host.
# Check the Endpoints in the cluster again
curl 172.31.15.2:9901/clusters
Clean up
docker-compose down
2.docker-compose.yaml
[root@xksmaster1 lds-cds-grpc]# cat docker-compose.yaml
version: '3.3'

services:
  envoy:
    image: envoyproxy/envoy-alpine:v1.21-latest
    environment:
      - ENVOY_UID=0
      - ENVOY_GID=0
    volumes:
      - ./front-envoy.yaml:/etc/envoy/envoy.yaml
    networks:
      envoymesh:
        ipv4_address: 172.31.15.2
        aliases:
          - front-proxy
    depends_on:
      - webserver01
      - webserver02
      - xdsserver

  webserver01:
    image: ikubernetes/demoapp:v1.0
    environment:
      - PORT=8080
      - HOST=127.0.0.1
    hostname: webserver01
    networks:
      envoymesh:
        ipv4_address: 172.31.15.11

  webserver01-sidecar:
    image: envoyproxy/envoy-alpine:v1.21-latest
    environment:
      - ENVOY_UID=0
      - ENVOY_GID=0
    volumes:
      - ./envoy-sidecar-proxy.yaml:/etc/envoy/envoy.yaml
    network_mode: "service:webserver01"
    depends_on:
      - webserver01

  webserver02:
    image: ikubernetes/demoapp:v1.0
    environment:
      - PORT=8080
      - HOST=127.0.0.1
    hostname: webserver02
    networks:
      envoymesh:
        ipv4_address: 172.31.15.12

  webserver02-sidecar:
    image: envoyproxy/envoy-alpine:v1.21-latest
    environment:
      - ENVOY_UID=0
      - ENVOY_GID=0
    volumes:
      - ./envoy-sidecar-proxy.yaml:/etc/envoy/envoy.yaml
    network_mode: "service:webserver02"
    depends_on:
      - webserver02

  xdsserver:
    image: ikubernetes/envoy-xds-server:v0.1
    environment:
      - SERVER_PORT=18000
      - NODE_ID=envoy_front_proxy
      - RESOURCES_FILE=/etc/envoy-xds-server/config/config.yaml
    volumes:
      - ./resources:/etc/envoy-xds-server/config/
    networks:
      envoymesh:
        ipv4_address: 172.31.15.5
        aliases:
          - xdsserver
          - xds-service
    expose:
      - "18000"

networks:
  envoymesh:
    driver: bridge
    ipam:
      config:
        - subnet: 172.31.15.0/24
3.front-envoy.yaml
Two clusters in total:
- LDS over gRPC dynamically discovers the Listener together with its VirtualHosts and Routes
- CDS over gRPC dynamically discovers the Cluster and its Endpoints; STRICT_DNS resolution loads all IPs behind the xdsserver name
[root@xksmaster1 lds-cds-grpc]# cat front-envoy.yaml
node:
  id: envoy_front_proxy
  cluster: webcluster

admin:
  profile_path: /tmp/envoy.prof
  access_log_path: /tmp/admin_access.log
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 9901

dynamic_resources:
  lds_config:
    resource_api_version: V3
    api_config_source:
      api_type: GRPC
      transport_api_version: V3
      grpc_services:
      - envoy_grpc:
          cluster_name: xds_cluster
  cds_config:
    resource_api_version: V3
    api_config_source:
      api_type: GRPC
      transport_api_version: V3
      grpc_services:
      - envoy_grpc:
          cluster_name: xds_cluster

static_resources:
  clusters:
  - name: xds_cluster
    connect_timeout: 0.25s
    type: STRICT_DNS
    # The extension_protocol_options field is used to provide extension-specific protocol options for upstream connections.
    typed_extension_protocol_options:
      envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
        "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
        explicit_http_config:
          http2_protocol_options: {}
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: xds_cluster
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: xdsserver
                port_value: 18000
4.resources/config.yaml
Defines the Listener and the Endpoints. To avoid a half-finished edit triggering a config push, edit an intermediate file first, then sync it onto the live config file.
# cat resources/config.yaml-v1
# cat resources/config.yaml-v1 > resources/config.yaml
name: myconfig
spec:
  listeners:
  - name: listener_http
    address: 0.0.0.0
    port: 80
    routes:
    - name: local_route
      prefix: /
      clusters:
      - webcluster
  clusters:
  - name: webcluster
    endpoints:
    - address: 172.31.15.11
      port: 80
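The v1/v2 file swap works because `cat config.yaml-v1 > config.yaml` rewrites the target in a single pass. A Python sketch of the same idea, using an atomic rename so a file watcher like the demo xDS server's can never observe a half-written config (function name and paths are illustrative, not part of the demo):

```python
import os
import tempfile

def atomic_update(path: str, new_text: str) -> None:
    """Replace the config file in one atomic step, so a file watcher
    sees either the old content or the new content, never a partial write."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            f.write(new_text)
        os.replace(tmp, path)  # atomic rename on POSIX
    except Exception:
        if os.path.exists(tmp):
            os.unlink(tmp)  # clean up the temp file on failure
        raise
```

Writing the temp file in the same directory as the target matters: `os.replace` is only atomic when both paths are on the same filesystem.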
5.Run and test
State after startup
There are two clusters at this point: xds_cluster, statically defined in the bootstrap file, and webcluster, dynamically discovered through xds_cluster. webcluster has a single endpoint, 172.31.15.11:80, so requests to 172.31.15.2 are answered only by 172.31.15.11.
# docker-compose up
# 查看集群状态 => curl 172.31.15.2:9901/clusters
[root@xksmaster1 Dynamic-Configuration]# curl 172.31.15.2:9901/clusters
xds_cluster::observability_name::xds_cluster
xds_cluster::default_priority::max_connections::1024
xds_cluster::default_priority::max_pending_requests::1024
xds_cluster::default_priority::max_requests::1024
xds_cluster::default_priority::max_retries::3
xds_cluster::high_priority::max_connections::1024
xds_cluster::high_priority::max_pending_requests::1024
xds_cluster::high_priority::max_requests::1024
xds_cluster::high_priority::max_retries::3
xds_cluster::added_via_api::false
xds_cluster::172.31.15.5:18000::cx_active::1
xds_cluster::172.31.15.5:18000::cx_connect_fail::0
xds_cluster::172.31.15.5:18000::cx_total::1
xds_cluster::172.31.15.5:18000::rq_active::4
xds_cluster::172.31.15.5:18000::rq_error::0
xds_cluster::172.31.15.5:18000::rq_success::0
xds_cluster::172.31.15.5:18000::rq_timeout::0
xds_cluster::172.31.15.5:18000::rq_total::4
xds_cluster::172.31.15.5:18000::hostname::xdsserver
xds_cluster::172.31.15.5:18000::health_flags::healthy
xds_cluster::172.31.15.5:18000::weight::1
xds_cluster::172.31.15.5:18000::region::
xds_cluster::172.31.15.5:18000::zone::
xds_cluster::172.31.15.5:18000::sub_zone::
xds_cluster::172.31.15.5:18000::canary::false
xds_cluster::172.31.15.5:18000::priority::0
xds_cluster::172.31.15.5:18000::success_rate::-1.0
xds_cluster::172.31.15.5:18000::local_origin_success_rate::-1.0
webcluster::observability_name::webcluster
webcluster::default_priority::max_connections::1024
webcluster::default_priority::max_pending_requests::1024
webcluster::default_priority::max_requests::1024
webcluster::default_priority::max_retries::3
webcluster::high_priority::max_connections::1024
webcluster::high_priority::max_pending_requests::1024
webcluster::high_priority::max_requests::1024
webcluster::high_priority::max_retries::3
webcluster::added_via_api::true
webcluster::172.31.15.11:80::cx_active::4
webcluster::172.31.15.11:80::cx_connect_fail::0
webcluster::172.31.15.11:80::cx_total::4
webcluster::172.31.15.11:80::rq_active::0
webcluster::172.31.15.11:80::rq_error::0
webcluster::172.31.15.11:80::rq_success::9
webcluster::172.31.15.11:80::rq_timeout::0
webcluster::172.31.15.11:80::rq_total::9
webcluster::172.31.15.11:80::hostname::
webcluster::172.31.15.11:80::health_flags::healthy
webcluster::172.31.15.11:80::weight::1
webcluster::172.31.15.11:80::region::
webcluster::172.31.15.11:80::zone::
webcluster::172.31.15.11:80::sub_zone::
webcluster::172.31.15.11:80::canary::false
webcluster::172.31.15.11:80::priority::0
webcluster::172.31.15.11:80::success_rate::-1.0
webcluster::172.31.15.11:80::local_origin_success_rate::-1.0
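The `/clusters` output above is plain text with `::`-separated fields. For scripting, a small (hypothetical) helper can fold those lines into a dict:

```python
def parse_clusters(text: str) -> dict:
    """Parse Envoy admin /clusters output ('name::key...::value' lines)
    into {cluster_name: {key: value}}."""
    stats: dict = {}
    for line in text.splitlines():
        parts = line.split("::")
        if len(parts) < 3:
            continue  # skip blank or malformed lines
        cluster, key, value = parts[0], "::".join(parts[1:-1]), parts[-1]
        stats.setdefault(cluster, {})[key] = value
    return stats

sample = """xds_cluster::added_via_api::false
webcluster::added_via_api::true
webcluster::172.31.15.11:80::health_flags::healthy"""
print(parse_clusters(sample)["webcluster"]["added_via_api"])  # true
```

Per-endpoint stats keep the endpoint address as part of the key (e.g. `172.31.15.11:80::health_flags`), matching the raw output.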
# Single-endpoint test: at this point only 172.31.15.11 can respond
[root@xksmaster1 Dynamic-Configuration]# curl 172.31.15.2
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: webserver01, ServerIP: 172.31.15.11!
[root@xksmaster1 Dynamic-Configuration]# curl 172.31.15.2
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: webserver01, ServerIP: 172.31.15.11!
[root@xksmaster1 Dynamic-Configuration]# curl 172.31.15.2
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: webserver01, ServerIP: 172.31.15.11!
[root@xksmaster1 Dynamic-Configuration]# curl 172.31.15.2
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: webserver01, ServerIP: 172.31.15.11!
[root@xksmaster1 Dynamic-Configuration]# curl 172.31.15.2
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: webserver01, ServerIP: 172.31.15.11!
# jq must be installed: yum install -y jq
[root@xksmaster1 Dynamic-Configuration]# curl -s 172.31.15.2:9901/config_dump | jq '.configs[1].dynamic_active_clusters'
[
  {
    "version_info": "411",
    "cluster": {
      "@type": "type.googleapis.com/envoy.config.cluster.v3.Cluster",
      "name": "webcluster",
      "type": "EDS",
      "eds_cluster_config": {
        "eds_config": {
          "api_config_source": {
            "api_type": "GRPC",
            "grpc_services": [
              {
                "envoy_grpc": {
                  "cluster_name": "xds_cluster"
                }
              }
            ],
            "set_node_on_first_message_only": true,
            "transport_api_version": "V3"
          },
          "resource_api_version": "V3"
        }
      },
      "connect_timeout": "5s",
      "dns_lookup_family": "V4_ONLY"
    },
    "last_updated": "2023-05-12T01:36:22.691Z"
  }
]
# View the listener address
[root@xksmaster1 Dynamic-Configuration]# curl -s 172.31.15.2:9901/config_dump?resource=dynamic_listeners| jq '.configs[0].active_state.listener.address'
{
  "socket_address": {
    "address": "0.0.0.0",
    "port_value": 80
  }
}
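The same information the jq one-liners extract can be pulled out programmatically. A sketch shaped after the dump above (not part of the demo code):

```python
import json

def dynamic_cluster_names(config_dump: dict) -> list:
    """Collect the names of clusters delivered via CDS from a parsed
    /config_dump payload, mirroring the jq filter used above."""
    names = []
    for section in config_dump.get("configs", []):
        for entry in section.get("dynamic_active_clusters", []):
            names.append(entry["cluster"]["name"])
    return names

# Trimmed sample shaped like the dump shown above:
dump = json.loads("""
{"configs": [
  {"static_clusters": []},
  {"dynamic_active_clusters": [
    {"version_info": "411",
     "cluster": {"name": "webcluster", "type": "EDS"}}]}
]}
""")
print(dynamic_cluster_names(dump))  # ['webcluster']
```

Iterating over all `configs` sections (rather than hard-coding index 1 as the jq filter does) makes the lookup robust to the section order in the dump.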
6.Modify resources/config.yaml
1. Give webcluster two endpoints: 172.31.15.11 and 172.31.15.12
2. Change listener_http to port 8081
# Open an interactive shell in the xdsserver container and edit config.yaml to add the other endpoint (or make any other change)
docker-compose exec xdsserver /bin/sh
cd /etc/envoy-xds-server/config
cat config.yaml-v2 > config.yaml
# Tip: the same edits can also be made directly in the volume directory on the host.
# After the sync, Envoy automatically detects the configuration change:
envoy_1 | [2023-05-12 01:54:50.126][1][info][upstream] [source/common/upstream/cds_api_helper.cc:30] cds: add 1 cluster(s), remove 1 cluster(s)
envoy_1 | [2023-05-12 01:54:50.126][1][info][upstream] [source/common/upstream/cds_api_helper.cc:67] cds: added/updated 0 cluster(s), skipped 1 unmodified cluster(s)
The new config (config.yaml-v2):
name: myconfig
spec:
  listeners:
  - name: listener_http
    address: 0.0.0.0
    port: 8081
    routes:
    - name: local_route
      prefix: /
      clusters:
      - webcluster
  clusters:
  - name: webcluster
    endpoints:
    - address: 172.31.15.11
      port: 80
    - address: 172.31.15.12
      port: 80
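The "skipped 1 unmodified cluster(s)" log line reflects how the update is applied: resources whose content is unchanged are not re-applied. A rough sketch of that diffing step (illustrative only, not Envoy's actual code):

```python
def apply_snapshot(current: dict, incoming: dict):
    """Compare a new resource snapshot against the current one and
    report what was added/updated, removed, and skipped as unmodified."""
    added, skipped = [], []
    for name, resource in incoming.items():
        if current.get(name) == resource:
            skipped.append(name)  # identical content: nothing to re-apply
        else:
            current[name] = resource
            added.append(name)
    removed = [name for name in list(current) if name not in incoming]
    for name in removed:
        del current[name]
    return added, removed, skipped

current = {"webcluster": {"endpoints": ["172.31.15.11:80"]}}
incoming = {"webcluster": {"endpoints": ["172.31.15.11:80"]}}
print(apply_snapshot(current, incoming))  # ([], [], ['webcluster'])
```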
No restart is needed; check the cluster state again and access the new listener address.
root@k8s-node-1:~# curl -s 172.31.15.2:9901/listeners
listener_http::0.0.0.0:8081
root@k8s-node-1:~# curl -s 172.31.15.2:9901/clusters
webcluster::observability_name::webcluster
webcluster::default_priority::max_connections::1024
webcluster::default_priority::max_pending_requests::1024
webcluster::default_priority::max_requests::1024
webcluster::default_priority::max_retries::3
webcluster::high_priority::max_connections::1024
webcluster::high_priority::max_pending_requests::1024
webcluster::high_priority::max_requests::1024
webcluster::high_priority::max_retries::3
webcluster::added_via_api::true
webcluster::172.31.15.11:80::cx_active::0
webcluster::172.31.15.11:80::cx_connect_fail::0
webcluster::172.31.15.11:80::cx_total::0
webcluster::172.31.15.11:80::rq_active::0
webcluster::172.31.15.11:80::rq_error::0
webcluster::172.31.15.11:80::rq_success::0
webcluster::172.31.15.11:80::rq_timeout::0
webcluster::172.31.15.11:80::rq_total::0
webcluster::172.31.15.11:80::hostname::
webcluster::172.31.15.11:80::health_flags::healthy
webcluster::172.31.15.11:80::weight::1
webcluster::172.31.15.11:80::region::
webcluster::172.31.15.11:80::zone::
webcluster::172.31.15.11:80::sub_zone::
webcluster::172.31.15.11:80::canary::false
webcluster::172.31.15.11:80::priority::0
webcluster::172.31.15.11:80::success_rate::-1.0
webcluster::172.31.15.11:80::local_origin_success_rate::-1.0
webcluster::172.31.15.12:80::cx_active::0
webcluster::172.31.15.12:80::cx_connect_fail::0
webcluster::172.31.15.12:80::cx_total::0
webcluster::172.31.15.12:80::rq_active::0
webcluster::172.31.15.12:80::rq_error::0
webcluster::172.31.15.12:80::rq_success::0
webcluster::172.31.15.12:80::rq_timeout::0
webcluster::172.31.15.12:80::rq_total::0
webcluster::172.31.15.12:80::hostname::
webcluster::172.31.15.12:80::health_flags::healthy
webcluster::172.31.15.12:80::weight::1
webcluster::172.31.15.12:80::region::
webcluster::172.31.15.12:80::zone::
webcluster::172.31.15.12:80::sub_zone::
webcluster::172.31.15.12:80::canary::false
webcluster::172.31.15.12:80::priority::0
webcluster::172.31.15.12:80::success_rate::-1.0
webcluster::172.31.15.12:80::local_origin_success_rate::-1.0
xds_cluster::observability_name::xds_cluster
xds_cluster::default_priority::max_connections::1024
xds_cluster::default_priority::max_pending_requests::1024
xds_cluster::default_priority::max_requests::1024
xds_cluster::default_priority::max_retries::3
xds_cluster::high_priority::max_connections::1024
xds_cluster::high_priority::max_pending_requests::1024
xds_cluster::high_priority::max_requests::1024
xds_cluster::high_priority::max_retries::3
xds_cluster::added_via_api::false
xds_cluster::172.31.15.5:18000::cx_active::1
xds_cluster::172.31.15.5:18000::cx_connect_fail::0
xds_cluster::172.31.15.5:18000::cx_total::1
xds_cluster::172.31.15.5:18000::rq_active::4
xds_cluster::172.31.15.5:18000::rq_error::0
xds_cluster::172.31.15.5:18000::rq_success::0
xds_cluster::172.31.15.5:18000::rq_timeout::0
xds_cluster::172.31.15.5:18000::rq_total::4
xds_cluster::172.31.15.5:18000::hostname::xdsserver
xds_cluster::172.31.15.5:18000::health_flags::healthy
xds_cluster::172.31.15.5:18000::weight::1
xds_cluster::172.31.15.5:18000::region::
xds_cluster::172.31.15.5:18000::zone::
xds_cluster::172.31.15.5:18000::sub_zone::
xds_cluster::172.31.15.5:18000::canary::false
xds_cluster::172.31.15.5:18000::priority::0
xds_cluster::172.31.15.5:18000::success_rate::-1.0
xds_cluster::172.31.15.5:18000::local_origin_success_rate::-1.0
root@k8s-node-1:~# curl 172.31.15.2:8081
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: webserver01, ServerIP: 172.31.15.11!
root@k8s-node-1:~# curl 172.31.15.2:8081
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: webserver02, ServerIP: 172.31.15.12!
root@k8s-node-1:~# curl 172.31.15.2:8081
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: webserver02, ServerIP: 172.31.15.12!
root@k8s-node-1:~# curl 172.31.15.2:8081
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: webserver01, ServerIP: 172.31.15.11!
root@k8s-node-1:~# curl 172.31.15.2:8081
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: webserver02, ServerIP: 172.31.15.12!
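The responses above are spread across the two endpoints by the ROUND_ROBIN policy both clusters declare (connection reuse means the order seen by curl need not strictly alternate). A minimal model of the policy itself:

```python
import itertools

def round_robin(endpoints):
    """Model of the ROUND_ROBIN lb_policy: hand out endpoints in order,
    wrapping around when the list is exhausted."""
    return itertools.cycle(endpoints)

lb = round_robin(["172.31.15.11:80", "172.31.15.12:80"])
print([next(lb) for _ in range(4)])
# ['172.31.15.11:80', '172.31.15.12:80', '172.31.15.11:80', '172.31.15.12:80']
```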
II. Dynamic configuration over gRPC (ADS)


1.Readme
[root@xksmaster1 ads-grpc]# cat README.md
# xDS ADS grpc Demo
### Environment
Six services:
- envoy: Front Proxy, at 172.31.16.2
- webserver01: the first backend service
- webserver01-sidecar: Sidecar Proxy for the first backend service, at 172.31.16.11
- webserver02: the second backend service
- webserver02-sidecar: Sidecar Proxy for the second backend service, at 172.31.16.12
- xdsserver: xDS management server, at 172.31.16.5
### Run and test
1. Create
```
docker-compose up
```
2. Test
```
# View Cluster and Endpoint information
curl 172.31.16.2:9901/clusters
# Or view the dynamically discovered clusters
curl -s 172.31.16.2:9901/config_dump | jq '.configs[1].dynamic_active_clusters'
# View the Listener list
curl 172.31.16.2:9901/listeners
# Or view the dynamically discovered listeners
curl -s 172.31.16.2:9901/config_dump?resource=dynamic_listeners | jq '.configs[0].active_state.listener.address'
# Open an interactive shell in the xdsserver container and edit config.yaml to add the other endpoint (or make any other change)
docker-compose exec xdsserver /bin/sh
cd /etc/envoy-xds-server/config
cat config.yaml-v2 > config.yaml
# Tip: the same edits can also be made directly in the volume directory on the host.
# Check the Endpoints in the cluster again
curl 172.31.16.2:9901/clusters
```
3. Clean up
```
docker-compose down
```
2.docker-compose.yaml
[root@xksmaster1 ads-grpc]# cat docker-compose.yaml
version: '3.3'

services:
  envoy:
    image: envoyproxy/envoy-alpine:v1.21-latest
    environment:
      - ENVOY_UID=0
      - ENVOY_GID=0
    volumes:
      - ./front-envoy.yaml:/etc/envoy/envoy.yaml
    networks:
      envoymesh:
        ipv4_address: 172.31.16.2
        aliases:
          - front-proxy
    depends_on:
      - webserver01
      - webserver02
      - xdsserver

  webserver01:
    image: ikubernetes/demoapp:v1.0
    environment:
      - PORT=8080
      - HOST=127.0.0.1
    hostname: webserver01
    networks:
      envoymesh:
        ipv4_address: 172.31.16.11

  webserver01-sidecar:
    image: envoyproxy/envoy-alpine:v1.21-latest
    environment:
      - ENVOY_UID=0
      - ENVOY_GID=0
    volumes:
      - ./envoy-sidecar-proxy.yaml:/etc/envoy/envoy.yaml
    network_mode: "service:webserver01"
    depends_on:
      - webserver01

  webserver02:
    image: ikubernetes/demoapp:v1.0
    environment:
      - PORT=8080
      - HOST=127.0.0.1
    hostname: webserver02
    networks:
      envoymesh:
        ipv4_address: 172.31.16.12

  webserver02-sidecar:
    image: envoyproxy/envoy-alpine:v1.21-latest
    environment:
      - ENVOY_UID=0
      - ENVOY_GID=0
    volumes:
      - ./envoy-sidecar-proxy.yaml:/etc/envoy/envoy.yaml
    network_mode: "service:webserver02"
    depends_on:
      - webserver02

  xdsserver:
    image: ikubernetes/envoy-xds-server:v0.1
    environment:
      - SERVER_PORT=18000
      - NODE_ID=envoy_front_proxy
      - RESOURCES_FILE=/etc/envoy-xds-server/config/config.yaml
    volumes:
      - ./resources:/etc/envoy-xds-server/config/
    networks:
      envoymesh:
        ipv4_address: 172.31.16.5
        aliases:
          - xdsserver
          - xds-service
    expose:
      - "18000"

networks:
  envoymesh:
    driver: bridge
    ipam:
      config:
        - subnet: 172.31.16.0/24
3.front-envoy.yaml
Two clusters in total:
- ads_config loads both CDS and LDS over a single gRPC (ADS) stream, which sequences the updates and avoids resources being dropped because of ordering between the discovery types
- envoy_grpc points at the xds_cluster cluster, which currently resolves to a single container, 172.31.16.5
- cds_config and lds_config both delegate to ADS (ads: {})
- STRICT_DNS resolution loads all IPs behind the xdsserver name
[root@xksmaster1 ads-grpc]# cat front-envoy.yaml
node:
  id: envoy_front_proxy
  cluster: webcluster

admin:
  profile_path: /tmp/envoy.prof
  access_log_path: /tmp/admin_access.log
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 9901

dynamic_resources:
  ads_config:
    api_type: GRPC
    transport_api_version: V3
    grpc_services:
    - envoy_grpc:
        cluster_name: xds_cluster
    set_node_on_first_message_only: true
  cds_config:
    resource_api_version: V3
    ads: {}
  lds_config:
    resource_api_version: V3
    ads: {}

static_resources:
  clusters:
  - name: xds_cluster
    connect_timeout: 0.25s
    type: STRICT_DNS
    # The extension_protocol_options field is used to provide extension-specific protocol options for upstream connections.
    typed_extension_protocol_options:
      envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
        "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
        explicit_http_config:
          http2_protocol_options: {}
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: xds_cluster
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: xdsserver
                port_value: 18000
4.envoy-sidecar-proxy.yaml
[root@xksmaster1 ads-grpc]# cat envoy-sidecar-proxy.yaml
admin:
  profile_path: /tmp/envoy.prof
  access_log_path: /tmp/admin_access.log
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 9901

static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 80 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          codec_type: AUTO
          route_config:
            name: local_route
            virtual_hosts:
            - name: local_service
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: local_cluster }
          http_filters:
          - name: envoy.filters.http.router
  clusters:
  - name: local_cluster
    connect_timeout: 0.25s
    type: STATIC
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: local_cluster
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: 127.0.0.1, port_value: 8080 }
5.config.yaml
[root@xksmaster1 resources]# cat config.yaml
name: myconfig
spec:
  listeners:
  - name: listener_http
    address: 0.0.0.0
    port: 80
    routes:
    - name: local_route
      prefix: /
      clusters:
      - webcluster
  clusters:
  - name: webcluster
    endpoints:
    - address: 172.31.16.11
      port: 80
[root@xksmaster1 resources]# cat config.yaml-v1
name: myconfig
spec:
  listeners:
  - name: listener_http
    address: 0.0.0.0
    port: 80
    routes:
    - name: local_route
      prefix: /
      clusters:
      - webcluster
  clusters:
  - name: webcluster
    endpoints:
    - address: 172.31.16.11
      port: 80
[root@xksmaster1 resources]# cat config.yaml-v2
name: myconfig
spec:
  listeners:
  - name: listener_http
    address: 0.0.0.0
    port: 80
    routes:
    - name: local_route
      prefix: /
      clusters:
      - webcluster
  clusters:
  - name: webcluster
    endpoints:
    - address: 172.31.16.11
      port: 80
    - address: 172.31.16.12
      port: 80
6.Run and test
At this point config.yaml points at a single endpoint, 172.31.16.11
# docker-compose up
## Access test
root@k8s-node-1:~# curl 172.31.16.2
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: webserver01, ServerIP: 172.31.16.11!
root@k8s-node-1:~# curl 172.31.16.2
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: webserver01, ServerIP: 172.31.16.11!
root@k8s-node-1:~# curl 172.31.16.2
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: webserver01, ServerIP: 172.31.16.11!
root@k8s-node-1:~# curl 172.31.16.2:9901/listeners
listener_http::0.0.0.0:80
root@k8s-node-1:~# curl 172.31.16.2:9901/clusters
xds_cluster::observability_name::xds_cluster
xds_cluster::default_priority::max_connections::1024
xds_cluster::default_priority::max_pending_requests::1024
xds_cluster::default_priority::max_requests::1024
xds_cluster::default_priority::max_retries::3
xds_cluster::high_priority::max_connections::1024
xds_cluster::high_priority::max_pending_requests::1024
xds_cluster::high_priority::max_requests::1024
xds_cluster::high_priority::max_retries::3
xds_cluster::added_via_api::false
xds_cluster::172.31.16.5:18000::cx_active::1
xds_cluster::172.31.16.5:18000::cx_connect_fail::0
xds_cluster::172.31.16.5:18000::cx_total::1
xds_cluster::172.31.16.5:18000::rq_active::3
xds_cluster::172.31.16.5:18000::rq_error::0
xds_cluster::172.31.16.5:18000::rq_success::0
xds_cluster::172.31.16.5:18000::rq_timeout::0
xds_cluster::172.31.16.5:18000::rq_total::3
xds_cluster::172.31.16.5:18000::hostname::xdsserver
xds_cluster::172.31.16.5:18000::health_flags::healthy
xds_cluster::172.31.16.5:18000::weight::1
xds_cluster::172.31.16.5:18000::region::
xds_cluster::172.31.16.5:18000::zone::
xds_cluster::172.31.16.5:18000::sub_zone::
xds_cluster::172.31.16.5:18000::canary::false
xds_cluster::172.31.16.5:18000::priority::0
xds_cluster::172.31.16.5:18000::success_rate::-1.0
xds_cluster::172.31.16.5:18000::local_origin_success_rate::-1.0
webcluster::observability_name::webcluster
webcluster::default_priority::max_connections::1024
webcluster::default_priority::max_pending_requests::1024
webcluster::default_priority::max_requests::1024
webcluster::default_priority::max_retries::3
webcluster::high_priority::max_connections::1024
webcluster::high_priority::max_pending_requests::1024
webcluster::high_priority::max_requests::1024
webcluster::high_priority::max_retries::3
webcluster::added_via_api::true
webcluster::172.31.16.11:80::cx_active::3
webcluster::172.31.16.11:80::cx_connect_fail::0
webcluster::172.31.16.11:80::cx_total::3
webcluster::172.31.16.11:80::rq_active::0
webcluster::172.31.16.11:80::rq_error::0
webcluster::172.31.16.11:80::rq_success::5
webcluster::172.31.16.11:80::rq_timeout::0
webcluster::172.31.16.11:80::rq_total::5
webcluster::172.31.16.11:80::hostname::
webcluster::172.31.16.11:80::health_flags::healthy
webcluster::172.31.16.11:80::weight::1
webcluster::172.31.16.11:80::region::
webcluster::172.31.16.11:80::zone::
webcluster::172.31.16.11:80::sub_zone::
webcluster::172.31.16.11:80::canary::false
webcluster::172.31.16.11:80::priority::0
webcluster::172.31.16.11:80::success_rate::-1.0
webcluster::172.31.16.11:80::local_origin_success_rate::-1.0
7.Modify the listener and endpoints
To avoid an accidental sync while editing, copy the config out first, make the changes in the new file, then sync it back onto the original file to trigger the update.
The changes:
- Change the listener from port 80 to 8080
- Append endpoint 172.31.16.12
docker exec -it adsgrpc_xdsserver_1 sh
### Edit the config: change the listener from 80 to 8080 and append endpoint 172.31.16.12
/ # cd /etc/envoy-xds-server/config/
/etc/envoy-xds-server/config # cat config.yaml
name: myconfig
spec:
  listeners:
  - name: listener_http
    address: 0.0.0.0
    port: 80
    routes:
    - name: local_route
      prefix: /
      clusters:
      - webcluster
  clusters:
  - name: webcluster
    endpoints:
    - address: 172.31.16.11
      port: 80
/etc/envoy-xds-server/config # cp config.yaml config2.yaml
/etc/envoy-xds-server/config # vi config2.yaml
/etc/envoy-xds-server/config # cat config2.yaml
name: myconfig
spec:
  listeners:
  - name: listener_http
    address: 0.0.0.0
    port: 8080
    routes:
    - name: local_route
      prefix: /
      clusters:
      - webcluster
  clusters:
  - name: webcluster
    endpoints:
    - address: 172.31.16.11
      port: 80
    - address: 172.31.16.12
      port: 80
### Sync the config
/etc/envoy-xds-server/config # cat config2.yaml > config.yaml
/etc/envoy-xds-server/config # cat config.yaml
name: myconfig
spec:
  listeners:
  - name: listener_http
    address: 0.0.0.0
    port: 8080
    routes:
    - name: local_route
      prefix: /
      clusters:
      - webcluster
  clusters:
  - name: webcluster
    endpoints:
    - address: 172.31.16.11
      port: 80
    - address: 172.31.16.12
      port: 80
Test again
- The listener address has changed to 8080
- webcluster now includes the additional endpoint 172.31.16.12
- Requests to 172.31.16.2:8080 are balanced across the two endpoints
[root@xksmaster1 ads-grpc]# curl 172.31.16.2:9901/listeners
listener_http::0.0.0.0:8080
[root@xksmaster1 ads-grpc]# curl 172.31.16.2:9901/clusters
webcluster::observability_name::webcluster
webcluster::default_priority::max_connections::1024
webcluster::default_priority::max_pending_requests::1024
webcluster::default_priority::max_requests::1024
webcluster::default_priority::max_retries::3
webcluster::high_priority::max_connections::1024
webcluster::high_priority::max_pending_requests::1024
webcluster::high_priority::max_requests::1024
webcluster::high_priority::max_retries::3
webcluster::added_via_api::true
webcluster::172.31.16.11:80::cx_active::2
webcluster::172.31.16.11:80::cx_connect_fail::0
webcluster::172.31.16.11:80::cx_total::2
webcluster::172.31.16.11:80::rq_active::0
webcluster::172.31.16.11:80::rq_error::0
webcluster::172.31.16.11:80::rq_success::7
webcluster::172.31.16.11:80::rq_timeout::0
webcluster::172.31.16.11:80::rq_total::7
webcluster::172.31.16.11:80::hostname::
webcluster::172.31.16.11:80::health_flags::healthy
webcluster::172.31.16.11:80::weight::1
webcluster::172.31.16.11:80::region::
webcluster::172.31.16.11:80::zone::
webcluster::172.31.16.11:80::sub_zone::
webcluster::172.31.16.11:80::canary::false
webcluster::172.31.16.11:80::priority::0
webcluster::172.31.16.11:80::success_rate::-1.0
webcluster::172.31.16.11:80::local_origin_success_rate::-1.0
webcluster::172.31.16.12:80::cx_active::0
webcluster::172.31.16.12:80::cx_connect_fail::0
webcluster::172.31.16.12:80::cx_total::0
webcluster::172.31.16.12:80::rq_active::0
webcluster::172.31.16.12:80::rq_error::0
webcluster::172.31.16.12:80::rq_success::0
webcluster::172.31.16.12:80::rq_timeout::0
webcluster::172.31.16.12:80::rq_total::0
webcluster::172.31.16.12:80::hostname::
webcluster::172.31.16.12:80::health_flags::healthy
webcluster::172.31.16.12:80::weight::1
webcluster::172.31.16.12:80::region::
webcluster::172.31.16.12:80::zone::
webcluster::172.31.16.12:80::sub_zone::
webcluster::172.31.16.12:80::canary::false
webcluster::172.31.16.12:80::priority::0
webcluster::172.31.16.12:80::success_rate::-1.0
webcluster::172.31.16.12:80::local_origin_success_rate::-1.0
xds_cluster::observability_name::xds_cluster
xds_cluster::default_priority::max_connections::1024
xds_cluster::default_priority::max_pending_requests::1024
xds_cluster::default_priority::max_requests::1024
xds_cluster::default_priority::max_retries::3
xds_cluster::high_priority::max_connections::1024
xds_cluster::high_priority::max_pending_requests::1024
xds_cluster::high_priority::max_requests::1024
xds_cluster::high_priority::max_retries::3
xds_cluster::added_via_api::false
xds_cluster::172.31.16.5:18000::cx_active::1
xds_cluster::172.31.16.5:18000::cx_connect_fail::0
xds_cluster::172.31.16.5:18000::cx_total::1
xds_cluster::172.31.16.5:18000::rq_active::3
xds_cluster::172.31.16.5:18000::rq_error::0
xds_cluster::172.31.16.5:18000::rq_success::0
xds_cluster::172.31.16.5:18000::rq_timeout::0
xds_cluster::172.31.16.5:18000::rq_total::3
xds_cluster::172.31.16.5:18000::hostname::xdsserver
xds_cluster::172.31.16.5:18000::health_flags::healthy
xds_cluster::172.31.16.5:18000::weight::1
xds_cluster::172.31.16.5:18000::region::
xds_cluster::172.31.16.5:18000::zone::
xds_cluster::172.31.16.5:18000::sub_zone::
xds_cluster::172.31.16.5:18000::canary::false
xds_cluster::172.31.16.5:18000::priority::0
xds_cluster::172.31.16.5:18000::success_rate::-1.0
xds_cluster::172.31.16.5:18000::local_origin_success_rate::-1.0
[root@xksmaster1 ads-grpc]# curl 172.31.16.2:8080
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: webserver02, ServerIP: 172.31.16.12!
[root@xksmaster1 ads-grpc]# curl 172.31.16.2:8080
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: webserver01, ServerIP: 172.31.16.11!
[root@xksmaster1 ads-grpc]# curl 172.31.16.2:8080
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: webserver02, ServerIP: 172.31.16.12!
[root@xksmaster1 ads-grpc]# curl 172.31.16.2:8080
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: webserver01, ServerIP: 172.31.16.11!
[root@xksmaster1 ads-grpc]# curl 172.31.16.2:8080
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: webserver01, ServerIP: 172.31.16.11!
8.Adjust the endpoints again
Remove endpoint 172.31.16.11
/etc/envoy-xds-server/config # vi config2.yaml
/etc/envoy-xds-server/config # cat config2.yaml
name: myconfig
spec:
  listeners:
  - name: listener_http
    address: 0.0.0.0
    port: 8080
    routes:
    - name: local_route
      prefix: /
      clusters:
      - webcluster
  clusters:
  - name: webcluster
    endpoints:
    - address: 172.31.16.12
      port: 80
/etc/envoy-xds-server/config # cat config2.yaml > config.yaml
Access test
- The listener was not modified, so the listeners are unchanged
- webcluster is left with only 172.31.16.12
- Requests to the listener are dispatched only to 172.31.16.12
[root@xksmaster1 ads-grpc]# curl 172.31.16.2:9901/listeners
listener_http::0.0.0.0:8080
[root@xksmaster1 ads-grpc]# curl 172.31.16.2:9901/clusters
webcluster::observability_name::webcluster
webcluster::default_priority::max_connections::1024
webcluster::default_priority::max_pending_requests::1024
webcluster::default_priority::max_requests::1024
webcluster::default_priority::max_retries::3
webcluster::high_priority::max_connections::1024
webcluster::high_priority::max_pending_requests::1024
webcluster::high_priority::max_requests::1024
webcluster::high_priority::max_retries::3
webcluster::added_via_api::true
webcluster::172.31.16.12:80::cx_active::2
webcluster::172.31.16.12:80::cx_connect_fail::0
webcluster::172.31.16.12:80::cx_total::2
webcluster::172.31.16.12:80::rq_active::0
webcluster::172.31.16.12:80::rq_error::0
webcluster::172.31.16.12:80::rq_success::3
webcluster::172.31.16.12:80::rq_timeout::0
webcluster::172.31.16.12:80::rq_total::3
webcluster::172.31.16.12:80::hostname::
webcluster::172.31.16.12:80::health_flags::healthy
webcluster::172.31.16.12:80::weight::1
webcluster::172.31.16.12:80::region::
webcluster::172.31.16.12:80::zone::
webcluster::172.31.16.12:80::sub_zone::
webcluster::172.31.16.12:80::canary::false
webcluster::172.31.16.12:80::priority::0
webcluster::172.31.16.12:80::success_rate::-1.0
webcluster::172.31.16.12:80::local_origin_success_rate::-1.0
xds_cluster::observability_name::xds_cluster
xds_cluster::default_priority::max_connections::1024
xds_cluster::default_priority::max_pending_requests::1024
xds_cluster::default_priority::max_requests::1024
xds_cluster::default_priority::max_retries::3
xds_cluster::high_priority::max_connections::1024
xds_cluster::high_priority::max_pending_requests::1024
xds_cluster::high_priority::max_requests::1024
xds_cluster::high_priority::max_retries::3
xds_cluster::added_via_api::false
xds_cluster::172.31.16.5:18000::cx_active::1
xds_cluster::172.31.16.5:18000::cx_connect_fail::0
xds_cluster::172.31.16.5:18000::cx_total::1
xds_cluster::172.31.16.5:18000::rq_active::3
xds_cluster::172.31.16.5:18000::rq_error::0
xds_cluster::172.31.16.5:18000::rq_success::0
xds_cluster::172.31.16.5:18000::rq_timeout::0
xds_cluster::172.31.16.5:18000::rq_total::3
xds_cluster::172.31.16.5:18000::hostname::xdsserver
xds_cluster::172.31.16.5:18000::health_flags::healthy
xds_cluster::172.31.16.5:18000::weight::1
xds_cluster::172.31.16.5:18000::region::
xds_cluster::172.31.16.5:18000::zone::
xds_cluster::172.31.16.5:18000::sub_zone::
xds_cluster::172.31.16.5:18000::canary::false
xds_cluster::172.31.16.5:18000::priority::0
xds_cluster::172.31.16.5:18000::success_rate::-1.0
xds_cluster::172.31.16.5:18000::local_origin_success_rate::-1.0
[root@xksmaster1 ads-grpc]# curl 172.31.16.2:8080
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: webserver02, ServerIP: 172.31.16.12!
[root@xksmaster1 ads-grpc]# curl 172.31.16.2:8080
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: webserver02, ServerIP: 172.31.16.12!
[root@xksmaster1 ads-grpc]# curl 172.31.16.2:8080
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: webserver02, ServerIP: 172.31.16.12!
[root@xksmaster1 ads-grpc]# curl 172.31.16.2:8080
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: webserver02, ServerIP: 172.31.16.12!
[root@xksmaster1 ads-grpc]# curl 172.31.16.2:8080
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: webserver02, ServerIP: 172.31.16.12!