Istio Traffic Management Quick Start and Configuration Examples (Part 4)
Envoy takes over load balancing directly, in place of the Service.
Envoy intercepts traffic at an early stage, while service discovery still relies on Service objects; the control plane queries the API Server for the endpoints behind each Service.
Those endpoints are distributed to every Envoy via EDS, so each Envoy carries an outbound listener (Egress Listener).
The endpoints are assembled into a Cluster that is associated with the Service port and with the Envoy, so every Envoy knows each individual Endpoint.
================================================
API resource groups
[root@xksmaster1 04-proxy-gateway]# kubectl api-resources --api-group=networking.istio.io
NAME SHORTNAMES APIVERSION NAMESPACED KIND
destinationrules dr networking.istio.io/v1beta1 true DestinationRule
envoyfilters networking.istio.io/v1alpha3 true EnvoyFilter
gateways gw networking.istio.io/v1beta1 true Gateway
proxyconfigs networking.istio.io/v1beta1 true ProxyConfig
serviceentries se networking.istio.io/v1beta1 true ServiceEntry
sidecars networking.istio.io/v1beta1 true Sidecar
virtualservices vs networking.istio.io/v1beta1 true VirtualService
workloadentries we networking.istio.io/v1beta1 true WorkloadEntry
workloadgroups wg networking.istio.io/v1beta1 true WorkloadGroup
[root@xksmaster1 04-proxy-gateway]# kubectl api-resources --api-group=security.istio.io
NAME SHORTNAMES APIVERSION NAMESPACED KIND
authorizationpolicies security.istio.io/v1 true AuthorizationPolicy
peerauthentications pa security.istio.io/v1beta1 true PeerAuthentication
requestauthentications ra security.istio.io/v1 true RequestAuthentication
Set an external IP: external traffic first reaches this address and is then forwarded to istio-ingressgateway
[root@xksmaster1 network-scripts]# kubectl edit svc istio-ingressgateway -n istio-system
service/istio-ingressgateway edited
Add:
externalIPs:
- 192.168.19.200
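In context, the relevant part of the edited Service spec ends up looking roughly like this (a fragment for orientation only; all other fields are omitted):

```yaml
# Fragment of the istio-ingressgateway Service (illustrative; other fields omitted)
spec:
  type: LoadBalancer
  externalIPs:        # a node-routable address on which ingress traffic is accepted
  - 192.168.19.200
```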
[root@xksmaster1 network-scripts]# kubectl get svc -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-egressgateway ClusterIP 10.106.118.36 <none> 80/TCP,443/TCP 68d
istio-ingressgateway LoadBalancer 10.108.113.206 192.168.19.200 15021:31695/TCP,80:31246/TCP,443:30196/TCP,31400:30817/TCP,15443:31775/TCP 68d
istiod ClusterIP 10.110.214.145 <none> 15010/TCP,15012/TCP,443/TCP,15014/TCP 69d
Deploy demoapp for testing
# Test directory: /root/istio/istio-in-practise-main/Traffic-Management-Basics/ms-demo/01-demoapp-v10
[root@xksmaster1 01-demoapp-v10]# cat deploy-demoapp.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: demoappv10
    version: v1.0
  name: demoappv10
spec:
  progressDeadlineSeconds: 600
  replicas: 3
  selector:
    matchLabels:
      app: demoapp
      version: v1.0
  template:
    metadata:
      labels:
        app: demoapp
        version: v1.0
    spec:
      containers:
      - image: ikubernetes/demoapp:v1.0
        imagePullPolicy: IfNotPresent
        name: demoapp
        env:
        - name: "PORT"
          value: "8080"
        ports:
        - containerPort: 8080
          name: web
          protocol: TCP
        resources:
          limits:
            cpu: 50m
---
apiVersion: v1
kind: Service
metadata:
  name: demoappv10
spec:
  ports:
  - name: http
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: demoapp
    version: v1.0
  type: ClusterIP
---
[root@xksmaster1 01-demoapp-v10]# kubectl apply -f deploy-demoapp.yaml
[root@xksmaster1 01-demoapp-v10]# kubectl get pods
NAME READY STATUS RESTARTS AGE
demoappv10-54757f48d6-2ltqm 1/2 Running 0 5s
demoappv10-54757f48d6-bdwg6 1/2 Running 0 5s
demoappv10-54757f48d6-dnfq2 1/2 Running 0 5s
[root@xksmaster1 01-demoapp-v10]# kubectl describe svc demoappv10
Name: demoappv10
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=demoapp,version=v1.0
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.111.85.207
IPs: 10.111.85.207
Port: http 8080/TCP
TargetPort: 8080/TCP
Endpoints: 10.244.182.30:8080,10.244.182.31:8080,10.244.207.90:8080
Session Affinity: None
Events: <none>
[root@xksmaster1 01-demoapp-v10]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
demoappv10 ClusterIP 10.111.85.207 <none> 8080/TCP 35s
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 89d
sleep ClusterIP 10.109.100.95 <none> 80/TCP 39m
# Deploy the sleep client application for testing
[root@xksmaster1 01-demoapp-v10]# kubectl apply -f /usr/local/istio/samples/sleep/sleep.yaml
[root@xksmaster1 01-demoapp-v10]# kubectl exec -it sleep-bc9998558-q8fdk /bin/sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
/ $ curl demoappv10.default.svc:8080
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.6, ServerName: demoappv10-54757f48d6-bdwg6, ServerIP: 10.244.182.30!
/ $ curl demoappv10.default.svc:8080
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.6, ServerName: demoappv10-54757f48d6-dnfq2, ServerIP: 10.244.207.90!
/ $ curl demoappv10.default.svc:8080
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.6, ServerName: demoappv10-54757f48d6-2ltqm, ServerIP: 10.244.182.31!
istioctl
Topology diagram:

istioctl proxy-config
Istio reads the demoappv10 Service from the API Server and configures it into each Envoy's outbound listener; every Envoy defines both inbound and outbound listeners
[root@xksmaster1 ~]# istioctl proxy-config
A group of commands used to retrieve information about proxy configuration from the Envoy config dump
Usage:
istioctl proxy-config [command]
Aliases:
proxy-config, pc
Examples:
# Retrieve information about proxy configuration from an Envoy instance.
istioctl proxy-config <clusters|listeners|routes|endpoints|bootstrap|log|secret> <pod-name[.namespace]>
Available Commands:
all Retrieves all configuration for the Envoy in the specified pod
bootstrap Retrieves bootstrap configuration for the Envoy in the specified pod
cluster Retrieves cluster configuration for the Envoy in the specified pod
ecds Retrieves typed extension configuration for the Envoy in the specified pod
endpoint Retrieves endpoint configuration for the Envoy in the specified pod
listener Retrieves listener configuration for the Envoy in the specified pod
log (experimental) Retrieves logging levels of the Envoy in the specified pod
rootca-compare Compare ROOTCA values for the two given pods
route Retrieves route configuration for the Envoy in the specified pod
secret Retrieves secret configuration for the Envoy in the specified pod
Flags:
-h, --help help for proxy-config
-o, --output string Output format: one of json|yaml|short (default "short")
--proxy-admin-port int Envoy proxy admin port (default 15000)
Global Flags:
--context string The name of the kubeconfig context to use
-i, --istioNamespace string Istio system namespace (default "istio-system")
-c, --kubeconfig string Kubernetes configuration file
-n, --namespace string Config namespace
--vklog Level number for the log level verbosity. Like -v flag. ex: --vklog=9
Use "istioctl proxy-config [command] --help" for more information about a command.
[root@xksmaster1 ~]# istioctl ps
NAME CLUSTER CDS LDS EDS RDS ECDS ISTIOD VERSION
demoappv10-54757f48d6-2ltqm.default Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-65dcb8497-v4g8j 1.17.1
demoappv10-54757f48d6-bdwg6.default Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-65dcb8497-v4g8j 1.17.1
demoappv10-54757f48d6-dnfq2.default Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-65dcb8497-v4g8j 1.17.1
istio-egressgateway-774d6846df-wgkb9.istio-system Kubernetes SYNCED SYNCED SYNCED NOT SENT NOT SENT istiod-65dcb8497-v4g8j 1.17.1
istio-ingressgateway-69499dc-vcz7d.istio-system Kubernetes SYNCED SYNCED SYNCED NOT SENT NOT SENT istiod-65dcb8497-v4g8j 1.17.1
sleep-bc9998558-q8fdk.default Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-65dcb8497-v4g8j 1.17.1
# List all listener ports
[root@xksmaster1 ~]# istioctl pc listeners sleep-bc9998558-q8fdk.default
ADDRESS PORT MATCH DESTINATION
10.96.0.10 53 ALL Cluster: outbound|53||kube-dns.kube-system.svc.cluster.local
0.0.0.0 80 Trans: raw_buffer; App: http/1.1,h2c Route: 80
0.0.0.0 80 ALL PassthroughCluster
10.106.118.36 443 ALL Cluster: outbound|443||istio-egressgateway.istio-system.svc.cluster.local
10.108.113.206 443 ALL Cluster: outbound|443||istio-ingressgateway.istio-system.svc.cluster.local
10.110.214.145 443 ALL Cluster: outbound|443||istiod.istio-system.svc.cluster.local
10.96.0.1 443 ALL Cluster: outbound|443||kubernetes.default.svc.cluster.local
10.96.128.159 443 Trans: raw_buffer; App: http/1.1,h2c Route: kubernetes-dashboard.kubernetes-dashboard.svc.cluster.local:443
10.96.128.159 443 ALL Cluster: outbound|443||kubernetes-dashboard.kubernetes-dashboard.svc.cluster.local
10.103.71.204 8000 Trans: raw_buffer; App: http/1.1,h2c Route: dashboard-metrics-scraper.kubernetes-dashboard.svc.cluster.local:8000
10.103.71.204 8000 ALL Cluster: outbound|8000||dashboard-metrics-scraper.kubernetes-dashboard.svc.cluster.local
0.0.0.0 8080 Trans: raw_buffer; App: http/1.1,h2c Route: 8080
0.0.0.0 8080 ALL PassthroughCluster
10.96.0.10 9153 Trans: raw_buffer; App: http/1.1,h2c Route: kube-dns.kube-system.svc.cluster.local:9153
10.96.0.10 9153 ALL Cluster: outbound|9153||kube-dns.kube-system.svc.cluster.local
0.0.0.0 15001 ALL PassthroughCluster
0.0.0.0 15001 Addr: *:15001 Non-HTTP/Non-TCP
0.0.0.0 15006 Addr: *:15006 Non-HTTP/Non-TCP
0.0.0.0 15006 Trans: tls; App: istio-http/1.0,istio-http/1.1,istio-h2; Addr: 0.0.0.0/0 InboundPassthroughClusterIpv4
0.0.0.0 15006 Trans: raw_buffer; App: http/1.1,h2c; Addr: 0.0.0.0/0 InboundPassthroughClusterIpv4
0.0.0.0 15006 Trans: tls; App: TCP TLS; Addr: 0.0.0.0/0 InboundPassthroughClusterIpv4
0.0.0.0 15006 Trans: raw_buffer; Addr: 0.0.0.0/0 InboundPassthroughClusterIpv4
0.0.0.0 15006 Trans: tls; Addr: 0.0.0.0/0 InboundPassthroughClusterIpv4
0.0.0.0 15006 Trans: tls; App: istio,istio-peer-exchange,istio-http/1.0,istio-http/1.1,istio-h2; Addr: *:80 Cluster: inbound|80||
0.0.0.0 15006 Trans: raw_buffer; Addr: *:80 Cluster: inbound|80||
0.0.0.0 15010 Trans: raw_buffer; App: http/1.1,h2c Route: 15010
0.0.0.0 15010 ALL PassthroughCluster
10.110.214.145 15012 ALL Cluster: outbound|15012||istiod.istio-system.svc.cluster.local
0.0.0.0 15014 Trans: raw_buffer; App: http/1.1,h2c Route: 15014
0.0.0.0 15014 ALL PassthroughCluster
0.0.0.0 15021 ALL Inline Route: /healthz/ready*
10.108.113.206 15021 Trans: raw_buffer; App: http/1.1,h2c Route: istio-ingressgateway.istio-system.svc.cluster.local:15021
10.108.113.206 15021 ALL Cluster: outbound|15021||istio-ingressgateway.istio-system.svc.cluster.local
0.0.0.0 15090 ALL Inline Route: /stats/prometheus*
10.108.113.206 15443 ALL Cluster: outbound|15443||istio-ingressgateway.istio-system.svc.cluster.local
10.108.113.206 31400 ALL Cluster: outbound|31400||istio-ingressgateway.istio-system.svc.cluster.local
# View the listener for port 8080
[root@xksmaster1 ~]# istioctl pc listeners --port 8080 sleep-bc9998558-q8fdk.default
ADDRESS PORT MATCH DESTINATION
0.0.0.0 8080 Trans: raw_buffer; App: http/1.1,h2c Route: 8080
0.0.0.0 8080 ALL PassthroughCluster
# View routes => 8080 maps to demoappv10, demoappv10.default
[root@xksmaster1 ~]# istioctl pc route sleep-bc9998558-q8fdk.default
NAME DOMAINS MATCH VIRTUAL SERVICE
dashboard-metrics-scraper.kubernetes-dashboard.svc.cluster.local:8000 * /*
15014 istiod.istio-system, 10.110.214.145 /*
kube-dns.kube-system.svc.cluster.local:9153 * /*
kubernetes-dashboard.kubernetes-dashboard.svc.cluster.local:443 * /*
15010 istiod.istio-system, 10.110.214.145 /*
istio-ingressgateway.istio-system.svc.cluster.local:15021 * /*
8080 demoappv10, demoappv10.default + 1 more... /*
80 istio-egressgateway.istio-system, 10.106.118.36 /*
80 istio-ingressgateway.istio-system, 10.108.113.206 /*
80 sleep, sleep.default + 1 more... /*
inbound|80|| * /*
* /stats/prometheus*
inbound|80|| * /*
InboundPassthroughClusterIpv4 * /*
* /healthz/ready*
InboundPassthroughClusterIpv4 * /*
# View clusters => demoappv10.default.svc.cluster.local
[root@xksmaster1 ~]# istioctl pc cluster sleep-bc9998558-q8fdk.default
SERVICE FQDN PORT SUBSET DIRECTION TYPE DESTINATION RULE
80 - inbound ORIGINAL_DST
BlackHoleCluster - - - STATIC
InboundPassthroughClusterIpv4 - - - ORIGINAL_DST
PassthroughCluster - - - ORIGINAL_DST
agent - - - STATIC
dashboard-metrics-scraper.kubernetes-dashboard.svc.cluster.local 8000 - outbound EDS
demoappv10.default.svc.cluster.local 8080 - outbound EDS
istio-egressgateway.istio-system.svc.cluster.local 80 - outbound EDS
istio-egressgateway.istio-system.svc.cluster.local 443 - outbound EDS
istio-ingressgateway.istio-system.svc.cluster.local 80 - outbound EDS
istio-ingressgateway.istio-system.svc.cluster.local 443 - outbound EDS
istio-ingressgateway.istio-system.svc.cluster.local 15021 - outbound EDS
istio-ingressgateway.istio-system.svc.cluster.local 15443 - outbound EDS
istio-ingressgateway.istio-system.svc.cluster.local 31400 - outbound EDS
istiod.istio-system.svc.cluster.local 443 - outbound EDS
istiod.istio-system.svc.cluster.local 15010 - outbound EDS
istiod.istio-system.svc.cluster.local 15012 - outbound EDS
istiod.istio-system.svc.cluster.local 15014 - outbound EDS
kube-dns.kube-system.svc.cluster.local 53 - outbound EDS
kube-dns.kube-system.svc.cluster.local 9153 - outbound EDS
kubernetes-dashboard.kubernetes-dashboard.svc.cluster.local 443 - outbound EDS
kubernetes.default.svc.cluster.local 443 - outbound EDS
prometheus_stats - - - STATIC
sds-grpc - - - STATIC
sleep.default.svc.cluster.local 80 - outbound EDS
xds-grpc - - - STATIC
zipkin - - - STRICT_DNS
# Verify that the cluster maps to the three demoappv10 endpoints
[root@xksmaster1 ~]# istioctl pc endpoint sleep-bc9998558-q8fdk.default | grep demoappv10
10.244.182.30:8080 HEALTHY OK outbound|8080||demoappv10.default.svc.cluster.local
10.244.182.31:8080 HEALTHY OK outbound|8080||demoappv10.default.svc.cluster.local
10.244.207.90:8080 HEALTHY OK outbound|8080||demoappv10.default.svc.cluster.local
# istio-ingressgateway has clusters and endpoints configured, but no listener for this traffic yet, so a Gateway must be configured to admit it
[root@xksmaster1 ~]# istioctl pc clusters istio-ingressgateway-69499dc-vcz7d.istio-system
SERVICE FQDN PORT SUBSET DIRECTION TYPE DESTINATION RULE
BlackHoleCluster - - - STATIC
agent - - - STATIC
dashboard-metrics-scraper.kubernetes-dashboard.svc.cluster.local 8000 - outbound EDS
demoappv10.default.svc.cluster.local 8080 - outbound EDS
istio-egressgateway.istio-system.svc.cluster.local 80 - outbound EDS
istio-egressgateway.istio-system.svc.cluster.local 443 - outbound EDS
istio-ingressgateway.istio-system.svc.cluster.local 80 - outbound EDS
istio-ingressgateway.istio-system.svc.cluster.local 443 - outbound EDS
istio-ingressgateway.istio-system.svc.cluster.local 15021 - outbound EDS
istio-ingressgateway.istio-system.svc.cluster.local 15443 - outbound EDS
istio-ingressgateway.istio-system.svc.cluster.local 31400 - outbound EDS
istiod.istio-system.svc.cluster.local 443 - outbound EDS
istiod.istio-system.svc.cluster.local 15010 - outbound EDS
istiod.istio-system.svc.cluster.local 15012 - outbound EDS
istiod.istio-system.svc.cluster.local 15014 - outbound EDS
kube-dns.kube-system.svc.cluster.local 53 - outbound EDS
kube-dns.kube-system.svc.cluster.local 9153 - outbound EDS
kubernetes-dashboard.kubernetes-dashboard.svc.cluster.local 443 - outbound EDS
kubernetes.default.svc.cluster.local 443 - outbound EDS
prometheus_stats - - - STATIC
sds-grpc - - - STATIC
sleep.default.svc.cluster.local 80 - outbound EDS
xds-grpc - - - STATIC
zipkin - - - STRICT_DNS
[root@xksmaster1 ~]# istioctl pc listeners istio-ingressgateway-69499dc-vcz7d.istio-system
ADDRESS PORT MATCH DESTINATION
0.0.0.0 15021 ALL Inline Route: /healthz/ready*
0.0.0.0 15090 ALL Inline Route: /stats/prometheus*
[root@xksmaster1 ~]# istioctl proxy-config all istio-egressgateway-774d6846df-wgkb9.istio-system
SERVICE FQDN PORT SUBSET DIRECTION TYPE DESTINATION RULE
BlackHoleCluster - - - STATIC
agent - - - STATIC
dashboard-metrics-scraper.kubernetes-dashboard.svc.cluster.local 8000 - outbound EDS
istio-egressgateway.istio-system.svc.cluster.local 80 - outbound EDS
istio-egressgateway.istio-system.svc.cluster.local 443 - outbound EDS
istio-ingressgateway.istio-system.svc.cluster.local 80 - outbound EDS
istio-ingressgateway.istio-system.svc.cluster.local 443 - outbound EDS
istio-ingressgateway.istio-system.svc.cluster.local 15021 - outbound EDS
istio-ingressgateway.istio-system.svc.cluster.local 15443 - outbound EDS
istio-ingressgateway.istio-system.svc.cluster.local 31400 - outbound EDS
istiod.istio-system.svc.cluster.local 443 - outbound EDS
istiod.istio-system.svc.cluster.local 15010 - outbound EDS
istiod.istio-system.svc.cluster.local 15012 - outbound EDS
istiod.istio-system.svc.cluster.local 15014 - outbound EDS
kube-dns.kube-system.svc.cluster.local 53 - outbound EDS
kube-dns.kube-system.svc.cluster.local 9153 - outbound EDS
kubernetes-dashboard.kubernetes-dashboard.svc.cluster.local 443 - outbound EDS
kubernetes.default.svc.cluster.local 443 - outbound EDS
prometheus_stats - - - STATIC
sds-grpc - - - STATIC
xds-grpc - - - STATIC
zipkin - - - STRICT_DNS
ADDRESS PORT MATCH DESTINATION
0.0.0.0 15021 ALL Inline Route: /healthz/ready*
0.0.0.0 15090 ALL Inline Route: /stats/prometheus*
NAME DOMAINS MATCH VIRTUAL SERVICE
* /healthz/ready*
* /stats/prometheus*
RESOURCE NAME TYPE STATUS VALID CERT SERIAL NUMBER NOT AFTER NOT BEFORE
default Cert Chain ACTIVE true 195456362096320435436272505007023255030 2023-03-23T02:33:54Z 2023-03-22T02:31:54Z
ROOTCA CA ACTIVE true 1636514340800943321555862606975082741 2033-03-19T02:11:32Z 2023-03-22T02:11:32Z
Create a Gateway to admit traffic; it acts as the inbound traffic listener
[root@xksmaster1 04-proxy-gateway]# cat gateway-demoappv10.yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: demoappv10-gateway
  namespace: istio-system # must be the namespace where the ingress gateway pod runs
spec:
  selector:
    app: istio-ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "demoappv10.magedu.com"
[root@xksmaster1 04-proxy-gateway]# kubectl apply -f gateway-demoappv10.yaml
gateway.networking.istio.io/demoappv10-gateway created
[root@xksmaster1 04-proxy-gateway]# kubectl get gw -n istio-system
NAME AGE
demoappv10-gateway 17s
[root@xksmaster1 04-proxy-gateway]# kubectl get pods -n istio-system
NAME READY STATUS RESTARTS AGE
istio-egressgateway-774d6846df-wgkb9 1/1 Running 0 69d
istio-ingressgateway-69499dc-vcz7d 1/1 Running 0 69d
istiod-65dcb8497-v4g8j 1/1 Running 0 69d
# There is now a listener on port 8080, which in effect serves port 80
[root@xksmaster1 04-proxy-gateway]# istioctl pc listeners istio-ingressgateway-69499dc-vcz7d.istio-system
ADDRESS PORT MATCH DESTINATION
0.0.0.0 8080 ALL Route: http.8080
0.0.0.0 15021 ALL Inline Route: /healthz/ready*
0.0.0.0 15090 ALL Inline Route: /stats/prometheus*
# The ingressgateway Service forwards its port 80 to port 8080 on the gateway pod
#Port: http2 80/TCP
#TargetPort: 8080/TCP
[root@xksmaster1 04-proxy-gateway]# kubectl describe svc istio-ingressgateway -n istio-system
Name: istio-ingressgateway
Namespace: istio-system
Labels: app=istio-ingressgateway
install.operator.istio.io/owning-resource=unknown
install.operator.istio.io/owning-resource-namespace=istio-system
istio=ingressgateway
istio.io/rev=default
operator.istio.io/component=IngressGateways
operator.istio.io/managed=Reconcile
operator.istio.io/version=1.17.1
release=istio
Annotations: <none>
Selector: app=istio-ingressgateway,istio=ingressgateway
Type: LoadBalancer
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.108.113.206
IPs: 10.108.113.206
External IPs: 192.168.19.200
Port: status-port 15021/TCP
TargetPort: 15021/TCP
NodePort: status-port 31695/TCP
Endpoints: 10.244.207.87:15021
Port: http2 80/TCP
TargetPort: 8080/TCP
NodePort: http2 31246/TCP
Endpoints: 10.244.207.87:8080
Port: https 443/TCP
TargetPort: 8443/TCP
NodePort: https 30196/TCP
Endpoints: 10.244.207.87:8443
Port: tcp 31400/TCP
TargetPort: 31400/TCP
NodePort: tcp 30817/TCP
Endpoints: 10.244.207.87:31400
Port: tls 15443/TCP
TargetPort: 15443/TCP
NodePort: tls 31775/TCP
Endpoints: 10.244.207.87:15443
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
# A route to the backend must still be configured; otherwise requests fail with a 404
[root@xksmaster1 04-proxy-gateway]# kubectl get pods -n istio-system
NAME READY STATUS RESTARTS AGE
istio-egressgateway-774d6846df-wgkb9 1/1 Running 0 69d
istio-ingressgateway-69499dc-vcz7d 1/1 Running 0 69d
istiod-65dcb8497-v4g8j 1/1 Running 0 69d
[root@xksmaster1 04-proxy-gateway]# istioctl pc routes istio-ingressgateway-69499dc-vcz7d -n istio-system
NAME DOMAINS MATCH VIRTUAL SERVICE
http.8080 * /* 404
* /stats/prometheus*
* /healthz/ready*
Create a VirtualService
[root@xksmaster1 04-proxy-gateway]# cat virtualservice-demoappv10.yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: demoappv10vs
spec:
  hosts:
  - "demoappv10.magedu.com" # corresponds to the hosts of the Gateway
  gateways:
  - istio-system/demoappv10-gateway # this definition applies only to the Ingress Gateway
  #- mesh
  http:
  - name: default
    route:
    - destination:
        host: demoappv10.default.svc
[root@xksmaster1 04-proxy-gateway]# kubectl apply -f virtualservice-demoappv10.yaml
virtualservice.networking.istio.io/demoappv10vs unchanged
[root@xksmaster1 04-proxy-gateway]# kubectl get vs
NAME GATEWAYS HOSTS AGE
demoappv10vs ["istio-system/demoappv10-gateway"] ["demoappv10.magedu.com"] 99s
[root@xksmaster1 04-proxy-gateway]# istioctl pc routes istio-ingressgateway-69499dc-vcz7d -n istio-system
NAME DOMAINS MATCH VIRTUAL SERVICE
http.8080 demoappv10.magedu.com /* demoappv10vs.default
* /stats/prometheus*
* /healthz/ready*
Create a DestinationRule
# host: demoappv10 — the host here is simply the name of the Service
[root@xksmaster1 03-demoapp-subset]# cat destinationrule-demoapp.yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: demoapp-dr
spec:
  host: demoappv10
  trafficPolicy:
    loadBalancer:
      simple: LEAST_CONN
[root@xksmaster1 03-demoapp-subset]# kubectl apply -f destinationrule-demoapp.yaml
destinationrule.networking.istio.io/demoapp-dr created
[root@xksmaster1 03-demoapp-subset]# kubectl get dr
NAME HOST AGE
demoapp-dr demoappv10 3s
[root@xksmaster1 03-demoapp-subset]# kubectl get pods
NAME READY STATUS RESTARTS AGE
demo-pod 2/2 Running 28 (10m ago) 76d
demo-pod-1 1/1 Running 0 76d
demoappv10-54757f48d6-bdwg6 2/2 Running 0 38m
demoappv10-54757f48d6-kkg28 2/2 Running 0 38m
demoappv10-54757f48d6-q8fdk 2/2 Running 0 38m
nginx-deployment-777c9d64c8-blfqm 1/1 Running 1 (77d ago) 85d
nginx-deployment-777c9d64c8-gr8tl 1/1 Running 1 (77d ago) 85d
nginx-deployment-777c9d64c8-tnmxv 1/1 Running 1 (77d ago) 85d
pod-first 0/1 CrashLoopBackOff 321 (4m18s ago) 76d
pod-node-affinity-demo 1/1 Running 0 76d
pod-node-affinity-demo-2 1/1 Running 0 76d
pod-second 0/1 Pending 0 76d
sleep-bc9998558-bl49z 2/2 Running 0 21m
# The DESTINATION RULE column is now populated:
# demoappv10.default.svc.cluster.local 8080 - outbound EDS demoapp-dr.default
[root@xksmaster1 03-demoapp-subset]# istioctl pc clusters sleep-bc9998558-bl49z
SERVICE FQDN PORT SUBSET DIRECTION TYPE DESTINATION RULE
80 - inbound ORIGINAL_DST
BlackHoleCluster - - - STATIC
InboundPassthroughClusterIpv4 - - - ORIGINAL_DST
PassthroughCluster - - - ORIGINAL_DST
agent - - - STATIC
dashboard-metrics-scraper.kubernetes-dashboard.svc.cluster.local 8000 - outbound EDS
demoappv10.default.svc.cluster.local 8080 - outbound EDS demoapp-dr.default
istio-egressgateway.istio-system.svc.cluster.local 80 - outbound EDS
istio-egressgateway.istio-system.svc.cluster.local 443 - outbound EDS
istio-ingressgateway.istio-system.svc.cluster.local 80 - outbound EDS
istio-ingressgateway.istio-system.svc.cluster.local 443 - outbound EDS
istio-ingressgateway.istio-system.svc.cluster.local 15021 - outbound EDS
istio-ingressgateway.istio-system.svc.cluster.local 15443 - outbound EDS
istio-ingressgateway.istio-system.svc.cluster.local 31400 - outbound EDS
istiod.istio-system.svc.cluster.local 443 - outbound EDS
istiod.istio-system.svc.cluster.local 15010 - outbound EDS
istiod.istio-system.svc.cluster.local 15012 - outbound EDS
istiod.istio-system.svc.cluster.local 15014 - outbound EDS
kube-dns.kube-system.svc.cluster.local 53 - outbound EDS
kube-dns.kube-system.svc.cluster.local 9153 - outbound EDS
kubernetes-dashboard.kubernetes-dashboard.svc.cluster.local 443 - outbound EDS
kubernetes.default.svc.cluster.local 443 - outbound EDS
prometheus_stats - - - STATIC
sds-grpc - - - STATIC
sleep.default.svc.cluster.local 80 - outbound EDS
xds-grpc - - - STATIC
zipkin - - - STRICT_DNS
# Use --fqdn to select a specific entry
[root@xksmaster1 03-demoapp-subset]# istioctl pc clusters --fqdn demoappv10.default.svc.cluster.local sleep-bc9998558-bl49z
SERVICE FQDN PORT SUBSET DIRECTION TYPE DESTINATION RULE
demoappv10.default.svc.cluster.local 8080 - outbound EDS demoapp-dr.default
# Dump the entry in YAML format
[root@xksmaster1 03-demoapp-subset]# istioctl pc clusters --fqdn demoappv10.default.svc.cluster.local sleep-bc9998558-bl49z -o yaml
- circuitBreakers:
    thresholds:
    - maxConnections: 4294967295
      maxPendingRequests: 4294967295
      maxRequests: 4294967295
      maxRetries: 4294967295
      trackRemaining: true
  commonLbConfig:
    localityWeightedLbConfig: {}
  connectTimeout: 10s
  edsClusterConfig:
    edsConfig:
      ads: {}
      initialFetchTimeout: 0s
      resourceApiVersion: V3
    serviceName: outbound|8080||demoappv10.default.svc.cluster.local
  filters:
  - name: istio.metadata_exchange
    typedConfig:
      '@type': type.googleapis.com/envoy.tcp.metadataexchange.config.MetadataExchange
      protocol: istio-peer-exchange
  lbPolicy: LEAST_REQUEST
  metadata:
    filterMetadata:
      istio:
        config: /apis/networking.istio.io/v1alpha3/namespaces/default/destination-rule/demoapp-dr
        default_original_port: 8080
        services:
        - host: demoappv10.default.svc.cluster.local
          name: demoappv10
          namespace: default
  name: outbound|8080||demoappv10.default.svc.cluster.local
  transportSocketMatches:
  - match:
      tlsMode: istio
    name: tlsMode-istio
    transportSocket:
      name: envoy.transport_sockets.tls
      typedConfig:
        '@type': type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext
        commonTlsContext:
          alpnProtocols:
          - istio-peer-exchange
          - istio
          combinedValidationContext:
            defaultValidationContext:
              matchSubjectAltNames:
              - exact: spiffe://cluster.local/ns/default/sa/default
            validationContextSdsSecretConfig:
              name: ROOTCA
              sdsConfig:
                apiConfigSource:
                  apiType: GRPC
                  grpcServices:
                  - envoyGrpc:
                      clusterName: sds-grpc
                  setNodeOnFirstMessageOnly: true
                  transportApiVersion: V3
                initialFetchTimeout: 0s
                resourceApiVersion: V3
          tlsCertificateSdsSecretConfigs:
          - name: default
            sdsConfig:
              apiConfigSource:
                apiType: GRPC
                grpcServices:
                - envoyGrpc:
                    clusterName: sds-grpc
                setNodeOnFirstMessageOnly: true
                transportApiVersion: V3
              initialFetchTimeout: 0s
              resourceApiVersion: V3
          tlsParams:
            tlsMaximumProtocolVersion: TLSv1_3
            tlsMinimumProtocolVersion: TLSv1_2
        sni: outbound_.8080_._.demoappv10.default.svc.cluster.local
  - match: {}
    name: tlsMode-disabled
    transportSocket:
      name: envoy.transport_sockets.raw_buffer
      typedConfig:
        '@type': type.googleapis.com/envoy.extensions.transport_sockets.raw_buffer.v3.RawBuffer
  type: EDS
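The lab directory name (03-demoapp-subset) hints at the next step: a DestinationRule can also carve a host into subsets by Pod label, each subset becoming its own sub-cluster with its own endpoints. A minimal sketch, reusing the version: v1.0 label from the Deployment above (the subset name is an illustrative assumption):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: demoapp-dr
spec:
  host: demoappv10
  trafficPolicy:
    loadBalancer:
      simple: LEAST_CONN
  subsets:             # each subset selects Pods by label
  - name: v10          # hypothetical subset name
    labels:
      version: v1.0    # label carried by the demoappv10 Pods
```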
Resources related to traffic management
Gateway: takes effect on the istio-ingressgateway; configures which kind of external traffic is admitted
    It generates a Listener; if the Listener already exists, a VirtualHost is generated on top of it. However, where the traffic received by that VirtualHost should be sent (the traffic destination) is not generated automatically;
VirtualService: configures the concrete routing paths of traffic
    There are two ways it can take effect:
        On the istio-ingressgateway: it exists as an Ingress Listener
        On the mesh (configured on every Sidecar Envoy inside the mesh): it exists as an Egress Listener
DestinationRule: configures how traffic is distributed inside a cluster
    Load-balancing algorithm
    Connection pool
    ...
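The load-balancer and connection-pool knobs above live under a DestinationRule's trafficPolicy. A minimal sketch (the resource name and the numeric limits are illustrative assumptions, not values from this lab):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: demoapp-dr-policy        # hypothetical name
spec:
  host: demoappv10
  trafficPolicy:
    loadBalancer:
      simple: ROUND_ROBIN        # load-balancing algorithm
    connectionPool:
      tcp:
        maxConnections: 100      # cap on TCP connections to the upstream
      http:
        http1MaxPendingRequests: 10  # requests queued before overflow
```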
In-mesh communication
Every Service is automatically configured on each Sidecar Envoy as:
    Listener: specified by the Service port; port 80 is automatically handled as 8080;
        An additional VirtualHost is generated whose host header is the name of the Service;
    Route: all traffic entering through that Service's Listener (match /*) is routed to the Cluster generated for the Service's Pods
    Cluster: generated from the Service by name and pushed to every Sidecar via EDS; the cluster name matches the Service name;
    Endpoint: generated from the Pod IPs discovered via the Service's label selector
Client -->
    Envoy Sidecar (outbound: egress listener
        --> route
        --> cluster
        --> endpoint (generated from the destination)) -->
    Server Pod Envoy Sidecar (inbound: ingress listener
        --> route
        --> local cluster
        --> localhost (application container) (generated from the service it belongs to))
Traffic from outside the mesh
Traffic must enter through some IngressGateway
    Therefore, the prerequisite for admitting traffic is to open a Listener on an IngressGateway yourself; the way to open one is to define a Gateway CRD resource for that IngressGateway;
    On that Listener, traffic is matched against the target Host
    Matched traffic must then have its routing destination (some service inside the mesh) defined via a VirtualService;
External Client
    --> IngressGateway Service
    --> IngressGateway Pod (Listener (defined by the Gateway)
        --> Route (defined by the VirtualService)
        --> Cluster (can be auto-configured by the control plane from discovered Services)
        --> Endpoint)
Recap
Gateway: enables a Listener on the IngressGateway on the specified port; if the Listener already exists, a VirtualHost is configured on that Listener;
VirtualService: on a Listener, customizes advanced routing for some Host (usually one matched by a VirtualHost definition)
    How is the Listener matched?
        It is defined by the associated Gateway CRD resource
        Or it is the Listener generated from the port of the associated Service (specified via hosts)
    Default route configuration: /* --> the Cluster generated from the same Service
DestinationRule: defines advanced configuration for a Cluster
    Default configuration: the Service name is defined as the cluster, and the Pods matched by that Service are defined as the cluster endpoints
    Advanced configuration: lbPolicy, connection pool
Istio summary:
Services are discovered from some Service Registry (the API Server)
    All Services are brought under mesh governance: configured into the Sidecars and Gateways
    Even a Service not governed by the mesh, as long as Istio can discover it
        and the client is inside the mesh, can equally be subject to the advanced traffic-management features defined via VS and DR;
        the reason: the Listener on the client's own Sidecar does the work;
        however, a service outside the mesh has no Sidecar, so some features cannot be supported
            for example, mutual TLS
Specific Services can be exposed to the outside of the mesh via a Gateway;
Practice:
    Publish the Istio addons outside the mesh:
        prometheus
        grafana
        tracing
        kiali:
            gw port: 20001
                requires modifying the port definition on the ingress gateway Service
            gw port: 80
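For the kiali case above, one option is to reuse the gateway's already-open port 80 instead of modifying the ingress gateway Service. A sketch under that assumption; the hostname and resource names below are illustrative, not from this lab:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: kiali-gateway          # hypothetical name
  namespace: istio-system
spec:
  selector:
    app: istio-ingressgateway
  servers:
  - port:
      number: 80               # reuse the already-exposed gateway port
      name: http
      protocol: HTTP
    hosts:
    - "kiali.magedu.com"       # hypothetical hostname
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: kiali-vs               # hypothetical name
  namespace: istio-system
spec:
  hosts:
  - "kiali.magedu.com"
  gateways:
  - kiali-gateway
  http:
  - route:
    - destination:
        host: kiali            # kiali's Service in istio-system
        port:
          number: 20001        # kiali's service port, per the note above
```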



