Traffic management with Kubernetes + Istio

Goal:

This article shows how to use Istio to expose an nginx service deployed on Kubernetes through a domain name.

Prerequisites:

  A server with Kubernetes already installed

  Familiarity with basic kubectl commands (kubectl create/delete/get/apply, etc.)

  Pay attention to the highlighted (bold red) notes in this article

  Internet access (^_^)

  Tip: for installing Kubernetes, see: centos7 使用kubeadm 快速部署 kubernetes 国内源 (deploying Kubernetes quickly with kubeadm on CentOS 7 using domestic mirror sources)

Environment:

[root@k8s-master ~]# uname -a
Linux k8s-master 3.10.0-957.el7.x86_64 #1 SMP Thu Nov 8 23:39:32 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
[root@k8s-master ~]# kubectl get node,pod,svc -o wide
NAME              STATUS     ROLES    AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION          CONTAINER-RUNTIME
node/k8s-master   Ready      master   2d17h   v1.14.0   10.211.55.6   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://18.6.1
node/k8s-node     NotReady   <none>   2d14h   v1.14.0   10.211.55.7   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://18.6.1

NAME                      READY   STATUS    RESTARTS   AGE   IP            NODE         NOMINATED NODE   READINESS GATES
pod/qf-test-nginx-45k8x   1/1     Running   0          15h   10.244.0.21   k8s-master   <none>           <none>
pod/qf-test-nginx-k97vc   1/1     Running   0          15h   10.244.10.4   k8s-node     <none>           <none>

NAME                    TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE     SELECTOR
service/kubernetes      ClusterIP   10.96.0.1      <none>        443/TCP        2d17h   <none>
service/qf-test-nginx   NodePort    10.98.49.158   <none>        80:31412/TCP   15h     app=nginx

 

Install Istio

# Download Istio
curl -L https://git.io/getLatestIstio | ISTIO_VERSION=1.1.1 sh -
# Enter the istio directory
cd istio-1.1.1

# Add the line below to ~/.bash_profile to put istioctl on the PATH
# If you use zsh, edit ~/.zshrc instead

export PATH="$PATH:/root/k8s/istio-1.1.1/bin"
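
As a quick optional sanity check, reload the profile and confirm that istioctl is on the PATH:

source ~/.bash_profile   # or: source ~/.zshrc
istioctl version         # prints the istioctl client version (1.1.1 here)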

Using Istio on Kubernetes: all the Docker images Istio needs are pulled from docker.io, so this can be a bit slow. Wait a while, or take a nap and check back later.

for i in install/kubernetes/helm/istio-init/files/crd*yaml; do kubectl apply -f $i; done
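
# (Optional, my addition) sanity check that the Istio CRDs were registered;
# the exact count varies by Istio release, so treat the number only as a rough indicator
kubectl get crds | grep 'istio.io' | wc -l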

# We use the permissive mode demo profile here; for strict mutual TLS mode, see the reference docs at the end of this article
kubectl apply -f install/kubernetes/istio-demo.yaml

# Confirm that the following Kubernetes services are deployed and that each has a CLUSTER-IP
kubectl get svc -n istio-system

# Confirm that the required Kubernetes pods have been created and that their STATUS is Running
kubectl get pods -n istio-system

After the deployment finishes, let's look at the status of the Istio pods:

[root@k8s-master ~]# kubectl get pods -n istio-system
NAME                                      READY   STATUS      RESTARTS   AGE
grafana-7b9f5d484f-mf28j                  1/1     Running     0          11h
istio-citadel-848f4c8489-s4bm9            1/1     Running     0          11h
istio-cleanup-secrets-1.1.1-4zd5w         0/1     Completed   0          12h
istio-egressgateway-7469db8c68-jlr9b      1/1     Running     0          12h
istio-galley-86bcf86779-858jv             1/1     Running     0          12h
istio-grafana-post-install-1.1.1-t7qqg    0/1     Completed   0          12h
istio-ingressgateway-56bbdd69bf-j7swp     1/1     Running     0          12h
istio-pilot-77b99c499-xxhfk               2/2     Running     1          12h
istio-policy-85f58d8775-wd8wm             2/2     Running     6          12h
istio-security-post-install-1.1.1-nfhb8   0/1     Completed   0          12h
istio-sidecar-injector-5464f674c4-rcvpk   1/1     Running     0          12h
istio-telemetry-9b844886f-h9rzd           2/2     Running     6          12h
istio-tracing-7f5d8c5d98-s72nv            1/1     Running     0          12h
kiali-589d55b4db-vljzq                    1/1     Running     0          12h
prometheus-878999949-qntkc                1/1     Running     0          12h

Deploy the application and add a Gateway and VirtualService

First, let's look at the Kubernetes deployment manifest:

[root@k8s-master testnginx]# cat nginx-daemonset.yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: qf-test-nginx
spec:
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: qingfenglian/test_nginx
        ports:
        - containerPort: 80


---
apiVersion: v1
kind: Service
metadata:
  name: qf-test-nginx
  namespace: default
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
    name: http
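
Applying this manifest is a plain kubectl apply (the file is saved as nginx-daemonset.yaml, as shown in the cat above):

kubectl apply -f nginx-daemonset.yaml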

Once the deployment succeeds, check the daemonset, pod, svc, and node information.

Tip: the lab environment actually has two nodes; one shows NotReady because the k8s-node machine is powered off (^_^). I'll start it later, no rush.

[root@k8s-master ~]# kubectl get daemonset,pod,svc,node -o wide
NAME                                 DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE   CONTAINERS   IMAGES                    SELECTOR
daemonset.extensions/qf-test-nginx   1         1         1       1            1           <none>          16h   nginx        qingfenglian/test_nginx   app=nginx

NAME                      READY   STATUS    RESTARTS   AGE   IP            NODE         NOMINATED NODE   READINESS GATES
pod/qf-test-nginx-45k8x   1/1     Running   0          16h   10.244.0.21   k8s-master   <none>           <none>
pod/qf-test-nginx-k97vc   1/1     Running   0          16h   10.244.10.4   k8s-node     <none>           <none>

NAME                    TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE     SELECTOR
service/kubernetes      ClusterIP   10.96.0.1      <none>        443/TCP        2d18h   <none>
service/qf-test-nginx   NodePort    10.98.49.158   <none>        80:31412/TCP   16h     app=nginx

NAME              STATUS     ROLES    AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION          CONTAINER-RUNTIME
node/k8s-master   Ready      master   2d18h   v1.14.0   10.211.55.6   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://18.6.1
node/k8s-node     NotReady   <none>   2d15h   v1.14.0   10.211.55.7   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://18.6.1

Create the Gateway

# View the Gateway manifest
[root@k8s-master testnginx]# cat qingfeng-deve-gateway.yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: qingfeng-deve
spec:
  selector:
    istio: ingressgateway # use Istio's default ingress gateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"

# Create the Gateway
[root@k8s-master testnginx]# kubectl create -f <(istioctl kube-inject -f qingfeng-deve-gateway.yaml)
gateway.networking.istio.io/qingfeng-deve created

# Check the result
[root@k8s-master testnginx]# kubectl get gateway
NAME            AGE
qingfeng-deve   12s
[root@k8s-master testnginx]#
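
A side note: istioctl kube-inject only modifies resources that carry a pod template (Deployments, DaemonSets, and so on); Gateway and VirtualService objects pass through unchanged, so a plain apply of the manifest gives the same result:

kubectl apply -f qingfeng-deve-gateway.yaml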

Create the VirtualService

Note: in this article the nginx service is in the default namespace. For a service in another namespace, write the destination host in the form "serviceName.namespaceName.svc.cluster.local", e.g. "qf-test-nginx.default.svc.cluster.local".

Explanation: in serviceName.namespaceName.svc.cluster.local, serviceName is the name of the Kubernetes Service, namespaceName is the namespace the Service lives in, and .svc.cluster.local is a fixed suffix.
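
For illustration only: if the same service lived in a hypothetical namespace called staging (made up for this example), the route in a VirtualService would use the fully qualified name, like this:

  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        host: qf-test-nginx.staging.svc.cluster.local   # serviceName.namespaceName.svc.cluster.local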

[root@k8s-master testnginx]# cat nginx-virutalservice.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: vs-nginx
spec:
  hosts:
  - "nginx.local.com"
  gateways:
  - qingfeng-deve
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        host: qf-test-nginx
[root@k8s-master testnginx]# kubectl create -f <(istioctl kube-inject -f nginx-virutalservice.yaml)
virtualservice.networking.istio.io/vs-nginx created
[root@k8s-master testnginx]#

Modify istio-ingressgateway

kubectl -n istio-system edit deployment istio-ingressgateway

Find the section below and make the change: add hostPort for ports 80 and 443 so they bind directly on the node, then wait a few seconds for istio-ingressgateway to be rescheduled. (A non-interactive way to make the same change with kubectl patch is sketched after the snippet.)

        image: docker.io/istio/proxyv2:1.1.1
        imagePullPolicy: IfNotPresent
        name: istio-proxy
        ports:
        - containerPort: 80
          hostPort: 80                # <-- add this line
          protocol: TCP
        - containerPort: 443
          hostPort: 443               # <-- add this line
          protocol: TCP
        - containerPort: 31400
          protocol: TCP
        - containerPort: 15029
          protocol: TCP
        - containerPort: 15030
          protocol: TCP
        - containerPort: 15031
          protocol: TCP
        - containerPort: 15032
          protocol: TCP
        - containerPort: 15443
          protocol: TCP
        - containerPort: 15020
          protocol: TCP
        - containerPort: 15090
          name: http-envoy-prom
          protocol: TCP
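
If you prefer not to edit the Deployment by hand, the same change can be made with a JSON patch. This is only a sketch: it assumes istio-proxy is the first container in the Deployment and that ports 80 and 443 are the first two entries in its ports list, as in the snippet above.

kubectl -n istio-system patch deployment istio-ingressgateway --type=json -p='[
  {"op": "add", "path": "/spec/template/spec/containers/0/ports/0/hostPort", "value": 80},
  {"op": "add", "path": "/spec/template/spec/containers/0/ports/1/hostPort", "value": 443}
]'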

Verify the result

Hosts binding: my domain nginx.local.com has no DNS record, so an entry needs to be added to the hosts file on the client machine.
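
A minimal sketch of that hosts entry on the client machine (the one running ping/curl below), pointing the domain at the master node's IP, where the ingress gateway hostPort is bound; adapt as needed if your client is not Linux/macOS:

echo "10.211.55.6  nginx.local.com" | sudo tee -a /etc/hosts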

After adding the hosts entry, open nginx.local.com in a browser and check the response; to save effort I just used curl.

First ping to confirm the hosts entry has taken effect, then send the curl request:

~ » ping nginx.local.com                                                                                                                                          lianqingfeng@bogon
PING nginx.local.com (10.211.55.6): 56 data bytes
64 bytes from 10.211.55.6: icmp_seq=0 ttl=64 time=0.194 ms
64 bytes from 10.211.55.6: icmp_seq=1 ttl=64 time=0.160 ms
^C
--- nginx.local.com ping statistics ---
2 packets transmitted, 2 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.160/0.177/0.194/0.017 ms
------------------------------------------------------------
~ » curl nginx.local.com                                                                                                                                          lianqingfeng@bogon
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
<p><em>qingfeng.lian</em></p>
</body>
</html>
------------------------------------------------------------
~ »

Let's look at the istio-ingressgateway log. Notice the "10.244.0.21:80" upstream address: scroll back to the top of the article and you'll see it is the IP of the nginx pod running on k8s-master.

One more point: although the pod on k8s-node still shows as Running, note the "qingfeng.lian" line at the end of the response above. I only added that output on the k8s-master pod, so its presence shows the request was not routed to the pod on k8s-node.

[root@k8s-master testnginx]# kubectl logs -f istio-ingressgateway-64fcc46bb-zx6tx -n istio-system --tail=3
[2019-04-05T03:03:45.837Z] "GET / HTTP/1.1" 304 - "-" 0 0 0 0 "10.211.55.2" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36" "a822ba28-cd5e-9272-a423-7bb051c0dac5" "nginx.local.com" "10.244.0.21:80" outbound|80||qf-test-nginx.default.svc.cluster.local - 10.244.0.39:80 10.211.55.2:57799 -
[2019-04-05T03:03:54.526Z] "GET / HTTP/1.1" 304 - "-" 0 0 0 0 "10.211.55.2" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36" "10ab86c0-8716-977a-965a-007fe11129c0" "nginx.local.com" "10.244.0.21:80" outbound|80||qf-test-nginx.default.svc.cluster.local - 10.244.0.39:80 10.211.55.2:57799 -
[2019-04-05T03:12:37.524Z] "GET / HTTP/1.1" 200 - "-" 0 642 2 0 "10.211.55.2" "curl/7.54.0" "2c51476c-79bf-9969-82a1-588b69e50fa6" "nginx.local.com" "10.244.0.21:80" outbound|80||qf-test-nginx.default.svc.cluster.local - 10.244.0.39:80 10.211.55.2:57877 -

Let's run one more experiment: power on the k8s-node machine (just boot it (^_^)) and look at the node status again. k8s-node is already Ready; this happens within seconds of the node finishing boot.

To see everything at once I list node, svc, and pod together; looking only at the nodes is fine too.

[root@k8s-master testnginx]# kubectl get node,svc,pod -o wide
NAME              STATUS   ROLES    AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION          CONTAINER-RUNTIME
node/k8s-master   Ready    master   2d19h   v1.14.0   10.211.55.6   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://18.6.1
node/k8s-node     Ready    <none>   2d16h   v1.14.0   10.211.55.7   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://18.6.1

NAME                    TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE     SELECTOR
service/kubernetes      ClusterIP   10.96.0.1      <none>        443/TCP        2d19h   <none>
service/qf-test-nginx   NodePort    10.98.49.158   <none>        80:31412/TCP   18h     app=nginx

NAME                      READY   STATUS    RESTARTS   AGE   IP            NODE         NOMINATED NODE   READINESS GATES
pod/qf-test-nginx-45k8x   1/1     Running   0          18h   10.244.0.21   k8s-master   <none>           <none>
pod/qf-test-nginx-k97vc   1/1     Running   0          18h   10.244.10.4   k8s-node     <none>           <none>
[root@k8s-master testnginx]#

Now send a few more curl requests to verify that the pod on k8s-node is serving traffic. In the istio-ingressgateway log you can see the requests being routed to different pods.
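
The exact requests are not shown here; a simple sketch for firing a few in a row (any repeated curl will do):

for i in 1 2 3 4; do curl -s -o /dev/null -w "%{http_code}\n" nginx.local.com; done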

[2019-04-10T01:31:52.547Z] "GET / HTTP/1.1" 200 - "-" 0 642 2 0 "10.211.55.2" "curl/7.54.0" "2c3f9423-d05d-94fa-a5cf-9476f16aeb0e" "nginx.local.com" "10.244.0.21:80" outbound|80||qf-test-nginx.default.svc.cluster.local - 10.244.0.39:80 10.211.55.2:58525 -
[2019-04-10T01:31:54.426Z] "GET / HTTP/1.1" 200 - "-" 0 612 2 0 "10.211.55.2" "curl/7.54.0" "5428cb78-4456-910f-bba0-2a76188a3bf3" "nginx.local.com" "10.244.10.4:80" outbound|80||qf-test-nginx.default.svc.cluster.local - 10.244.0.39:80 10.211.55.2:58526 -
[2019-04-10T01:31:55.772Z] "GET / HTTP/1.1" 200 - "-" 0 642 0 0 "10.211.55.2" "curl/7.54.0" "d9213d72-e508-951e-b2ea-1c1a106638b8" "nginx.local.com" "10.244.0.21:80" outbound|80||qf-test-nginx.default.svc.cluster.local - 10.244.0.39:80 10.211.55.2:58527 -
[2019-04-10T01:31:58.441Z] "GET / HTTP/1.1" 200 - "-" 0 612 1 1 "10.211.55.2" "curl/7.54.0" "8f44729b-46c9-9b90-8903-48adbfcb547c" "nginx.local.com" "10.244.10.4:80" outbound|80||qf-test-nginx.default.svc.cluster.local - 10.244.0.39:80 10.211.55.2:58528 -

 

Problems encountered during installation and how to resolve them

The main thing to watch out for in this experiment is getting the pod's namespace right; with that in place, the installation steps above generally just work.
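
For example, a quick generic check to confirm which namespace the nginx pods and Service actually landed in:

kubectl get pods,svc --all-namespaces -o wide | grep qf-test-nginx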

 

References:

Istio installation:

https://istio.io/zh/docs/setup/kubernetes/download/ 

https://istio.io/zh/docs/setup/kubernetes/install/kubernetes/

https://istio.io/latest/zh/docs/setup/install/istioctl/ (added 2022-02-20: installation docs for Istio 1.13)

Istio configuration:

http://blog.daocloud.io/istio-ingress/

 

Other resources: you can look up the real IP of raw.githubusercontent.com at https://www.ipaddress.com/.

 
