Kubernetes: Fully Replacing kube-proxy with Cilium [Tested Successfully 2024-02-29]

1. Cluster Configuration

# Installation reference:
# https://mp.weixin.qq.com/s/D-ps2qnVfaesgRGd5Zo4_A?
OS: Ubuntu 22.04 amd64
Kubernetes: v1.28.2 (kubeadm/kubelet; the control plane below is initialized at v1.29.2)
containerd.io: 1.6.28
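
Node preparation follows the referenced guide; as a minimal sketch of the usual kubeadm prerequisites (assumed, not part of the original transcript), each node needs swap disabled and IPv4 forwarding enabled:

# Disable swap and enable IPv4 forwarding on every node
swapoff -a && sed -i '/ swap / s/^/#/' /etc/fstab
echo 'net.ipv4.ip_forward = 1' > /etc/sysctl.d/k8s.conf
sysctl --system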

2. Bootstrapping the Cluster Without kube-proxy (Cilium as the Full Replacement)

# Deploy the Kubernetes cluster
# --skip-phases=addon/kube-proxy tells kubeadm not to install kube-proxy
# [addons] Applied essential addon: CoreDNS (only CoreDNS is installed as an addon)
root@ubuntu-k8s-master01:~#     kubeadm init --control-plane-endpoint 192.168.40.132 \
        --kubernetes-version=v1.29.2 \
        --pod-network-cidr=10.244.0.0/16 \
        --service-cidr=10.96.0.0/12 \
        --upload-certs \
        --image-repository=registry.aliyuncs.com/google_containers \
        --skip-phases=addon/kube-proxy
        
[init] Using Kubernetes version: v1.29.2
[preflight] Running pre-flight checks
        [WARNING KubernetesVersion]: Kubernetes version is greater than kubeadm version. Please consider to upgrade kubeadm. Kubernetes version: 1.29.2. Kubeadm version: 1.28.x
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local ubuntu-k8s-master01] and IPs [10.96.0.1 192.168.40.132]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost ubuntu-k8s-master01] and IPs [192.168.40.132 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost ubuntu-k8s-master01] and IPs [192.168.40.132 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 4.504346 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
a2a9975066088cc753b45f6b1a228241c8cf32aa84da0e293518cb8cd553256c
[mark-control-plane] Marking the node ubuntu-k8s-master01 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node ubuntu-k8s-master01 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: 9z3bxl.qq5nzq9gdmjs9kod
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 192.168.40.132:6443 --token 9z3bxl.qq5nzq9gdmjs9kod \
        --discovery-token-ca-cert-hash sha256:e41b93033397d75b0fe1fe2380dcf3f59957684c965e70776959d77391282a1c \
        --control-plane --certificate-key a2a9975066088cc753b45f6b1a228241c8cf32aa84da0e293518cb8cd553256c

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.40.132:6443 --token 9z3bxl.qq5nzq9gdmjs9kod \
        --discovery-token-ca-cert-hash sha256:e41b93033397d75b0fe1fe2380dcf3f59957684c965e70776959d77391282a1c

root@ubuntu-k8s-master01:~# mkdir .kube
root@ubuntu-k8s-master01:~# cp /etc/kubernetes/admin.conf .kube/config
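
Because the kube-proxy addon phase was skipped, it is worth confirming that no kube-proxy DaemonSet exists before installing Cilium (a quick check, not part of the original transcript):

# Expect "Error from server (NotFound)" here
kubectl get daemonset kube-proxy -n kube-system
# CoreDNS stays Pending until a CNI is installed
kubectl get pods -n kube-system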

3. Installing the Cilium CNI: Default VXLAN Tunnel Mode
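
The steps below assume the cilium CLI is already present on master01; if it is not, a sketch of the upstream quick install (pin CILIUM_CLI_VERSION for reproducibility):

CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-amd64.tar.gz
tar xzvf cilium-linux-amd64.tar.gz -C /usr/local/bin
cilium version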

# VXLAN tunnel mode
    cilium install \
        --set kubeProxyReplacement=strict \
        --set ipam.mode=kubernetes \
        --set routingMode=tunnel \
        --set tunnelProtocol=vxlan \
        --set ipam.operator.clusterPoolIPv4PodCIDRList=10.244.0.0/16 \
        --set ipam.operator.clusterPoolIPv4MaskSize=24

# Check the installation status
root@ubuntu-k8s-master01:~/software# cilium status
    /¯¯\
 /¯¯\__/¯¯\    Cilium:             OK
 \__/¯¯\__/    Operator:           OK
 /¯¯\__/¯¯\    Envoy DaemonSet:    disabled (using embedded mode)
 \__/¯¯\__/    Hubble Relay:       disabled
    \__/       ClusterMesh:        disabled

Deployment             cilium-operator    Desired: 1, Ready: 1/1, Available: 1/1
DaemonSet              cilium             Desired: 3, Ready: 3/3, Available: 3/3
Containers:            cilium             Running: 3
                       cilium-operator    Running: 1
Cluster Pods:          2/2 managed by Cilium
Helm chart version:    1.15.0
Image versions         cilium-operator    quay.io/cilium/operator-generic:v1.15.0@sha256:e26ecd316e742e4c8aa1e302ba8b577c2d37d114583d6c4cdd2b638493546a79: 1
                       cilium             quay.io/cilium/cilium:v1.15.0@sha256:9cfd6a0a3a964780e73a11159f93cc363e616f7d9783608f62af6cfdf3759619: 3

# Join node01 and node02
# node01
root@ubuntu-k8s-node01:# kubeadm config images pull --image-repository=registry.aliyuncs.com/google_containers
I0229 05:46:38.340849    2751 version.go:256] remote version is much newer: v1.29.2; falling back to: stable-1.28
W0229 05:46:38.650194    2751 version.go:104] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.28.txt": Get "https://dl.k8s.i
W0229 05:46:38.650228    2751 version.go:105] falling back to the local client version: v1.28.2
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.28.2
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.28.2
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.28.2
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.28.2
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.9
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.9-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.10.1
root@ubuntu-k8s-node01:# kubeadm join 192.168.40.132:6443 --token 9z3bxl.qq5nzq9gdmjs9kod \
        --discovery-token-ca-cert-hash sha256:e41b93033397d75b0fe1fe2380dcf3f59957684c965e70776959d77391282a1c \
        --control-plane --certificate-key a2a9975066088cc753b45f6b1a228241c8cf32aa84da0e293518cb8cd553256c
# node02
root@ubuntu-k8s-node02:~# kubeadm config images pull --image-repository=registry.aliyuncs.com/google_containers
I0229 05:46:38.340849    2751 version.go:256] remote version is much newer: v1.29.2; falling back to: stable-1.28
W0229 05:46:38.650194    2751 version.go:104] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.28.txt": Get "https://dl.k8s.i
W0229 05:46:38.650228    2751 version.go:105] falling back to the local client version: v1.28.2
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.28.2
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.28.2
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.28.2
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.28.2
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.9
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.9-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.10.1
root@ubuntu-k8s-node02:~# kubeadm join 192.168.40.132:6443 --token 9z3bxl.qq5nzq9gdmjs9kod \
        --discovery-token-ca-cert-hash sha256:e41b93033397d75b0fe1fe2380dcf3f59957684c965e70776959d77391282a1c \
        --control-plane --certificate-key a2a9975066088cc753b45f6b1a228241c8cf32aa84da0e293518cb8cd553256c

# The join command above included --control-plane, so all three nodes registered as control planes; strip the control-plane role label from node01 and node02 with kubectl edit, as shown below
root@ubuntu-k8s-master01:~/software# kubectl get nodes -o wide
NAME                  STATUS   ROLES           AGE     VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE           KERNEL-VERSION      CONTAINER-RUNTIME
ubuntu-k8s-master01   Ready    control-plane   43m     v1.28.2   192.168.40.132   <none>        Ubuntu 22.04 LTS   5.15.0-97-generic   containerd://1.6.28
ubuntu-k8s-node01     Ready    control-plane   11m     v1.28.2   192.168.40.133   <none>        Ubuntu 22.04 LTS   5.15.0-97-generic   containerd://1.6.28
ubuntu-k8s-node02     Ready    control-plane   6m59s   v1.28.2   192.168.40.134   <none>        Ubuntu 22.04 LTS   5.15.0-97-generic   containerd://1.6.28
root@ubuntu-k8s-master01:~/software# kubectl edit node ubuntu-k8s-node01
root@ubuntu-k8s-master01:~/software# kubectl edit node ubuntu-k8s-node02

# Check node status
root@ubuntu-k8s-master01:~/software# kubectl get nodes -o wide
NAME                  STATUS   ROLES           AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE           KERNEL-VERSION      CONTAINER-RUNTIME
ubuntu-k8s-master01   Ready    control-plane   75m   v1.28.2   192.168.40.132   <none>        Ubuntu 22.04 LTS   5.15.0-97-generic   containerd://1.6.28
ubuntu-k8s-node01     Ready    <none>          43m   v1.28.2   192.168.40.133   <none>        Ubuntu 22.04 LTS   5.15.0-97-generic   containerd://1.6.28
ubuntu-k8s-node02     Ready    <none>          38m   v1.28.2   192.168.40.134   <none>        Ubuntu 22.04 LTS   5.15.0-97-generic   containerd://1.6.28

3.1 Testing Pod Creation and Connectivity

kubectl create deployment demoapp --image=ikubernetes/demoapp:v1.0 --replicas=3

root@ubuntu-k8s-master01:~/software# kubectl  get pods -o wide
NAME                      READY   STATUS    RESTARTS   AGE     IP            NODE                NOMINATED NODE   READINESS GATES
demoapp-7c58cd6bb-4x8kw   1/1     Running   0          2m47s   10.244.2.46   ubuntu-k8s-node02   <none>           <none>
demoapp-7c58cd6bb-6gjxw   1/1     Running   0          2m47s   10.244.2.73   ubuntu-k8s-node02   <none>           <none>
demoapp-7c58cd6bb-dbw76   1/1     Running   0          2m47s   10.244.1.44   ubuntu-k8s-node01   <none>           <none>

# A Pod on node01 reaches Pods on node02: cross-node communication works
root@ubuntu-k8s-master01:~/software# kubectl exec -it demoapp-7c58cd6bb-dbw76 /bin/sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
[root@demoapp-7c58cd6bb-dbw76 /]# curl 10.244.2.73
iKubernetes demoapp v1.0 !! ClientIP: 10.244.1.44, ServerName: demoapp-7c58cd6bb-6gjxw, ServerIP: 10.244.2.73!
[root@demoapp-7c58cd6bb-dbw76 /]# curl 10.244.2.46
iKubernetes demoapp v1.0 !! ClientIP: 10.244.1.44, ServerName: demoapp-7c58cd6bb-4x8kw, ServerIP: 10.244.2.46!
[root@demoapp-7c58cd6bb-dbw76 /]# curl 10.244.1.44
iKubernetes demoapp v1.0 !! ClientIP: 10.244.1.44, ServerName: demoapp-7c58cd6bb-dbw76, ServerIP: 10.244.1.44!

# On node01, list the network interfaces
root@ubuntu-k8s-node01:~# ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:a1:dd:c5 brd ff:ff:ff:ff:ff:ff
    altname enp2s1
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
    link/ether 02:42:1c:1a:47:a3 brd ff:ff:ff:ff:ff:ff
4: cilium_net@cilium_host: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 6a:52:7f:db:83:d4 brd ff:ff:ff:ff:ff:ff
5: cilium_host@cilium_net: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 2e:6f:b8:74:ff:93 brd ff:ff:ff:ff:ff:ff
6: cilium_vxlan: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether fa:b8:56:66:98:32 brd ff:ff:ff:ff:ff:ff
8: lxc_health@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 2a:83:bb:e6:9c:ba brd ff:ff:ff:ff:ff:ff link-netnsid 0
16: lxcba44b31b4d71@if15: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether ea:de:ed:f8:e9:52 brd ff:ff:ff:ff:ff:ff link-netns cni-6b472b24-056d-1afc-c862-92b1db004b37

# cilium_host is assigned an address from the node's PodCIDR and acts as the gateway for Pods on this node
root@ubuntu-k8s-node01:~# ifconfig
cilium_host: flags=4291<UP,BROADCAST,RUNNING,NOARP,MULTICAST>  mtu 1500
        inet 10.244.1.229  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::2c6f:b8ff:fe74:ff93  prefixlen 64  scopeid 0x20<link>
        ether 2e:6f:b8:74:ff:93  txqueuelen 1000  (Ethernet)
        RX packets 543  bytes 41878 (41.8 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 78  bytes 4772 (4.7 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
# Inspect the routing table
root@ubuntu-k8s-node01:~# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.40.2    0.0.0.0         UG    0      0        0 ens33
10.244.0.0      10.244.1.229    255.255.255.0   UG    0      0        0 cilium_host
10.244.1.0      10.244.1.229    255.255.255.0   UG    0      0        0 cilium_host
10.244.1.229    0.0.0.0         255.255.255.255 UH    0      0        0 cilium_host
10.244.2.0      10.244.1.229    255.255.255.0   UG    0      0        0 cilium_host
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
192.168.40.0    0.0.0.0         255.255.255.0   U     0      0        0 ens33
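
The per-Pod addresses behind these routes can be listed from the agent on the corresponding node (a sketch; pick the cilium Pod for that node, the container name is cilium-agent):

kubectl -n kube-system exec ds/cilium -c cilium-agent -- cilium endpoint list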

# Inspect the cilium agent running in each cilium Pod
root@ubuntu-k8s-master01:~# kubectl  get pods -n kube-system
NAME                                          READY   STATUS    RESTARTS      AGE
cilium-9cnv4                                  1/1     Running   0             54m
cilium-b74v8                                  1/1     Running   0             66m
cilium-mmqj4                                  1/1     Running   0             50m
cilium-operator-7764cf64d6-tcqxk              1/1     Running   1 (54m ago)   66m
coredns-774bbd8588-5qx92                      1/1     Running   0             86m
coredns-774bbd8588-ggfjp                      1/1     Running   0             86m
etcd-ubuntu-k8s-master01                      1/1     Running   0             86m
etcd-ubuntu-k8s-node01                        1/1     Running   0             54m
etcd-ubuntu-k8s-node02                        1/1     Running   0             50m
kube-apiserver-ubuntu-k8s-master01            1/1     Running   0             86m
kube-apiserver-ubuntu-k8s-node01              1/1     Running   0             54m
kube-apiserver-ubuntu-k8s-node02              1/1     Running   0             50m
kube-controller-manager-ubuntu-k8s-master01   1/1     Running   1 (54m ago)   86m
kube-controller-manager-ubuntu-k8s-node01     1/1     Running   0             54m
kube-controller-manager-ubuntu-k8s-node02     1/1     Running   0             50m
kube-scheduler-ubuntu-k8s-master01            1/1     Running   1 (54m ago)   86m
kube-scheduler-ubuntu-k8s-node01              1/1     Running   0             54m
kube-scheduler-ubuntu-k8s-node02              1/1     Running   0             50m
root@ubuntu-k8s-master01:~# kubectl  exec -it cilium-9cnv4 /bin/sh -n kube-system
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Defaulted container "cilium-agent" out of: cilium-agent, config (init), mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init), install-cni-binaries (init)
# The tunnel map: remote PodCIDRs and the node (VTEP) IPs that back them
# cilium bpf tunnel list
TUNNEL       VALUE
10.244.2.0   192.168.40.134:0
10.244.0.0   192.168.40.132:0

# cilium status
KVStore:                 Ok   Disabled
Kubernetes:              Ok   1.29 (v1.29.2) [linux/amd64]
Kubernetes APIs:         ["EndpointSliceOrEndpoint", "cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumCIDRGroup", "core/v1::Namespace", "core/v1::Pods", "core/v1::Service", "networking.k8s.io/v1::NetworkPolicy"]
KubeProxyReplacement:    Strict   [ens33   192.168.40.133 fe80::20c:29ff:fea1:ddc5 (Direct Routing)]
Host firewall:           Disabled
SRv6:                    Disabled
CNI Chaining:            none
Cilium:                  Ok   1.15.0 (v1.15.0-2db45c46)
NodeMonitor:             Listening for events on 128 CPUs with 64x4096 of shared memory
Cilium health daemon:    Ok
IPAM:                    IPv4: 3/254 allocated from 10.244.1.0/24,
IPv4 BIG TCP:            Disabled
IPv6 BIG TCP:            Disabled
BandwidthManager:        Disabled
Host Routing:            Legacy
Masquerading:            IPTables [IPv4: Enabled, IPv6: Disabled]
Controller Status:       24/24 healthy
Proxy Status:            OK, ip 10.244.1.229, 0 redirects active on ports 10000-20000, Envoy: embedded
Global Identity Range:   min 256, max 65535
Hubble:                  Ok              Current/Max Flows: 4095/4095 (100.00%), Flows/s: 5.45   Metrics: Disabled
Encryption:              Disabled
Cluster health:          3/3 reachable   (2024-02-29T06:43:37Z)
Modules Health:          Stopped(0) Degraded(0) OK(11) Unknown(3)

# More verbose output
# cilium status --verbose
KVStore:                Ok   Disabled
Kubernetes:             Ok   1.29 (v1.29.2) [linux/amd64]
Kubernetes APIs:        ["EndpointSliceOrEndpoint", "cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumCIDRGroup", "core/v1::Namespace", "core/v1::Pods", "core/v1::Service", "networking.k8s.io/v1::NetworkPolicy"]
KubeProxyReplacement:   Strict   [ens33   192.168.40.133 fe80::20c:29ff:fea1:ddc5 (Direct Routing)]
Host firewall:          Disabled
SRv6:                   Disabled
CNI Chaining:           none
Cilium:                 Ok   1.15.0 (v1.15.0-2db45c46)
NodeMonitor:            Listening for events on 128 CPUs with 64x4096 of shared memory
Cilium health daemon:   Ok
IPAM:                   IPv4: 3/254 allocated from 10.244.1.0/24,
Allocated addresses:
  10.244.1.157 (health)
  10.244.1.229 (router)
  10.244.1.44 (default/demoapp-7c58cd6bb-dbw76)
IPv4 BIG TCP:           Disabled
IPv6 BIG TCP:           Disabled
BandwidthManager:       Disabled
Host Routing:           Legacy
Masquerading:           IPTables [IPv4: Enabled, IPv6: Disabled]
Clock Source for BPF:   ktime
Controller Status:      24/24 healthy
  Name                                             Last success   Last error   Count   Message
  cilium-health-ep                                 8s ago         never        0       no error
  dns-garbage-collector-job                        11s ago        never        0       no error
  endpoint-1112-regeneration-recovery              never          never        0       no error
  endpoint-2605-regeneration-recovery              never          never        0       no error
  endpoint-391-regeneration-recovery               never          never        0       no error
  endpoint-gc                                      3m12s ago      never        0       no error
  ep-bpf-prog-watchdog                             9s ago         never        0       no error
  ipcache-inject-labels                            9s ago         58m11s ago   0       no error
  k8s-heartbeat                                    10s ago        never        0       no error
  link-cache                                       9s ago         never        0       no error
  neighbor-table-refresh                           9s ago         never        0       no error
  resolve-identity-1112                            2m31s ago      never        0       no error
  resolve-identity-2605                            4m9s ago       never        0       no error
  resolve-identity-391                             3m8s ago       never        0       no error
  resolve-labels-default/demoapp-7c58cd6bb-dbw76   19m9s ago      never        0       no error
  sync-host-ips                                    9s ago         never        0       no error
  sync-lb-maps-with-k8s-services                   58m9s ago      never        0       no error
  sync-policymap-1112                              13m8s ago      never        0       no error
  sync-policymap-2605                              4m9s ago       never        0       no error
  sync-policymap-391                               13m7s ago      never        0       no error
  sync-to-k8s-ciliumendpoint (2605)                9s ago         never        0       no error
  sync-utime                                       9s ago         never        0       no error
  template-dir-watcher                             never          never        0       no error
  write-cni-file                                   58m12s ago     never        0       no error
Proxy Status:            OK, ip 10.244.1.229, 0 redirects active on ports 10000-20000, Envoy: embedded
Global Identity Range:   min 256, max 65535
Hubble:                  Ok   Current/Max Flows: 4095/4095 (100.00%), Flows/s: 5.46   Metrics: Disabled
KubeProxyReplacement Details:
  Status:                 Strict
  Socket LB:              Enabled
  Socket LB Tracing:      Enabled
  Socket LB Coverage:     Full
  Devices:                ens33   192.168.40.133 fe80::20c:29ff:fea1:ddc5 (Direct Routing)
  Mode:                   SNAT
  Backend Selection:      Random
  Session Affinity:       Enabled
  Graceful Termination:   Enabled
  NAT46/64 Support:       Disabled
  XDP Acceleration:       Disabled
  Services:
  - ClusterIP:      Enabled
  - NodePort:       Enabled (Range: 30000-32767)
  - LoadBalancer:   Enabled
  - externalIPs:    Enabled
  - HostPort:       Enabled
BPF Maps:   dynamic sizing: on (ratio: 0.002500)
  Name                          Size
  Auth                          524288
  Non-TCP connection tracking   65536
  TCP connection tracking       131072
  Endpoint policy               65535
  IP cache                      512000
  IPv4 masquerading agent       16384
  IPv6 masquerading agent       16384
  IPv4 fragmentation            8192
  IPv4 service                  65536
  IPv6 service                  65536
  IPv4 service backend          65536
  IPv6 service backend          65536
  IPv4 service reverse NAT      65536
  IPv6 service reverse NAT      65536
  Metrics                       1024
  NAT                           131072
  Neighbor table                131072
  Global policy                 16384
  Session affinity              65536
  Sock reverse NAT              65536
  Tunnel                        65536
Encryption:                                  Disabled
Cluster health:                              3/3 reachable    (2024-02-29T06:47:37Z)
  Name                                       IP               Node        Endpoints
  kubernetes/ubuntu-k8s-node01 (localhost)   192.168.40.133   reachable   reachable
  kubernetes/ubuntu-k8s-master01             192.168.40.132   reachable   reachable
  kubernetes/ubuntu-k8s-node02               192.168.40.134   reachable   reachable
Modules Health:
agent
├── datapath
│   ├── node-address
│   │   └── job-node-address-update                         [OK] 10.244.1.229 (cilium_host), fe80::2c6f:b8ff:fe74:ff93 (cilium_host) (58m, x1)
│   ├── agent-liveness-updater
│   │   └── timer-job-agent-liveness-updater                [OK] OK (43.484µs) (58m, x1)
│   └── l2-responder
│       └── job-l2-responder-reconciler                     [OK] Running (58m, x1)
└── controlplane
    ├── bgp-cp
    │   └── job-diffstore-events                            [OK] Running (58m, x2)
    ├── stale-endpoint-cleanup                              [OK]  (58m, x1)
    ├── daemon
    │   └── ep-bpf-prog-watchdog                            [OK] ep-bpf-prog-watchdog (58m, x117)
    ├── envoy-proxy
    │   └── timer-job-version-check                         [OK] OK (22.142005ms) (58m, x1)
    ├── auth
    │   ├── timer-job-auth gc-cleanup                       [OK] OK (17.343µs) (58m, x1)
    │   ├── observer-job-auth request-authentication        [OK] Primed (58m, x1)
    │   └── observer-job-auth gc-identity-events            [OK] OK (1.495µs) [3] (58m, x1)
    ├── node-manager
    │   ├── background-sync                                 [OK] Node validation successful (58m, x43)
    │   ├── nodes-add                                       [OK] Node adds successful (58m, x3)
    │   └── nodes-update                                    [OK] Node updates successful (20m, x1)
    ├── l2-announcer
    │   └── leader-election                                 [OK]  (58m, x1)
    └── endpoint-manager
        ├── endpoint-gc                                     [OK] endpoint-gc (58m, x12)
        ├── cilium-endpoint-2605 (default/demoapp-7c58cd6bb-dbw76)
        │   ├── cep-k8s-sync                                [OK] sync-to-k8s-ciliumendpoint (2605) (19m, x116)
        │   ├── datapath-regenerate                         [OK] Endpoint regeneration successful (19m, x1)
        │   └── policymap-sync                              [OK] sync-policymap-2605 (19m, x2)
        ├── cilium-endpoint-1112
        │   ├── datapath-regenerate                         [OK] Endpoint regeneration successful (58m, x4)
        │   └── policymap-sync                              [OK] sync-policymap-1112 (58m, x4)
        └── cilium-endpoint-391
            ├── policymap-sync                              [OK] sync-policymap-391 (58m, x4)
            └── datapath-regenerate                         [OK] Endpoint regeneration successful (58m, x3)

3.2 Packet Capture: Observing the Tunnel Protocol

# From a Pod on node01, access a Pod on node02
root@ubuntu-k8s-master01:~/software# kubectl get pods -o wide
NAME                      READY   STATUS    RESTARTS   AGE   IP            NODE                NOMINATED NODE   READINESS GATES
demoapp-7c58cd6bb-4x8kw   1/1     Running   0          35m   10.244.2.46   ubuntu-k8s-node02   <none>           <none>
demoapp-7c58cd6bb-6gjxw   1/1     Running   0          35m   10.244.2.73   ubuntu-k8s-node02   <none>           <none>
demoapp-7c58cd6bb-dbw76   1/1     Running   0          35m   10.244.1.44   ubuntu-k8s-node01   <none>           <none>
root@ubuntu-k8s-master01:~/software# kubectl exec -it demoapp-7c58cd6bb-dbw76 /bin/sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
[root@demoapp-7c58cd6bb-dbw76 /]# while true;do curl 10.244.2.73;sleep 0.5;done
iKubernetes demoapp v1.0 !! ClientIP: 10.244.1.44, ServerName: demoapp-7c58cd6bb-6gjxw, ServerIP: 10.244.2.73!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.1.44, ServerName: demoapp-7c58cd6bb-6gjxw, ServerIP: 10.244.2.73!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.1.44, ServerName: demoapp-7c58cd6bb-6gjxw, ServerIP: 10.244.2.73!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.1.44, ServerName: demoapp-7c58cd6bb-6gjxw, ServerIP: 10.244.2.73!

# Capture on node02: the traffic is VXLAN-encapsulated on UDP port 8472 (the Linux VXLAN default), which tcpdump decodes as OTV
root@ubuntu-k8s-node02:~# tcpdump -i ens33 -nn host 192.168.40.133
07:03:08.130951 IP 192.168.40.133.53708 > 192.168.40.132.8472: OTV, flags [I] (0x08), overlay 0, instance 6
IP 10.244.1.229.46296 > 10.244.0.10.4240: Flags [.], ack 1, win 507, options [nop,nop,TS val 790883019 ecr 933322394], length 0
07:03:08.131047 IP 192.168.40.132.4240 > 192.168.40.133.60230: Flags [.], ack 1, win 509, options [nop,nop,TS val 2098914843 ecr 2650752157], length 0
07:03:08.131106 IP 192.168.40.132.44099 > 192.168.40.133.8472: OTV, flags [I] (0x08), overlay 0, instance 4
IP 10.244.0.10.4240 > 10.244.1.229.46296: Flags [.], ack 1, win 502, options [nop,nop,TS val 933353227 ecr 790882434], length 0
07:03:08.134676 IP 192.168.40.133.41740 > 192.168.40.134.8472: OTV, flags [I] (0x08), overlay 0, instance 6
IP 10.244.1.229.51664 > 10.244.2.34.4240: Flags [.], ack 1, win 507, options [nop,nop,TS val 321128557 ecr 1049504486], length 0
07:03:08.134677 IP 192.168.40.133.39562 > 192.168.40.134.4240: Flags [.], ack 1, win 502, options [nop,nop,TS val 4017018528 ecr 3140974699], length 0
07:03:08.134739 IP 192.168.40.134.4240 > 192.168.40.133.39562: Flags [.], ack 1, win 509, options [nop,nop,TS val 3141005534 ecr 4017018010], length 0
07:03:08.134804 IP 192.168.40.134.36500 > 192.168.40.133.8472: OTV, flags [I] (0x08), overlay 0, instance 4
IP 10.244.2.34.4240 > 10.244.1.229.51664: Flags [.], ack 1, win 502, options [nop,nop,TS val 1049535321 ecr 321128039], length 0
07:03:08.160881 IP 192.168.40.133.43693 > 192.168.40.134.8472: OTV, flags [I] (0x08), overlay 0, instance 30794
IP 10.244.1.44.37332 > 10.244.2.73.80: Flags [S], seq 1527644460, win 64860, options [mss 1410,sackOK,TS val 3295612257 ecr 0,nop,wscale 7], length 0
07:03:08.161017 IP 192.168.40.134.53622 > 192.168.40.133.8472: OTV, flags [I] (0x08), overlay 0, instance 30794
IP 10.244.2.73.80 > 10.244.1.44.37332: Flags [S.], seq 52131856, ack 1527644461, win 64308, options [mss 1410,sackOK,TS val 1835212763 ecr 3295612257,nop,wscale 7], length 0
07:03:08.161212 IP 192.168.40.133.43693 > 192.168.40.134.8472: OTV, flags [I] (0x08), overlay 0, instance 30794
IP 10.244.1.44.37332 > 10.244.2.73.80: Flags [.], ack 1, win 507, options [nop,nop,TS val 3295612257 ecr 1835212763], length 0
07:03:08.161260 IP 192.168.40.133.43693 > 192.168.40.134.8472: OTV, flags [I] (0x08), overlay 0, instance 30794
IP 10.244.1.44.37332 > 10.244.2.73.80: Flags [P.], seq 1:76, ack 1, win 507, options [nop,nop,TS val 3295612257 ecr 1835212763], length 75: HTTP: GET / HTTP/1.1
07:03:08.161298 IP 192.168.40.134.53622 > 192.168.40.133.8472: OTV, flags [I] (0x08), overlay 0, instance 30794
IP 10.244.2.73.80 > 10.244.1.44.37332: Flags [.], ack 76, win 502, options [nop,nop,TS val 1835212763 ecr 3295612257], length 0
07:03:08.162321 IP 192.168.40.134.53622 > 192.168.40.133.8472: OTV, flags [I] (0x08), overlay 0, instance 30794
IP 10.244.2.73.80 > 10.244.1.44.37332: Flags [P.], seq 1:18, ack 76, win 502, options [nop,nop,TS val 1835212764 ecr 3295612257], length 17: HTTP: HTTP/1.0 200 OK
07:03:08.162576 IP 192.168.40.133.43693 > 192.168.40.134.8472: OTV, flags [I] (0x08), overlay 0, instance 30794
IP 10.244.1.44.37332 > 10.244.2.73.80: Flags [.], ack 18, win 507, options [nop,nop,TS val 3295612259 ecr 1835212764], length 0
07:03:08.162591 IP 192.168.40.134.53622 > 192.168.40.133.8472: OTV, flags [I] (0x08), overlay 0, instance 30794
IP 10.244.2.73.80 > 10.244.1.44.37332: Flags [FP.], seq 18:266, ack 76, win 502, options [nop,nop,TS val 1835212764 ecr 3295612257], length 248: HTTP
07:03:08.162804 IP 192.168.40.133.43693 > 192.168.40.134.8472: OTV, flags [I] (0x08), overlay 0, instance 30794
IP 10.244.1.44.37332 > 10.244.2.73.80: Flags [F.], seq 76, ack 267, win 506, options [nop,nop,TS val 3295612259 ecr 1835212764], length 0
07:03:08.162855 IP 192.168.40.134.53622 > 192.168.40.133.8472: OTV, flags [I] (0x08), overlay 0, instance 30794
IP 10.244.2.73.80 > 10.244.1.44.37332: Flags [.], ack 77, win 502, options [nop,nop,TS val 1835212765 ecr 3295612259], length 0

3.3 Testing Service Functionality

root@ubuntu-k8s-master01:~/software# kubectl create service clusterip demoapp --tcp=80:80
service/demoapp created
root@ubuntu-k8s-master01:~/software# kubectl  get svc
NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
demoapp      ClusterIP   10.96.47.49   <none>        80/TCP    4s
kubernetes   ClusterIP   10.96.0.1     <none>        443/TCP   111m
root@ubuntu-k8s-master01:~/software# kubectl get ep
NAME         ENDPOINTS                                                     AGE
demoapp      10.244.1.44:80,10.244.2.46:80,10.244.2.73:80                  11s
kubernetes   192.168.40.132:6443,192.168.40.133:6443,192.168.40.134:6443   111m
root@ubuntu-k8s-master01:~/software# kubectl exec -it demoapp-7c58cd6bb-dbw76 /bin/sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
[root@demoapp-7c58cd6bb-dbw76 /]# curl demoapp
iKubernetes demoapp v1.0 !! ClientIP: 10.244.1.44, ServerName: demoapp-7c58cd6bb-6gjxw, ServerIP: 10.244.2.73!
[root@demoapp-7c58cd6bb-dbw76 /]# curl demoapp
iKubernetes demoapp v1.0 !! ClientIP: 10.244.1.44, ServerName: demoapp-7c58cd6bb-4x8kw, ServerIP: 10.244.2.46!
[root@demoapp-7c58cd6bb-dbw76 /]# curl demoapp
iKubernetes demoapp v1.0 !! ClientIP: 10.244.1.44, ServerName: demoapp-7c58cd6bb-dbw76, ServerIP: 10.244.1.44!
[root@demoapp-7c58cd6bb-dbw76 /]# curl demoapp
iKubernetes demoapp v1.0 !! ClientIP: 10.244.1.44, ServerName: demoapp-7c58cd6bb-6gjxw, ServerIP: 10.244.2.73!
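
With kube-proxy absent, the ClusterIP load balancing above is performed by the eBPF datapath rather than iptables; this can be verified with commands like the following (a sketch, not in the original transcript):

# Service-to-backend mappings live in BPF maps
kubectl -n kube-system exec ds/cilium -c cilium-agent -- cilium service list
# No KUBE-SVC chains exist on the nodes (expect empty output)
iptables-save | grep KUBE-SVC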

4. Installing the Cilium CNI: Native Routing Mode

# Cilium was installed above, so uninstall it first
root@ubuntu-k8s-master01:~# cilium uninstall
🔥 Deleting pods in cilium-test namespace...
🔥 Deleting cilium-test namespace...
root@ubuntu-k8s-master01:~# kubectl get pods -n kube-system
NAME                                          READY   STATUS    RESTARTS      AGE
coredns-774bbd8588-5qx92                      1/1     Running   0             116m
coredns-774bbd8588-ggfjp                      1/1     Running   0             116m
etcd-ubuntu-k8s-master01                      1/1     Running   0             116m
etcd-ubuntu-k8s-node01                        1/1     Running   0             84m
etcd-ubuntu-k8s-node02                        1/1     Running   0             80m
kube-apiserver-ubuntu-k8s-master01            1/1     Running   0             116m
kube-apiserver-ubuntu-k8s-node01              1/1     Running   0             84m
kube-apiserver-ubuntu-k8s-node02              1/1     Running   0             80m
kube-controller-manager-ubuntu-k8s-master01   1/1     Running   1 (84m ago)   116m
kube-controller-manager-ubuntu-k8s-node01     1/1     Running   0             84m
kube-controller-manager-ubuntu-k8s-node02     1/1     Running   0             80m
kube-scheduler-ubuntu-k8s-master01            1/1     Running   1 (84m ago)   116m
kube-scheduler-ubuntu-k8s-node01              1/1     Running   0             84m
kube-scheduler-ubuntu-k8s-node02              1/1     Running   0             80m

# Native routing mode (autoDirectNodeRoutes requires all nodes to share an L2 segment)
root@ubuntu-k8s-master01:~# cilium install \
        --set kubeProxyReplacement=strict \
        --set ipam.mode=kubernetes \
        --set routingMode=native \
        --set ipam.operator.clusterPoolIPv4PodCIDRList=10.244.0.0/16 \
        --set ipam.operator.clusterPoolIPv4MaskSize=24 \
        --set ipv4NativeRoutingCIDR=10.244.0.0/16 \
        --set autoDirectNodeRoutes=true
ℹ️  Using Cilium version 1.15.0
🔮 Auto-detected cluster name: kubernetes
🔮 Auto-detected kube-proxy has not been installed
ℹ️  Cilium will fully replace all functionalities of kube-proxy

# Note that the cilium_vxlan interface on node01 is gone

root@ubuntu-k8s-node01:~# ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:a1:dd:c5 brd ff:ff:ff:ff:ff:ff
    altname enp2s1
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
    link/ether 02:42:1c:1a:47:a3 brd ff:ff:ff:ff:ff:ff
4: cilium_net@cilium_host: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 6a:52:7f:db:83:d4 brd ff:ff:ff:ff:ff:ff
5: cilium_host@cilium_net: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 2e:6f:b8:74:ff:93 brd ff:ff:ff:ff:ff:ff
16: lxcba44b31b4d71@if15: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether ea:de:ed:f8:e9:52 brd ff:ff:ff:ff:ff:ff link-netns cni-6b472b24-056d-1afc-c862-92b1db004b37
18: lxc_health@if17: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether ea:89:f0:55:a5:8d brd ff:ff:ff:ff:ff:ff link-netnsid 1

# Inspect the routing table: remote PodCIDRs are now routed directly via node IPs on ens33
root@ubuntu-k8s-node01:~# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.40.2    0.0.0.0         UG    0      0        0 ens33
10.244.0.0      192.168.40.132  255.255.255.0   UG    0      0        0 ens33
10.244.1.0      10.244.1.229    255.255.255.0   UG    0      0        0 cilium_host
10.244.1.229    0.0.0.0         255.255.255.255 UH    0      0        0 cilium_host
10.244.2.0      192.168.40.134  255.255.255.0   UG    0      0        0 ens33
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
192.168.40.0    0.0.0.0         255.255.255.0   U     0      0        0 ens33
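
To confirm the datapath really switched modes, the rendered agent configuration can be inspected (a sketch; key names per the Cilium 1.15 chart):

cilium config view | grep -E 'routing-mode|auto-direct-node-routes'
# Expect routing-mode=native and auto-direct-node-routes=true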

4.1 Testing the Service and Capturing Packets

root@ubuntu-k8s-master01:~# kubectl get pods -o wide
NAME                      READY   STATUS    RESTARTS   AGE   IP            NODE                NOMINATED NODE   READINESS GATES
demoapp-7c58cd6bb-4x8kw   1/1     Running   0          47m   10.244.2.46   ubuntu-k8s-node02   <none>           <none>
demoapp-7c58cd6bb-6gjxw   1/1     Running   0          47m   10.244.2.73   ubuntu-k8s-node02   <none>           <none>
demoapp-7c58cd6bb-dbw76   1/1     Running   0          47m   10.244.1.44   ubuntu-k8s-node01   <none>           <none>
root@ubuntu-k8s-master01:~# kubectl exec -it demoapp-7c58cd6bb-dbw76 /bin/sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
[root@demoapp-7c58cd6bb-dbw76 /]# while true;do curl 10.244.2.73;sleep 0.5;done
iKubernetes demoapp v1.0 !! ClientIP: 10.244.1.44, ServerName: demoapp-7c58cd6bb-6gjxw, ServerIP: 10.244.2.73!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.1.44, ServerName: demoapp-7c58cd6bb-6gjxw, ServerIP: 10.244.2.73!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.1.44, ServerName: demoapp-7c58cd6bb-6gjxw, ServerIP: 10.244.2.73!

# Capture on node02: Pod IPs appear directly on ens33, with no VXLAN encapsulation
root@ubuntu-k8s-node02:~# tcpdump -i ens33 -nn net 10.244.0.0/16
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on ens33, link-type EN10MB (Ethernet), snapshot length 262144 bytes
07:18:06.584934 IP 10.244.1.44.52386 > 10.244.2.73.80: Flags [S], seq 3368588847, win 64860, options [mss 1410,sackOK,TS val 3296510681 ecr 0,nop,wscale 7], length 0
07:18:06.585066 IP 10.244.2.73.80 > 10.244.1.44.52386: Flags [S.], seq 63939041, ack 3368588848, win 64308, options [mss 1410,sackOK,TS val 1836111187 ecr 3296510681,nop,wscale 7], length 0
07:18:06.585236 IP 10.244.1.44.52386 > 10.244.2.73.80: Flags [.], ack 1, win 507, options [nop,nop,TS val 3296510681 ecr 1836111187], length 0
07:18:06.585325 IP 10.244.1.44.52386 > 10.244.2.73.80: Flags [P.], seq 1:76, ack 1, win 507, options [nop,nop,TS val 3296510681 ecr 1836111187], length 75: HTTP: GET / HTTP/1.1
07:18:06.585356 IP 10.244.2.73.80 > 10.244.1.44.52386: Flags [.], ack 76, win 502, options [nop,nop,TS val 1836111187 ecr 3296510681], length 0
07:18:06.586163 IP 10.244.2.73.80 > 10.244.1.44.52386: Flags [P.], seq 1:18, ack 76, win 502, options [nop,nop,TS val 1836111188 ecr 3296510681], length 17: HTTP: HTTP/1.0 200 OK
07:18:06.586409 IP 10.244.1.44.52386 > 10.244.2.73.80: Flags [.], ack 18, win 507, options [nop,nop,TS val 3296510682 ecr 1836111188], length 0
07:18:06.586419 IP 10.244.2.73.80 > 10.244.1.44.52386: Flags [FP.], seq 18:266, ack 76, win 502, options [nop,nop,TS val 1836111188 ecr 3296510681], length 248: HTTP
07:18:06.586671 IP 10.244.1.44.52386 > 10.244.2.73.80: Flags [F.], seq 76, ack 267, win 506, options [nop,nop,TS val 3296510682 ecr 1836111188], length 0
07:18:06.586724 IP 10.244.2.73.80 > 10.244.1.44.52386: Flags [.], ack 77, win 502, options [nop,nop,TS val 1836111188 ecr 3296510682], length 0
07:18:07.090001 IP 10.244.1.44.52388 > 10.244.2.73.80: Flags [S], seq 1682480510, win 64860, options [mss 1410,sackOK,TS val 3296511186 ecr 0,nop,wscale 7], length 0

5. Hubble

# Enable Hubble with the cilium CLI
root@ubuntu-k8s-master01:~# cilium hubble enable --ui

# Check the status; the errors clear once the Hubble Pods become ready
root@ubuntu-k8s-master01:~# cilium status
    /¯¯\
 /¯¯\__/¯¯\    Cilium:             OK
 \__/¯¯\__/    Operator:           OK
 /¯¯\__/¯¯\    Envoy DaemonSet:    disabled (using embedded mode)
 \__/¯¯\__/    Hubble Relay:       1 errors
    \__/       ClusterMesh:        disabled

DaemonSet              cilium             Desired: 3, Ready: 3/3, Available: 3/3
Deployment             cilium-operator    Desired: 1, Ready: 1/1, Available: 1/1
Deployment             hubble-ui          Desired: 1, Unavailable: 1/1
Deployment             hubble-relay       Desired: 1, Unavailable: 1/1
Containers:            cilium             Running: 3
                       cilium-operator    Running: 1
                       hubble-ui          Pending: 1
                       hubble-relay       Running: 1
Cluster Pods:          7/7 managed by Cilium
Helm chart version:    1.15.0
Image versions         cilium-operator    quay.io/cilium/operator-generic:v1.15.0@sha256:e26ecd316e742e4c8aa1e302ba8b577c2d37d114583d6c4cdd2b638493546a79: 1
                       hubble-ui          quay.io/cilium/hubble-ui:v0.12.3@sha256:e6b825302fc1e406b1305363fe0bcd1fdf95730b32c2b99a2b36dfa37bdaeec2: 1
                       hubble-ui          quay.io/cilium/hubble-ui-backend:v0.12.3@sha256:1cd84251cec46e20f9e839ee0afba9b51c8de59d35681234f701d7f42062f138: 1
                       hubble-relay       quay.io/cilium/hubble-relay:v1.15.0@sha256:45b3ea70b73aee01644f800b8f6138c36446bfb130d2b88b0f75775ebe6a9ab6: 1
                       cilium             quay.io/cilium/cilium:v1.15.0@sha256:9cfd6a0a3a964780e73a11159f93cc363e616f7d9783608f62af6cfdf3759619: 3
Errors:                hubble-ui          hubble-ui                     1 pods of Deployment hubble-ui are not ready
                       hubble-relay       hubble-relay                  1 pods of Deployment hubble-relay are not ready
Warnings:              hubble-ui          hubble-ui-6b4d867c59-4hgph    pod is pending

root@ubuntu-k8s-master01:~# cilium status
    /¯¯\
 /¯¯\__/¯¯\    Cilium:             OK
 \__/¯¯\__/    Operator:           OK
 /¯¯\__/¯¯\    Envoy DaemonSet:    disabled (using embedded mode)
 \__/¯¯\__/    Hubble Relay:       OK
    \__/       ClusterMesh:        disabled

Deployment             hubble-ui          Desired: 1, Ready: 1/1, Available: 1/1
Deployment             cilium-operator    Desired: 1, Ready: 1/1, Available: 1/1
Deployment             hubble-relay       Desired: 1, Ready: 1/1, Available: 1/1
DaemonSet              cilium             Desired: 3, Ready: 3/3, Available: 3/3
Containers:            cilium             Running: 3
                       cilium-operator    Running: 1
                       hubble-relay       Running: 1
                       hubble-ui          Running: 1
Cluster Pods:          7/7 managed by Cilium
Helm chart version:    1.15.0
Image versions         cilium             quay.io/cilium/cilium:v1.15.0@sha256:9cfd6a0a3a964780e73a11159f93cc363e616f7d9783608f62af6cfdf3759619: 3
                       cilium-operator    quay.io/cilium/operator-generic:v1.15.0@sha256:e26ecd316e742e4c8aa1e302ba8b577c2d37d114583d6c4cdd2b638493546a79: 1
                       hubble-relay       quay.io/cilium/hubble-relay:v1.15.0@sha256:45b3ea70b73aee01644f800b8f6138c36446bfb130d2b88b0f75775ebe6a9ab6: 1
                       hubble-ui          quay.io/cilium/hubble-ui:v0.12.3@sha256:e6b825302fc1e406b1305363fe0bcd1fdf95730b32c2b99a2b36dfa37bdaeec2: 1
                       hubble-ui          quay.io/cilium/hubble-ui-backend:v0.12.3@sha256:1cd84251cec46e20f9e839ee0afba9b51c8de59d35681234f701d7f42062f138: 1

# hubble-relay and hubble-ui are now running:
# hubble-relay-59b8bfd6fb-wdqxw                 1/1     Running   0             93s
# hubble-ui-6b4d867c59-4hgph                    2/2     Running   0             93s
root@ubuntu-k8s-master01:~# kubectl get pods -n kube-system
NAME                                          READY   STATUS    RESTARTS      AGE
cilium-ds8rv                                  1/1     Running   0             13m
cilium-operator-7764cf64d6-46fk5              1/1     Running   0             13m
cilium-s7hzj                                  1/1     Running   0             13m
cilium-x2b4j                                  1/1     Running   0             13m
coredns-774bbd8588-5qx92                      1/1     Running   0             131m
coredns-774bbd8588-ggfjp                      1/1     Running   0             131m
etcd-ubuntu-k8s-master01                      1/1     Running   0             131m
etcd-ubuntu-k8s-node01                        1/1     Running   0             99m
etcd-ubuntu-k8s-node02                        1/1     Running   0             94m
hubble-relay-59b8bfd6fb-wdqxw                 1/1     Running   0             93s
hubble-ui-6b4d867c59-4hgph                    2/2     Running   0             93s
kube-apiserver-ubuntu-k8s-master01            1/1     Running   0             131m
kube-apiserver-ubuntu-k8s-node01              1/1     Running   0             99m
kube-apiserver-ubuntu-k8s-node02              1/1     Running   0             94m
kube-controller-manager-ubuntu-k8s-master01   1/1     Running   1 (99m ago)   131m
kube-controller-manager-ubuntu-k8s-node01     1/1     Running   0             99m
kube-controller-manager-ubuntu-k8s-node02     1/1     Running   0             94m
kube-scheduler-ubuntu-k8s-master01            1/1     Running   1 (99m ago)   131m
kube-scheduler-ubuntu-k8s-node01              1/1     Running   0             99m
kube-scheduler-ubuntu-k8s-node02              1/1     Running   0             94m

# Hubble Services
root@ubuntu-k8s-master01:~# kubectl get svc -n kube-system
NAME           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
hubble-peer    ClusterIP   10.109.160.34   <none>        443/TCP                  14m
hubble-relay   ClusterIP   10.97.98.150    <none>        80/TCP                   2m36s
hubble-ui      ClusterIP   10.111.245.6    <none>        80/TCP                   2m36s
kube-dns       ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP,9153/TCP   132m
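
As an alternative to the NodePort edit below, the CLI can port-forward the UI directly (a convenience sketch; it serves the UI on http://localhost:12000 by default):

cilium hubble ui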

# Edit the hubble-ui Service type to NodePort to reach the UI from outside
# NodePort assigned: 32154
root@ubuntu-k8s-master01:~# kubectl edit svc hubble-ui -n kube-system
service/hubble-ui edited
root@ubuntu-k8s-master01:~# kubectl get svc -n kube-system
NAME           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
hubble-peer    ClusterIP   10.109.160.34   <none>        443/TCP                  16m
hubble-relay   ClusterIP   10.97.98.150    <none>        80/TCP                   3m47s
hubble-ui      NodePort    10.111.245.6    <none>        80:32154/TCP             3m47s
kube-dns       ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP,9153/TCP   133m

# Browse to http://192.168.40.133:32154
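
Flows can also be tailed from a terminal; this sketch assumes the separate hubble CLI has been installed on master01:

cilium hubble port-forward &     # exposes hubble-relay on localhost:4245
hubble status
hubble observe --namespace default --follow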

Extra: Installing Hubble and Prometheus Metrics in One Command

# Install Hubble and enable Prometheus scraping at deploy time
# VXLAN tunnel mode
    cilium install \
        --set kubeProxyReplacement=strict \
        --set ipam.mode=kubernetes \
        --set routingMode=tunnel \
        --set tunnelProtocol=vxlan \
        --set ipam.operator.clusterPoolIPv4PodCIDRList=10.244.0.0/16 \
        --set ipam.operator.clusterPoolIPv4MaskSize=24 \
        --set hubble.enabled="true" \
        --set hubble.listenAddress=":4244" \
        --set hubble.relay.enabled="true" \
        --set hubble.ui.enabled="true" \
        --set prometheus.enabled=true \
        --set operator.prometheus.enabled=true \
        --set hubble.metrics.port=9665 \
        --set hubble.metrics.enableOpenMetrics=true \
        --set hubble.metrics.enabled="{dns,drop,tcp,flow,port-distribution,icmp,httpV2:exemplars=true;labelsContext=source_ip\,source_namespace\,source_workload\,destination_ip\,destination_namespace\,destination_workload\,traffic_direction}"
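
With the metrics flags above, each agent exposes Prometheus metrics; which metrics are active can be listed from the agent itself (a sketch, not part of the original transcript):

kubectl -n kube-system exec ds/cilium -c cilium-agent -- cilium metrics list | head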

 
