Cilium VXLAN Mode Usage Guide

Mode Overview

Project documentation: https://docs.cilium.io/en/stable/network/concepts/routing/#encapsulation

VXLAN mode places the lowest demands on the underlying network infrastructure. In this mode, all cluster nodes form a full mesh of tunnels using a UDP-based encapsulation protocol (VXLAN or Geneve), and all traffic between Cilium nodes is encapsulated.

Network requirements:

  • Encapsulation relies only on regular node-to-node connectivity. In other words, as long as the Cilium nodes can reach each other, all routing requirements are met;
  • The underlying network and firewalls must allow the encapsulated packets through (a quick check is sketched below):
    • VXLAN: 8472/UDP
    • Geneve: 6081/UDP
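
A minimal underlay sanity check once Cilium is running (a sketch; <peer-node-ip> is a placeholder, and a UDP probe with nc is best-effort since UDP gives no positive acknowledgement):

# the kernel VXLAN socket behind cilium_vxlan listens on 8472/udp
ss -uln | grep 8472
# best-effort reachability probe from another node; pair it with tcpdump 'udp port 8472' on the receiver
nc -u -z -w1 <peer-node-ip> 8472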

Advantages:

  • Simplicity: the network connecting the cluster nodes does not need to know anything about the PodCIDRs. Cluster nodes can span multiple routing or link-layer domains. The topology of the underlying network is irrelevant as long as the nodes can reach each other over IP/UDP;
  • Addressing space: since the mode does not depend on any limitation of the underlying network, the usable address space is much larger. With an appropriately sized PodCIDR, an arbitrary number of Pods can run on each node;
  • Auto-configuration of new nodes: when run together with an orchestration system such as Kubernetes, the list of all cluster nodes (including the PodCIDR allocated to each node) is automatically synced to every Cilium agent. A node joining the cluster is automatically incorporated into the tunnel mesh, with no manual configuration;
  • Identity context: the encapsulation protocol allows metadata to be carried along with the packet. Cilium uses this to carry metadata such as the source security identity. Carrying the identity in the tunnel header is an optimization that avoids one extra identity lookup on the remote node.

Disadvantages:

  • Because of the added encapsulation headers, the MTU available to the payload is lower than with native routing (VXLAN adds 50 bytes of overhead per packet), which reduces the maximum throughput of a given network connection. Enabling jumbo frames largely mitigates this: the same 50 bytes is a noticeable share of a standard 1500-byte frame but a negligible share of a 9000-byte jumbo frame (see the quick check below).
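
A quick way to see the overhead on the cluster deployed below (a sketch; the container name comes from the kind setup in the next section) is to compare the underlay MTU with the MTU Cilium attaches to routes towards remote PodCIDRs:

# underlay MTU vs. the per-route MTU Cilium installs for remote PodCIDRs
docker exec -it cilium-kubeproxy-vxlan-control-plane ip link show eth0 | grep -o 'mtu [0-9]*'
docker exec -it cilium-kubeproxy-vxlan-control-plane ip route show | grep mtu
# expected: eth0 reports mtu 1500, while the routes to 10.0.1.0/24 and 10.0.2.0/24 carry "mtu 1450" (1500 - 50)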

Deployment

Quickly create a cluster with kind and deploy Cilium in VXLAN mode

#!/bin/bash
set -v

# 1. Prepare NoCNI kubernetes environment
cat <<EOF | kind create cluster --name=cilium-kubeproxy-vxlan --image=kindest/node:v1.27.3 --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  disableDefaultCNI: true
  # kubeProxyMode: "none" # left commented out so kube-proxy stays enabled
nodes:
  - role: control-plane
  - role: worker
  - role: worker
EOF

# 2. Get the control-plane node IP and remove the control-plane taint
controller_node_ip=$(kubectl get node -o wide --no-headers | grep -E "control-plane|bpf1" | awk -F " " '{print $6}')
kubectl taint nodes $(kubectl get nodes -o name | grep control-plane) node-role.kubernetes.io/control-plane:NoSchedule-

# 3. Install CNI[Cilium 1.17.15]
cilium_version=v1.17.15
docker pull quay.io/cilium/cilium:$cilium_version && docker pull quay.io/cilium/operator-generic:$cilium_version
kind load docker-image quay.io/cilium/cilium:$cilium_version quay.io/cilium/operator-generic:$cilium_version --name cilium-kubeproxy-vxlan

helm repo add cilium https://helm.cilium.io ; helm repo update;
# routingMode=tunnel
# tunnelProtocol=vxlan
helm install cilium cilium/cilium \
  --version v1.17.15 \
  --namespace kube-system \
  --set k8sServiceHost=$controller_node_ip \
  --set k8sServicePort=6443 \
  --set image.pullPolicy=IfNotPresent \
  --set debug.enabled=true \
  --set debug.verbose="datapath flow kvstore envoy policy" \
  --set bpf.monitorAggregation=none \
  --set monitor.enabled=true \
  --set ipam.mode=cluster-pool \
  --set cluster.name=cilium-kubeproxy-vxlan \
  --set routingMode=tunnel \
  --set tunnelProtocol=vxlan \
  --set ipv4NativeRoutingCIDR="10.0.0.0/8"

# 4. Verify separate cgroup namespaces and cgroup v2 [https://github.com/cilium/cilium/pull/16259 && https://docs.cilium.io/en/stable/installation/kind/#install-cilium]
#for container in $(docker ps -a --format "table {{.Names}}" | grep cilium-kubeproxy-vxlan);do docker exec $container ls -al /proc/self/ns/cgroup;done
#mount -l | grep cgroup && docker info | grep "Cgroup Version" | awk '$1=$1'
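
Once the chart is installed, the effective routing settings can be double-checked (a sketch; cilium-j8sb5 is simply the agent pod that happens to run on the control-plane node in this environment):

# Helm values end up in the cilium-config ConfigMap; expect "routing-mode: tunnel" and "tunnel-protocol: vxlan"
kubectl -n kube-system get configmap cilium-config -o yaml | grep -E 'routing-mode|tunnel-protocol'
# the agent reports the same at runtime
kubectl -n kube-system exec -it cilium-j8sb5 -- cilium status | grep -i routing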

Create a test Pod

The image is essentially an Nginx server; it exists only so that requests to it can be captured.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: nginx
  name: pod
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: burlyluo/nettool:latest
        name: nettoolbox
        env:
          - name: NETTOOL_NODE_NAME
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName
        securityContext:
          privileged: true
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: nginx
            topologyKey: kubernetes.io/hostname

---
apiVersion: v1
kind: Service
metadata:
  name: pod
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 32000
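
To apply the manifests and generate the cross-node traffic captured later (a sketch; the file name nettool.yaml is an assumption, and the Pod IPs are the ones this particular deployment happened to receive):

kubectl apply -f nettool.yaml
kubectl get pods -o wide -l app=nginx
# pod-2 (control-plane node) -> pod-1 (worker node); this is the flow captured in the sections below
kubectl exec -it pod-2 -- curl -s http://10.0.1.97:80 >/dev/null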

Check the deployment

root@network-demo:~# kubectl get pods -A -o wide
NAMESPACE            NAME                             READY   STATUS    RESTARTS   AGE     IP           NODE
default              pod-0                            1/1     Running   0          53s     10.0.2.140   cilium-kubeproxy-vxlan-worker2
default              pod-1                            1/1     Running   0          47s     10.0.1.97    cilium-kubeproxy-vxlan-worker
default              pod-2                            1/1     Running   0          42s     10.0.0.33    cilium-kubeproxy-vxlan-control-plane
kube-system          cilium-764sh                     2/2     Running   0          5m32s   172.18.0.2   cilium-kubeproxy-vxlan-worker
kube-system          cilium-7bc47                     2/2     Running   0          5m32s   172.18.0.4   cilium-kubeproxy-vxlan-worker2
kube-system          cilium-envoy-ljdnj               1/1     Running   0          5m32s   172.18.0.2   cilium-kubeproxy-vxlan-worker
kube-system          cilium-envoy-p4jx8               1/1     Running   0          5m32s   172.18.0.4   cilium-kubeproxy-vxlan-worker2
kube-system          cilium-envoy-wkb5m               1/1     Running   0          5m32s   172.18.0.3   cilium-kubeproxy-vxlan-control-plane
kube-system          cilium-j8sb5                     2/2     Running   0          5m32s   172.18.0.3   cilium-kubeproxy-vxlan-control-plane
kube-system          cilium-operator-7bfd9d69f4-nckns 1/1     Running   0          5m32s   172.18.0.2   cilium-kubeproxy-vxlan-worker
kube-system          cilium-operator-7bfd9d69f4-tdrdv 1/1     Running   0          5m32s   172.18.0.4   cilium-kubeproxy-vxlan-worker2
kube-system          coredns-5d78c9869d-7ttv2         1/1     Running   0          7m16s   10.0.1.30    cilium-kubeproxy-vxlan-worker
kube-system          coredns-5d78c9869d-d7ls6         1/1     Running   0          7m16s   10.0.1.226   cilium-kubeproxy-vxlan-worker
kube-system          etcd-cilium                      1/1     Running   0          7m30s   172.18.0.3   cilium-kubeproxy-vxlan-control-plane
kube-system          kube-apiserver-cilium            1/1     Running   0          7m31s   172.18.0.3   cilium-kubeproxy-vxlan-control-plane
kube-system          kube-controller-manager-cilium   1/1     Running   0          7m30s   172.18.0.3   cilium-kubeproxy-vxlan-control-plane
kube-system          kube-proxy-4dpp6                 1/1     Running   0          7m11s   172.18.0.4   cilium-kubeproxy-vxlan-worker2
kube-system          kube-proxy-8x95v                 1/1     Running   0          7m9s    172.18.0.2   cilium-kubeproxy-vxlan-worker
kube-system          kube-proxy-gx2hj                 1/1     Running   0          7m16s   172.18.0.3   cilium-kubeproxy-vxlan-control-plane
kube-system          kube-scheduler-cilium            1/1     Running   0          7m31s   172.18.0.3   cilium-kubeproxy-vxlan-control-plane

Inspect Cilium details

1. Check the detailed Cilium runtime status

root@network-demo:~# kubectl exec -it -n kube-system cilium-j8sb5 -- cilium status

KVStore:                 Disabled   
Kubernetes:              Ok         1.27 (v1.27.3) [linux/amd64]
Kubernetes APIs:         ["EndpointSliceOrEndpoint", "cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumCIDRGroup", "core/v1::Namespace", "core/v1::Pods", "core/v1::Service", "networking.k8s.io/v1::NetworkPolicy"]

## Cilium is not replacing kube-proxy
KubeProxyReplacement:    False   
Host firewall:           Disabled
SRv6:                    Disabled
CNI Chaining:            none
CNI Config file:         successfully wrote CNI configuration file to /host/etc/cni/net.d/05-cilium.conflist
Cilium:                  Ok   1.17.15 (v1.17.15-4206eaa5)
NodeMonitor:             Listening for events on 8 CPUs with 64x4096 of shared memory
Cilium health daemon:    Ok   
IPAM:                    IPv4: 3/254 allocated from 10.0.0.0/24, 
IPv4 BIG TCP:            Disabled
IPv6 BIG TCP:            Disabled
BandwidthManager:        Disabled

## VXLAN tunnel routing mode
Routing:                 Network: Tunnel [vxlan]   Host: Legacy
Attach Mode:             TCX
Device Mode:             veth
Masquerading:            IPTables [IPv4: Enabled, IPv6: Disabled]
Controller Status:       26/26 healthy
Proxy Status:            OK, ip 10.0.0.23, 0 redirects active on ports 10000-20000, Envoy: external
Global Identity Range:   min 256, max 65535
Hubble:                  Ok              Current/Max Flows: 4095/4095 (100.00%), Flows/s: 44.59   Metrics: Disabled
Encryption:              Disabled        
Cluster health:          3/3 reachable   (2026-05-05T08:59:02Z)
Name                     IP              Node   Endpoints
Modules Health:          Stopped(0) Degraded(0) OK(52)

2. Check Cilium endpoint information

In Cilium, the term Endpoint means the following: Cilium assigns IPs to containers, and one Pod may contain multiple containers that all share the same Pod IP. All containers sharing one address are grouped together, and Cilium calls that group an Endpoint.

Each node's Cilium agent only manages the endpoints on its own node, so the output of cilium endpoint list differs from node to node. Here the Pod on the control-plane node is used as the example:
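
To locate the agent pod for a given node (a sketch using the k8s-app=cilium label applied by the Helm chart):

kubectl -n kube-system get pods -l k8s-app=cilium -o wide \
  --field-selector spec.nodeName=cilium-kubeproxy-vxlan-control-plane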

root@network-demo:~# kubectl exec -it -n kube-system cilium-j8sb5 -- cilium endpoint list

ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                       IPv4         STATUS   
           ENFORCEMENT        ENFORCEMENT                                                                                                               
467        Disabled           Disabled          17110      k8s:app=nginx                                                     10.0.0.33    ready   
                                                           k8s:io.cilium.k8s.namespace/metadata.name=default
                                                           k8s:io.cilium.k8s.policy.cluster=cilium-kubeproxy-vxlan
                                                           k8s:io.cilium.k8s.policy.serviceaccount=default
                                                           k8s:io.kubernetes.pod.namespace=default
1364       Disabled           Disabled          1          k8s:node-role.kubernetes.io/control-plane                                      ready
                                                           k8s:node.kubernetes.io/exclude-from-external-load-balancers
                                                           reserved:host
3682       Disabled           Disabled          4          reserved:health                                                   10.0.0.204   ready

3. Check Cilium service information

In Cilium, the term Service refers to the actual forwarding state of a Kubernetes Service in Cilium's eBPF maps. As the official documentation notes, when Cilium does not replace kube-proxy, only load-balancing of ClusterIP services is enabled:

By default, Helm sets kubeProxyReplacement=false, which only enables per-packet in-cluster load-balancing of ClusterIP services.

root@network-demo:~# kubectl exec -it -n kube-system cilium-j8sb5 -- cilium service list

ID   Frontend                Service Type   Backend                             
1    10.96.0.1:443/TCP       ClusterIP      1 => 172.18.0.3:6443/TCP (active)   
2    10.96.153.200:443/TCP   ClusterIP      1 => 172.18.0.3:4244/TCP (active)   
3    10.96.0.10:53/UDP       ClusterIP      1 => 10.0.1.30:53/UDP (active)      
                                            2 => 10.0.1.226:53/UDP (active)     
4    10.96.0.10:53/TCP       ClusterIP      1 => 10.0.1.30:53/TCP (active)      
                                            2 => 10.0.1.226:53/TCP (active)     
5    10.96.0.10:9153/TCP     ClusterIP      1 => 10.0.1.30:9153/TCP (active)    
                                            2 => 10.0.1.226:9153/TCP (active)   
7    10.96.128.232:80/TCP    ClusterIP      1 => 10.0.2.140:80/TCP (active)     
                                            2 => 10.0.1.97:80/TCP (active)      
                                            3 => 10.0.0.33:80/TCP (active)
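
Note that the NodePort 32000 defined earlier does not appear in this table: with kubeProxyReplacement=false it is still programmed by kube-proxy. A sketch of how to confirm this on a node (assumes kube-proxy in its default iptables mode):

docker exec -it cilium-kubeproxy-vxlan-control-plane iptables -t nat -S KUBE-NODEPORTS | grep 32000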

Verification

Inspect Cilium host routes, network devices, and tunnel information

1. Inspect the network devices on a Cilium node

1.1. Inspect the cilium_host device

The output shows that cilium_host is one end of a veth pair (its peer is cilium_net), not a VXLAN device:

root@network-demo:~# docker exec -it cilium-kubeproxy-vxlan-control-plane ip address show cilium_host
5: cilium_host@cilium_net: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 0a:c2:82:31:a5:f9 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.23/32 scope global cilium_host
       valid_lft forever preferred_lft forever
root@network-demo:~# docker exec -it cilium-kubeproxy-vxlan-control-plane ip -d link show cilium_host
5: cilium_host@cilium_net: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 0a:c2:82:31:a5:f9 brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 68 maxmtu 65535 
    veth addrgenmode eui64 numtxqueues 8 numrxqueues 8 gso_max_size 65536 gso_max_segs 65535

1.2. Inspect the cilium_vxlan device

The real VXLAN device is cilium_vxlan. Compared with the VXLAN mode of other CNIs, the cilium_vxlan device has no local/remote configuration; in its place is the external keyword, which means the device's forwarding (FDB) entries are managed by an external program rather than learned by the kernel. Here that external program is the eBPF datapath. A Calico vxlan.calico device from another environment is shown below for comparison:

root@network-demo:~# docker exec -it cilium-kubeproxy-vxlan-control-plane ip address show cilium_vxlan
6: cilium_vxlan: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default 
    link/ether da:36:b5:e9:0a:30 brd ff:ff:ff:ff:ff:ff

root@network-demo:~# docker exec -it cilium-kubeproxy-vxlan-control-plane ip -d link show cilium_vxlan
6: cilium_vxlan: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN mode DEFAULT group default 
    link/ether da:36:b5:e9:0a:30 brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 68 maxmtu 65535 
    vxlan external addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
root@ce-demo-1:~# ip address show vxlan.calico
30499: vxlan.calico: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 66:e0:bb:93:52:4f brd ff:ff:ff:ff:ff:ff
    inet 10.244.142.0/32 scope global vxlan.calico
       valid_lft forever preferred_lft forever

root@ce-demo-1:~# ip -d link show vxlan.calico
30499: vxlan.calico: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 66:e0:bb:93:52:4f brd ff:ff:ff:ff:ff:ff promiscuity 0  allmulti 0 minmtu 68 maxmtu 65535 
info: Using default fan map value (33)
    ## For comparison: Calico sets local 10.51.0.100 so that this VXLAN device uses that host IP as the outer source IP of the VXLAN encapsulation
    vxlan id 4096 local 10.51.0.100 dev ens160 srcport 0 0 dstport 4789 nolearning ttl auto ageing 300 udpcsum noudp6zerocsumtx noudp6zerocsumrx addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 tso_max_size 65536 tso_max_segs 65535 gro_max_size 65536 
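
Because of the external flag, the kernel is not expected to keep remote VTEP entries for cilium_vxlan; the tunnel endpoint is supplied per packet by the eBPF datapath. A quick check (sketch):

# typically prints nothing, unlike a kernel-managed VXLAN device with learned or static FDB entries
docker exec -it cilium-kubeproxy-vxlan-control-plane bridge fdb show dev cilium_vxlan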

2. Inspect the node routing table

At first glance the routing table suggests that cross-node Pod traffic is forwarded through cilium_host rather than through the VXLAN device.

In reality, cross-node Pod traffic does not pass through the cilium_host device at all: the eBPF program cil_from_container attached to the ingress of the Pod's veth-pair lxc device sees that the destination IP is not a local Pod IP, looks up the cilium_tunnel map to find out which node the remote Pod IP lives on, and then hands the packet straight to the cilium_vxlan device for VXLAN encapsulation.

root@cilium-kubeproxy-vxlan-control-plane:/# ip route show
default via 172.18.0.1 dev eth0 
10.0.0.0/24 via 10.0.0.23 dev cilium_host proto kernel src 10.0.0.23 
10.0.0.23 dev cilium_host proto kernel scope link 
10.0.1.0/24 via 10.0.0.23 dev cilium_host proto kernel src 10.0.0.23 mtu 1450 
10.0.2.0/24 via 10.0.0.23 dev cilium_host proto kernel src 10.0.0.23 mtu 1450 
172.18.0.0/16 dev eth0 proto kernel scope link src 172.18.0.3

3. Inspect the cilium_tunnel map

cilium_tunnel_m is the core forwarding table for Cilium's VXLAN tunnels. This eBPF map tells the eBPF programs which node IP the VXLAN packet must be sent to in order to reach a given remote PodCIDR.

root@cilium-kubeproxy-vxlan-control-plane:/# bpftool map show name cilium_tunnel_m
2026: hash  name cilium_tunnel_m  flags 0x1
        key 20B  value 20B  max_entries 65536  memlock 1050000B
        pids cilium-agent(386835)
2046: hash  name cilium_tunnel_m  flags 0x1
        key 20B  value 20B  max_entries 65536  memlock 1050000B
2108: hash  name cilium_tunnel_m  flags 0x1
        key 20B  value 20B  max_entries 65536  memlock 1050000B


root@cilium-kubeproxy-vxlan-control-plane:/# bpftool map dump id 2026
key:
0a 00 02 00 00 00 00 00  00 00 00 00 00 00 00 00
01 00 00 00
value:
ac 12 00 04 00 00 00 00  00 00 00 00 00 00 00 00
01 00 00 00
key:
0a 00 01 00 00 00 00 00  00 00 00 00 00 00 00 00
01 00 00 00
value:
ac 12 00 02 00 00 00 00  00 00 00 00 00 00 00 00
01 00 00 00

root@cilium-kubeproxy-vxlan-control-plane:/# bpftool map dump id 2046
key:
0a 00 02 00 00 00 00 00  00 00 00 00 00 00 00 00
01 00 00 00
value:
ac 12 00 04 00 00 00 00  00 00 00 00 00 00 00 00
01 00 00 00
key:
0a 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
01 00 00 00
value:
ac 12 00 03 00 00 00 00  00 00 00 00 00 00 00 00
01 00 00 00

root@cilium-kubeproxy-vxlan-control-plane:/# bpftool map dump id 2108
key:
0a 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
01 00 00 00
value:
ac 12 00 03 00 00 00 00  00 00 00 00 00 00 00 00
01 00 00 00
key:
0a 00 01 00 00 00 00 00  00 00 00 00 00 00 00 00
01 00 00 00
value:
ac 12 00 02 00 00 00 00  00 00 00 00 00 00 00 00
01 00 00 00
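
The key and value both start with an IPv4 address in their first 4 bytes (followed by family/flags fields). Decoding one pair from the dump above by hand (a sketch) gives the same mapping that cilium bpf tunnel list prints below:

# key   "0a 00 01 00" -> 10.0.1.0   (remote PodCIDR)
# value "ac 12 00 02" -> 172.18.0.2 (tunnel endpoint, i.e. the worker node IP)
for hex in "0a 00 01 00" "ac 12 00 02"; do
  printf '%d.%d.%d.%d\n' $(echo "$hex" | sed 's/^/0x/; s/ / 0x/g')
done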

In decimal, taking the control-plane node as the example, the map decodes to:

root@network-demo:~# kubectl exec -it -n kube-system cilium-j8sb5 -- cilium bpf tunnel list

TUNNEL     VALUE
10.0.2.0   172.18.0.4:0   
10.0.1.0   172.18.0.2:0

Packet captures

1. Capturing at the Pod

1.1. Inspect the Pod NIC and its veth-pair peer

root@network-demo:~# kubectl exec -it pod-2 -- ip address show eth0
9: eth0@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether e2:3a:9b:ff:1c:35 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.0.0.33/32 scope global eth0
       valid_lft forever preferred_lft forever

root@network-demo:~# docker exec -it cilium-kubeproxy-vxlan-control-plane ip address show lxcc940ffbbad39
10: lxcc940ffbbad39@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 9a:b1:a4:85:88:7d brd ff:ff:ff:ff:ff:ff link-netns cni-023e3f51-8655-c9cd-ea8e-df190877f4b4

1.2. Inspect the Pod routes

Since the Pod IP carries a /32 mask, the only possible next hop is the gateway cilium_host at 10.0.0.23. The scope link entry, however, marks that route as an on-link route, so forwarding happens at layer 2, i.e. based on MAC addresses rather than IP. The device directly connected to the Pod is its veth-pair peer, so that is the only device the packet can be handed to; see 1.3 for the verification.

root@network-demo:~# kubectl exec -it pod-2 -- ip route show
default via 10.0.0.23 dev eth0 mtu 1450 
10.0.0.23 dev eth0 scope link 

root@network-demo:~# kubectl exec -it pod-2 -- route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.0.0.23       0.0.0.0         UG    0      0        0 eth0
10.0.0.23       0.0.0.0         255.255.255.255 UH    0      0        0 eth0
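
The neighbour table inside the Pod confirms this: right after some traffic, the gateway IP resolves to the MAC of the host-side lxc device rather than of cilium_host (a sketch; the entry state may vary):

kubectl exec -it pod-2 -- ip neigh show to 10.0.0.23
# e.g.: 10.0.0.23 dev eth0 lladdr 9a:b1:a4:85:88:7d REACHABLE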

1.3. Capture on the Pod's veth-pair peer (lxc) device

The ARP exchange in the capture shows that when the Pod asks who has 10.0.0.23, the reply carries 9a:b1:a4:85:88:7d, the MAC of the veth-pair peer lxcc940ffbbad39:

Likewise, what is captured here is the Pod --> lxc ingress traffic. Once the eBPF program on lxc determines that the destination is a Pod on another node, it looks up the cilium_tunnel_m eBPF map, finds that 10.0.1.0/24 maps to the tunnel endpoint 172.18.0.2, and decides to VXLAN-encapsulate the packet.

root@network-demo:~# docker exec -it cilium-kubeproxy-vxlan-control-plane tcpdump -pnei lxcc940ffbbad39

12:50:02.473976 e2:3a:9b:ff:1c:35 > 9a:b1:a4:85:88:7d, ethertype IPv4 (0x0800), length 74: 10.0.0.33.47718 > 10.0.1.97.80: Flags [S], seq 2406447395, win 64860, options [mss 1410,sackOK,TS val 2464436213 ecr 0,nop,wscale 7], length 0
12:50:02.474216 9a:b1:a4:85:88:7d > e2:3a:9b:ff:1c:35, ethertype IPv4 (0x0800), length 74: 10.0.1.97.80 > 10.0.0.33.47718: Flags [S.], seq 1358194231, ack 2406447396, win 64308, options [mss 1410,sackOK,TS val 1813145307 ecr 2464436213,nop,wscale 7], length 0
12:50:02.474227 e2:3a:9b:ff:1c:35 > 9a:b1:a4:85:88:7d, ethertype IPv4 (0x0800), length 66: 10.0.0.33.47718 > 10.0.1.97.80: Flags [.], ack 1, win 507, options [nop,nop,TS val 2464436214 ecr 1813145307], length 0
12:50:02.474394 e2:3a:9b:ff:1c:35 > 9a:b1:a4:85:88:7d, ethertype IPv4 (0x0800), length 139: 10.0.0.33.47718 > 10.0.1.97.80: Flags [P.], seq 1:74, ack 1, win 507, options [nop,nop,TS val 2464436214 ecr 1813145307], length 73: HTTP: GET / HTTP/1.1
12:50:02.474508 9a:b1:a4:85:88:7d > e2:3a:9b:ff:1c:35, ethertype IPv4 (0x0800), length 66: 10.0.1.97.80 > 10.0.0.33.47718: Flags [.], ack 74, win 502, options [nop,nop,TS val 1813145307 ecr 2464436214], length 0
12:50:02.474674 9a:b1:a4:85:88:7d > e2:3a:9b:ff:1c:35, ethertype IPv4 (0x0800), length 302: 10.0.1.97.80 > 10.0.0.33.47718: Flags [P.], seq 1:237, ack 74, win 502, options [nop,nop,TS val 1813145307 ecr 2464436214], length 236: HTTP: HTTP/1.1 200 OK
12:50:02.474688 e2:3a:9b:ff:1c:35 > 9a:b1:a4:85:88:7d, ethertype IPv4 (0x0800), length 66: 10.0.0.33.47718 > 10.0.1.97.80: Flags [.], ack 237, win 506, options [nop,nop,TS val 2464436214 ecr 1813145307], length 0
12:50:02.474796 9a:b1:a4:85:88:7d > e2:3a:9b:ff:1c:35, ethertype IPv4 (0x0800), length 108: 10.0.1.97.80 > 10.0.0.33.47718: Flags [P.], seq 237:279, ack 74, win 502, options [nop,nop,TS val 1813145307 ecr 2464436214], length 42: HTTP
12:50:02.474799 e2:3a:9b:ff:1c:35 > 9a:b1:a4:85:88:7d, ethertype IPv4 (0x0800), length 66: 10.0.0.33.47718 > 10.0.1.97.80: Flags [.], ack 279, win 506, options [nop,nop,TS val 2464436214 ecr 1813145307], length 0
12:50:02.474994 e2:3a:9b:ff:1c:35 > 9a:b1:a4:85:88:7d, ethertype IPv4 (0x0800), length 66: 10.0.0.33.47718 > 10.0.1.97.80: Flags [F.], seq 74, ack 279, win 506, options [nop,nop,TS val 2464436214 ecr 1813145307], length 0
12:50:02.475706 9a:b1:a4:85:88:7d > e2:3a:9b:ff:1c:35, ethertype IPv4 (0x0800), length 66: 10.0.1.97.80 > 10.0.0.33.47718: Flags [F.], seq 279, ack 75, win 502, options [nop,nop,TS val 1813145308 ecr 2464436214], length 0
12:50:02.475714 e2:3a:9b:ff:1c:35 > 9a:b1:a4:85:88:7d, ethertype IPv4 (0x0800), length 66: 10.0.0.33.47718 > 10.0.1.97.80: Flags [.], ack 280, win 506, options [nop,nop,TS val 2464436215 ecr 1813145308], length 0

## The ARP exchange here shows that the MAC answering for 10.0.0.23 belongs to the Pod's veth-pair peer lxcc940ffbbad39
12:50:07.718099 e2:3a:9b:ff:1c:35 > 9a:b1:a4:85:88:7d, ethertype ARP (0x0806), length 42: Request who-has 10.0.0.23 tell 10.0.0.33, length 28
12:50:07.718178 9a:b1:a4:85:88:7d > e2:3a:9b:ff:1c:35, ethertype ARP (0x0806), length 42: Reply 10.0.0.23 is-at 9a:b1:a4:85:88:7d, length 28

2. Capturing on the node

The subsections below should be read as one end-to-end flow; all of the tcpdump captures in them come from the same request, captured at different points.

2.1. Capture on the node's cilium_vxlan device

This capture is the same request as in 1.3, taken at a different point. Two different src MAC --> dst MAC pairs appear:

  • Control-plane Pod-2 --> lxc: e2:3a:9b:ff:1c:35 > 9a:b1:a4:85:88:7d
  • Worker Pod-1 --> lxc: 0e:02:58:07:78:f7 > 0e:30:23:1e:1c:73

With conventional CNI forwarding logic, the MAC addresses would be rewritten to lxc MAC → cilium_vxlan MAC when the packet moves from lxc to cilium_vxlan. Comparing with the capture in 1.3, however, every outgoing packet has exactly the same TCP seq numbers, which means the packets were handed to cilium_vxlan unmodified. Verified by disassembling the eBPF programs with bpftool, the flow is as follows:

  1. The packet leaves Pod-2, traverses the veth pair to the host-side lxc device, and triggers the eBPF entry program cil_from_container attached to the TC ingress hook
  2. The entry program tail-calls into a sub-program that performs source-IP validation, connection-tracking (CT) lookup, and service (LB) resolution, and then tail-calls again into another sub-program
  3. That sub-program looks up the cilium_tunnel_m eBPF map and learns that the destination Pod 10.0.1.97 belongs to 10.0.1.0/24, whose remote node is 172.18.0.2. It sets the VXLAN tunnel metadata and passes the packet, untouched, to cilium_vxlan via bpf_redirect()
  4. The kernel VXLAN driver behind the cilium_vxlan device reads the tunnel metadata, adds the outer VXLAN headers, matches the host route 172.18.0.0/16 dev eth0 proto kernel scope link src 172.18.0.3, and finally sends the packet out of eth0
  5. The packet reaches port 8472 on the remote node and is decapsulated by that node's cilium_vxlan (8472 is the port the device listens on)
  6. This triggers cil_from_overlay on the TC ingress hook of cilium_vxlan, which extracts the client's identity from the tunnel metadata and then looks up the local endpoint for the destination IP 10.0.1.97
  7. Once the endpoint is found, the eBPF program rewrites the Ethernet header to Pod-1 eth0 MAC → Pod-1 lxc MAC (which is why the MAC pair of the return traffic differs from the forward direction) and delivers the packet to Pod-1's lxc device via bpf_redirect()
root@network-demo:~# docker exec -it cilium-kubeproxy-vxlan-control-plane tcpdump -pnei cilium_vxlan port 80

12:50:02.474015 e2:3a:9b:ff:1c:35 > 9a:b1:a4:85:88:7d, ethertype IPv4 (0x0800), length 74: 10.0.0.33.47718 > 10.0.1.97.80: Flags [S], seq 2406447395, win 64860, options [mss 1410,sackOK,TS val 2464436213 ecr 0,nop,wscale 7], length 0
12:50:02.474199 0e:02:58:07:78:f7 > 0e:30:23:1e:1c:73, ethertype IPv4 (0x0800), length 74: 10.0.1.97.80 > 10.0.0.33.47718: Flags [S.], seq 1358194231, ack 2406447396, win 64308, options [mss 1410,sackOK,TS val 1813145307 ecr 2464436213,nop,wscale 7], length 0
12:50:02.474243 e2:3a:9b:ff:1c:35 > 9a:b1:a4:85:88:7d, ethertype IPv4 (0x0800), length 66: 10.0.0.33.47718 > 10.0.1.97.80: Flags [.], ack 1, win 507, options [nop,nop,TS val 2464436214 ecr 1813145307], length 0
12:50:02.474414 e2:3a:9b:ff:1c:35 > 9a:b1:a4:85:88:7d, ethertype IPv4 (0x0800), length 139: 10.0.0.33.47718 > 10.0.1.97.80: Flags [P.], seq 1:74, ack 1, win 507, options [nop,nop,TS val 2464436214 ecr 1813145307], length 73: HTTP: GET / HTTP/1.1
12:50:02.474495 0e:02:58:07:78:f7 > 0e:30:23:1e:1c:73, ethertype IPv4 (0x0800), length 66: 10.0.1.97.80 > 10.0.0.33.47718: Flags [.], ack 74, win 502, options [nop,nop,TS val 1813145307 ecr 2464436214], length 0
12:50:02.474659 0e:02:58:07:78:f7 > 0e:30:23:1e:1c:73, ethertype IPv4 (0x0800), length 302: 10.0.1.97.80 > 10.0.0.33.47718: Flags [P.], seq 1:237, ack 74, win 502, options [nop,nop,TS val 1813145307 ecr 2464436214], length 236: HTTP: HTTP/1.1 200 OK
12:50:02.474699 e2:3a:9b:ff:1c:35 > 9a:b1:a4:85:88:7d, ethertype IPv4 (0x0800), length 66: 10.0.0.33.47718 > 10.0.1.97.80: Flags [.], ack 237, win 506, options [nop,nop,TS val 2464436214 ecr 1813145307], length 0
12:50:02.474785 0e:02:58:07:78:f7 > 0e:30:23:1e:1c:73, ethertype IPv4 (0x0800), length 108: 10.0.1.97.80 > 10.0.0.33.47718: Flags [P.], seq 237:279, ack 74, win 502, options [nop,nop,TS val 1813145307 ecr 2464436214], length 42: HTTP
12:50:02.474811 e2:3a:9b:ff:1c:35 > 9a:b1:a4:85:88:7d, ethertype IPv4 (0x0800), length 66: 10.0.0.33.47718 > 10.0.1.97.80: Flags [.], ack 279, win 506, options [nop,nop,TS val 2464436214 ecr 1813145307], length 0
12:50:02.475017 e2:3a:9b:ff:1c:35 > 9a:b1:a4:85:88:7d, ethertype IPv4 (0x0800), length 66: 10.0.0.33.47718 > 10.0.1.97.80: Flags [F.], seq 74, ack 279, win 506, options [nop,nop,TS val 2464436214 ecr 1813145307], length 0
12:50:02.475693 0e:02:58:07:78:f7 > 0e:30:23:1e:1c:73, ethertype IPv4 (0x0800), length 66: 10.0.1.97.80 > 10.0.0.33.47718: Flags [F.], seq 279, ack 75, win 502, options [nop,nop,TS val 1813145308 ecr 2464436214], length 0
12:50:02.475732 e2:3a:9b:ff:1c:35 > 9a:b1:a4:85:88:7d, ethertype IPv4 (0x0800), length 66: 10.0.0.33.47718 > 10.0.1.97.80: Flags [.], ack 280, win 506, options [nop,nop,TS val 2464436215 ecr 1813145308], length 0

2.2. Check the eBPF programs attached to cilium_vxlan

The role of these two eBPF programs can be looked up in Cilium's bpf_overlay source:

  • cil_from_overlay: handles the ingress direction. After the kernel decapsulates a VXLAN packet from a remote node, the inner packet appears on cilium_vxlan; this program looks up the local Pod and redirects the packet to the corresponding lxc device
  • cil_to_overlay: handles the egress direction. After an lxc eBPF program redirects a packet to cilium_vxlan, and before the kernel VXLAN driver encapsulates it, this program can perform tracing, monitoring, and similar tasks
root@cilium-kubeproxy-vxlan-control-plane:/# bpftool net show dev cilium_vxlan
  xdp:

  tc:
  cilium_vxlan(6) tcx/ingress cil_from_overlay prog_id 10348 link_id 198
  cilium_vxlan(6) tcx/egress cil_to_overlay prog_id 10347 link_id 199

  flow_dissector:
  
  netfilter:

2.3. Check the eBPF program attached to the lxc device

## Program name: cil_from_container
## Program ID: 10651

root@cilium-kubeproxy-vxlan-control-plane:/# bpftool net show dev lxcc940ffbbad39
xdp:

tc:
lxcc940ffbbad39(10) tcx/ingress cil_from_container prog_id 10651 link_id 227 

flow_dissector:

netfilter:

2.4. Dump the disassembly of the cil_from_container eBPF program

The xlated option dumps the eBPF VM instructions after kernel processing, in a readable form.

root@cilium-kubeproxy-vxlan-control-plane:/# bpftool prog dump xlated id 10651

## the entry program jumps to a sub-program via a tail call
; tail_call_static(ctx, CALLS_MAP, index);
  82: (bf) r1 = r6
  83: (18) r2 = map[id:2360]      # CALLS_MAP
  85: (b7) r3 = 7                 # jump to the program at index 7
  86: (85) call bpf_tail_call#12
  87: (b4) w1 = -140
  88: (b4) w8 = 1792

2.5. Find the IDs and indexes of the sub-programs reachable via tail call

The entry program can dynamically tail-call into several different sub-programs; this map records the IDs of every sub-program that can be jumped to, keyed by the call index.

root@cilium-kubeproxy-vxlan-control-plane:/# bpftool map dump id 2360
key: 01 00 00 00  value: 9f 29 00 00
key: 06 00 00 00  value: 9c 29 00 00
key: 07 00 00 00  value: a2 29 00 00
key: 0d 00 00 00  value: 9e 29 00 00
key: 19 00 00 00  value: 9d 29 00 00
key: 1b 00 00 00  value: a3 29 00 00
key: 1d 00 00 00  value: a6 29 00 00
key: 2f 00 00 00  value: a4 29 00 00
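
The values are program IDs stored little-endian; decoding them (a sketch) yields exactly the ID list looped over in 2.6 below:

# e.g. index 0x07 -> "a2 29 00 00" -> 0x29a2 -> 10658
for v in "9f 29" "9c 29" "a2 29" "9e 29" "9d 29" "a3 29" "a6 29" "a4 29"; do
  set -- $v; printf '%d\n' "0x$2$1"
done
# prints 10655 10652 10658 10654 10653 10659 10662 10660 (one per line)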

2.6. Dump the disassembly of the sub-programs

Convert the sub-program IDs found above from hex to decimal and repeat the step from 2.4: use the xlated option to dump the eBPF VM instructions of each of them.

The comments above bpf_redirect in the bpftool output come from the BTF debug information of the source and show the original C function that calls bpf_redirect. For sub-program 10653, the function encap_and_redirect_with_nodeid corresponds to bpf/lib/encap.h; it does two things:

  1. encap: call ctx_set_encap_info4() to set the VXLAN tunnel metadata (destination node IP, VNI, security identity, and so on)
  2. redirect: ctx_redirect() redirects the packet to the cilium_vxlan device
root@cilium-kubeproxy-vxlan-control-plane:/# for id in 10655 10652 10658 10654 10653 10659 10662 10660; do
  result=$(bpftool prog dump xlated id $id 2>/dev/null | grep -i 'redirect')
  if [ -n "$result" ]; then
    echo "=== prog $id ==="
    echo "$result"
  fi
done

## Output

=== prog 10652 ===
; return redirect(ifindex, flags);
 169: (85) call bpf_redirect#12800944
=== prog 10654 ===
; (ct_state->proxy_redirect && !tc_index_from_egress_proxy(ctx))) {
; (ct_state->proxy_redirect && !tc_index_from_egress_proxy(ctx))) {
; (ct_state->proxy_redirect && !tc_index_from_egress_proxy(ctx))) {
; ct_state_new.proxy_redirect = *proxy_port > 0;
; ct_state_new.proxy_redirect = *proxy_port > 0;
; return redirect(ifindex, flags);
1121: (85) call bpf_redirect#12800944
=== prog 10653 ===
; if (ct_state->proxy_redirect) {
; if (ct_state->proxy_redirect) {
; if (ct_state->proxy_redirect) {
; ct_state_new.proxy_redirect = proxy_port > 0;
; ct_state_new.proxy_redirect = proxy_port > 0;
; entry->proxy_redirect = state->proxy_redirect;
; ct_state_new.proxy_redirect = proxy_port > 0;
; if (unlikely(ct_state->proxy_redirect != ct_state_new.proxy_redirect))
; if (unlikely(ct_state->proxy_redirect != ct_state_new.proxy_redirect))

## supplies the VXLAN encapsulation info (metadata):
; return encap_and_redirect_with_nodeid(ctx, tunnel->ip4, 0, seclabel, dstid,
; return encap_and_redirect_with_nodeid(ctx, tunnel->ip4, 0, seclabel, dstid,
; return encap_and_redirect_with_nodeid(ctx, tunnel->ip4, 0, seclabel, dstid,
; if (ret != CTX_ACT_REDIRECT)
; if (ret != CTX_ACT_REDIRECT)
1187: (85) call bpf_redirect#12800944
=== prog 10659 ===
; state->proxy_redirect = entry->proxy_redirect;
; state->proxy_redirect = entry->proxy_redirect;
=== prog 10662 ===
; state->proxy_redirect = entry->proxy_redirect;
; state->proxy_redirect = entry->proxy_redirect;
=== prog 10660 ===
; return redirect(ifindex, flags);
 194: (85) call bpf_redirect#12800944

2.7. Capture on the node's eth0

When cilium endpoint list was queried above, the IDENTITY of the test Pod was shown as 17110; the capture below uses a byte-offset filter to select only this request.

The capture shows these are simply the packets seen on cilium_vxlan, with one extra layer of VXLAN encapsulation added.
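
The filter constant follows from the identity: the outer Ethernet (14) + IPv4 (20) + UDP (8) headers take 42 bytes, the VXLAN VNI occupies bytes 46-48, and Cilium places the source security identity in the VNI, so ether[46:4] & 0xffffff00 must equal the identity shifted left by one byte (a sketch of the arithmetic):

printf '0x%06x00\n' 17110
# 0x0042d600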

root@network-demo:~# docker exec -it cilium-kubeproxy-vxlan-control-plane tcpdump -i eth0 -pne 'udp port 8472 and ether[46:4] & 0xffffff00 = 0x0042d600'

12:50:02.474046 26:0c:2b:e0:c0:6a > da:0f:1f:a1:dc:df, ethertype IPv4 (0x0800), length 124: 172.18.0.3.49391 > 172.18.0.2.8472: OTV, flags [I] (0x08), overlay 0, instance 17110
e2:3a:9b:ff:1c:35 > 9a:b1:a4:85:88:7d, ethertype IPv4 (0x0800), length 74: 10.0.0.33.47718 > 10.0.1.97.80: Flags [S], seq 2406447395, win 64860, options [mss 1410,sackOK,TS val 2464436213 ecr 0,nop,wscale 7], length 0
12:50:02.474182 da:0f:1f:a1:dc:df > 26:0c:2b:e0:c0:6a, ethertype IPv4 (0x0800), length 124: 172.18.0.2.36779 > 172.18.0.3.8472: OTV, flags [I] (0x08), overlay 0, instance 17110
0e:02:58:07:78:f7 > 0e:30:23:1e:1c:73, ethertype IPv4 (0x0800), length 74: 10.0.1.97.80 > 10.0.0.33.47718: Flags [S.], seq 1358194231, ack 2406447396, win 64308, options [mss 1410,sackOK,TS val 1813145307 ecr 2464436213,nop,wscale 7], length 0
12:50:02.474248 26:0c:2b:e0:c0:6a > da:0f:1f:a1:dc:df, ethertype IPv4 (0x0800), length 116: 172.18.0.3.49391 > 172.18.0.2.8472: OTV, flags [I] (0x08), overlay 0, instance 17110
e2:3a:9b:ff:1c:35 > 9a:b1:a4:85:88:7d, ethertype IPv4 (0x0800), length 66: 10.0.0.33.47718 > 10.0.1.97.80: Flags [.], ack 1, win 507, options [nop,nop,TS val 2464436214 ecr 1813145307], length 0
12:50:02.474421 26:0c:2b:e0:c0:6a > da:0f:1f:a1:dc:df, ethertype IPv4 (0x0800), length 189: 172.18.0.3.49391 > 172.18.0.2.8472: OTV, flags [I] (0x08), overlay 0, instance 17110
e2:3a:9b:ff:1c:35 > 9a:b1:a4:85:88:7d, ethertype IPv4 (0x0800), length 139: 10.0.0.33.47718 > 10.0.1.97.80: Flags [P.], seq 1:74, ack 1, win 507, options [nop,nop,TS val 2464436214 ecr 1813145307], length 73: HTTP: GET / HTTP/1.1
12:50:02.474484 da:0f:1f:a1:dc:df > 26:0c:2b:e0:c0:6a, ethertype IPv4 (0x0800), length 116: 172.18.0.2.36779 > 172.18.0.3.8472: OTV, flags [I] (0x08), overlay 0, instance 17110
0e:02:58:07:78:f7 > 0e:30:23:1e:1c:73, ethertype IPv4 (0x0800), length 66: 10.0.1.97.80 > 10.0.0.33.47718: Flags [.], ack 74, win 502, options [nop,nop,TS val 1813145307 ecr 2464436214], length 0
12:50:02.474645 da:0f:1f:a1:dc:df > 26:0c:2b:e0:c0:6a, ethertype IPv4 (0x0800), length 352: 172.18.0.2.36779 > 172.18.0.3.8472: OTV, flags [I] (0x08), overlay 0, instance 17110
0e:02:58:07:78:f7 > 0e:30:23:1e:1c:73, ethertype IPv4 (0x0800), length 302: 10.0.1.97.80 > 10.0.0.33.47718: Flags [P.], seq 1:237, ack 74, win 502, options [nop,nop,TS val 1813145307 ecr 2464436214], length 236: HTTP: HTTP/1.1 200 OK
12:50:02.474705 26:0c:2b:e0:c0:6a > da:0f:1f:a1:dc:df, ethertype IPv4 (0x0800), length 116: 172.18.0.3.49391 > 172.18.0.2.8472: OTV, flags [I] (0x08), overlay 0, instance 17110
e2:3a:9b:ff:1c:35 > 9a:b1:a4:85:88:7d, ethertype IPv4 (0x0800), length 66: 10.0.0.33.47718 > 10.0.1.97.80: Flags [.], ack 237, win 506, options [nop,nop,TS val 2464436214 ecr 1813145307], length 0
12:50:02.474779 da:0f:1f:a1:dc:df > 26:0c:2b:e0:c0:6a, ethertype IPv4 (0x0800), length 158: 172.18.0.2.36779 > 172.18.0.3.8472: OTV, flags [I] (0x08), overlay 0, instance 17110
0e:02:58:07:78:f7 > 0e:30:23:1e:1c:73, ethertype IPv4 (0x0800), length 108: 10.0.1.97.80 > 10.0.0.33.47718: Flags [P.], seq 237:279, ack 74, win 502, options [nop,nop,TS val 1813145307 ecr 2464436214], length 42: HTTP
12:50:02.474816 26:0c:2b:e0:c0:6a > da:0f:1f:a1:dc:df, ethertype IPv4 (0x0800), length 116: 172.18.0.3.49391 > 172.18.0.2.8472: OTV, flags [I] (0x08), overlay 0, instance 17110
e2:3a:9b:ff:1c:35 > 9a:b1:a4:85:88:7d, ethertype IPv4 (0x0800), length 66: 10.0.0.33.47718 > 10.0.1.97.80: Flags [.], ack 279, win 506, options [nop,nop,TS val 2464436214 ecr 1813145307], length 0
12:50:02.475026 26:0c:2b:e0:c0:6a > da:0f:1f:a1:dc:df, ethertype IPv4 (0x0800), length 116: 172.18.0.3.49391 > 172.18.0.2.8472: OTV, flags [I] (0x08), overlay 0, instance 17110
e2:3a:9b:ff:1c:35 > 9a:b1:a4:85:88:7d, ethertype IPv4 (0x0800), length 66: 10.0.0.33.47718 > 10.0.1.97.80: Flags [F.], seq 74, ack 279, win 506, options [nop,nop,TS val 2464436214 ecr 1813145307], length 0
12:50:02.475680 da:0f:1f:a1:dc:df > 26:0c:2b:e0:c0:6a, ethertype IPv4 (0x0800), length 116: 172.18.0.2.36779 > 172.18.0.3.8472: OTV, flags [I] (0x08), overlay 0, instance 17110
0e:02:58:07:78:f7 > 0e:30:23:1e:1c:73, ethertype IPv4 (0x0800), length 66: 10.0.1.97.80 > 10.0.0.33.47718: Flags [F.], seq 279, ack 75, win 502, options [nop,nop,TS val 1813145308 ecr 2464436214], length 0
12:50:02.475738 26:0c:2b:e0:c0:6a > da:0f:1f:a1:dc:df, ethertype IPv4 (0x0800), length 116: 172.18.0.3.49391 > 172.18.0.2.8472: OTV, flags [I] (0x08), overlay 0, instance 17110
e2:3a:9b:ff:1c:35 > 9a:b1:a4:85:88:7d, ethertype IPv4 (0x0800), length 66: 10.0.0.33.47718 > 10.0.1.97.80: Flags [.], ack 280, win 506, options [nop,nop,TS val 2464436215 ecr 1813145308], length 0

2.8. Check the cilium monitor output

cilium monitor shows that after the request leaves via the veth-pair device lxcc940ffbbad39, it goes straight to the cilium_vxlan device, without passing through cilium_host, consistent with the description above:

root@network-demo:~# kubectl exec -it -n kube-system cilium-j8sb5 -- cilium monitor --type trace -v --from 467

<- endpoint 467 flow 0x867f10c1 , identity 17110->unknown state unknown ifindex 0 orig-ip 0.0.0.0: 10.0.0.33:47718 -> 10.0.1.97:80 tcp SYN
-> overlay flow 0x867f10c1 , identity 17110->17110 state new ifindex cilium_vxlan orig-ip 0.0.0.0: 10.0.0.33:47718 -> 10.0.1.97:80 tcp SYN
-> endpoint 467 flow 0x6004445b , identity 17110->17110 state reply ifindex lxcc940ffbbad39 orig-ip 10.0.1.97: 10.0.1.97:80 -> 10.0.0.33:47718 tcp SYN, ACK
<- endpoint 467 flow 0x867f10c1 , identity 17110->unknown state unknown ifindex 0 orig-ip 0.0.0.0: 10.0.0.33:47718 -> 10.0.1.97:80 tcp ACK
-> overlay flow 0x867f10c1 , identity 17110->17110 state established ifindex cilium_vxlan orig-ip 0.0.0.0: 10.0.0.33:47718 -> 10.0.1.97:80 tcp ACK
<- endpoint 467 flow 0x867f10c1 , identity 17110->unknown state unknown ifindex 0 orig-ip 0.0.0.0: 10.0.0.33:47718 -> 10.0.1.97:80 tcp ACK
-> overlay flow 0x867f10c1 , identity 17110->17110 state established ifindex cilium_vxlan orig-ip 0.0.0.0: 10.0.0.33:47718 -> 10.0.1.97:80 tcp ACK
-> endpoint 467 flow 0x6004445b , identity 17110->17110 state reply ifindex lxcc940ffbbad39 orig-ip 10.0.1.97: 10.0.1.97:80 -> 10.0.0.33:47718 tcp ACK
-> endpoint 467 flow 0x6004445b , identity 17110->17110 state reply ifindex lxcc940ffbbad39 orig-ip 10.0.1.97: 10.0.1.97:80 -> 10.0.0.33:47718 tcp ACK
<- endpoint 467 flow 0x867f10c1 , identity 17110->unknown state unknown ifindex 0 orig-ip 0.0.0.0: 10.0.0.33:47718 -> 10.0.1.97:80 tcp ACK
-> overlay flow 0x867f10c1 , identity 17110->17110 state established ifindex cilium_vxlan orig-ip 0.0.0.0: 10.0.0.33:47718 -> 10.0.1.97:80 tcp ACK
-> endpoint 467 flow 0x6004445b , identity 17110->17110 state reply ifindex lxcc940ffbbad39 orig-ip 10.0.1.97: 10.0.1.97:80 -> 10.0.0.33:47718 tcp ACK
<- endpoint 467 flow 0x867f10c1 , identity 17110->unknown state unknown ifindex 0 orig-ip 0.0.0.0: 10.0.0.33:47718 -> 10.0.1.97:80 tcp ACK
-> overlay flow 0x867f10c1 , identity 17110->17110 state established ifindex cilium_vxlan orig-ip 0.0.0.0: 10.0.0.33:47718 -> 10.0.1.97:80 tcp ACK
<- endpoint 467 flow 0x867f10c1 , identity 17110->unknown state unknown ifindex 0 orig-ip 0.0.0.0: 10.0.0.33:47718 -> 10.0.1.97:80 tcp ACK, FIN
-> overlay flow 0x867f10c1 , identity 17110->17110 state established ifindex cilium_vxlan orig-ip 0.0.0.0: 10.0.0.33:47718 -> 10.0.1.97:80 tcp ACK, FIN
-> endpoint 467 flow 0x6004445b , identity 17110->17110 state reply ifindex lxcc940ffbbad39 orig-ip 10.0.1.97: 10.0.1.97:80 -> 10.0.0.33:47718 tcp ACK, FIN
<- endpoint 467 flow 0x867f10c1 , identity 17110->unknown state unknown ifindex 0 orig-ip 0.0.0.0: 10.0.0.33:47718 -> 10.0.1.97:80 tcp ACK
-> overlay flow 0x867f10c1 , identity 17110->17110 state established ifindex cilium_vxlan orig-ip 0.0.0.0: 10.0.0.33:47718 -> 10.0.1.97:80 tcp ACK