Issues encountered with Istio
Problem 1: URLs not parsed in the access logs behind the ALB Ingress; many fields show "-"
Check the Service port name:
# resource: Service
apiVersion: v1
kind: Service
metadata:
  labels:
    app: test-releasegatekeeper
  name: test-releasegatekeeper
  namespace: ns
spec:
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: http # this name must be http
    port: 9900
    protocol: TCP
    targetPort: 8000
  selector:
    app: test-releasegatekeeper
  sessionAffinity: None
  type: ClusterIP
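If the port was originally named something else (e.g. tcp), it can be renamed in place without editing the full manifest; a minimal sketch, assuming the port to rename is the first entry in spec.ports:
kubectl -n ns patch svc test-releasegatekeeper --type=json \
  -p='[{"op":"replace","path":"/spec/ports/0/name","value":"http"}]'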
Logs before the change:
kubectl -n ns logs -l app=test-releasegatekeeper -c istio-proxy --tail=50 -f
[2026-01-09T11:36:09.267Z] "status=0 response_flags=- method=- url=- protocol=-" req-id=- traceid=- downstream_remote_address=10.0.0.0:36706 upstream_host=98.96.242.53:443 duration=721ms routename=-
Logs after the change (no service restart is needed; just wait for the log output to change):
kubectl -n ns logs -l app=test-releasegatekeeper -c istio-proxy --tail=50 -f
[2026-01-09T11:36:09.261Z] "status=200 response_flags=- method=POST url=/webhook protocol=HTTP/1.1" req-id=c3f186ea7c7ea9dd3b41802c7d50a7af traceid=c3f186ea7c7ea9dd3b41802c7d50a7af downstream_remote_address=47.94.150.88:0 upstream_host=10.0.0.0:8000 duration=670ms routename=default
- Custom log format
kubectl -n istio-system get cm istio -o yaml
apiVersion: v1
data:
  mesh: |-
    extensionProviders:
    - name: envoy-accesslog-stdout
      envoyFileAccessLog:
        path: /dev/stdout
        logFormat:
          text: |
            [%START_TIME%] "status=%RESPONSE_CODE% response_flags=%RESPONSE_FLAGS% method=%REQ(:METHOD)% url=%REQ(X-ENVOY-ORIGINAL-PATH?:PATH)% protocol=%PROTOCOL%" req-id=%REQ(req-id)% traceid=%REQ(X-REQUEST-ID)% downstream_remote_address=%DOWNSTREAM_REMOTE_ADDRESS% upstream_host=%UPSTREAM_HOST% duration=%DURATION%ms routename=%ROUTE_NAME%
    defaultConfig:
      discoveryAddress: istiod.istio-system.svc:15012
      tracing:
        zipkin:
          address: zipkin.istio-system:9411
    accessLogFile: /dev/stdout
    accessLogFormat: |
      [%START_TIME%] "%REQ(:METHOD)% %REQ(X-ENVOY-ORIGINAL-PATH?:PATH)% %PROTOCOL%" %RESPONSE_CODE% %RESPONSE_FLAGS% traceid=%REQ(X-B3-TRACEID)% src=%DOWNSTREAM_REMOTE_ADDRESS% upstream=%UPSTREAM_HOST% dur=%DURATION%ms
    defaultProviders:
      metrics:
      - prometheus
    enablePrometheusMerge: true
    rootNamespace: istio-system
    trustDomain: cluster.local
  meshNetworks: 'networks: {}'
kind: ConfigMap
metadata:
  annotations:
    meta.helm.sh/release-name: istiod
    meta.helm.sh/release-namespace: istio-system
  creationTimestamp: "2026-01-05T08:29:36Z"
  labels:
    app.kubernetes.io/managed-by: Helm
    helm.toolkit.fluxcd.io/name: istiod
    helm.toolkit.fluxcd.io/namespace: istio-system
    install.operator.istio.io/owning-resource: unknown
    istio.io/rev: default
    operator.istio.io/component: Pilot
    release: istiod
  name: istio
  namespace: istio-system
  resourceVersion: "258243540"
  uid: 4d1cac31-5fbb-4df8-95c0-ff5eda438176
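The extensionProviders entry above can be added by editing the mesh ConfigMap in place; a minimal sketch (per the observation above, the sidecars pick up the change without a restart):
kubectl -n istio-system edit cm istio
# add the extensionProviders block under data.mesh, then tail the istio-proxy logs to confirm the new format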
Problem 2: A custom Nginx Ingress configuration-snippet annotation never takes effect
- Error message:
admission webhook "validate.nginx.ingress.kubernetes.io" denied the request
configuration-snippet annotation cannot be used
- Fix: add allow-snippet-annotations: "true" to the controller ConfigMap; it takes effect without restarting nginx-ingress-controller (see the sketch below):
allow-snippet-annotations: "true"
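A minimal sketch of the fix, assuming the controller ConfigMap is named ingress-nginx-controller in the ingress-nginx namespace (names vary per installation; on newer controller versions the annotations-risk-level setting may also need to be raised):
kubectl -n ingress-nginx patch cm ingress-nginx-controller \
  --type merge -p '{"data":{"allow-snippet-annotations":"true"}}'
# a hypothetical Ingress annotation that the admission webhook previously rejected
metadata:
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      more_set_headers "Request-Id: $req_id";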
Problem 3: Testing sidecar injection at the Deployment level
- Blocker:
- the namespace must not carry the istio-injection label (remove it if present):
kubectl label ns us-backend istio-injection-
- Enable the sidecar:
kubectl -n ns patch deploy test-releasegatekeeper -p '{"spec":{"template":{"metadata":{"labels":{"istio.io/rev":"default"},"annotations":{"sidecar.istio.io/inject":"true"}}}}}'
- Check:
kubectl -n ns get pods -l app=test-releasegatekeeper -o json | jq -r '.items[].spec.containers[].name' | sort -u
istio-proxy
test-releasegatekeeper
- The Deployment must be restarted:
kubectl -n ns rollout restart deploy/test-releasegatekeeper
# check
kubectl -n ns rollout status deploy/test-releasegatekeeper
# once up, READY shows 2/2
kubectl -n $NS get pod $POD --show-labels
NAME READY STATUS RESTARTS AGE LABELS
test-releasegatekeeper-x8jcr 2/2 Running 0 3h8m app=test-releasegatekeeper,pod-template-hash=6bfcdd7b7d,security.istio.io/tlsMode=istio,service.istio.io/canonical-name=test-releasegatekeeper,service.istio.io/canonical-revision=latest,sidecar.istio.io/inject=true
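If istioctl is available, injection can also be verified from the control-plane side; a sketch:
istioctl proxy-status | grep test-releasegatekeeper
# the pod should be listed with SYNCED status for CDS/LDS/EDS/RDS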
Problem 4: The log format template is not being referenced
The Service port name must not be tcp; http is preferred.
- Create the Telemetry YAML for sidecar access log collection:
cat telemetry-accesslog-v1.yaml
apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: accesslog-releasegatekeeper
  namespace: us-backend
spec:
  selector:
    matchLabels:
      app: test-releasegatekeeper
  accessLogging:
  - providers:
    - name: envoy-accesslog-stdout # must match the provider name shown in kubectl -n istio-system get cm istio -o yaml, otherwise the template is not referenced
    filter:
      expression: "true"
    disabled: false
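Apply and verify the Telemetry resource, then tail the istio-proxy logs again to confirm the custom format is used; a minimal sketch:
kubectl apply -f telemetry-accesslog-v1.yaml
kubectl -n us-backend get telemetry accesslog-releasegatekeeper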
Problem 5: After injecting the sidecar, istio-init fails
- The error looks like this:
kubectl -n us logs test-server-76d969c565-2mdmv -c istio-init
2026-01-12T09:40:03.307268Z info Istio iptables environment:
ENVOY_PORT=
INBOUND_CAPTURE_PORT=
ISTIO_INBOUND_INTERCEPTION_MODE=
ISTIO_INBOUND_TPROXY_ROUTE_TABLE=
ISTIO_INBOUND_PORTS=
ISTIO_OUTBOUND_PORTS=
ISTIO_LOCAL_EXCLUDE_PORTS=
ISTIO_EXCLUDE_INTERFACES=
ISTIO_SERVICE_CIDR=
ISTIO_SERVICE_EXCLUDE_CIDR=
ISTIO_META_DNS_CAPTURE=
INVALID_DROP=
2026-01-12T09:40:03.307313Z info Istio iptables variables:
IPTABLES_VERSION=
PROXY_PORT=15001
PROXY_INBOUND_CAPTURE_PORT=15006
PROXY_TUNNEL_PORT=15008
PROXY_UID=1337
PROXY_GID=1337
INBOUND_INTERCEPTION_MODE=REDIRECT
INBOUND_TPROXY_MARK=1337
INBOUND_TPROXY_ROUTE_TABLE=133
INBOUND_PORTS_INCLUDE=*
INBOUND_PORTS_EXCLUDE=15090,15021,15020
OUTBOUND_OWNER_GROUPS_INCLUDE=*
OUTBOUND_OWNER_GROUPS_EXCLUDE=
OUTBOUND_IP_RANGES_INCLUDE=*
OUTBOUND_IP_RANGES_EXCLUDE=
OUTBOUND_PORTS_INCLUDE=
OUTBOUND_PORTS_EXCLUDE=
KUBE_VIRT_INTERFACES=
ENABLE_INBOUND_IPV6=false
DUAL_STACK=false
DNS_CAPTURE=false
DROP_INVALID=false
CAPTURE_ALL_DNS=false
DNS_SERVERS=[],[]
NETWORK_NAMESPACE=
CNI_MODE=false
EXCLUDE_INTERFACES=
2026-01-12T09:40:03.307352Z info Running iptables-restore with the following input:
* nat
-N ISTIO_INBOUND
-N ISTIO_REDIRECT
-N ISTIO_IN_REDIRECT
-N ISTIO_OUTPUT
-A ISTIO_INBOUND -p tcp --dport 15008 -j RETURN
-A ISTIO_REDIRECT -p tcp -j REDIRECT --to-ports 15001
-A ISTIO_IN_REDIRECT -p tcp -j REDIRECT --to-ports 15006
-A PREROUTING -p tcp -j ISTIO_INBOUND
-A ISTIO_INBOUND -p tcp --dport 15090 -j RETURN
-A ISTIO_INBOUND -p tcp --dport 15021 -j RETURN
-A ISTIO_INBOUND -p tcp --dport 15020 -j RETURN
-A ISTIO_INBOUND -p tcp -j ISTIO_IN_REDIRECT
-A OUTPUT -p tcp -j ISTIO_OUTPUT
-A ISTIO_OUTPUT -o lo -s 127.0.0.6/32 -j RETURN
-A ISTIO_OUTPUT -o lo ! -d 127.0.0.1/32 -p tcp ! --dport 15008 -m owner --uid-owner 1337 -j ISTIO_IN_REDIRECT
-A ISTIO_OUTPUT -o lo -m owner ! --uid-owner 1337 -j RETURN
-A ISTIO_OUTPUT -m owner --uid-owner 1337 -j RETURN
-A ISTIO_OUTPUT -o lo ! -d 127.0.0.1/32 -p tcp ! --dport 15008 -m owner --gid-owner 1337 -j ISTIO_IN_REDIRECT
-A ISTIO_OUTPUT -o lo -m owner ! --gid-owner 1337 -j RETURN
-A ISTIO_OUTPUT -m owner --gid-owner 1337 -j RETURN
-A ISTIO_OUTPUT -d 127.0.0.1/32 -j RETURN
-A ISTIO_OUTPUT -j ISTIO_REDIRECT
COMMIT
2026-01-12T09:40:03.307389Z info Running command (with wait lock): iptables-restore --noflush --wait=30
2026-01-12T09:40:03.308595Z error Command error output: xtables parameter problem: iptables-restore: unable to initialize table 'nat'
Error occurred at line: 1
Try `iptables-restore -h' or 'iptables-restore --help' for more information.
2026-01-12T09:40:03.308626Z info Running command (without lock): iptables-save
2026-01-12T09:40:03.309591Z error exit status 2
- Analysis: the NAT-related kernel modules are not enabled/loaded on the node.
Check on the node where the failing Pod is running:
# find which node the Pod is on
kubectl -n us-backend get pod test-server-76d969c565-2mdmv -o wide
# log in to that node and run
lsmod | egrep 'iptable_nat|nf_nat|xt_nat'
cat /proc/net/ip_tables_names
- If nat is missing from cat /proc/net/ip_tables_names, the NAT table is unavailable at the node level.
- The fix is usually to load the missing modules:
modprobe iptable_nat
modprobe nf_nat
modprobe xt_nat
- After the fix:
lsmod | egrep 'iptable_nat|nf_nat|xt_nat'
xt_nat 16384 0
iptable_nat 16384 1
ip_tables 36864 1 iptable_nat
nf_nat 57344 5 xt_nat,nft_chain_nat,iptable_nat,xt_MASQUERADE,xt_REDIRECT
nf_conntrack 180224 8 xt_conntrack,nf_nat,xt_nat,nf_conntrack_netlink,xt_CT,xt_MASQUERADE,ip_vs,xt_REDIRECT
libcrc32c 16384 4 nf_conntrack,nf_nat,nf_tables,ip_vs
cat /proc/net/ip_tables_names
nat
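modprobe only loads the modules until the next reboot; to make them persistent, a sketch using systemd modules-load (the file name is an assumption):
cat >/etc/modules-load.d/istio-nat.conf <<'EOF'
iptable_nat
nf_nat
xt_nat
EOF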
