16. Ingress

In Kubernetes, an Ingress exposes services running inside the cluster to external users. It is more flexible than NodePort or LoadBalancer Services, and is particularly well suited to managing large numbers of services and path-based routing.

What Ingress Does

  1. Exposing services externally:
    Ingress lets you expose in-cluster Services to external users without allocating a NodePort or LoadBalancer for each one.

  2. Path- and host-based routing:
    Ingress can route requests to different Services based on the HTTP path or the Host header. For example:

  • /app1 → Service A
  • /app2 → Service B
  • foo.example.com → Service C

  3. Integrated service discovery:
    The Ingress Controller automatically watches for changes to Services in the cluster and updates its routing configuration dynamically, so there is no need to edit Nginx or HAProxy configs by hand.
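The routing patterns above can be sketched in a single Ingress. This is a minimal illustration only — the Service names service-a/b/c and the host foo.example.com are placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fanout-example
spec:
  rules:
  - http:                      # no host: applies to all inbound HTTP traffic
      paths:
      - path: /app1
        pathType: Prefix
        backend:
          service:
            name: service-a    # placeholder Service
            port:
              number: 80
      - path: /app2
        pathType: Prefix
        backend:
          service:
            name: service-b
            port:
              number: 80
  - host: foo.example.com      # host-based routing
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service-c
            port:
              number: 80
```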

Core Components of Ingress

1. Ingress Controller

  • Acts as the listener that forwards external requests to internal services.
  • Common implementations:
    • NGINX Ingress Controller
    • Traefik
    • HAProxy
  • Responsibilities:
    • Watch the API Server for changes to Ingress resources
    • Dynamically update the reverse-proxy configuration
    • Support path routing, host routing, and TLS

2. The Ingress Resource Object

Ingress is a built-in Kubernetes object used to define access rules. Its core fields:

Field              Purpose
rules              List of routing rules, each with host, path, and backend
defaultBackend     Default backend, used when a request matches no rule
ingressClassName   The IngressClass to use (i.e. which Ingress Controller handles this Ingress)
tls                HTTPS certificate configuration
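A minimal Ingress touching all four fields might look like the following sketch (all names — demo-svc, fallback-svc, demo-tls, demo.example.com — are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo
spec:
  ingressClassName: nginx        # which IngressClass / controller handles this Ingress
  defaultBackend:                # used when no rule matches
    service:
      name: fallback-svc
      port:
        number: 80
  tls:                           # HTTPS certificate configuration
  - hosts:
    - demo.example.com
    secretName: demo-tls
  rules:                         # routing rules: host, path, backend
  - host: demo.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: demo-svc
            port:
              number: 80
```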

3. Routing Rules (Rules)

Each rule can configure:

  • host (optional)

    • If omitted, the rule applies to all inbound HTTP traffic regardless of host
    • Supports exact matches or wildcards (e.g. *.example.com)
  • http.paths

    • Defines the list of paths
    • Each path configures a backend that points to a Service or a Resource
  • backend

    • Specifies the target that requests are forwarded to
    • It can be:
      • a Service (most common)
      • a Resource (a Kubernetes custom resource, e.g. an object-storage bucket)

4. pathType

Every path must declare a match type:

Type                     Description
Exact                    Matches the URL path exactly, case-sensitively
Prefix                   Prefix match, evaluated element by element on "/"-separated path segments
ImplementationSpecific   Matching semantics are left to the Ingress Controller

Matching rules:

  • If multiple paths match a request → the longest path wins
  • If lengths are equal → Exact takes precedence over Prefix
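For example, given the following paths (the Service names svc-app, svc-v2, and svc-exact are placeholders), a request for /app/v2 goes to svc-exact because Exact beats Prefix at equal length, while /app/v2/api goes to svc-v2 as the longest matching prefix:

```yaml
paths:
- path: /app
  pathType: Prefix            # matches /app, /app/v2, /app/v2/api, ...
  backend:
    service: {name: svc-app, port: {number: 80}}
- path: /app/v2
  pathType: Prefix            # longest matching prefix for /app/v2/api
  backend:
    service: {name: svc-v2, port: {number: 80}}
- path: /app/v2
  pathType: Exact             # wins for exactly /app/v2
  backend:
    service: {name: svc-exact, port: {number: 80}}
```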

5. IngressClass

Concepts:

  • Ingress = "the rules and paths for exposing this service"

  • Ingress Controller = "the component that turns those rules into actual traffic forwarding"

  • IngressClass = "which Ingress Controller this rule should be handed to"

  • Introduced in Kubernetes 1.18+ to select an Ingress Controller

  • Example:

apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: external-lb
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true" # optional: mark as the default class
spec:
  controller: nginx-ingress-controller
  • Referenced from an Ingress:
spec:
  ingressClassName: external-lb

TLS

  • An Ingress serves HTTPS by taking the certificate and private key from a TLS Secret
  • A Kubernetes Ingress supports only one TLS port: 443
  • Multiple hosts can share that one port:
    • via the SNI (Server Name Indication) TLS extension
    • on the same port, the certificate is selected per requested host name
    • prerequisite: the Ingress Controller supports SNI (most mainstream controllers do)
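With SNI, a single Ingress can therefore serve several hosts on port 443, each with its own certificate — a sketch with placeholder hosts and Secret names:

```yaml
spec:
  tls:
  - hosts:
    - a.example.com
    secretName: a-tls        # certificate for a.example.com
  - hosts:
    - b.example.com
    secretName: b-tls        # certificate for b.example.com
```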

Creating the TLS Secret:

apiVersion: v1
kind: Secret
metadata:
  name: testsecret-tls
  namespace: default
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>
  tls.key: <base64-encoded private key>
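Instead of base64-encoding the files by hand, the same Secret can be created directly from the certificate and key files with kubectl (the file paths are placeholders):

```shell
# kubectl base64-encodes the files and sets type kubernetes.io/tls automatically
kubectl create secret tls testsecret-tls \
  --cert=path/to/tls.crt \
  --key=path/to/tls.key \
  --namespace=default
```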

Referencing the TLS Secret from an Ingress:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tls-example-ingress
spec:
  tls:
  - hosts:
    - https-example.foo.com
    secretName: testsecret-tls
  rules:
  - host: https-example.foo.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service1
            port:
              number: 80

Notes:

  • tls.hosts: the domains that should be served over HTTPS
  • secretName: the TLS Secret created above
  • rules.host must match one of the TLS hosts
  • paths.backend.service points at the actual Service

Using Ingress

An Ingress resource is just a routing-rule definition; for it to take effect you need a matching Ingress controller. This section uses ingress-nginx, the most widely used Nginx-based Ingress controller.

How It Works

  1. Ingress → Nginx configuration file
  • The ingress-nginx controller translates the cluster's Ingress, Service, Endpoints, Secret, ConfigMap, and related resources into an nginx.conf file.
  • Whenever any of these resources changes, the controller rebuilds its configuration model and decides whether the Nginx configuration needs to be updated.
  2. Optimized Nginx reloads
  • Not every resource change requires reloading Nginx.
  • Endpoint updates are handled by a Lua module, avoiding unnecessary reloads and improving performance.
  3. Changes that do trigger an Nginx configuration update:
  • A new Ingress is created
  • TLS is added to an existing Ingress
  • A path is added or removed
  • An Ingress, Service, or Secret is deleted or updated
  • A previously missing Service or Secret becomes available

In large clusters, frequent Nginx reloads hurt performance, so unnecessary configuration updates should be kept to a minimum.

Installation

# find the deploy yaml matching your version
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.5.1/deploy/static/provider/cloud/deploy.yaml
# or install with helm (the dashboard was installed this way earlier; no example here)
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --create-namespace \
  --set controller.service.type=LoadBalancer

Inspect the installation:

# list the controller pods
ubuntu@ubuntu:~/example/dns$ kubectl get pods -n ingress-nginx
NAME                                        READY   STATUS    RESTARTS        AGE
ingress-nginx-controller-6f449f6b9d-qw7s8   1/1     Running   5 (6h18m ago)   8d
# list the Services
# ingress-nginx-controller: handles Ingress traffic
# ingress-nginx-controller-admission: admission webhook that blocks invalid Ingress objects
ubuntu@ubuntu:~/example/dns$ kubectl get svc -n ingress-nginx
NAME                                 TYPE           CLUSTER-IP       EXTERNAL-IP       PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.104.99.152    192.168.236.200   80:31694/TCP,443:31176/TCP   8d
ingress-nginx-controller-admission   ClusterIP      10.105.200.153   <none>            443/TCP                      8d
# show the default IngressClass
ubuntu@ubuntu:~/example/dns$ kubectl get ingressclass
NAME    CONTROLLER             PARAMETERS   AGE
nginx   k8s.io/ingress-nginx   <none>       8d

ubuntu@ubuntu:~/example/dns$ kubectl describe ingressclass nginx
Name:         nginx
Labels:       app.kubernetes.io/component=controller
              app.kubernetes.io/instance=ingress-nginx
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=ingress-nginx
              app.kubernetes.io/part-of=ingress-nginx
              app.kubernetes.io/version=1.14.0
              helm.sh/chart=ingress-nginx-4.14.0
Annotations:  meta.helm.sh/release-name: ingress-nginx
              meta.helm.sh/release-namespace: ingress-nginx
Controller:   k8s.io/ingress-nginx
Events:       <none>
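If this class should be the cluster default (so that Ingresses without an ingressClassName are still picked up), the same annotation shown in the IngressClass example can be applied in place:

```shell
kubectl annotate ingressclass nginx \
  ingressclass.kubernetes.io/is-default-class="true"
```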

Ingress Example

  1. Create a Deployment and Service
# my-nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      app: my-nginx
  template:
    metadata:
      labels:
        app: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  namespace: default
spec:
  selector:
    app: my-nginx
  type: LoadBalancer
  ports:
    - port: 80         # port the Service exposes to the Ingress
      protocol: TCP
      name: http
  2. Create an Ingress
# my-nginx-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-nginx
  namespace: default
spec:
  ingressClassName: nginx   # handled by the ingress-nginx controller
  rules:
  - host: my-nginx.192.168.236.200.sslip.io  # sslip.io resolves this name to the node IP
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-nginx
            port:
              number: 80

Verify:

# deploy the workload
ubuntu@ubuntu:~/example/ingress$ kubectl apply -f ./my-nginx.yaml 
deployment.apps/my-nginx created
service/my-nginx created
ubuntu@ubuntu:~/example/ingress$ kubectl get pods
NAME                       READY   STATUS    RESTARTS   AGE
my-nginx-5bf844d4c-bbcs9   1/1     Running   0          10s
ubuntu@ubuntu:~/example/ingress$ kubectl get svc -o wide
NAME         TYPE           CLUSTER-IP    EXTERNAL-IP       PORT(S)        AGE   SELECTOR
kubernetes   ClusterIP      10.96.0.1     <none>            443/TCP        77d   <none>
my-nginx     LoadBalancer   10.99.55.43   192.168.236.201   80:31164/TCP   23s   app=my-nginx

# deploy the ingress
ubuntu@ubuntu:~/example/ingress$ kubectl apply -f ./my-nginx-ingress.yaml 
ingress.networking.k8s.io/my-nginx created

# open my-nginx.192.168.236.200.sslip.io directly in a browser

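The browser check can also be done from the command line with curl, assuming the sslip.io name resolves to the LoadBalancer IP shown above:

```shell
# expect the default nginx welcome page served via the Ingress
curl -s http://my-nginx.192.168.236.200.sslip.io | head -n 4
# or bypass DNS and send the Host header straight to the controller IP
curl -s -H "Host: my-nginx.192.168.236.200.sslip.io" http://192.168.236.200 | head -n 4
```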

# As noted above, the core job of the ingress-nginx controller is to translate Ingress resources into the Nginx config file nginx.conf; we can confirm this by inspecting the file inside the controller pod:
ubuntu@ubuntu:~/example/ingress$ kubectl get pods -n ingress-nginx
NAME                                        READY   STATUS    RESTARTS        AGE
ingress-nginx-controller-6f449f6b9d-qw7s8   1/1     Running   5 (6h41m ago)   8d
ubuntu@ubuntu:~/example/ingress$ kubectl exec -it ingress-nginx-controller-6f449f6b9d-qw7s8 -n ingress-nginx -- /bin/sh
/etc/nginx $ cat /etc/nginx/nginx.conf

# Configuration checksum: 986615300355039267

# setup custom paths that do not require root access
pid /tmp/nginx/nginx.pid;

daemon off;

worker_processes 4;

worker_rlimit_nofile 1047552;

worker_shutdown_timeout 240s ;

events {
	multi_accept        on;
	worker_connections  16384;
	use                 epoll;
	
}

http {
	
	lua_package_path "/etc/nginx/lua/?.lua;;";
	
	lua_shared_dict balancer_ewma 10M;
	lua_shared_dict balancer_ewma_last_touched_at 10M;
	lua_shared_dict balancer_ewma_locks 1M;
	lua_shared_dict certificate_data 20M;
	lua_shared_dict certificate_servers 5M;
	lua_shared_dict configuration_data 20M;
	lua_shared_dict ocsp_response_cache 5M;
	
	lua_shared_dict luaconfig 5m;
	
	init_by_lua_file /etc/nginx/lua/ngx_conf_init.lua;
	
	init_worker_by_lua_file /etc/nginx/lua/ngx_conf_init_worker.lua;
	
	aio                 threads;
	
	aio_write           on;
	
	tcp_nopush          on;
	tcp_nodelay         on;
	
	log_subrequest      on;
	
	reset_timedout_connection on;
	
	keepalive_timeout  75s;
	keepalive_requests 1000;
	
	client_body_temp_path           /tmp/nginx/client-body;
	fastcgi_temp_path               /tmp/nginx/fastcgi-temp;
	proxy_temp_path                 /tmp/nginx/proxy-temp;
	
	client_header_buffer_size       1k;
	client_header_timeout           60s;
	large_client_header_buffers     4 8k;
	client_body_buffer_size         8k;
	client_body_timeout             60s;
	
	http2_max_concurrent_streams    128;
	
	types_hash_max_size             2048;
	server_names_hash_max_size      1024;
	server_names_hash_bucket_size   64;
	map_hash_bucket_size            64;
	
	proxy_headers_hash_max_size     512;
	proxy_headers_hash_bucket_size  64;
	
	variables_hash_bucket_size      256;
	variables_hash_max_size         2048;
	
	underscores_in_headers          off;
	ignore_invalid_headers          on;
	
	limit_req_status                503;
	limit_conn_status               503;
	
	include /etc/nginx/mime.types;
	default_type text/html;
	
	# Custom headers for response
	
	server_tokens off;
	
	more_clear_headers Server;
	
	# disable warnings
	uninitialized_variable_warn off;
	
	# Additional available variables:
	# $namespace
	# $ingress_name
	# $service_name
	# $service_port
	log_format upstreaminfo '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" $request_length $request_time [$proxy_upstream_name] [$proxy_alternative_upstream_name] $upstream_addr $upstream_response_length $upstream_response_time $upstream_status $req_id';
	
	map $request_uri $loggable {
		
		default 1;
	}
	
	access_log /var/log/nginx/access.log upstreaminfo  if=$loggable;
	
	error_log  /var/log/nginx/error.log notice;
	
	resolver 10.96.0.10 valid=30s;
	
	# See https://www.nginx.com/blog/websocket-nginx
	map $http_upgrade $connection_upgrade {
		default          upgrade;
		
		# See https://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive
		''               '';
		
	}
	
	# Reverse proxies can detect if a client provides a X-Request-ID header, and pass it on to the backend server.
	# If no such header is provided, it can provide a random value.
	map $http_x_request_id $req_id {
		default   $http_x_request_id;
		
		""        $request_id;
		
	}
	
	# Create a variable that contains the literal $ character.
	# This works because the geo module will not resolve variables.
	geo $literal_dollar {
		default "$";
	}
	
	server_name_in_redirect off;
	port_in_redirect        off;
	
	ssl_protocols TLSv1.2 TLSv1.3;
	
	ssl_early_data off;
	
	# turn on session caching to drastically improve performance
	
	ssl_session_cache shared:SSL:10m;
	ssl_session_timeout 10m;
	
	# allow configuring ssl session tickets
	ssl_session_tickets off;
	
	# slightly reduce the time-to-first-byte
	ssl_buffer_size 4k;
	
	# allow configuring custom ssl ciphers
	ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256';
	ssl_prefer_server_ciphers on;
	
	ssl_ecdh_curve auto;
	
	# PEM sha: b3e650e63981b9c1a21c2efb09c98b19307107cc
	ssl_certificate     /etc/ingress-controller/ssl/default-fake-certificate.pem;
	ssl_certificate_key /etc/ingress-controller/ssl/default-fake-certificate.pem;
	
	proxy_ssl_session_reuse on;
	
	upstream upstream_balancer {
		### Attention!!!
		#
		# We no longer create "upstream" section for every backend.
		# Backends are handled dynamically using Lua. If you would like to debug
		# and see what backends ingress-nginx has in its memory you can
		# install our kubectl plugin https://kubernetes.github.io/ingress-nginx/kubectl-plugin.
		# Once you have the plugin you can use "kubectl ingress-nginx backends" command to
		# inspect current backends.
		#
		###
		
		server 0.0.0.1; # placeholder
		
		balancer_by_lua_file /etc/nginx/lua/nginx/ngx_conf_balancer.lua;
		
		keepalive 320;
		keepalive_time 1h;
		keepalive_timeout  60s;
		keepalive_requests 10000;
		
	}
	
	# Cache for internal auth checks
	proxy_cache_path /tmp/nginx/nginx-cache-auth levels=1:2 keys_zone=auth_cache:10m max_size=128m inactive=30m use_temp_path=off;
	
	# Global filters
	
	## start server _
	server {
		server_name _ ;
		
		http2 on;
		
		listen 80 default_server reuseport backlog=4096 ;
		listen [::]:80 default_server reuseport backlog=4096 ;
		listen 443 default_server reuseport backlog=4096 ssl;
		listen [::]:443 default_server reuseport backlog=4096 ssl;
		
		set $proxy_upstream_name "-";
		
		ssl_reject_handshake off;
		
		ssl_certificate_by_lua_file /etc/nginx/lua/nginx/ngx_conf_certificate.lua;
		
		location / {
			
			set $namespace      "";
			set $ingress_name   "";
			set $service_name   "";
			set $service_port   "";
			set $location_path  "";
			
			set $force_ssl_redirect "false";
			set $ssl_redirect "false";
			set $force_no_ssl_redirect "false";
			set $preserve_trailing_slash "false";
			set $use_port_in_redirects "false";
			
			rewrite_by_lua_file /etc/nginx/lua/nginx/ngx_rewrite.lua;
			
			header_filter_by_lua_file /etc/nginx/lua/nginx/ngx_conf_srv_hdr_filter.lua;
			
			log_by_lua_file /etc/nginx/lua/nginx/ngx_conf_log_block.lua;
			
			access_log off;
			
			port_in_redirect off;
			
			set $balancer_ewma_score -1;
			set $proxy_upstream_name "upstream-default-backend";
			set $proxy_host          $proxy_upstream_name;
			set $pass_access_scheme  $scheme;
			
			set $pass_server_port    $server_port;
			
			set $best_http_host      $http_host;
			set $pass_port           $pass_server_port;
			
			set $proxy_alternative_upstream_name "";
			
			client_max_body_size                    1m;
			
			proxy_set_header Host                   $best_http_host;
			
			# Pass the extracted client certificate to the backend
			
			# Allow websocket connections
			proxy_set_header                        Upgrade           $http_upgrade;
			
			proxy_set_header                        Connection        $connection_upgrade;
			
			proxy_set_header X-Request-ID           $req_id;
			proxy_set_header X-Real-IP              $remote_addr;
			
			proxy_set_header X-Forwarded-For        $remote_addr;
			
			proxy_set_header X-Forwarded-Host       $best_http_host;
			proxy_set_header X-Forwarded-Port       $pass_port;
			proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
			proxy_set_header X-Forwarded-Scheme     $pass_access_scheme;
			
			proxy_set_header X-Scheme               $pass_access_scheme;
			
			# Pass the original X-Forwarded-For
			proxy_set_header X-Original-Forwarded-For  $http_x_forwarded_for;
			# Pass the original X-Forwarded-Host
			proxy_set_header X-Original-Forwarded-Host $http_x_forwarded_host;
			
			# mitigate HTTPoxy Vulnerability
			# https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
			proxy_set_header Proxy                  "";
			
			# Custom headers to proxied server
			
			proxy_connect_timeout                   5s;
			proxy_send_timeout                      60s;
			proxy_read_timeout                      60s;
			
			proxy_buffering                         off;
			proxy_buffer_size                       4k;
			proxy_buffers                           4 4k;
			
			proxy_max_temp_file_size                1024m;
			
			proxy_request_buffering                 on;
			proxy_http_version                      1.1;
			
			proxy_cookie_domain                     off;
			proxy_cookie_path                       off;
			
			# In case of errors try the next upstream server before returning an error
			proxy_next_upstream                     error timeout;
			proxy_next_upstream_timeout             0;
			proxy_next_upstream_tries               3;
			
			# Custom Response Headers
			
			proxy_pass http://upstream_balancer;
			
			proxy_redirect                          off;
			
		}
		
		# health checks in cloud providers require the use of port 80
		location /healthz {
			
			access_log off;
			return 200;
		}
		
		# this is required to avoid error if nginx is being monitored
		# with an external software (like sysdig)
		location /nginx_status {
			
			allow 127.0.0.1;
			
			allow ::1;
			
			deny all;
			
			access_log off;
			stub_status on;
		}
		
	}
	## end server _
	
	## start server my-nginx.192.168.236.200.sslip.io
	server {
		server_name my-nginx.192.168.236.200.sslip.io ;
		
		http2 on;
		
		listen 80  ;
		listen [::]:80  ;
		listen 443  ssl;
		listen [::]:443  ssl;
		
		set $proxy_upstream_name "-";
		
		ssl_certificate_by_lua_file /etc/nginx/lua/nginx/ngx_conf_certificate.lua;
		
		location / {
			
			set $namespace      "default";
			set $ingress_name   "my-nginx";
			set $service_name   "my-nginx";
			set $service_port   "80";
			set $location_path  "/";
			
			set $force_ssl_redirect "false";
			set $ssl_redirect "true";
			set $force_no_ssl_redirect "false";
			set $preserve_trailing_slash "false";
			set $use_port_in_redirects "false";
			
			rewrite_by_lua_file /etc/nginx/lua/nginx/ngx_rewrite.lua;
			
			header_filter_by_lua_file /etc/nginx/lua/nginx/ngx_conf_srv_hdr_filter.lua;
			
			log_by_lua_file /etc/nginx/lua/nginx/ngx_conf_log_block.lua;
			
			port_in_redirect off;
			
			set $balancer_ewma_score -1;
			set $proxy_upstream_name "default-my-nginx-80";
			set $proxy_host          $proxy_upstream_name;
			set $pass_access_scheme  $scheme;
			
			set $pass_server_port    $server_port;
			
			set $best_http_host      $http_host;
			set $pass_port           $pass_server_port;
			
			set $proxy_alternative_upstream_name "";
			
			client_max_body_size                    1m;
			
			proxy_set_header Host                   $best_http_host;
			
			# Pass the extracted client certificate to the backend
			
			# Allow websocket connections
			proxy_set_header                        Upgrade           $http_upgrade;
			
			proxy_set_header                        Connection        $connection_upgrade;
			
			proxy_set_header X-Request-ID           $req_id;
			proxy_set_header X-Real-IP              $remote_addr;
			
			proxy_set_header X-Forwarded-For        $remote_addr;
			
			proxy_set_header X-Forwarded-Host       $best_http_host;
			proxy_set_header X-Forwarded-Port       $pass_port;
			proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
			proxy_set_header X-Forwarded-Scheme     $pass_access_scheme;
			
			proxy_set_header X-Scheme               $pass_access_scheme;
			
			# Pass the original X-Forwarded-For
			proxy_set_header X-Original-Forwarded-For  $http_x_forwarded_for;
			# Pass the original X-Forwarded-Host
			proxy_set_header X-Original-Forwarded-Host $http_x_forwarded_host;
			
			# mitigate HTTPoxy Vulnerability
			# https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
			proxy_set_header Proxy                  "";
			
			# Custom headers to proxied server
			
			proxy_connect_timeout                   5s;
			proxy_send_timeout                      60s;
			proxy_read_timeout                      60s;
			
			proxy_buffering                         off;
			proxy_buffer_size                       4k;
			proxy_buffers                           4 4k;
			
			proxy_max_temp_file_size                1024m;
			
			proxy_request_buffering                 on;
			proxy_http_version                      1.1;
			
			proxy_cookie_domain                     off;
			proxy_cookie_path                       off;
			
			# In case of errors try the next upstream server before returning an error
			proxy_next_upstream                     error timeout;
			proxy_next_upstream_timeout             0;
			proxy_next_upstream_tries               3;
			
			# Custom Response Headers
			
			proxy_pass http://upstream_balancer;
			
			proxy_redirect                          off;
			
		}
		
	}
	## end server my-nginx.192.168.236.200.sslip.io
	
	## start server rancher.192.168.236.200.sslip.io
	server {
		server_name rancher.192.168.236.200.sslip.io ;
		
		http2 on;
		
		listen 80  ;
		listen [::]:80  ;
		listen 443  ssl;
		listen [::]:443  ssl;
		
		set $proxy_upstream_name "-";
		
		ssl_certificate_by_lua_file /etc/nginx/lua/nginx/ngx_conf_certificate.lua;
		
		location / {
			
			set $namespace      "cattle-system";
			set $ingress_name   "rancher";
			set $service_name   "rancher";
			set $service_port   "80";
			set $location_path  "/";
			
			set $force_ssl_redirect "false";
			set $ssl_redirect "true";
			set $force_no_ssl_redirect "false";
			set $preserve_trailing_slash "false";
			set $use_port_in_redirects "false";
			
			rewrite_by_lua_file /etc/nginx/lua/nginx/ngx_rewrite.lua;
			
			header_filter_by_lua_file /etc/nginx/lua/nginx/ngx_conf_srv_hdr_filter.lua;
			
			log_by_lua_file /etc/nginx/lua/nginx/ngx_conf_log_block.lua;
			
			port_in_redirect off;
			
			set $balancer_ewma_score -1;
			set $proxy_upstream_name "cattle-system-rancher-80";
			set $proxy_host          $proxy_upstream_name;
			set $pass_access_scheme  $scheme;
			
			set $pass_server_port    $server_port;
			
			set $best_http_host      $http_host;
			set $pass_port           $pass_server_port;
			
			set $proxy_alternative_upstream_name "";
			
			client_max_body_size                    1m;
			
			proxy_set_header Host                   $best_http_host;
			
			# Pass the extracted client certificate to the backend
			
			# Allow websocket connections
			proxy_set_header                        Upgrade           $http_upgrade;
			
			proxy_set_header                        Connection        $connection_upgrade;
			
			proxy_set_header X-Request-ID           $req_id;
			proxy_set_header X-Real-IP              $remote_addr;
			
			proxy_set_header X-Forwarded-For        $remote_addr;
			
			proxy_set_header X-Forwarded-Host       $best_http_host;
			proxy_set_header X-Forwarded-Port       $pass_port;
			proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
			proxy_set_header X-Forwarded-Scheme     $pass_access_scheme;
			
			proxy_set_header X-Scheme               $pass_access_scheme;
			
			# Pass the original X-Forwarded-For
			proxy_set_header X-Original-Forwarded-For  $http_x_forwarded_for;
			# Pass the original X-Forwarded-Host
			proxy_set_header X-Original-Forwarded-Host $http_x_forwarded_host;
			
			# mitigate HTTPoxy Vulnerability
			# https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
			proxy_set_header Proxy                  "";
			
			# Custom headers to proxied server
			
			proxy_connect_timeout                   30s;
			proxy_send_timeout                      1800s;
			proxy_read_timeout                      1800s;
			
			proxy_buffering                         off;
			proxy_buffer_size                       4k;
			proxy_buffers                           4 4k;
			
			proxy_max_temp_file_size                1024m;
			
			proxy_request_buffering                 on;
			proxy_http_version                      1.1;
			
			proxy_cookie_domain                     off;
			proxy_cookie_path                       off;
			
			# In case of errors try the next upstream server before returning an error
			proxy_next_upstream                     error timeout;
			proxy_next_upstream_timeout             0;
			proxy_next_upstream_tries               3;
			
			# Custom Response Headers
			
			proxy_pass http://upstream_balancer;
			
			proxy_redirect                          off;
			
		}
		
	}
	## end server rancher.192.168.236.200.sslip.io
	
	# backend for when default-backend-service is not configured or it does not have endpoints
	server {
		listen 8181 default_server reuseport backlog=4096;
		listen [::]:8181 default_server reuseport backlog=4096;
		set $proxy_upstream_name "internal";
		
		access_log off;
		
		location / {
			return 404;
		}
	}
	
	# default server, used for NGINX healthcheck and access to nginx stats
	server {
		# Ensure that modsecurity will not run on an internal location as this is not accessible from outside
		
		listen 127.0.0.1:10246;
		set $proxy_upstream_name "internal";
		
		keepalive_timeout 0;
		gzip off;
		
		access_log off;
		
		location /healthz {
			return 200;
		}
		
		location /is-dynamic-lb-initialized {
			content_by_lua_file /etc/nginx/lua/nginx/ngx_conf_is_dynamic_lb_initialized.lua;
		}
		
		location /nginx_status {
			stub_status on;
		}
		
		location /configuration {
			client_max_body_size                    21M;
			client_body_buffer_size                 21M;
			proxy_buffering                         off;
			
			content_by_lua_file /etc/nginx/lua/nginx/ngx_conf_configuration.lua;
		}
		
		location / {
			return 404;
		}
	}
}

stream {
	lua_package_path "/etc/nginx/lua/?.lua;/etc/nginx/lua/vendor/?.lua;;";
	
	lua_shared_dict tcp_udp_configuration_data 5M;
	
	resolver 10.96.0.10 valid=30s;
	
	init_by_lua_file /etc/nginx/lua/ngx_conf_init_stream.lua;
	
	init_worker_by_lua_file /etc/nginx/lua/nginx/ngx_conf_init_tcp_udp.lua;
	
	lua_add_variable $proxy_upstream_name;
	
	log_format log_stream '[$remote_addr] [$time_local] $protocol $status $bytes_sent $bytes_received $session_time';
	
	access_log /var/log/nginx/access.log log_stream ;
	
	error_log  /var/log/nginx/error.log notice;
	
	upstream upstream_balancer {
		server 0.0.0.1:1234; # placeholder
		balancer_by_lua_file /etc/nginx/lua/nginx/ngx_conf_balancer_tcp_udp.lua;
	}
	
	server {
		listen 127.0.0.1:10247;
		
		access_log off;
		
		content_by_lua_file /etc/nginx/lua/nginx/ngx_conf_content_tcp_udp.lua;
	}
	
	# TCP services
	
	# UDP services
	
	# Stream Snippets
	
}

The nginx.conf above contains the configuration generated from the Ingress resource we just created. Note, however, that the controller no longer creates an upstream block for every backend: backends are handled dynamically by Lua, which is why no Endpoints data appears directly in the configuration.

You can also install the kubectl plugin from https://kubernetes.github.io/ingress-nginx/kubectl-plugin to make working with ingress-nginx easier.

posted @ 2025-11-20 17:50  beamsoflight