Learning Kubernetes, Part 4 --- Cluster Installation and Deployment (III)
The cluster itself was set up in the previous part, but containers on different hosts still cannot talk to each other. This part continues from there.
Part One: Install the CNI network plugin Flannel on the compute nodes (192.168.6.94 and 192.168.6.95)
Official Flannel downloads: https://github.com/coreos/flannel/tags
1: Download the package, extract it, and create a symlink
Using the deployment on 192.168.6.94 as the example:
```shell
[root@k8s-6-94 ~]# mkdir flannel-v0.11.0
[root@k8s-6-94 ~]# tar zxf flannel-v0.11.0-linux-amd64.tar.gz -C flannel-v0.11.0
[root@k8s-6-94 ~]# mv flannel-v0.11.0 /opt/
[root@k8s-6-94 ~]# ln -s /opt/flannel-v0.11.0 /opt/flannel
```
2: Create the certs directory and copy the certificates into it
```shell
[root@k8s-6-94 ~]# mkdir /opt/flannel/certs
[root@k8s-6-94 ~]# cd /opt/flannel/certs/
[root@k8s-6-94 certs]# scp 192.168.6.96:/opt/certs/ca.pem .
[root@k8s-6-94 certs]# scp 192.168.6.96:/opt/certs/client.pem .
[root@k8s-6-94 certs]# scp 192.168.6.96:/opt/certs/client-key.pem .
```
3: Create the configuration
```shell
[root@k8s-6-94 certs]# cd /opt/flannel
[root@k8s-6-94 flannel]# vi subnet.env
FLANNEL_NETWORK=172.6.0.0/16
FLANNEL_SUBNET=172.6.94.1/24
FLANNEL_MTU=1500
FLANNEL_IPMASQ=false
```

What differs on other servers in the cluster: FLANNEL_NETWORK is the cluster-wide container network, and FLANNEL_SUBNET is the gateway of this host's container subnet.
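As a sanity check on these values: every node's FLANNEL_SUBNET must be a distinct slice of the shared FLANNEL_NETWORK. A minimal sketch of that check (the 172.6.95.0/24 subnet for the second node is the one this series uses; adjust for your own plan):

```python
import ipaddress

# Cluster-wide container network shared by all nodes (FLANNEL_NETWORK)
network = ipaddress.ip_network("172.6.0.0/16")

# Per-node pod subnets (assumed: one /24 per node, named after the host IP)
node_subnets = {
    "k8s-6-94": ipaddress.ip_network("172.6.94.0/24"),
    "k8s-6-95": ipaddress.ip_network("172.6.95.0/24"),
}

for node, subnet in node_subnets.items():
    # Every node subnet must fall inside the cluster network...
    assert subnet.subnet_of(network), f"{subnet} is outside {network}"

# ...and no two node subnets may overlap
subnets = list(node_subnets.values())
assert not subnets[0].overlaps(subnets[1])
print("subnet plan is consistent")
```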
4: Create the startup script
```shell
[root@k8s-6-94 flannel]# vi /opt/flannel/flanneld.sh
#!/bin/sh
./flanneld \
  --public-ip=192.168.6.94 \
  --etcd-endpoints=https://192.168.6.93:2379,https://192.168.6.94:2379,https://192.168.6.95:2379 \
  --etcd-keyfile=./certs/client-key.pem \
  --etcd-certfile=./certs/client.pem \
  --etcd-cafile=./certs/ca.pem \
  --iface=ens192 \
  --subnet-file=./subnet.env \
  --healthz-port=2401
```

What differs on other servers in the cluster: --public-ip is that host's IP address, and --iface is that host's NIC name.
5: Check the configuration, set execute permission, and create the log directory
```shell
[root@k8s-6-94 flannel]# chmod +x /opt/flannel/flanneld.sh
[root@k8s-6-94 flannel]# mkdir -p /data/logs/flanneld
```
6: Create the supervisor configuration
```shell
[root@k8s-6-94 flannel]# vi /etc/supervisord.d/flannel.ini
[program:flanneld-6-94]
command=/opt/flannel/flanneld.sh                        ; the program (relative uses PATH, can take args)
numprocs=1                                              ; number of processes copies to start (def 1)
directory=/opt/flannel                                  ; directory to cwd to before exec (def no cwd)
autostart=true                                          ; start at supervisord start (default: true)
autorestart=true                                        ; restart at unexpected quit (default: true)
startsecs=30                                            ; number of secs prog must stay running (def. 1)
startretries=3                                          ; max # of serial start failures (default 3)
exitcodes=0,2                                           ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT                                         ; signal used to kill process (default TERM)
stopwaitsecs=10                                         ; max num secs to wait b4 SIGKILL (default 10)
user=root                                               ; setuid to this UNIX account to run the program
redirect_stderr=true                                    ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/flanneld/flanneld.stdout.log  ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB                            ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4                                ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB                             ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false                             ; emit events on stdout writes (default false)
```

What differs on other servers in the cluster: the program name, i.e. the [program:flanneld-6-94] line.
7: Write the host-gw backend configuration into etcd
```shell
[root@k8s-6-94 flannel]# cd /opt/etcd/
[root@k8s-6-94 etcd]# ./etcdctl set /coreos.com/network/config '{"Network": "172.6.0.0/16", "Backend": {"Type": "host-gw"}}'
{"Network": "172.6.0.0/16", "Backend": {"Type": "host-gw"}}

# Verify
[root@k8s-6-94 etcd]# ./etcdctl get /coreos.com/network/config
{"Network": "172.6.0.0/16", "Backend": {"Type": "host-gw"}}
```
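With the host-gw backend there is no tunnel or encapsulation: flanneld on each node simply installs a static kernel route for every peer's pod subnet via that peer's host IP. A rough sketch of that route computation, assuming the two compute nodes of this setup (the dev name ens192 matches the --iface flag above; this is an illustration, not flanneld's actual code):

```python
# Each node announces (public-ip, pod subnet); host-gw turns the peers'
# announcements into plain kernel routes: "ip route add <subnet> via <peer-ip>".
nodes = {
    "k8s-6-94": ("192.168.6.94", "172.6.94.0/24"),
    "k8s-6-95": ("192.168.6.95", "172.6.95.0/24"),
}

def host_gw_routes(local):
    """Routes flanneld installs on `local` for every other node."""
    return [
        f"ip route add {subnet} via {ip} dev ens192"
        for name, (ip, subnet) in nodes.items()
        if name != local
    ]

# On k8s-6-94 this yields the single route to k8s-6-95's pod subnet
print(host_gw_routes("k8s-6-94"))
```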
8: Start the service and check it
```shell
[root@k8s-6-94 etcd]# supervisorctl update
[root@k8s-6-94 etcd]# supervisorctl status
```
9: Verify
```shell
[root@k8s-6-94 ~]# ping 172.6.95.2
PING 172.6.95.2 (172.6.95.2) 56(84) bytes of data.
64 bytes from 172.6.95.2: icmp_seq=1 ttl=63 time=0.350 ms
64 bytes from 172.6.95.2: icmp_seq=2 ttl=63 time=0.259 ms
64 bytes from 172.6.95.2: icmp_seq=3 ttl=63 time=0.286 ms

[root@k8s-6-95 ~]# ping 172.6.94.2
PING 172.6.94.2 (172.6.94.2) 56(84) bytes of data.
64 bytes from 172.6.94.2: icmp_seq=1 ttl=63 time=0.472 ms
64 bytes from 172.6.94.2: icmp_seq=2 ttl=63 time=0.293 ms
64 bytes from 172.6.94.2: icmp_seq=3 ttl=63 time=0.297 ms
```
Part Two: Optimize the iptables rules on each compute node
Cross-host container traffic now works, but traffic between pods on different compute nodes is still SNATed to the node IP, so pods see each other's node address instead of the real pod address. That is unreasonable and needs to be optimized.
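The optimization in step 2 below boils down to one predicate: masquerade a pod's traffic only when the destination lies outside the cluster pod network. A sketch of that decision using this setup's 172.6.0.0/16 pod network (illustrative only, not part of iptables):

```python
import ipaddress

POD_SOURCE = ipaddress.ip_network("172.6.94.0/24")   # this node's pods
CLUSTER_NET = ipaddress.ip_network("172.6.0.0/16")   # all pods in the cluster

def needs_snat(src, dst):
    """Mirror of the optimized rule:
    -s 172.6.94.0/24 ! -d 172.6.0.0/16 -j MASQUERADE"""
    s, d = ipaddress.ip_address(src), ipaddress.ip_address(dst)
    return s in POD_SOURCE and d not in CLUSTER_NET

# Pod-to-pod across nodes: keep the real pod source IP
print(needs_snat("172.6.94.2", "172.6.95.2"))  # False
# Pod to the outside world: masquerade behind the node IP
print(needs_snat("172.6.94.2", "8.8.8.8"))     # True
```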
1: Install iptables-services and enable it at boot
```shell
[root@k8s-6-94 ~]# yum install iptables-services -y
[root@k8s-6-94 ~]# systemctl start iptables
[root@k8s-6-94 ~]# systemctl enable iptables
```
2: Optimize the SNAT rule so that pod-to-pod traffic between compute nodes no longer leaves the cluster network masqueraded
Using compute node 1 (192.168.6.94) as the example:

```shell
[root@k8s-6-94 ~]# iptables -t nat -D POSTROUTING -s 172.6.94.0/24 ! -o docker0 -j MASQUERADE
[root@k8s-6-94 ~]# iptables -t nat -I POSTROUTING -s 172.6.94.0/24 ! -d 172.6.0.0/16 ! -o docker0 -j MASQUERADE
[root@k8s-6-94 ~]# iptables-save |grep -i postrouting
[root@k8s-6-94 ~]# iptables -t filter -D INPUT -j REJECT --reject-with icmp-host-prohibited
[root@k8s-6-94 ~]# iptables -t filter -D FORWARD -j REJECT --reject-with icmp-host-prohibited
```
3: Save the iptables rules on each compute node
```shell
[root@k8s-6-94 ~]# service iptables save
iptables: Saving firewall rules to /etc/sysconfig/iptables:[  OK  ]
```
At this point the container network is fully interconnected and the SNAT rules are optimized. Next come the Kubernetes service add-ons.
Part Three: CoreDNS, the Kubernetes service-discovery add-on
The add-on that implements DNS inside Kubernetes:
- kube-dns: Kubernetes v1.2 through v1.10
- CoreDNS: Kubernetes v1.11 onward
Note that the DNS inside Kubernetes is not a general-purpose resolver! Its only job is to automatically maintain the mapping from "service name" to "cluster IP".
1: On the ops host (192.168.6.96), configure an nginx virtual host to provide a unified access point for the Kubernetes resource manifests
1.1: Install Nginx (omitted)
1.2: Configure the Nginx vhost
```shell
[root@k8s-6-96 ~]# vi /usr/local/nginx/conf/vhosts/k8s-yaml.auth.com.conf
server {
    listen       80;
    server_name  k8s-yaml.auth.com;

    location / {
        autoindex on;
        default_type text/plain;
        root /data/k8s-yaml;
    }
}
```
1.3: Check the Nginx configuration and reload the service
```shell
[root@k8s-6-96 ~]# /usr/local/nginx/sbin/nginx -t
[root@k8s-6-96 ~]# /usr/local/nginx/sbin/nginx -s reload
```
1.4: Create the corresponding directories
```shell
[root@k8s-6-96 ~]# mkdir /data/k8s-yaml
[root@k8s-6-96 ~]# mkdir /data/k8s-yaml/coredns
```
1.5: Configure DNS resolution
On the 192.168.6.92 server, add the domain record:

```shell
[root@k8s-6-92 ~]# vi /var/named/auth.com.zone
# Add an A record and increment the serial number by 1
k8s-yaml        A    192.168.6.96
```
1.6: Restart the named service and verify
```shell
[root@k8s-6-92 ~]# systemctl restart named
[root@k8s-6-92 ~]# dig -t A k8s-yaml.auth.com @192.168.6.92 +short
192.168.6.96
```
1.7: Visit k8s-yaml.auth.com in a browser
All the directories under /data/k8s-yaml are listed in the autoindex page.

2: Deploy CoreDNS
Official GitHub address:
https://github.com/coredns/coredns/releases
2.1: On the ops host (192.168.6.96), pull the Docker image, retag it, and push it to the Harbor registry
```shell
[root@k8s-6-96 ~]# docker pull coredns/coredns:1.6.1
[root@k8s-6-96 ~]# docker tag c0f6e815079e harbor.auth.com/public/coredns:1.6.1
[root@k8s-6-96 ~]# docker push harbor.auth.com/public/coredns:1.6.1
```
2.2: Prepare the resource manifests on the ops host (192.168.6.96)
Reference: https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dns/coredns/coredns.yaml.base
```shell
[root@k8s-6-96 ~]# vi /data/k8s-yaml/coredns/rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
```
```shell
[root@k8s-6-96 ~]# vi /data/k8s-yaml/coredns/cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        log
        health
        ready
        kubernetes cluster.local 10.100.0.0/16
        forward . 192.168.6.92
        cache 30
        loop
        reload
        loadbalance
    }
```

Notes: `kubernetes cluster.local 10.100.0.0/16` is the cluster domain and service network; `forward . 192.168.6.92` is the upstream DNS.
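The dispatch this Corefile implies can be sketched as follows: queries under the cluster.local zone are answered by the kubernetes plugin, and everything else falls through to the forward plugin and the upstream DNS (the helper below is illustrative, not CoreDNS code):

```python
def handled_by(qname):
    """Which part of the Corefile answers a given query name."""
    zone = "cluster.local"
    name = qname.rstrip(".")
    if name == zone or name.endswith("." + zone):
        return "kubernetes"          # in-cluster service records
    return "forward . 192.168.6.92"  # everything else goes upstream

print(handled_by("nginx-dp.kube-public.svc.cluster.local."))  # kubernetes
print(handled_by("www.baidu.com."))  # forward . 192.168.6.92
```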
```shell
[root@k8s-6-96 ~]# vi /data/k8s-yaml/coredns/dp.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: coredns
    kubernetes.io/name: "CoreDNS"
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: coredns
  template:
    metadata:
      labels:
        k8s-app: coredns
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      containers:
      - name: coredns
        image: harbor.auth.com/public/coredns:1.6.1
        args:
        - -conf
        - /etc/coredns/Corefile
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
      dnsPolicy: Default
      volumes:
      - name: config-volume
        configMap:
          name: coredns
          items:
          - key: Corefile
            path: Corefile
```
```shell
[root@k8s-6-96 ~]# vi /data/k8s-yaml/coredns/svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: coredns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: coredns
  clusterIP: 10.100.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
  - name: metrics
    port: 9153
    protocol: TCP
```
2.3: Apply the resource manifests
Apply them on any compute node:
```shell
[root@k8s-6-94 ~]# kubectl apply -f http://k8s-yaml.auth.com/coredns/rbac.yaml
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
[root@k8s-6-94 ~]# kubectl apply -f http://k8s-yaml.auth.com/coredns/cm.yaml
configmap/coredns created
[root@k8s-6-94 ~]# kubectl apply -f http://k8s-yaml.auth.com/coredns/dp.yaml
deployment.apps/coredns created
[root@k8s-6-94 ~]# kubectl apply -f http://k8s-yaml.auth.com/coredns/svc.yaml
service/coredns created
```
2.4: Inspect the created resources
```shell
[root@k8s-6-94 ~]# kubectl get all -n kube-system
NAME                           READY   STATUS    RESTARTS   AGE
pod/coredns-6b6c4f9648-wrrbt   1/1     Running   0          111s

NAME              TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
service/coredns   ClusterIP   10.100.0.2   <none>        53/UDP,53/TCP,9153/TCP   99s

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/coredns   1/1     1            1           111s

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/coredns-6b6c4f9648   1         1         1       111s
```
A more detailed view:
```shell
[root@k8s-6-94 ~]# kubectl get all -n kube-system -o wide
NAME                           READY   STATUS    RESTARTS   AGE     IP           NODE                NOMINATED NODE   READINESS GATES
pod/coredns-6b6c4f9648-wrrbt   1/1     Running   0          4m56s   172.6.95.3   k8s-6-95.host.com   <none>           <none>

NAME              TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE     SELECTOR
service/coredns   ClusterIP   10.100.0.2   <none>        53/UDP,53/TCP,9153/TCP   4m44s   k8s-app=coredns

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS   IMAGES                                 SELECTOR
deployment.apps/coredns   1/1     1            1           4m56s   coredns      harbor.auth.com/public/coredns:1.6.1   k8s-app=coredns

NAME                                 DESIRED   CURRENT   READY   AGE     CONTAINERS   IMAGES                                 SELECTOR
replicaset.apps/coredns-6b6c4f9648   1         1         1       4m56s   coredns      harbor.auth.com/public/coredns:1.6.1   k8s-app=coredns,pod-template-hash=6b6c4f9648
```
2.5: Verify CoreDNS
```shell
[root@k8s-6-94 ~]# dig -t A www.baidu.com @10.100.0.2 +short
www.a.shifen.com.
220.181.38.149
220.181.38.150
[root@k8s-6-94 ~]# dig -t A k8s-6-94.host.com @10.100.0.2 +short
192.168.6.94
# Our self-built DNS is CoreDNS's upstream, so host records resolve as well
[root@k8s-6-94 ~]# kubectl get svc -o wide
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE   SELECTOR
kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   7d    <none>
[root@k8s-6-94 ~]# kubectl get pods -n kube-public
NAME                        READY   STATUS    RESTARTS   AGE
nginx-dp-5dfc689474-ggsn2   1/1     Running   0          7h23m
nginx-dp-5dfc689474-hw6vm   1/1     Running   0          7h8m
```

Check:

```shell
[root@k8s-6-94 ~]# kubectl expose deployment nginx-dp --port=80 -n kube-public
[root@k8s-6-94 ~]# kubectl get svc -o wide -n kube-public
NAME       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE     SELECTOR
nginx-dp   ClusterIP   10.100.95.151   <none>        80/TCP    7h21m   app=nginx-dp
```

Verify:

```shell
[root@k8s-6-94 ~]# dig -t A nginx-dp @10.100.0.2 +short
[root@k8s-6-94 ~]# dig -t A nginx-dp.kube-public.svc.cluster.local. @10.100.0.2 +short
10.100.95.151
```
Verify from a pod on one of the hosts
Check:

```shell
[root@k8s-6-94 ~]# kubectl get pods -o wide
NAME             READY   STATUS    RESTARTS   AGE    IP           NODE                NOMINATED NODE   READINESS GATES
nginx-ds-d5kl8   1/1     Running   0          120m   172.6.94.2   k8s-6-94.host.com   <none>           <none>
nginx-ds-jtn62   1/1     Running   0          120m   172.6.95.2   k8s-6-95.host.com   <none>           <none>
```

Enter a container:

```shell
[root@k8s-6-94 ~]# kubectl exec -ti nginx-ds-jtn62 /bin/bash
root@nginx-ds-jtn62:/#
```

Verify:

```shell
root@nginx-ds-jtn62:/# curl 10.100.95.151
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
root@nginx-ds-jtn62:/# curl nginx-dp.kube-public
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
```

Why is the FQDN not needed inside the container? Reason:

```shell
root@nginx-ds-jtn62:/# cat /etc/resolv.conf
nameserver 10.100.0.2
search default.svc.cluster.local svc.cluster.local cluster.local host.com
options ndots:5
```

With `ndots:5`, any name containing fewer than 5 dots is first expanded with the search domains, so the short name `nginx-dp.kube-public` resolves; the default of 5 means many extra lookups and is inefficient.
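The ndots mechanism can be sketched like this: a name with fewer dots than the ndots threshold is tried with each search domain appended before the literal name, which is exactly why `nginx-dp.kube-public` resolves from inside the pod (`candidate_names` is a hypothetical helper, not a real resolver API):

```python
def candidate_names(name, search, ndots=5):
    """Order in which a resolver with these options tries lookups."""
    if name.endswith("."):
        return [name]                       # already fully qualified
    absolute = name + "."
    if name.count(".") >= ndots:
        return [absolute] + [f"{name}.{d}." for d in search]
    # Below the ndots threshold: search list first, literal name last
    return [f"{name}.{d}." for d in search] + [absolute]

# The search list from the pod's /etc/resolv.conf above
search = ["default.svc.cluster.local", "svc.cluster.local",
          "cluster.local", "host.com"]

# First attempt: nginx-dp.kube-public.default.svc.cluster.local.
# Second attempt (the one that answers): nginx-dp.kube-public.svc.cluster.local.
for candidate in candidate_names("nginx-dp.kube-public", search):
    print(candidate)
```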
Part Four: Traefik, the Kubernetes service-exposure add-on
Why: at this point nothing outside the cluster can resolve these names; CoreDNS answers only inside the cluster. Kubernetes DNS makes services discoverable *inside* the cluster, so how can a service be used and accessed from *outside* the cluster?
Deploy Traefik (an Ingress controller).
Notes:
- Ingress can only schedule and expose layer-7 applications, specifically the HTTP and HTTPS protocols
- Ingress is one of the standard Kubernetes API resource types and a core resource; it is essentially a set of rules, keyed on domain name and URL path, that forward user requests to a specified Service resource
- It can forward request traffic from outside the cluster to the inside, thereby exposing services
- An Ingress controller is the component that listens on a socket on behalf of Ingress resources and then routes and schedules traffic according to the Ingress rule-matching mechanism.
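The rule matching described above can be sketched as a host plus path-prefix lookup; the rule table below mirrors the traefik-web-ui Ingress created later in this part, but the code is purely illustrative:

```python
# (host, path prefix) -> "service:port"; the longest matching prefix wins
rules = {
    ("traefik.auth.com", "/"): "traefik-ingress-service:8080",
}

def route(host, path):
    """Pick a backend for a request the way an Ingress rule set does."""
    matches = [
        (prefix, backend)
        for (h, prefix), backend in rules.items()
        if h == host and path.startswith(prefix)
    ]
    if not matches:
        return None  # no Ingress rule matches this request
    return max(matches, key=lambda m: len(m[0]))[1]

print(route("traefik.auth.com", "/dashboard"))  # traefik-ingress-service:8080
print(route("unknown.auth.com", "/"))           # None
```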
1: On the ops host (192.168.6.96), pull the Traefik image, retag it, and push it to the Harbor registry
Official GitHub address: https://github.com/containous/traefik
```shell
[root@k8s-6-96 ~]# docker pull traefik:v1.7.2-alpine
[root@k8s-6-96 ~]# docker images|grep traefik
traefik    v1.7.2-alpine    add5fac61ae5    13 months ago    72.4MB
[root@k8s-6-96 ~]# docker tag add5fac61ae5 harbor.auth.com/public/traefik:v1.7.2
[root@k8s-6-96 ~]# docker push harbor.auth.com/public/traefik:v1.7.2
The push refers to repository [harbor.auth.com/public/traefik]
a02beb48577f: Pushed
ca22117205f4: Pushed
3563c211d861: Pushed
df64d3292fd6: Pushed
v1.7.2: digest: sha256:6115155b261707b642341b065cd3fac2b546559ba035d0262650b3b3bbdd10ea size: 1157
```
2: Prepare the resource manifests on the ops host (192.168.6.96)
Official YAML files: https://github.com/containous/traefik/tree/v1.7/examples/k8s
```shell
[root@k8s-6-96 ~]# mkdir /data/k8s-yaml/traefik
[root@k8s-6-96 ~]# cd /data/k8s-yaml/traefik/
[root@k8s-6-96 ~]# vi /data/k8s-yaml/traefik/rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: traefik-ingress-controller
rules:
- apiGroups:
  - ""
  resources:
  - services
  - endpoints
  - secrets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
  name: traefik-ingress-controller
  namespace: kube-system
```
```shell
[root@k8s-6-96 ~]# vi /data/k8s-yaml/traefik/ds.yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: traefik-ingress
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress
spec:
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress
        name: traefik-ingress
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      containers:
      - image: harbor.auth.com/public/traefik:v1.7.2
        name: traefik-ingress
        ports:
        - name: controller
          containerPort: 80
          hostPort: 81
        - name: admin-web
          containerPort: 8080
        securityContext:
          capabilities:
            drop:
            - ALL
            add:
            - NET_BIND_SERVICE
        args:
        - --api
        - --kubernetes
        - --logLevel=INFO
        - --insecureskipverify=true
        - --kubernetes.endpoint=https://192.168.6.89:7443
        - --accesslog
        - --accesslog.filepath=/var/log/traefik_access.log
        - --traefiklog
        - --traefiklog.filepath=/var/log/traefik.log
        - --metrics.prometheus
```
```shell
[root@k8s-6-96 ~]# vi /data/k8s-yaml/traefik/svc.yaml
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress
  ports:
  - protocol: TCP
    port: 80
    name: controller
  - protocol: TCP
    port: 8080
    name: admin-web
```
```shell
[root@k8s-6-96 ~]# vi /data/k8s-yaml/traefik/ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-web-ui
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: traefik.auth.com
    http:
      paths:
      - path: /
        backend:
          serviceName: traefik-ingress-service
          servicePort: 8080
```
3: Apply the resource manifests
On any compute node:
```shell
[root@k8s-6-94 ~]# kubectl apply -f http://k8s-yaml.auth.com/traefik/rbac.yaml
serviceaccount/traefik-ingress-controller created
clusterrole.rbac.authorization.k8s.io/traefik-ingress-controller created
clusterrolebinding.rbac.authorization.k8s.io/traefik-ingress-controller created
[root@k8s-6-94 ~]# kubectl apply -f http://k8s-yaml.auth.com/traefik/ds.yaml
daemonset.extensions/traefik-ingress created
[root@k8s-6-94 ~]# kubectl apply -f http://k8s-yaml.auth.com/traefik/svc.yaml
service/traefik-ingress-service created
[root@k8s-6-94 ~]# kubectl apply -f http://k8s-yaml.auth.com/traefik/ingress.yaml
ingress.extensions/traefik-web-ui created
```
4: Check the created resources
```shell
[root@k8s-6-94 ~]# kubectl get pods -n kube-system
NAME                       READY   STATUS    RESTARTS   AGE
coredns-6b6c4f9648-wrrbt   1/1     Running   0          108m
traefik-ingress-9z6wd      1/1     Running   0          10m
traefik-ingress-ksznv      1/1     Running   0          10m
```
A possible error and its fix:

```shell
[root@k8s-6-94 ~]# kubectl describe pods traefik-ingress-ksznv -n kube-system
Warning  FailedCreatePodSandBox  6m23s  kubelet, hdss7-21.host.com  Failed create pod sandbox:
rpc error: code = Unknown desc = failed to start sandbox container for pod "traefik-ingress-ksznv": Error response from daemon: driver failed programming external
connectivity on endpoint k8s_POD_traefik-ingress-ksznv_kube-system_d1389546-d27b-47cd-92c1-f5a8963043fd_0 (2f032861a4eb0e5240554e388b8ae8a5efd9ead3c56e50840aacdf43570c434b)
: (iptables failed: iptables --wait -t filter -A DOCKER ! -i docker0 -o docker0 -p tcp -d 172.7.21.5 --dport 80 -j ACCEPT: iptables: No chain/target/match by that name.
```

Fix:

```shell
systemctl restart docker.service
```
5: Add the DNS record on the DNS server (192.168.6.92)
```shell
[root@k8s-6-92 ~]# vi /var/named/auth.com.zone
# Add an A record and increment the serial number by 1
traefik        A    192.168.6.89
```

Restart the named service and verify:

```shell
[root@k8s-6-92 ~]# systemctl restart named
[root@k8s-6-92 ~]# dig -t A traefik.auth.com @192.168.6.92 +short
192.168.6.89
```
6: Configure the reverse proxy
Note: both proxy nodes (192.168.6.92 and 192.168.6.93) need this configuration.
```shell
[root@k8s-6-92 ~]# vi /usr/local/nginx/conf/vhosts/auth.com.conf
upstream default_backend_traefik {
    server 192.168.6.94:81 max_fails=3 fail_timeout=10s;
    server 192.168.6.95:81 max_fails=3 fail_timeout=10s;
}
server {
    server_name *.auth.com;

    location / {
        proxy_pass http://default_backend_traefik;
        proxy_set_header Host $http_host;
        proxy_set_header x-forwarded-for $proxy_add_x_forwarded_for;
    }
}
```
```shell
[root@k8s-6-92 ~]# /usr/local/nginx/sbin/nginx -t
nginx: the configuration file /usr/local/nginx/conf/nginx.conf syntax is ok
nginx: configuration file /usr/local/nginx/conf/nginx.conf test is successful
[root@k8s-6-92 ~]# /usr/local/nginx/sbin/nginx -s reload
```
7: Open http://traefik.auth.com/ in a browser; the Traefik dashboard appears

