Installing a k8s v1.20.6 Cluster with kubeadm --- Part 2

As Chapter 1 showed, installing a cluster with kubeadm really is much faster than a binary install: about half an hour and you are done. Before going further, here are a few pitfalls I ran into myself.

Problem 1: after a CoreDNS pod restarts, its status is Running but READY stays 0/1, and its logs show requests to the apiserver timing out.

Solution:

1: Edit the flannel ConfigMap
~]# kubectl edit configmap kube-flannel-cfg -n kube-system -o yaml

Change the following section:
  net-conf.json: |
    {
      "Network": "172.6.0.0/16",
      "Backend": {
        "Type": "host-gw"
      }
    }

# Note: "Network" is the address range flannel assigns to pods. It must match your pod CIDR, and it must never conflict with the host network segment.
#       "Type": host-gw mode; this change is optional.

2: Restart the flannel pods
~]# kubectl get pod -n kube-system | grep kube-flannel | awk '{system("kubectl delete pod "$1" -n kube-system")}'

3: Restart the CoreDNS pods
 ~]# kubectl scale deployment coredns -n kube-system --replicas=0
 ~]# kubectl scale deployment coredns -n kube-system --replicas=2
# (on kubectl >= 1.15, `kubectl rollout restart deployment coredns -n kube-system` does the same in one step)

Problem 2: enabling IPVS mode in kube-proxy

# Edit config.conf in the kube-system/kube-proxy ConfigMap: change mode: "" to mode: "ipvs", then save and exit
~]# kubectl edit cm kube-proxy -n kube-system
configmap/kube-proxy edited
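For reference, a sketch of the relevant portion of config.conf after the edit; the field names follow the KubeProxyConfiguration v1alpha1 API:

```yaml
# Excerpt (a sketch) of config.conf inside the kube-proxy ConfigMap after the edit.
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  scheduler: ""    # empty: kube-proxy falls back to rr (round-robin)
```

Leaving ipvs.scheduler empty is fine: kube-proxy defaults to rr, which matches the "IPVS scheduler not specified, use rr by default" warning in the logs.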
### Delete the old kube-proxy pods so they restart with the new config
~]# kubectl get pod -n kube-system |grep kube-proxy |awk '{system("kubectl delete pod "$1" -n kube-system")}'
pod "kube-proxy-2m5jh" deleted
pod "kube-proxy-nfzfl" deleted
pod "kube-proxy-shxdt" deleted
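The bulk-delete one-liner above works by having awk call system() once per input line. To preview what it would run, substitute echo for kubectl; the pod names below are sample data, not from a live cluster:

```shell
# Feed sample "kubectl get pod" output through the same awk pattern,
# with echo standing in for kubectl so it runs anywhere.
printf 'kube-proxy-2m5jh 1/1 Running\nkube-proxy-nfzfl 1/1 Running\n' \
  | awk '{system("echo kubectl delete pod " $1 " -n kube-system")}'
```

On kubeadm clusters the pods also carry the k8s-app=kube-proxy label, so `kubectl delete pod -n kube-system -l k8s-app=kube-proxy` achieves the same without awk.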
# Check that the new kube-proxy pods are running
~]# kubectl get pod -n kube-system | grep kube-proxy
kube-proxy-54qnw                              1/1     Running   0          24s
kube-proxy-bzssq                              1/1     Running   0          14s
kube-proxy-cvlcm                              1/1     Running   0          37s
# Check the logs; if they contain `Using ipvs Proxier.`, IPVS mode was enabled successfully
~]# kubectl logs kube-proxy-54qnw -n kube-system
I0518 20:24:09.319160       1 server_others.go:176] Using ipvs Proxier.
W0518 20:24:09.319751       1 proxier.go:386] IPVS scheduler not specified, use rr by default
I0518 20:24:09.320035       1 server.go:562] Version: v1.14.2
I0518 20:24:09.334372       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0518 20:24:09.334853       1 config.go:102] Starting endpoints config controller
I0518 20:24:09.334916       1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
I0518 20:24:09.334945       1 config.go:202] Starting service config controller
I0518 20:24:09.334976       1 controller_utils.go:1027] Waiting for caches to sync for service config controller
I0518 20:24:09.435153       1 controller_utils.go:1034] Caches are synced for service config controller
I0518 20:24:09.435271       1 controller_utils.go:1034] Caches are synced for endpoints config controller

Test with ipvsadm: the Services created earlier now show up as LVS virtual servers.

 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.100.0.1:443 rr
  -> 192.168.6.81:6443            Masq    1      0          0         
TCP  10.100.0.10:53 rr
  -> 172.6.1.2:53                 Masq    1      0          0         
  -> 172.6.2.2:53                 Masq    1      0          0         
TCP  10.100.0.10:9153 rr
  -> 172.6.1.2:9153               Masq    1      0          0         
  -> 172.6.2.2:9153               Masq    1      0          0                
UDP  10.100.0.10:53 rr
  -> 172.6.1.2:53                 Masq    1      0          0         
  -> 172.6.2.2:53                 Masq    1      0          0         
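To pull just the backend RemoteAddress:Port entries out of `ipvsadm -Ln` output, the same awk approach works; here it runs against a pasted sample so no live IPVS tables are needed:

```shell
# Extract backend endpoints: keep only lines whose first field is "->".
sample='TCP  10.100.0.10:53 rr
  -> 172.6.1.2:53                 Masq    1      0          0
  -> 172.6.2.2:53                 Masq    1      0          0'
echo "$sample" | awk '$1 == "->" {print $2}'
```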

A quick reference of imperative resource commands:

Namespace operations:
# List namespaces
kubectl get ns

# List all resources in a given namespace
kubectl get all -n default

# Create a namespace
kubectl create ns test

# Delete a namespace
kubectl delete ns test


deployment resource operations:
# Create a deployment
kubectl create deployment nginx-dp --image=harbor.auth.com/public/nginx:latest -n test

# List deployments
kubectl get deployment -n test

# Delete a deployment
kubectl delete deployment nginx-dp -n test

# List pods
kubectl get pods -n test -o wide

# Delete a pod
kubectl delete pod nginx-dp-5dfc689474-v96vj -n test


service resource operations:
# Create a service (requires an existing deployment)
kubectl expose deployment nginx-dp --port=80 -n test

# List services
kubectl get svc -n test

# Show service details
kubectl describe svc nginx-dp -n test

# Scale the pod count
kubectl scale deployment nginx-dp --replicas=2 -n test

# Delete a service
kubectl delete svc nginx-dp -n test
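The imperative create/expose pair above can also be written declaratively. A sketch of the equivalent manifests (names, image, and port taken from the commands above):

```yaml
# Declarative equivalent of `kubectl create deployment nginx-dp ...`
# followed by `kubectl expose deployment nginx-dp --port=80 -n test`.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-dp
  namespace: test
  labels:
    app: nginx-dp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-dp
  template:
    metadata:
      labels:
        app: nginx-dp
    spec:
      containers:
      - name: nginx
        image: harbor.auth.com/public/nginx:latest
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-dp
  namespace: test
spec:
  selector:
    app: nginx-dp
  ports:
  - port: 80
    protocol: TCP
```

Apply with `kubectl apply -f`; unlike the imperative commands, the file can be version-controlled and re-applied.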

Now on to Ingress and service exposure (simply put: making services running in pods accessible to users outside the cluster).

Experiment 1:
1: Create a namespace
kubectl create ns test

2: Create a deployment in the test namespace
kubectl create deployment nginx-dp --image=nginx:latest -n test

3: Scale the pods to 2 replicas
kubectl scale deployment nginx-dp --replicas=2 -n test

4: Create a service
kubectl expose deployment nginx-dp --port=80 -n test

5: Verify the pods
 ~]# kubectl get pods -n test -o wide
NAME                        READY   STATUS    RESTARTS   AGE    IP           NODE       NOMINATED NODE   READINESS GATES
nginx-dp-5849f68b88-6dv5v   1/1     Running   0          12h    172.6.1.4    k8s-6-82   <none>           <none>
nginx-dp-5849f68b88-b6lmn   1/1     Running   0          12h    172.6.1.3    k8s-6-82   <none>           <none>

6: Verify that nginx responds on each pod IP
 ~]# curl 172.6.1.3
......
<h1>Welcome to nginx!</h1>
......

 ~]# curl 172.6.1.4
......
<h1>Welcome to nginx!</h1>
......

7: Verify the service
 ~]# kubectl get svc -n test -o wide
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE   SELECTOR
nginx-dp     ClusterIP   10.100.19.9      <none>        80/TCP     12h   app=nginx-dp

8: Access nginx through the Service ClusterIP
~]# curl 10.100.19.9
......
<h1>Welcome to nginx!</h1>
......

Question: these services cannot stay reachable only from inside the cluster. How do we let outside users access them?

Right: this is where Ingress comes in, exposing internal services to the outside.

1: Download the YAML manifest
~]# wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/mandatory.yaml

2: Modify mandatory.yaml
kind: Deployment     # change to: kind: DaemonSet
replicas: 2          # delete this line
hostPort: 81         # add this under containerPort: 80
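After those three edits, the affected parts of mandatory.yaml look roughly like this (a sketch; the unrelated fields of the upstream file are omitted):

```yaml
apiVersion: apps/v1
kind: DaemonSet                  # was: kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  # the replicas: 2 line is gone; a DaemonSet runs one pod per node
  template:
    spec:
      containers:
      - name: nginx-ingress-controller
        ports:
        - name: http
          containerPort: 80
          hostPort: 81           # added: binds the controller to port 81 on every node
```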

3: Apply the manifest
kubectl apply -f mandatory.yaml

Ingress is now installed.

4: Verify the Ingress controller pods
]# kubectl get pods -n ingress-nginx -o wide
NAME                                        READY   STATUS    RESTARTS   AGE     IP           NODE       NOMINATED NODE   READINESS GATES
nginx-ingress-controller-5bd4dff84b-2vhqh   1/1     Running   0          7h36m   172.6.1.5    k8s-6-82   <none>           <none>
nginx-ingress-controller-5bd4dff84b-nb9hr   1/1     Running   0          7h36m   172.6.2.10   k8s-6-83   <none>           <none>

With Ingress installed, we can expose the nginx service. Write a YAML file:

]# vi ingress.yaml 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-dp
  namespace: test
spec:
  rules:
  - host: nginx.auth.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-dp
            port:
              number: 80

# Apply the manifest:
kubectl apply -f ingress.yaml

To access it, bind the domain in your hosts file:

192.168.6.82  nginx.auth.com

Then open it in a browser (the controller listens on each node's port 81, so include the port: http://nginx.auth.com:81/).


Finally, it is accessible from outside! Wondering why I changed the kind to DaemonSet? A DaemonSet runs one pod on every node, which is what lets us build a highly available setup in production.

We can put an nginx in front of the Ingress controllers as a reverse proxy:

1: nginx installation steps omitted
2: Configure the reverse proxy
[root@k8s-6-92 ~]# vi /usr/local/nginx/conf/vhosts/auth.com.conf 
upstream default_backend_traefik {
    server 192.168.6.82:81    max_fails=3 fail_timeout=10s;
    server 192.168.6.83:81    max_fails=3 fail_timeout=10s;
}
server {
    server_name *.auth.com;

    location / {
        proxy_pass http://default_backend_traefik;
        proxy_set_header Host       $http_host;
        proxy_set_header x-forwarded-for $proxy_add_x_forwarded_for;
    }
}
[root@k8s-6-92 ~]# /usr/local/nginx/sbin/nginx -t
nginx: the configuration file /usr/local/nginx/conf/nginx.conf syntax is ok
nginx: configuration file /usr/local/nginx/conf/nginx.conf test is successful
[root@k8s-6-92 ~]# /usr/local/nginx/sbin/nginx -s reload

Now change the hosts entry to 192.168.6.92 nginx.auth.com and visit the domain again through the proxy.


Here is the flow now:

192.168.6.92 plays the role of an F5 or SLB. A user requests "nginx.auth.com"; the load balancer receives it and passes the request to the Ingress controller, the Ingress rule selects the Service, and the Service forwards it to a Pod.

Below is a complete example using Tomcat, for reference:

1: Create the resource manifests

tomcat]# cat tomcat-dp.yaml 
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: tomcat-dp
  namespace: test
spec:
  selector:
    matchLabels:
      app: mytomcat
  template:
    metadata:
      labels:
        app: mytomcat
    spec:
      containers:
      - name: tomcat-dp
        image: tomcat:latest 
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080

tomcat]# cat svc.yaml 
apiVersion: v1
kind: Service
metadata:
  name: tomcat-svc
  namespace: test
spec:
  ports:
  - port: 8080
    protocol: TCP
  selector:
    app: mytomcat
  type: ClusterIP

tomcat]# cat ingress.yaml 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tomcat-ingress
  namespace: test
spec:
  rules:
  - host: tomcat.auth.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: tomcat-svc
            port:
              number: 8080

2: Apply the manifests

tomcat]# kubectl apply -f tomcat-dp.yaml 
tomcat]# kubectl apply -f svc.yaml 
tomcat]# kubectl apply -f ingress.yaml 

3: Verify the resources

tomcat]# kubectl get pods -n test -o wide
NAME                        READY   STATUS    RESTARTS   AGE    IP           NODE       NOMINATED NODE   READINESS GATES
tomcat-dp-5lbk6             1/1     Running   0          137m   172.6.2.13   k8s-6-83   <none>           <none>
tomcat-dp-j9ftl             1/1     Running   0          137m   172.6.1.6    k8s-6-82   <none>           <none>

tomcat]# kubectl get svc -n test -o wide
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE    SELECTOR
tomcat-svc   ClusterIP   10.100.138.160   <none>        8080/TCP   134m   app=mytomcat

tomcat]# kubectl get ingress -n test -o wide
NAME             CLASS    HOSTS             ADDRESS   PORTS   AGE
tomcat-ingress   <none>   tomcat.auth.com             80      130m

4: Bind the hosts entry

192.168.6.92  tomcat.auth.com

5: Verify in a browser. I modified the two Tomcat pods' pages so their content differs, which also confirms that requests are load-balanced across them.


posted @ 2021-07-28 00:29  为生活而努力