Kubernetes cluster setup: deploying the kube-proxy component
Create the certificate signing request file
[root@master-1 work]# vim kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Hubei",
      "L": "Wuhan",
      "O": "k8s",
      "OU": "system"
    }
  ]
}
Generate the certificate
[root@master-1 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
2022/08/11 23:09:30 [INFO] generate received request
2022/08/11 23:09:30 [INFO] received CSR
2022/08/11 23:09:30 [INFO] generating key: rsa-2048
2022/08/11 23:09:30 [INFO] encoded CSR
2022/08/11 23:09:30 [INFO] signed certificate with serial number 651393559866203534444655380501390249620393978184
2022/08/11 23:09:30 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
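A note on the subject: the CN system:kube-proxy becomes the user name when the API server authenticates this client certificate, and a default cluster ships the ClusterRoleBinding system:node-proxier, which grants exactly this user the permissions kube-proxy needs (watching Services and Endpoints). To confirm the binding exists (an optional check, not part of the original transcript):

kubectl describe clusterrolebinding system:node-proxier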
Create the kube-proxy kubeconfig file
[root@master-1 work]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.10.29:6443 --kubeconfig=kube-proxy.kubeconfig
Cluster "kubernetes" set.
[root@master-1 work]# kubectl config set-credentials kube-proxy --client-certificate=kube-proxy.pem --client-key=kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig
User "kube-proxy" set.
[root@master-1 work]# kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig
Context "default" created.
[root@master-1 work]# kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
Switched to context "default".
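Before distributing the file you can verify that the cluster, user, and context were all written into it (an optional check, not in the original transcript):

kubectl config view --kubeconfig=kube-proxy.kubeconfig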
Create the kube-proxy configuration file
[root@master-1 work]# vim kube-proxy.yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
bindAddress: 192.168.10.32
clientConnection:
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
clusterCIDR: 192.168.10.0/24
healthzBindAddress: 192.168.10.32:10256
metricsBindAddress: 192.168.10.32:10249
mode: "ipvs"
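mode: "ipvs" only takes effect if the IPVS kernel modules are available on the node; otherwise kube-proxy falls back to iptables mode. The transcript later runs commands from a directory named modules on node-1, which suggests this was already handled; for completeness, a minimal sketch of loading the modules on a CentOS 7 node (the usual module set, adjust for your kernel):

modprobe ip_vs
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
modprobe nf_conntrack_ipv4    # use nf_conntrack on 4.19+ kernels
lsmod | grep -e ip_vs -e nf_conntrack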
Create the systemd unit file and distribute the files
[root@master-1 work]# vim kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/local/bin/kube-proxy \
  --config=/etc/kubernetes/kube-proxy.yaml \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

[root@master-1 work]# scp kube-proxy.kubeconfig kube-proxy.yaml node-1:/etc/kubernetes/
kube-proxy.kubeconfig                        100% 6237   5.3MB/s   00:00
kube-proxy.yaml                              100%  293 300.4KB/s   00:00
[root@master-1 work]# scp kube-proxy.service node-1:/usr/lib/systemd/system/
kube-proxy.service                           100%  438 639.7KB/s   00:00
Create the service working directory, enable the service at boot, and start it
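The unit file sets WorkingDirectory=/var/lib/kube-proxy and logs to /var/log/kubernetes, so both directories must exist on node-1 before the service starts. The transcript does not show these commands; a minimal sketch, run on node-1:

mkdir -p /var/lib/kube-proxy /var/log/kubernetes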
[root@node-1 modules]# systemctl enable kube-proxy
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
[root@node-1 modules]# systemctl start kube-proxy
[root@node-1 modules]# systemctl status kube-proxy
● kube-proxy.service - Kubernetes Kube-Proxy Server
Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
Active: active (running) since 四 2022-08-11 23:21:35 CST; 4s ago
Docs: https://github.com/kubernetes/kubernetes
Main PID: 23856 (kube-proxy)
Tasks: 6
Memory: 16.7M
CGroup: /system.slice/kube-proxy.service
└─23856 /usr/local/bin/kube-proxy --config=/etc/kubernetes/kube-proxy.yaml --alsologtostderr=true --logtostderr=false --log-dir=/var/log/kubernetes --v=2
8月 11 23:21:35 node-1 kube-proxy[23856]: I0811 23:21:35.479752 23856 shared_informer.go:240] Waiting for caches to sync for service config
8月 11 23:21:35 node-1 kube-proxy[23856]: I0811 23:21:35.479767 23856 config.go:224] Starting endpoint slice config controller
8月 11 23:21:35 node-1 kube-proxy[23856]: I0811 23:21:35.479770 23856 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
8月 11 23:21:35 node-1 kube-proxy[23856]: I0811 23:21:35.479870 23856 reflector.go:219] Starting reflector *v1.Service (15m0s) from k8s.io/client-go/informers/factory.go:134
8月 11 23:21:35 node-1 kube-proxy[23856]: I0811 23:21:35.480231 23856 reflector.go:219] Starting reflector *v1beta1.EndpointSlice (15m0s) from k8s.io/client-go/informers/factory.go:134
8月 11 23:21:35 node-1 kube-proxy[23856]: I0811 23:21:35.491603 23856 service.go:275] Service default/kubernetes updated: 1 ports
8月 11 23:21:35 node-1 kube-proxy[23856]: I0811 23:21:35.580498 23856 shared_informer.go:247] Caches are synced for service config
8月 11 23:21:35 node-1 kube-proxy[23856]: I0811 23:21:35.580586 23856 proxier.go:1036] Not syncing ipvs rules until Services and Endpoints have been received from master
8月 11 23:21:35 node-1 kube-proxy[23856]: I0811 23:21:35.580498 23856 shared_informer.go:247] Caches are synced for endpoint slice config
8月 11 23:21:35 node-1 kube-proxy[23856]: I0811 23:21:35.580929 23856 service.go:390] Adding new service port "default/kubernetes:https" at 10.255.0.1:443/TCP
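The last log line shows the default/kubernetes Service (10.255.0.1:443) being picked up, so IPVS rules should now exist on the node. You can confirm this with ipvsadm, assuming the tool is installed (not part of the original transcript):

ipvsadm -Ln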
Deploy the Calico network component. Reference YAML: https://support.huaweicloud.com/intl/en-us/fg-kunpengcpfs/kunpengcontainer_06_0033.html (image version docker.io/calico/node:v3.5.3). The manifest actually applied below is the Canal manifest from the Project Calico site.
[root@master-1 ~]# mv calico.yaml /tmp/
[root@master-1 ~]# curl https://projectcalico.docs.tigera.io/manifests/canal.yaml -O
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  225k  100  225k    0     0  36398      0  0:00:06  0:00:06 --:--:-- 60266
[root@master-1 ~]# kubectl apply -f canal.yaml
configmap/canal-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/canal-flannel created
clusterrolebinding.rbac.authorization.k8s.io/canal-calico created
daemonset.apps/canal created
serviceaccount/canal created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
error: unable to recognize "canal.yaml": no matches for kind "PodDisruptionBudget" in version "policy/v1"
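The final error is expected on this cluster: the manifest declares its PodDisruptionBudget with the policy/v1 API, which only exists from Kubernetes v1.21, while the nodes here run v1.20.7. The canal DaemonSet and the calico-kube-controllers Deployment were still created, so networking comes up regardless; the PDB is optional. If you want it created anyway, one option is to downgrade that object to the beta API and re-apply (a sketch, assuming the PDB is the only policy/v1 object in the file):

sed -i 's#policy/v1$#policy/v1beta1#' canal.yaml
kubectl apply -f canal.yaml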
[root@master-1 ~]# kubectl get nodes
NAME     STATUS     ROLES    AGE   VERSION
node-1   NotReady   <none>   25m   v1.20.7
The node stays NotReady for a few minutes while node-1 pulls the Canal images; repeat the check until the status changes.
[root@master-1 ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
node-1   Ready    <none>   26m   v1.20.7
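As a final check, the canal and calico-kube-controllers pods in kube-system should be Running once the node reports Ready (pod names and counts will differ per cluster):

kubectl get pods -n kube-system -o wide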