Registry mirror used throughout: k8s.gcr.io => registry.cn-hangzhou.aliyuncs.com/google_containers
Installing Kubernetes with kubeadm (single master)
1. Planning
| Role | IP | Hostname | Components |
|---|---|---|---|
| master | 192.168.56.129 | anyu967master1 | apiserver, controller-manager, scheduler, etcd, kube-proxy, docker, calico |
| node1 | 192.168.56.130 | anyu967node1 | kubelet, kube-proxy, docker, calico, coredns |
| node2 | 192.168.56.131 | anyu967node2 | kubelet, kube-proxy, docker, calico, coredns |
2. Setup
2.1. Initialization
2.2. Installation
The problem of `kubeadm config images pull` failing to pull images
References:
- Complete tutorial on installing k8s with kubeadm (nerdsu's blog, CSDN)
- k8s learning: introduction to k8s, cluster installation on CentOS, one-click offline install (CSDN blog)
- Configuring cri-docker so that Kubernetes 1.24+ can use Docker as the container runtime
# Option 1: pull directly from the Aliyun mirror
[root@anyu967node1 Package]# kubeadm config images pull --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers
# Option 2: list the required images, then pull and retag each one
[root@anyu967node1 Package]# kubeadm config images list
registry.k8s.io/kube-apiserver:v1.25.4
registry.k8s.io/kube-controller-manager:v1.25.4
registry.k8s.io/kube-scheduler:v1.25.4
registry.k8s.io/kube-proxy:v1.25.4
registry.k8s.io/pause:3.8
registry.k8s.io/etcd:3.5.5-0
registry.k8s.io/coredns/coredns:v1.9.3
#!/bin/sh
# Pull each required image from the Aliyun mirror, retag it to the
# registry.k8s.io name that kubeadm expects, then remove the mirror tag.
# Note: coredns is listed as registry.k8s.io/coredns/coredns, and some
# mirrors publish it without the extra path segment, so that one image
# may need a manual pull and retag.
for i in $(kubeadm config images list); do
    imageName=${i#registry.k8s.io/}   # strip the registry prefix (shortest match from the left)
    docker pull registry.aliyuncs.com/google_containers/$imageName
    docker tag registry.aliyuncs.com/google_containers/$imageName registry.k8s.io/$imageName
    docker rmi registry.aliyuncs.com/google_containers/$imageName
done
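The `${i#pattern}` expansion the loop relies on can be checked in isolation; this is plain POSIX shell and needs no cluster:

```shell
#!/bin/sh
# ${i#pattern} deletes the shortest match of pattern from the front of $i,
# which is how the loop turns a registry.k8s.io image name into the bare
# path to fetch from the Aliyun mirror.
i=registry.k8s.io/coredns/coredns:v1.9.3
imageName=${i#registry.k8s.io/}
echo "$imageName"   # -> coredns/coredns:v1.9.3
```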
# CentOS 7, installing k8s 1.25: "Error getting node"
# Reference: https://www.cnblogs.com/gaoyuechen/p/16850801.html
# Reference: https://github.com/Mirantis/cri-dockerd/tree/master/packaging/systemd
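The Mirantis repository referenced above ships systemd units for cri-dockerd. Assuming the `cri-dockerd` binary and the `cri-docker.service`/`cri-docker.socket` units from that packaging directory are installed, enabling the runtime is roughly (a sketch, not verified against every version):

```shell
# Enable cri-dockerd so kubelet can talk to Docker through a CRI socket.
systemctl daemon-reload
systemctl enable --now cri-docker.socket cri-docker.service
# This is the socket later passed to kubeadm via --cri-socket:
ls -l /var/run/cri-dockerd.sock
```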
[root@anyu967master1 Package]# kubeadm reset -f
[root@anyu967master1 Package]# kubeadm init --kubernetes-version=1.25.4 --apiserver-advertise-address=192.168.56.129 --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16 --cri-socket /var/run/cri-dockerd.sock # note --cri-socket: point kubeadm at cri-dockerd explicitly
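After `kubeadm init` succeeds it prints follow-up steps for pointing `kubectl` at the new cluster; they amount to the small helper below (`/etc/kubernetes/admin.conf` is where kubeadm writes the admin kubeconfig — the helper function name is ours, not kubeadm's):

```shell
#!/bin/sh
# Copy the admin kubeconfig written by `kubeadm init` so that plain
# `kubectl` works for the current user.
setup_kubeconfig() {
    src=${1:-/etc/kubernetes/admin.conf}   # written by kubeadm init
    mkdir -p "$HOME/.kube"
    cp "$src" "$HOME/.kube/config"
    chmod 600 "$HOME/.kube/config"
}
# On the master, after kubeadm init:
#   setup_kubeconfig
```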
2.3. Usage
2.3.1. Adding worker nodes
[root@anyu967master1 ~]# kubeadm token create --print-join-command
# printed output:
kubeadm join 192.168.56.129:6443 --token q3fxy2.i6hlhsdtb2zjn9gg --discovery-token-ca-cert-hash sha256:e18cf48d72431a90b77133cb90a36d1be87ee664c4e08adb9acb4bdda92ba2f6
# run it on each worker node, appending --cri-socket for cri-dockerd:
kubeadm join 192.168.56.129:6443 --token q3fxy2.i6hlhsdtb2zjn9gg --discovery-token-ca-cert-hash sha256:e18cf48d72431a90b77133cb90a36d1be87ee664c4e08adb9acb4bdda92ba2f6 --cri-socket /var/run/cri-dockerd.sock
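If the join command is lost, the `--discovery-token-ca-cert-hash` value can be recomputed from the cluster CA with the pipeline from the kubeadm documentation. On a real master the input is `/etc/kubernetes/pki/ca.crt`; here a throwaway self-signed cert stands in so the sketch runs anywhere:

```shell
#!/bin/sh
# Generate a stand-in CA cert (on a real master, use /etc/kubernetes/pki/ca.crt).
ca_crt=$(mktemp)
openssl req -x509 -newkey rsa:2048 -nodes -keyout /dev/null \
    -out "$ca_crt" -days 1 -subj "/CN=kubernetes" 2>/dev/null
# SHA-256 of the DER-encoded public key, as kubeadm expects it.
hash=$(openssl x509 -pubkey -in "$ca_crt" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$hash"
```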
[root@anyu967master1 ~]# kubectl label node anyu967node1 node-role.kubernetes.io/worker=worker
[root@anyu967master1 Package]# kubectl get nodes
NAME             STATUS     ROLES                  AGE     VERSION
anyu967master1   NotReady   control-plane,master   47h     v1.25.4   # kubectl label node anyu967master1 node-role.kubernetes.io/master=master
anyu967node1     NotReady   worker                 6m45s   v1.25.4   # kubectl label node anyu967node1 node-role.kubernetes.io/worker=worker
anyu967node2     NotReady   worker                 4m15s   v1.25.4
[root@anyu967master1 Package]# kubectl get pods -n kube-system -o wide
NAME                                     READY   STATUS    RESTARTS      AGE     IP               NODE             NOMINATED NODE   READINESS GATES
coredns-c676cc86f-4t4tq                  0/1     Pending   0             47h     <none>           <none>           <none>           <none>
coredns-c676cc86f-qrbcz                  0/1     Pending   0             47h     <none>           <none>           <none>           <none>
etcd-anyu967master1                      1/1     Running   1 (47h ago)   47h     192.168.56.129   anyu967master1   <none>           <none>
kube-apiserver-anyu967master1            1/1     Running   1 (47h ago)   47h     192.168.56.129   anyu967master1   <none>           <none>
kube-controller-manager-anyu967master1   1/1     Running   1 (47h ago)   47h     192.168.56.129   anyu967master1   <none>           <none>
kube-proxy-pntbq                         1/1     Running   0             4m17s   192.168.56.130   anyu967node1     <none>           <none>
kube-proxy-v7bxp                         1/1     Running   0             107s    192.168.56.131   anyu967node2     <none>           <none>
kube-proxy-z9562                         1/1     Running   1 (47h ago)   47h     192.168.56.129   anyu967master1   <none>           <none>
kube-scheduler-anyu967master1            1/1     Running   1 (47h ago)   47h     192.168.56.129   anyu967master1   <none>           <none>
2.3.2. Installing a network plugin (Calico, flannel)
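The nodes stay NotReady until a CNI plugin is installed. One common option is Calico; a minimal sketch, assuming a Calico release compatible with k8s 1.25 (the exact manifest URL and version below are an assumption, check the Calico release notes):

```shell
# Apply the Calico manifest; nodes go Ready once calico-node pods are Running.
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.24.5/manifests/calico.yaml
kubectl get pods -n kube-system -w   # watch for calico-node to become Running
```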
2.3.3. Testing
- Test external network access
[root@anyu967master1 Package]# kubectl run busybox --image busybox:1.28 --restart=Never --rm -it -- sh
If you don't see a command prompt, try pressing enter.
/ # ping baidu.com
PING baidu.com (110.242.68.66): 56 data bytes
64 bytes from 110.242.68.66: seq=0 ttl=127 time=49.951 ms
64 bytes from 110.242.68.66: seq=1 ttl=127 time=51.254 ms
- Test access to an in-cluster service
[root@anyu967master1 Package]# kubectl apply -f tomcat.yaml
pod/demo-pod created
[root@anyu967master1 Package]# kubectl get pods
NAME       READY   STATUS    RESTARTS   AGE
demo-pod   1/1     Running   0          10s
[root@anyu967master1 Package]# kubectl get pods -o wide
NAME       READY   STATUS    RESTARTS   AGE   IP            NODE           NOMINATED NODE   READINESS GATES
demo-pod   1/1     Running   0          42s   10.244.87.4   anyu967node1   <none>           <none>
[root@anyu967master1 Package]# kubectl apply -f tomcat-service.yaml
service/tomcat created
[root@anyu967master1 Package]# kubectl get svc   # NodePort reachable at http://192.168.56.130:30080/
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP          2d23h
tomcat       NodePort    10.101.245.87   <none>        8080:30080/TCP   11s
- Test CoreDNS
[root@anyu967master1 Package]# kubectl run busybox --image busybox:1.28 --restart=Never --rm -it -- sh
If you don't see a command prompt, try pressing enter.
/ # nslookup kubernetes.default.svc.cluster.local
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name:      kubernetes.default.svc.cluster.local
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
2.4. Kubernetes dashboard web UI
- Reference: dashboard/creating-sample-user.md at master · kubernetes/dashboard
- Token authentication
[root@anyu967master1 k8s]# kubectl apply -f kubernetes-dashboard.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
[root@anyu967master1 k8s]# kubectl get pod -n kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-64bcc67c9c-flmkd   1/1     Running   0          100s
kubernetes-dashboard-5c8bd6b59-qvjpp         1/1     Running   0          100s
[root@anyu967master1 k8s]# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
dashboard-metrics-scraper   ClusterIP   10.107.231.156   <none>        8000/TCP   8m23s
kubernetes-dashboard        ClusterIP   10.104.53.184    <none>        443/TCP    8m24s
# expose the dashboard on a NodePort:
[root@anyu967master1 k8s]# kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
# change: type: ClusterIP ==> type: NodePort
[root@anyu967master1 k8s]# kubectl get svc -n kubernetes-dashboard   # now reachable at https://192.168.56.129:31908/#/login
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.107.231.156   <none>        8000/TCP        11m
kubernetes-dashboard        NodePort    10.104.53.184    <none>        443:31908/TCP   12m
[root@anyu967master1 k8s]# kubectl get sa -n kubernetes-dashboard
NAME                   SECRETS   AGE
default                0         17m
kubernetes-dashboard   0         17m
[root@anyu967master1 k8s]# kubectl create clusterrolebinding dashboard-cluster-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:kubernetes-dashboard
clusterrolebinding.rbac.authorization.k8s.io/dashboard-cluster-admin created
[root@anyu967master1 k8s]# kubectl get secret -n kubernetes-dashboard
NAME                              TYPE     DATA   AGE
kubernetes-dashboard-certs        Opaque   0      26m
kubernetes-dashboard-csrf         Opaque   1      26m
kubernetes-dashboard-key-holder   Opaque   2      26m
# no service-account token Secret exists (k8s >= 1.24 no longer auto-creates them),
# so create one by hand:
[root@anyu967master1 k8s]# cat >> kubernetes-dashboard-secret.yaml <<EOF
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
  name: kubernetes-dashboard-token-a0d6b
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/service-account.name: "kubernetes-dashboard"
EOF
# apply the Secret, then read the generated token:
[root@anyu967master1 k8s]# kubectl describe secret kubernetes-dashboard-token-a0d6b -n kubernetes-dashboard
Name:         kubernetes-dashboard-token-a0d6b
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: kubernetes-dashboard
              kubernetes.io/service-account.uid: 3247ffe8-831f-4b03-ad82-ee349dae2c92
Type:  kubernetes.io/service-account-token
Data
====
ca.crt:     1099 bytes
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6Ii1FcUlia3JQYkXXXXZmU1hZMzYzbm5rdnQxcDRlaDhwWmlwN25pX3dUWWsifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9XXXXXXXXXXXXXXXm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hXXXXiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi1hMGQ2YiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjMyNDdmZmU4LTgzMWYtNGIwMy1hZDgyLWVlMzQ5ZGFlMmM5MiXXXXN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.ohwIxj65yPiKRVJt_GTI_P0sVTAGcEXbZPR5-CeKnV18kvh7WqXUXgGmD-3KjvLSZuAOQVmNfM82gquI1dfL7UUgPRp_vhFyzPOiK0TKoBhNMAcnMdV_xprtvoRPPt-XFl-m2Yz22T3HOFs1W9gjGqOUUFijzeo1fjWDZ5SGT_pJ3x0eO1gCJGtNh_Ud51ROqUylGujmUsE1ZzvDQIpqhFG5172y9YQQX77S0ipR0jiTDOtIMOjcPU1lmfHG6w8zBBHcNo6_rNbMbPJLZyCS4SJdXDRQ-3H1jm8xxSkd-WH6N_TfL9kVMmOwtm5MK2XJb2IcxPHv15PUlTVo_HT9SQ
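Since v1.24, service-account token Secrets are no longer created automatically (which is why `kubectl get sa` shows `SECRETS 0`). Besides hand-writing a Secret as above, a short-lived login token can be requested directly; `kubectl create token` is a standard subcommand on 1.24+:

```shell
# Request a bound, time-limited token for the dashboard service account.
kubectl -n kubernetes-dashboard create token kubernetes-dashboard
```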
- kubeConfig authentication
[root@anyu967master1 k8s]# cd /etc/kubernetes/pki/
[root@anyu967master1 pki]# kubectl config set-cluster kubernetes --certificate-authority=./ca.crt --server="https://192.168.56.129:6443" --embed-certs=true --kubeconfig=/root/dashboard-admin.conf
# /root/dashboard-admin.conf now contains:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeU1URXhNekUzTkRFek1Gb1hEVE15TVRXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTmZNCmRQVWxsV05kd1oybEhZMGtWWTNGZDJnRElxa2QvMTRxZ2sxWThQMzZWU3UrU3grSGRkK0xpWTNIS1ZwRGRkcXUKVEk5VERkTzQva2RSTDBRam5rYU5IK3NvSE5WckI3S1ZrYnVLY0cwK0tDNVF4ZVROVlZGOWXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXOamhvdGJuMVpzTy82QklwaWthUzNFOE1Hdmh1K1NjSURsQk1RMzFJc1k1RWIzZ1dZS0paCmFlNHZEWTJVaUlsZmpOSFBxZzk4MDkwano4WWJHSnRudkQ1eFhXSkNkSHFzTmNJSUVHejM1SndkVW9mWGViQTQKR2dzaGlNSXAyNE96V3RKQ2hOYko2S2dFL3FtQ2NNYmhuRS80czJUTzR5Q2pCOTkveGNZbEcwYWNVbGNZR1JXZAppVEpMMDc5TThDaHQvZ290bnVFQ0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi9XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXJleXVrUmc3aGNPcWd4bUFSSlVNQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBS0xKVENKdzVUb2hIM201Q2YxNgpUNmIveGdINWdsaEVTL1RoOWJBUWhqZlVuZVF3Vmg2aWo5RXQxcmRQbVhrNGQxRWNuSGx2WXNFajZrNG10akdMCkt3ekV2aktZVTFUQW5FRjJ2K09iRUVHMFBaVnRYUWh0VTBVTXpxbW1XWkxqd0QveUQ4SEpBQVpTbkVHSkt6SGEKNUZRaXNkQXhGL2lEb3gyQURXKytxM0psZ0h1UXhNWnBiUFBqMUE4M0NQallNSDNXTW0vNmJxOXQ4VllEazF2cgpMWHhNT3I2aEMvbmd2Q1J3ajNhNnR5MlpIbkdTc2VXakxDT1BHOUx0cWN4cDF0eXdkVkRSRFdwN05OaGtGYlA1CjdnemxIbHp2aHRGeVNacE85SWJEOExiZ0FQMkllUVVrQU1oRm5mQytQT1p0eGUxeWZsZzVRNVZrb0lrWWJEcXYKS1BNPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://192.168.56.129:6443
  name: kubernetes
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
[root@anyu967master1 k8s]# DEF_NS_ADMIN_TOKEN=$(kubectl get secret kubernetes-dashboard-token-a0d6b -n kubernetes-dashboard -o jsonpath={.data.token} | base64 -d)
[root@anyu967master1 k8s]# echo $DEF_NS_ADMIN_TOKEN
[root@anyu967master1 k8s]# kubectl describe secret kubernetes-dashboard-token-a0d6b -n kubernetes-dashboard
[root@anyu967master1 pki]# kubectl config set-credentials dashboard-admin --token=$DEF_NS_ADMIN_TOKEN --kubeconfig=/root/dashboard-admin.conf
User "dashboard-admin" set.
[root@anyu967master1 pki]# cat /root/dashboard-admin.conf
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeU1URXhNekUzTkRFek1Gb1hEVE15TVRFeE1ERTNOREV6TUZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTmZNCmRQVWxsV05kd1oybEhZMGtWWTNGZDJnRElxa2QvMTRxZ2sxWThQMzZWU3UrU3grSGRkK0xpWTNIS1ZwRGRkcXUKVEk5VERkTzQva2RSTDBRam5rYU5IK3NvSE5WckI3S1ZrYnVLY0cwK0tDNVF4ZVROVlZGOWRSSEtqRExCL3h3NApaZ0ZkU2VMQTVTVzdOamhvdGJuMVpzTy82QklwaWthUzNFOE1Hdmh1K1NjSURsQk1RMzFJc1k1RWIzZ1dZS0paCmFlNHZEWTJVaUlsZmpOSFBxZzk4MDkwano4WWJHSnRudkQ1eFhXSkNkSHFzTmNJSUVHejM1SndkVW9mWGViQTQKR2dzaGlNSXAyNE96V3RKQ2hOYko2S2dFL3FtQ2NNYmhuRS80czJUTzR5Q2pCOTkveGNZbEcwYWNVbGNZR1JXZAppVEpMMDc5TThDaHQvZ290bnVFQ0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZJbCtYWThWdVJleXVrUmc3aGNPcWd4bUFSSlVNQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBS0xKVENKdzVUb2hIM201Q2YxNgpUNmIveGdINWdsaEVTL1RoOWJBUWhqZlVuZVF3Vmg2aWo5RXQxcmRQbVhrNGQxRWNuSGx2WXNFajZrNG10akdMCkt3ekV2aktZVTFUQW5FRjJ2K09iRUVHMFBaVnRYUWh0VTBVTXpxbW1XWkxqd0QveUQ4SEpBQVpTbkVHSkt6SGEKNUZRaXNkQXhGL2lEb3gyQURXKytxM0psZ0h1UXhNWnBiUFBqMUE4M0NQallNSDNXTW0vNmJxOXQ4VllEazF2cgpMWHhNT3I2aEMvbmd2Q1J3ajNhNnR5MlpIbkdTc2VXakxDT1BHOUx0cWN4cDF0eXdkVkRSRFdwN05OaGtGYlA1CjdnemxIbHp2aHRGeVNacE85SWJEOExiZ0FQMkllUVVrQU1oRm5mQytQT1p0eGUxeWZsZzVRNVZrb0lrWWJEcXYKS1BNPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://192.168.56.129:6443
  name: kubernetes
contexts: null
current-context: ""
kind: Config
preferences: {}
users:
- name: dashboard-admin
  user:
    token: eyJhbGciOiJSUzI1NiIsImtpZCI6Ii1FcUlia3JQYkRSM3ZmU1hZMzYzbm5rdnQxcDRlaDhwWmlwN25pX3dUWWsifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi1hMGQ2YiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjMyNDdmZmU4LTgzMWYtNGIwMy1hZDgyLWVlMzQ5ZGFlMmM5MiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.ohwIxj65yPiKRVJt_GTI_P0sVTAGcEXbZPR5-CeKnV18kvh7WqXUXgGmD-3KjvLSZuAOQVmNfM82gquI1dfL7UUgPRp_vhFyzPOiK0TKoBhNMAcnMdV_xprtvoRPPt-XFl-m2Yz22T3HOFs1W9gjGqOUUFijzeo1fjWDZ5SGT_pJ3x0eO1gCJGtNh_Ud51ROqUylGujmUsE1ZzvDQIpqhFG5172y9YQQX77S0ipR0jiTDOtIMOjcPU1lmfHG6w8zBBHcNo6_rNbMbPJLZyCS4SJdXDRQ-3H1jm8xxSkd-WH6N_TfL9kVMmOwtm5MK2XJb2IcxPHv15PUlTVo_HT9SQ
[root@anyu967master1 pki]# kubectl config set-context dashboard-admin@kubernetes --cluster=kubernetes --user=dashboard-admin --kubeconfig=/root/dashboard-admin.conf
Context "dashboard-admin@kubernetes" created.
[root@anyu967master1 pki]# kubectl config use-context dashboard-admin@kubernetes --kubeconfig=/root/dashboard-admin.conf
Switched to context "dashboard-admin@kubernetes".
2.5. metrics-server
Metrics Server is a scalable, efficient source of container resource metrics for Kubernetes' built-in autoscaling pipelines.
Release v0.3.6 · kubernetes-sigs/metrics-server
docker pull registry.cn-hangzhou.aliyuncs.com/ljck8s/metrics-server-amd64:v0.3.6
# enable API aggregation routing on the apiserver:
[root@anyu967master1 pki]# vim /etc/kubernetes/manifests/kube-apiserver.yaml
    - --enable-admission-plugins=NodeRestriction
    - --enable-bootstrap-token-auth=true
    - --enable-aggregator-routing=true   # add this line
[root@anyu967master1 pki]# kubectl apply -f /etc/kubernetes/manifests/kube-apiserver.yaml
pod/kube-apiserver created
[root@anyu967master1 k8s]# kubectl apply -f metrics.yaml
# relevant part of metrics.yaml:
spec:
  containers:
  - args:
    - --cert-dir=/tmp
    - --secure-port=4443
    - --kubelet-preferred-address-types=InternalIP
    - --kubelet-use-node-status-port
    - --kubelet-insecure-tls
    - --metric-resolution=15s
    image: k8s.gcr.io/metrics-server/metrics-server:v0.6.1
[root@anyu967master1 k8s]# kubectl top nodes
NAME             CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
anyu967master1   442m         11%    1240Mi          65%
anyu967node1     280m         7%     888Mi           47%
anyu967node2     263m         6%     962Mi           51%
2.6. Changing the bind address of the scheduler and controller-manager
[root@anyu967master1 ~]# cd /etc/kubernetes/manifests/
[root@anyu967master1 manifests]# ls -l
total 16
-rw------- 1 root root 2422 Nov 14 01:41 etcd.yaml
-rw------- 1 root root 3416 Nov 20 21:00 kube-apiserver.yaml
-rw------- 1 root root 2876 Nov 21 00:49 kube-controller-manager.yaml
-rw------- 1 root root 1462 Nov 21 00:48 kube-scheduler.yaml
# in kube-scheduler.yaml and kube-controller-manager.yaml, set:
    - --bind-address=0.0.0.0
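kubeadm renders these manifests with `--bind-address=127.0.0.1`; changing it to `0.0.0.0` makes the scheduler and controller-manager listen on all interfaces. The edit can be scripted (a sketch; back up the manifests first — kubelet restarts the static pods automatically when the files change):

```shell
#!/bin/sh
# Rewrite --bind-address in static-pod manifests; kubelet picks up the
# change and restarts the pods on its own.
fix_bind_address() {
    sed -i 's/--bind-address=127\.0\.0\.1/--bind-address=0.0.0.0/' "$@"
}
# On the master:
#   fix_bind_address /etc/kubernetes/manifests/kube-scheduler.yaml \
#                    /etc/kubernetes/manifests/kube-controller-manager.yaml
```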
[root@anyu967master1 manifests]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true","reason":""}
[root@anyu967master1 manifests]# netstat -antlp |grep :10259
tcp6 0 0 :::10259 :::* LISTEN 130457/kube-schedul
[root@anyu967master1 manifests]# netstat -antlp |grep :10257
tcp6 0 0 :::10257 :::* LISTEN 1800/kube-controlle
This article is from cnblogs, author: anyu967. When republishing, please credit the original link: https://www.cnblogs.com/anyu967/articles/17331843.html