Fully cleaning up the previous initialization after kubeadm reset

kubeadm reset

iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
systemctl stop kubelet
systemctl stop docker
rm -rf /var/lib/cni/*
rm -rf /var/lib/kubelet/*
rm -rf /etc/cni/*
ifconfig cni0 down
ifconfig flannel.1 down
ifconfig docker0 down
ip link delete cni0
ip link delete flannel.1
systemctl start docker
Then run kubeadm init again.
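The steps above can be bundled into one idempotent sketch. This is an assumption-laden convenience wrapper, not part of the original post: it defaults to a dry run that only prints the commands, and it skips network interfaces that are already gone so reruns don't error out.

```shell
#!/bin/sh
# Sketch of the cleanup steps above, made idempotent.
# DRY_RUN is non-empty by default, so this only PRINTS what it would do;
# set DRY_RUN= (empty) and run as root to execute for real.
DRY_RUN=${DRY_RUN-1}
run() { if [ -n "$DRY_RUN" ]; then echo "+ $*"; else "$@"; fi; }

run kubeadm reset -f
run systemctl stop kubelet
run systemctl stop docker
run iptables -F
run iptables -t nat -F
run iptables -t mangle -F
run iptables -X
run rm -rf /var/lib/cni/ /var/lib/kubelet/ /etc/cni/
for link in cni0 flannel.1; do
    # skip interfaces that no longer exist, so a second run doesn't fail
    ip link show "$link" >/dev/null 2>&1 && run ip link delete "$link"
done
run systemctl start docker
```

Running it once with the default dry run is a cheap way to review exactly what will be deleted before committing.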

3. journalctl -u kubelet shows the following error in the kubelet logs

Kubernetes fails to start with this error:

kubelet cgroup driver: "systemd" is different from docker cgroup driver: "cgroupfs"

 

Cause:

docker and kubernetes are configured with different cgroup drivers.

Solution:

Make the two consistent: both should use either systemd or cgroupfs for resource management. The Kubernetes documentation warns that nodes configured to use cgroupfs for the kubelet and docker, while systemd manages the rest of the processes on the node, can become unstable under resource pressure. The recommendation is therefore to switch both docker and kubernetes to systemd.
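Before changing anything, the mismatch can be confirmed on the node. `docker info --format`/`-f` genuinely exposes the driver; the kubelet side is read here from the kubeadm-generated config file. The `report` helper is purely illustrative:

```shell
# Read docker's cgroup driver and the kubelet's, then compare them.
docker_driver=$(docker info -f '{{.CgroupDriver}}' 2>/dev/null || echo unknown)
kubelet_driver=$(awk '/^cgroupDriver:/ {print $2}' /var/lib/kubelet/config.yaml 2>/dev/null)
kubelet_driver=${kubelet_driver:-unknown}

report() {  # usage: report <docker-driver> <kubelet-driver>
    if [ "$1" = "$2" ] && [ "$1" != unknown ]; then
        echo "match: $1"
    else
        echo "mismatch: docker=$1 kubelet=$2"
    fi
}
report "$docker_driver" "$kubelet_driver"
```

A "mismatch: docker=cgroupfs kubelet=systemd" line here corresponds exactly to the log message above.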

From the Kubernetes documentation, "Cgroup drivers":

When systemd is chosen as the init system for a Linux distribution, the init process generates and consumes a root control group (cgroup) and acts as a cgroup manager. Systemd has a tight integration with cgroups and will allocate cgroups per process. It’s possible to configure your container runtime and the kubelet to use cgroupfs. Using cgroupfs alongside systemd means that there will then be two different cgroup managers.

Control groups are used to constrain resources that are allocated to processes. A single cgroup manager will simplify the view of what resources are being allocated and will by default have a more consistent view of the available and in-use resources. When we have two managers we end up with two views of those resources. We have seen cases in the field where nodes that are configured to use cgroupfs for the kubelet and Docker, and systemd for the rest of the processes running on the node becomes unstable under resource pressure.

How to change docker:

Edit or create /etc/docker/daemon.json and add the following:

cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
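A malformed daemon.json prevents the docker daemon from starting at all, so it is worth validating the file before restarting. A minimal sketch, assuming python3 (and its stdlib json module) is installed, as on most distributions; the helper name is mine:

```shell
# Validate a daemon.json before restarting docker; prints "valid" or "invalid".
check_daemon_json() {  # usage: check_daemon_json <path>
    if python3 -m json.tool "$1" >/dev/null 2>&1; then
        echo "valid"
    else
        echo "invalid"
    fi
}
check_daemon_json /etc/docker/daemon.json
```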

Restart docker:

systemctl restart docker

How to change kubernetes:

For docker, adding "exec-opts": ["native.cgroupdriver=systemd"] to /etc/docker/daemon.json (as shown above) is all that is needed.

Modify the kubelet:

Set cgroupDriver in /var/lib/kubelet/config.yaml. (Note: on a kubeadm-managed node this file already contains the full kubelet configuration, so it is safer to edit the existing cgroupDriver: field in place than to overwrite the whole file as below.)

cat > /var/lib/kubelet/config.yaml <<EOF
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF
Then edit /var/lib/kubelet/kubeadm-flags.env and add --cgroup-driver=systemd to KUBELET_KUBEADM_ARGS:

vim /var/lib/kubelet/kubeadm-flags.env

KUBELET_KUBEADM_ARGS="--cgroup-driver=systemd --hostname-override=10.249.176.86 --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.1"

Then reload systemd and restart the kubelet:

systemctl daemon-reload

systemctl restart kubelet
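After the restart, a quick grep confirms both files now request systemd. This check function is my own illustrative helper; the two file paths are the ones used above:

```shell
# Check either notation: "cgroupDriver: systemd" (config.yaml) or
# "--cgroup-driver=systemd" (kubeadm-flags.env).
check_driver() {  # usage: check_driver <file>
    if grep -Eq 'cgroupDriver: *systemd|cgroup-driver=systemd' "$1" 2>/dev/null; then
        echo "$1: systemd"
    else
        echo "$1: not systemd"
    fi
}
check_driver /var/lib/kubelet/config.yaml
check_driver /var/lib/kubelet/kubeadm-flags.env
```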
4. Making kubectl default to a custom namespace (yujia-k8s) instead of default
Example: kubectl get pod -n yujia-k8s <<--- becomes equivalent to --->> kubectl get pod

vim /root/.kube/config


Under contexts:, add the namespace you want:

contexts:
#- context:
#    cluster: kubernetes
#    user: kubernetes-admin
#  name: kubernetes-admin@kubernetes
#current-context: kubernetes-admin@kubernetes
- context:
    cluster: kubernetes
    namespace: yujia-k8s
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
With this change, kubectl operates on the yujia-k8s namespace by default instead of default, so -n yujia-k8s no longer has to be typed on every command.
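The same result can be had without hand-editing the file: `kubectl config set-context --current --namespace=...` is the real kubectl command for this. The wrapper function and its fallback message below are my own, added only so the snippet degrades gracefully where kubectl is not installed:

```shell
# Set the namespace of the current kubeconfig context via kubectl itself.
set_ctx_ns() {  # usage: set_ctx_ns <namespace> [kubectl-binary]
    ns=$1; kc=${2:-kubectl}
    if command -v "$kc" >/dev/null 2>&1; then
        "$kc" config set-context --current --namespace="$ns" \
            || echo "failed; is a current-context set in ~/.kube/config?"
    else
        echo "no $kc; would run: $kc config set-context --current --namespace=$ns"
    fi
}
set_ctx_ns yujia-k8s
```

Afterwards, kubectl config view --minify | grep namespace shows the namespace now attached to the current context.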

5. A node cannot join the cluster

'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused

The kubelet is missing this file:

open /var/lib/kubelet/pki/kubelet.crt: no such file or directory

Solution: copy this certificate (and the matching kubelet.key) from another node.
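A hedged diagnostic for this failure mode: probe the healthz port named in the error, and check that the expected PKI files exist. The helper function is illustrative; the paths are the standard kubeadm ones, and the kubelet.key check is my assumption that the key must accompany the cert:

```shell
# Probe kubelet's healthz endpoint and check the expected PKI files.
check_kubelet_pki() {  # usage: check_kubelet_pki <pki-dir>
    for f in kubelet.crt kubelet.key; do
        if [ -e "$1/$f" ]; then echo "$f: present"; else echo "$f: MISSING"; fi
    done
}

# curl may legitimately fail here; that is exactly the symptom being checked
curl -sS --max-time 2 http://localhost:10248/healthz 2>/dev/null \
    || echo "healthz unreachable"
check_kubelet_pki /var/lib/kubelet/pki
```

If healthz is unreachable and one of the files is MISSING, copying the pair from a healthy node and restarting the kubelet matches the fix above.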
Copyright notice: this is an original article by CSDN blogger 翟雨佳blogs, under the CC 4.0 BY-SA license; include the original link and this notice when reposting.
Original link: https://blog.csdn.net/yujia_666/article/details/107719919

posted @ 2021-08-02 10:58  ianCloud