Installing a k8s environment with wise2c-devops/breeze

k8s installation errors

Installation guide:

https://github.com/wise2c-devops/breeze/blob/master/README-CN.md

Following that guide solves most of the problems; below are the errors I ran into during the installation.

TASK [kubeadm init] ************************************************************
fatal: [192.168.9.13]: FAILED! => {"changed": true, "cmd": "kubeadm init --config /var/tmp/wise2c/kubernetes/kubeadm.conf", "delta": "0:04:07.759992", "end": "2020-01-19 02:33:51.440675", "msg": "non-zero return code", "rc": 1, "start": "2020-01-19 02:29:43.680683", "stderr": "W0119 02:29:43.734489   11401 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:\"kubeproxy.config.k8s.io\", Version:\"v1alpha1\", Kind:\"KubeProxyConfiguration\"}: error unmarshaling JSON: while decoding JSON: json: unknown field \"resourceContainer\"\n\t[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly\nerror execution phase wait-control-plane: couldn't initialize a Kubernetes cluster\nTo see the stack trace of this error execute with --v=5 or higher", "stderr_lines": ["W0119 02:29:43.734489   11401 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:\"kubeproxy.config.k8s.io\", Version:\"v1alpha1\", Kind:\"KubeProxyConfiguration\"}: error unmarshaling JSON: while decoding JSON: json: unknown field \"resourceContainer\"", "\t[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly", "error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster", "To see the stack trace of this error execute with --v=5 or higher"], "stdout": "[init] Using Kubernetes version: v1.16.4\n[preflight] Running pre-flight checks\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Activating the kubelet service\n[certs] Using certificateDir folder \"/etc/kubernetes/pki\"\n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Using existing apiserver-kubelet-client certificate and key on disk\n[certs] Using existing front-proxy-ca certificate authority\n[certs] Using existing front-proxy-client certificate and key on disk\n[certs] External etcd mode: Skipping etcd/ca certificate authority generation\n[certs] External etcd mode: Skipping etcd/server certificate generation\n[certs] External etcd mode: Skipping etcd/peer certificate generation\n[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation\n[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation\n[certs] Using the existing \"sa\" key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig 
file\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\". This can take up to 4m0s\n[kubelet-check] Initial timeout of 40s passed.\n\nUnfortunately, an error has occurred:\n\ttimed out waiting for the condition\n\nThis error is likely caused by:\n\t- The kubelet is not running\n\t- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)\n\nIf you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:\n\t- 'systemctl status kubelet'\n\t- 'journalctl -xeu kubelet'\n\nAdditionally, a control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.\nHere is one example how you may list all Kubernetes containers running in docker:\n\t- 'docker ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'docker logs CONTAINERID'", "stdout_lines": ["[init] Using Kubernetes version: v1.16.4", "[preflight] Running pre-flight checks", "[preflight] Pulling images required for setting up a Kubernetes cluster", "[preflight] This might take a minute or two, depending on the speed of your internet connection", "[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'", "[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"", "[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"", "[kubelet-start] Activating the kubelet service", "[certs] Using certificateDir folder \"/etc/kubernetes/pki\"", "[certs] Using existing ca certificate authority", "[certs] Using existing apiserver certificate and key on disk", "[certs] Using existing apiserver-kubelet-client certificate and key on disk", "[certs] Using existing front-proxy-ca certificate authority", "[certs] Using existing front-proxy-client certificate and key on disk", "[certs] External etcd mode: Skipping etcd/ca certificate authority generation", "[certs] External etcd mode: Skipping etcd/server certificate generation", "[certs] External etcd mode: Skipping etcd/peer certificate generation", "[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation", "[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation", "[certs] Using the existing \"sa\" key", "[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"", "[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address", "[kubeconfig] Writing \"admin.conf\" kubeconfig file", "[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address", "[kubeconfig] Writing \"kubelet.conf\" kubeconfig file", "[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address", "[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file", "[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address", "[kubeconfig] Writing \"scheduler.conf\" 
kubeconfig file", "[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"", "[control-plane] Creating static Pod manifest for \"kube-apiserver\"", "[control-plane] Creating static Pod manifest for \"kube-controller-manager\"", "[control-plane] Creating static Pod manifest for \"kube-scheduler\"", "[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\". This can take up to 4m0s", "[kubelet-check] Initial timeout of 40s passed.", "", "Unfortunately, an error has occurred:", "\ttimed out waiting for the condition", "", "This error is likely caused by:", "\t- The kubelet is not running", "\t- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)", "", "If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:", "\t- 'systemctl status kubelet'", "\t- 'journalctl -xeu kubelet'", "", "Additionally, a control plane component may have crashed or exited when started by the container runtime.", "To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.", "Here is one example how you may list all Kubernetes containers running in docker:", "\t- 'docker ps -a | grep kube | grep -v pause'", "\tOnce you have found the failing container, you can inspect its logs with:", "\t- 'docker logs CONTAINERID'"]}
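
The log actually shows two problems: a strict-decoding warning because the resourceContainer field is no longer recognized in KubeProxyConfiguration (it is still present in /var/tmp/wise2c/kubernetes/kubeadm.conf), and the firewalld warning about ports 6443 and 10250, after which the control plane never came up inside the 4-minute wait. kubeadm's own suggestions are the right starting point; the reset at the end is my own addition for retrying the play cleanly, not something breeze runs for you:

# Diagnose why the static control-plane pods never became healthy
systemctl status kubelet
journalctl -xeu kubelet
docker ps -a | grep kube | grep -v pause

# Once the root cause is fixed, wipe the half-initialized state so
# kubeadm init can run again from scratch on the next play run
kubeadm reset -f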


Fix: set up the firewall

Add a kubernetes service definition for firewalld:

firewall-cmd --new-service=kubernetes --permanent
[root@master03 ~]# cat /etc/firewalld/services/kubernetes.xml
<?xml version="1.0" encoding="utf-8"?>
<service>
    <short>kubernetes</short>
    <description>Kubernetes cluster </description>
    <port protocol="tcp" port="6443"/>
    <port protocol="tcp" port="10250"/>
</service>
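
Creating the service definition alone is not enough; the service also has to be enabled in the active zone. A minimal sketch, assuming the default zone is in use:

# Enable the kubernetes service in the default zone permanently
firewall-cmd --permanent --add-service=kubernetes
# Make the permanent configuration take effect
firewall-cmd --reload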

Restart the service:

systemctl restart firewalld
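
To confirm the ports really are open afterwards (my own verification step, not part of the original notes):

# Show the ports the kubernetes service carries and the services enabled in the zone
firewall-cmd --info-service=kubernetes
firewall-cmd --list-services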

Checking the error

I logged in to 192.168.9.11 and took a look:

[root@master01 ~]# docker ps -a
CONTAINER ID        IMAGE                                       COMMAND                  CREATED             STATUS                          PORTS                    NAMES
6cd09e5e999e        3722a80984a0                                "kube-apiserver --ad…"   22 minutes ago      Up 22 minutes                                            k8s_kube-apiserver_kube-apiserver-master01_kube-system_810793cf458f4fc2bd08479eb551d196_0
48867aae9b92        fb4cca6b4e4c                                "kube-controller-man…"   22 minutes ago      Up 22 minutes                                            k8s_kube-controller-manager_kube-controller-manager-master01_kube-system_31c189d2fdb9c19a6c47f5dda0f8d8ae_0
ab3d39621bc8        2984964036c8                                "kube-scheduler --au…"   22 minutes ago      Up 22 minutes                                            k8s_kube-scheduler_kube-scheduler-master01_kube-system_50d1f95c6ad0c76b3d0bd3dc028effea_0
4451b4e4a763        192.168.9.14/library/pause:3.1              "/pause"                 22 minutes ago      Up 22 minutes                                            k8s_POD_kube-apiserver-master01_kube-system_810793cf458f4fc2bd08479eb551d196_0
b3a5aec9705f        192.168.9.14/library/pause:3.1              "/pause"                 22 minutes ago      Up 22 minutes                                            k8s_POD_kube-scheduler-master01_kube-system_50d1f95c6ad0c76b3d0bd3dc028effea_0
779b40774fa2        192.168.9.14/library/pause:3.1              "/pause"                 22 minutes ago      Up 22 minutes                                            k8s_POD_kube-controller-manager-master01_kube-system_31c189d2fdb9c19a6c47f5dda0f8d8ae_0
5b84c37d8c7f        192.168.9.14/library/etcd-amd64:3.3.15-0    "etcd --name etcd0 -…"   26 minutes ago      Up 26 minutes                                            etcd
79dee0d8117c        192.168.9.14/library/k8s-keepalived:1.3.5   "/bin/bash /usr/bin/…"   26 minutes ago      Restarting (1) 48 seconds ago                            keepalived
bdcf54c305bd        192.168.9.14/library/k8s-haproxy:2.0.0      "/docker-entrypoint.…"   27 minutes ago      Up 26 minutes                   0.0.0.0:6444->6444/tcp   haproxy
[root@master01 ~]# docker logs keepalived
Cant find interface em1 for vrrp_instance lb-vips-k8s !!!
VRRP is trying to assign ip address 192.168.9.30/24 to unknown em1 interface !!! go out and fix your conf !!!
VRRP_Instance(lb-vips-k8s) Unknown interface !
Stopped
Keepalived_vrrp exited with permanent error CONFIG. Terminating
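
So keepalived is restart-looping because its VRRP instance is bound to the NIC em1, which does not exist on this host. The fix is to find the machine's real interface name and use that in breeze's HA/VIP settings before redeploying; a quick way to check:

# List the host's interfaces; there is no em1 on this box
ip -o link show
# The interface holding the 192.168.9.x address is the one the
# lb-vips-k8s vrrp_instance must reference instead of em1
ip -4 addr show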


[root@master01 ~]# mkdir services
[root@master01 ~]# cd services/
[root@master01 services]# ls
[root@master01 services]# vi nginx-pod.yaml
[root@master01 services]# kubectl create -f nginx-pod.yaml
pod/nginx created


The nginx pod manifest:

[root@master01 services]# cat nginx-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
    - name: nginx
      image: nginx:1.17.7
      ports:
      - containerPort: 80


[root@master01 services]# kubectl get pods -o wide
NAME                     READY   STATUS              RESTARTS   AGE     IP           NODE       NOMINATED NODE   READINESS GATES
kuard-5cd647675b-65cwg   1/1     Running             0          145m    10.244.4.2   worker01   <none>           <none>
kuard-5cd647675b-65r9c   1/1     Running             0          145m    10.244.4.5   worker01   <none>           <none>
kuard-5cd647675b-f9r9f   1/1     Running             0          145m    10.244.4.4   worker01   <none>           <none>
nginx                    0/1     ContainerCreating   0          7m39s   <none>       worker02   <none>           <none>
[root@master01 services]# kubectl describe pod nginx
Name:         nginx
Namespace:    default
Priority:     0
Node:         worker02/192.168.9.16
Start Time:   Mon, 20 Jan 2020 08:43:43 +0000
[root@master01 services]# kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE    IP            NODE       NOMINATED NODE   READINESS GATES
kuard-5cd647675b-65cwg   1/1     Running   0          170m   10.244.4.2    worker01   <none>           <none>
kuard-5cd647675b-65r9c   1/1     Running   0          170m   10.244.4.5    worker01   <none>           <none>
kuard-5cd647675b-f9r9f   1/1     Running   0          170m   10.244.4.4    worker01   <none>           <none>
nginx                    1/1     Running   0          32m    10.244.3.10   worker02   <none>           <none>

How do we access it?


curl http://localhost:8001/api/v1/namespaces/default/pods/
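
That URL only answers if the API server is proxied to localhost, which kubectl proxy does on port 8001 by default (the proxy step is implied but not shown in the original notes). A sketch, including the pod proxy subresource for reaching nginx itself:

# Expose the API server on localhost:8001 (kubectl's default)
kubectl proxy &
# List the pods in the default namespace through the proxy
curl http://localhost:8001/api/v1/namespaces/default/pods/
# Reach port 80 of the nginx pod through the API server's pod proxy
curl http://localhost:8001/api/v1/namespaces/default/pods/nginx:80/proxy/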


The steps above follow the official documentation: https://kubernetes.io/zh/docs/concepts/

