Installing KubeSphere 3.3.2 with KubeKey on CentOS 7.9

It is recommended to prepare a clean CentOS 7.9 machine with an 8-core CPU and 16 GB of RAM, and to upgrade the kernel to 5.4+.
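If the kernel still needs to be upgraded, one common approach is the ELRepo long-term-support kernel (kernel-lt, a 5.4.x series on EL7); a minimal sketch, assuming the machine has Internet access and the standard ELRepo repository URLs:

#Import the ELRepo key and repository, then install the LTS kernel (5.4.x on EL7)
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
yum install -y https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm
yum --enablerepo=elrepo-kernel install -y kernel-lt
#Boot the newly installed kernel by default (entry 0 is the newest), then reboot
grub2-set-default 0
grub2-mkconfig -o /boot/grub2/grub.cfg
reboot
#After the reboot, confirm the running kernel version
uname -r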

Step 1: Disable the firewall:

systemctl disable firewalld --now
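Optionally, confirm that firewalld is really disabled and stopped:

#These should report "disabled" and "inactive" respectively
systemctl is-enabled firewalld
systemctl is-active firewalld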

Step 2: Configure working DNS:

#First find the NIC name with the following commands; in my case it is ens32
yum install -y net-tools.x86_64
ifconfig

#Then edit the configuration of the ens32 NIC and add the DNS1/DNS2 entries shown below
vi /etc/sysconfig/network-scripts/ifcfg-ens32

TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="none"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens32"
UUID="686513ad-5b52-4f96-99a9-77a281cd42c7"
DEVICE="ens32"
ONBOOT="yes"
IPADDR="192.168.17.3"
PREFIX="24"
GATEWAY="192.168.17.2"
DNS1="114.114.114.114"
DNS2="223.5.5.5"
IPV6_PRIVACY="no"

#After editing, restart the network service
service network restart

#Check the result
cat /etc/resolv.conf
#Usable public DNS servers
nameserver 114.114.114.114
nameserver 114.114.114.115
nameserver 223.5.5.5
nameserver 223.6.6.6
nameserver 8.8.8.8
nameserver 180.76.76.76
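Before running KubeKey it may be worth confirming that name resolution actually works; a quick check (kubesphere.io is just an example domain):

#If the domain resolves to an IP in the ping output, DNS is working, even if ICMP replies happen to be blocked
ping -c 2 kubesphere.io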

Next, install KubeKey:

export KKZONE=cn
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -    
chmod +x kk
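A quick sanity check that the kk binary was downloaded and is executable:

#Should print the KubeKey version information
./kk version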

Then install the required dependencies (on every host that will join the Kubernetes cluster):

yum install -y conntrack-tools
yum install -y socat
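Optionally verify that both dependencies are now available on each host:

#Both commands should print version information
conntrack --version
socat -V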

Finally, install Kubernetes and KubeSphere (a note on multi-node clusters follows the installation log below):

[root@master1 ~]# ./kk create cluster --with-kubernetes v1.22.12 --with-kubesphere v3.3.2


 _   __      _          _   __           
| | / /     | |        | | / /           
| |/ / _   _| |__   ___| |/ /  ___ _   _ 
|    \| | | | '_ \ / _ \    \ / _ \ | | |
| |\  \ |_| | |_) |  __/ |\  \  __/ |_| |
\_| \_/\__,_|_.__/ \___\_| \_/\___|\__, |
                                    __/ |
                                   |___/

23:12:13 CST [GreetingsModule] Greetings
23:12:14 CST message: [master1]
Greetings, KubeKey!
23:12:14 CST success: [master1]
23:12:14 CST [NodePreCheckModule] A pre-check on nodes
23:12:15 CST success: [master1]
23:12:15 CST [ConfirmModule] Display confirmation form
+---------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
| name    | sudo | curl | openssl | ebtables | socat | ipset | ipvsadm | conntrack | chrony | docker | containerd | nfs client | ceph client | glusterfs client | time         |
+---------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
| master1 | y    | y    | y       | y        | y     | y     |         | y         | y      |        |            |            |             |                  | CST 23:12:15 |
+---------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+

This is a simple check of your environment.
Before installation, ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations

Continue this installation? [yes/no]: yes
23:12:23 CST success: [LocalHost]
23:12:23 CST [NodeBinariesModule] Download installation binaries
23:12:23 CST message: [localhost]
downloading amd64 kubeadm v1.22.12 ...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 43.7M  100 43.7M    0     0   997k      0  0:00:44  0:00:44 --:--:-- 1048k
23:13:09 CST message: [localhost]
downloading amd64 kubelet v1.22.12 ...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  115M  100  115M    0     0  1000k      0  0:01:58  0:01:58 --:--:-- 1154k
23:15:10 CST message: [localhost]
downloading amd64 kubectl v1.22.12 ...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 44.7M  100 44.7M    0     0  1005k      0  0:00:45  0:00:45 --:--:-- 1140k
23:15:56 CST message: [localhost]
downloading amd64 helm v3.9.0 ...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 44.0M  100 44.0M    0     0   992k      0  0:00:45  0:00:45 --:--:-- 1034k
23:16:42 CST message: [localhost]
downloading amd64 kubecni v0.9.1 ...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 37.9M  100 37.9M    0     0   997k      0  0:00:38  0:00:38 --:--:-- 1014k
23:17:21 CST message: [localhost]
downloading amd64 crictl v1.24.0 ...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 13.8M  100 13.8M    0     0   985k      0  0:00:14  0:00:14 --:--:-- 1118k
23:17:36 CST message: [localhost]
downloading amd64 etcd v3.4.13 ...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 16.5M  100 16.5M    0     0   999k      0  0:00:16  0:00:16 --:--:-- 1117k
23:17:53 CST message: [localhost]
downloading amd64 docker 20.10.8 ...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 58.1M  100 58.1M    0     0  3020k      0  0:00:19  0:00:19 --:--:-- 3475k
23:18:13 CST success: [LocalHost]
23:18:13 CST [ConfigureOSModule] Get OS release
23:18:14 CST success: [master1]
23:18:14 CST [ConfigureOSModule] Prepare to init OS
23:18:15 CST success: [master1]
23:18:15 CST [ConfigureOSModule] Generate init os script
23:18:16 CST success: [master1]
23:18:16 CST [ConfigureOSModule] Exec init os script
23:18:16 CST stdout: [master1]
Permissive
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
vm.swappiness = 1
fs.inotify.max_user_instances = 524288
kernel.pid_max = 65535
23:18:16 CST success: [master1]
23:18:16 CST [ConfigureOSModule] configure the ntp server for each node
23:18:16 CST skipped: [master1]
23:18:16 CST [KubernetesStatusModule] Get kubernetes cluster status
23:18:16 CST success: [master1]
23:18:16 CST [InstallContainerModule] Sync docker binaries
23:18:22 CST success: [master1]
23:18:22 CST [InstallContainerModule] Generate docker service
23:18:22 CST success: [master1]
23:18:22 CST [InstallContainerModule] Generate docker config
23:18:23 CST success: [master1]
23:18:23 CST [InstallContainerModule] Enable docker
23:18:24 CST success: [master1]
23:18:24 CST [InstallContainerModule] Add auths to container runtime
23:18:25 CST skipped: [master1]
23:18:25 CST [PullModule] Start to pull images on all nodes
23:18:25 CST message: [master1]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.5
23:18:28 CST message: [master1]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.22.12
23:18:47 CST message: [master1]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.22.12
23:18:59 CST message: [master1]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.22.12
23:19:08 CST message: [master1]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.22.12
23:19:23 CST message: [master1]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.8.0
23:19:32 CST message: [master1]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.15.12
23:19:54 CST message: [master1]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.23.2
23:20:12 CST message: [master1]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.23.2
23:20:43 CST message: [master1]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.23.2
23:21:10 CST message: [master1]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.23.2
23:21:18 CST success: [master1]
23:21:18 CST [ETCDPreCheckModule] Get etcd status
23:21:18 CST success: [master1]
23:21:18 CST [CertsModule] Fetch etcd certs
23:21:18 CST success: [master1]
23:21:18 CST [CertsModule] Generate etcd Certs
[certs] Generating "ca" certificate and key
[certs] admin-master1 serving cert is signed for DNS names [etcd etcd.kube-system etcd.kube-system.svc etcd.kube-system.svc.cluster.local lb.kubesphere.local localhost master1] and IPs [127.0.0.1 ::1 192.168.17.3]
[certs] member-master1 serving cert is signed for DNS names [etcd etcd.kube-system etcd.kube-system.svc etcd.kube-system.svc.cluster.local lb.kubesphere.local localhost master1] and IPs [127.0.0.1 ::1 192.168.17.3]
[certs] node-master1 serving cert is signed for DNS names [etcd etcd.kube-system etcd.kube-system.svc etcd.kube-system.svc.cluster.local lb.kubesphere.local localhost master1] and IPs [127.0.0.1 ::1 192.168.17.3]
23:21:20 CST success: [LocalHost]
23:21:20 CST [CertsModule] Synchronize certs file
23:21:23 CST success: [master1]
23:21:23 CST [CertsModule] Synchronize certs file to master
23:21:23 CST skipped: [master1]
23:21:23 CST [InstallETCDBinaryModule] Install etcd using binary
23:21:25 CST success: [master1]
23:21:25 CST [InstallETCDBinaryModule] Generate etcd service
23:21:25 CST success: [master1]
23:21:25 CST [InstallETCDBinaryModule] Generate access address
23:21:25 CST success: [master1]
23:21:25 CST [ETCDConfigureModule] Health check on exist etcd
23:21:25 CST skipped: [master1]
23:21:25 CST [ETCDConfigureModule] Generate etcd.env config on new etcd
23:21:25 CST success: [master1]
23:21:25 CST [ETCDConfigureModule] Refresh etcd.env config on all etcd
23:21:26 CST success: [master1]
23:21:26 CST [ETCDConfigureModule] Restart etcd
23:21:28 CST stdout: [master1]
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /etc/systemd/system/etcd.service.
23:21:28 CST success: [master1]
23:21:28 CST [ETCDConfigureModule] Health check on all etcd
23:21:28 CST success: [master1]
23:21:28 CST [ETCDConfigureModule] Refresh etcd.env config to exist mode on all etcd
23:21:29 CST success: [master1]
23:21:29 CST [ETCDConfigureModule] Health check on all etcd
23:21:29 CST success: [master1]
23:21:29 CST [ETCDBackupModule] Backup etcd data regularly
23:21:29 CST success: [master1]
23:21:29 CST [ETCDBackupModule] Generate backup ETCD service
23:21:30 CST success: [master1]
23:21:30 CST [ETCDBackupModule] Generate backup ETCD timer
23:21:30 CST success: [master1]
23:21:30 CST [ETCDBackupModule] Enable backup etcd service
23:21:30 CST success: [master1]
23:21:30 CST [InstallKubeBinariesModule] Synchronize kubernetes binaries
23:21:46 CST success: [master1]
23:21:46 CST [InstallKubeBinariesModule] Synchronize kubelet
23:21:46 CST success: [master1]
23:21:46 CST [InstallKubeBinariesModule] Generate kubelet service
23:21:47 CST success: [master1]
23:21:47 CST [InstallKubeBinariesModule] Enable kubelet service
23:21:47 CST success: [master1]
23:21:47 CST [InstallKubeBinariesModule] Generate kubelet env
23:21:47 CST success: [master1]
23:21:47 CST [InitKubernetesModule] Generate kubeadm config
23:21:48 CST success: [master1]
23:21:48 CST [InitKubernetesModule] Init cluster using kubeadm
23:22:05 CST stdout: [master1]
W0418 23:21:48.673784   16384 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.22.12
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local localhost master1 master1.cluster.local] and IPs [10.233.0.1 192.168.17.3 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 11.507465 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.22" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master1 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: s48ww2.qsqalw36t8n86sbn
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join lb.kubesphere.local:6443 --token s48ww2.qsqalw36t8n86sbn \
        --discovery-token-ca-cert-hash sha256:e2a7570cee0175729d1089ad50c84220b0d5021cecd883ca2e60fd607cc0443d \
        --control-plane 

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join lb.kubesphere.local:6443 --token s48ww2.qsqalw36t8n86sbn \
        --discovery-token-ca-cert-hash sha256:e2a7570cee0175729d1089ad50c84220b0d5021cecd883ca2e60fd607cc0443d
23:22:05 CST success: [master1]
23:22:05 CST [InitKubernetesModule] Copy admin.conf to ~/.kube/config
23:22:05 CST success: [master1]
23:22:05 CST [InitKubernetesModule] Remove master taint
23:22:06 CST stdout: [master1]
node/master1 untainted
23:22:06 CST stdout: [master1]
error: taint "node-role.kubernetes.io/control-plane:NoSchedule" not found
23:22:06 CST [WARN] Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubectl taint nodes master1 node-role.kubernetes.io/control-plane=:NoSchedule-" 
error: taint "node-role.kubernetes.io/control-plane:NoSchedule" not found: Process exited with status 1
23:22:06 CST success: [master1]
23:22:06 CST [InitKubernetesModule] Add worker label
23:22:06 CST stdout: [master1]
node/master1 labeled
23:22:06 CST success: [master1]
23:22:06 CST [ClusterDNSModule] Generate coredns service
23:22:07 CST success: [master1]
23:22:07 CST [ClusterDNSModule] Override coredns service
23:22:07 CST stdout: [master1]
service "kube-dns" deleted
23:22:09 CST stdout: [master1]
service/coredns created
Warning: resource clusterroles/system:coredns is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
clusterrole.rbac.authorization.k8s.io/system:coredns configured
23:22:09 CST success: [master1]
23:22:09 CST [ClusterDNSModule] Generate nodelocaldns
23:22:09 CST success: [master1]
23:22:09 CST [ClusterDNSModule] Deploy nodelocaldns
23:22:10 CST stdout: [master1]
serviceaccount/nodelocaldns created
daemonset.apps/nodelocaldns created
23:22:10 CST success: [master1]
23:22:10 CST [ClusterDNSModule] Generate nodelocaldns configmap
23:22:11 CST success: [master1]
23:22:11 CST [ClusterDNSModule] Apply nodelocaldns configmap
23:22:11 CST stdout: [master1]
configmap/nodelocaldns created
23:22:11 CST success: [master1]
23:22:11 CST [KubernetesStatusModule] Get kubernetes cluster status
23:22:11 CST stdout: [master1]
v1.22.12
23:22:11 CST stdout: [master1]
master1   v1.22.12   [map[address:192.168.17.3 type:InternalIP] map[address:master1 type:Hostname]]
23:22:15 CST stdout: [master1]
I0418 23:22:13.799395   18608 version.go:255] remote version is much newer: v1.27.1; falling back to: stable-1.22
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
bfef9ad7ea7c7a5ccb2d662d7409772676bd9b27bae39742c3fb9d96d91d6caf
23:22:15 CST stdout: [master1]
secret/kubeadm-certs patched
23:22:15 CST stdout: [master1]
secret/kubeadm-certs patched
23:22:15 CST stdout: [master1]
secret/kubeadm-certs patched
23:22:15 CST stdout: [master1]
9arl8g.jx2swd14x7k5puxw
23:22:15 CST success: [master1]
23:22:15 CST [JoinNodesModule] Generate kubeadm config
23:22:15 CST skipped: [master1]
23:22:15 CST [JoinNodesModule] Join control-plane node
23:22:15 CST skipped: [master1]
23:22:15 CST [JoinNodesModule] Join worker node
23:22:15 CST skipped: [master1]
23:22:15 CST [JoinNodesModule] Copy admin.conf to ~/.kube/config
23:22:15 CST skipped: [master1]
23:22:15 CST [JoinNodesModule] Remove master taint
23:22:15 CST skipped: [master1]
23:22:15 CST [JoinNodesModule] Add worker label to master
23:22:15 CST skipped: [master1]
23:22:15 CST [JoinNodesModule] Synchronize kube config to worker
23:22:15 CST skipped: [master1]
23:22:15 CST [JoinNodesModule] Add worker label to worker
23:22:15 CST skipped: [master1]
23:22:15 CST [DeployNetworkPluginModule] Generate calico
23:22:16 CST success: [master1]
23:22:16 CST [DeployNetworkPluginModule] Deploy calico
23:22:17 CST stdout: [master1]
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
poddisruptionbudget.policy/calico-kube-controllers created
23:22:17 CST success: [master1]
23:22:17 CST [ConfigureKubernetesModule] Configure kubernetes
23:22:17 CST success: [master1]
23:22:17 CST [ChownModule] Chown user $HOME/.kube dir
23:22:17 CST success: [master1]
23:22:17 CST [AutoRenewCertsModule] Generate k8s certs renew script
23:22:18 CST success: [master1]
23:22:18 CST [AutoRenewCertsModule] Generate k8s certs renew service
23:22:19 CST success: [master1]
23:22:19 CST [AutoRenewCertsModule] Generate k8s certs renew timer
23:22:19 CST success: [master1]
23:22:19 CST [AutoRenewCertsModule] Enable k8s certs renew service
23:22:19 CST success: [master1]
23:22:19 CST [SaveKubeConfigModule] Save kube config as a configmap
23:22:19 CST success: [LocalHost]
23:22:19 CST [AddonsModule] Install addons
23:22:19 CST success: [LocalHost]
23:22:19 CST [DeployStorageClassModule] Generate OpenEBS manifest
23:22:20 CST success: [master1]
23:22:20 CST [DeployStorageClassModule] Deploy OpenEBS as cluster default StorageClass
23:22:22 CST success: [master1]
23:22:22 CST [DeployKubeSphereModule] Generate KubeSphere ks-installer crd manifests
23:22:23 CST success: [master1]
23:22:23 CST [DeployKubeSphereModule] Apply ks-installer
23:22:23 CST stdout: [master1]
namespace/kubesphere-system created
serviceaccount/ks-installer created
customresourcedefinition.apiextensions.k8s.io/clusterconfigurations.installer.kubesphere.io created
clusterrole.rbac.authorization.k8s.io/ks-installer created
clusterrolebinding.rbac.authorization.k8s.io/ks-installer created
deployment.apps/ks-installer created
23:22:23 CST success: [master1]
23:22:23 CST [DeployKubeSphereModule] Add config to ks-installer manifests
23:22:23 CST success: [master1]
23:22:23 CST [DeployKubeSphereModule] Create the kubesphere namespace
23:22:24 CST success: [master1]
23:22:24 CST [DeployKubeSphereModule] Setup ks-installer config
23:22:24 CST stdout: [master1]
secret/kube-etcd-client-certs created
23:22:25 CST success: [master1]
23:22:25 CST [DeployKubeSphereModule] Apply ks-installer
23:22:27 CST stdout: [master1]
namespace/kubesphere-system unchanged
serviceaccount/ks-installer unchanged
customresourcedefinition.apiextensions.k8s.io/clusterconfigurations.installer.kubesphere.io unchanged
clusterrole.rbac.authorization.k8s.io/ks-installer unchanged
clusterrolebinding.rbac.authorization.k8s.io/ks-installer unchanged
deployment.apps/ks-installer unchanged
clusterconfiguration.installer.kubesphere.io/ks-installer created
23:22:27 CST success: [master1]
#####################################################
###              Welcome to KubeSphere!           ###
#####################################################

Console: http://192.168.17.3:30880
Account: admin
Password: P@88w0rd
NOTES:
  1. After you log into the console, please check the
     monitoring status of service components in
     "Cluster Management". If any service is not
     ready, please wait patiently until all components 
     are up and running.
  2. Please change the default password after login.

#####################################################
https://kubesphere.io             2023-04-18 23:32:29
#####################################################
23:32:32 CST success: [master1]
23:32:32 CST Pipeline[CreateClusterPipeline] execute successfully
Installation is complete.

Please check the result using the command:

        kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f

[root@master1 ~]# 
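The command above builds an all-in-one, single-node cluster. For a multi-node cluster, KubeKey can generate a cluster configuration file first and then build from it; a sketch (the hosts and roleGroups sections in the generated config-sample.yaml must be edited to match your own nodes):

#Generate a cluster definition, edit it, then create the cluster from the edited file
export KKZONE=cn
./kk create config --with-kubernetes v1.22.12 --with-kubesphere v3.3.2
vi config-sample.yaml
./kk create cluster -f config-sample.yaml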

Finally, verify the installation result:

kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f

...
Start installing monitoring
Start installing multicluster
Start installing openpitrix
Start installing network
**************************************************
Waiting for all tasks to be completed ...
task network status is successful  (1/4)
task openpitrix status is successful  (2/4)
task multicluster status is successful  (3/4)
task monitoring status is successful  (4/4)
**************************************************
Collecting installation results ...
#####################################################
###              Welcome to KubeSphere!           ###
#####################################################

Console: http://192.168.17.3:30880
Account: admin
Password: P@88w0rd
NOTES:
  1. After you log into the console, please check the
     monitoring status of service components in
     "Cluster Management". If any service is not
     ready, please wait patiently until all components 
     are up and running.
  2. Please change the default password after login.

#####################################################
https://kubesphere.io             2023-04-18 23:32:29
#####################################################

After logging in to the console, you can check the status of each component under System Components. If you want to use the related services, you may need to wait until some components are up and running. You can also use kubectl get pod --all-namespaces to check the status of the KubeSphere-related components.
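For example, a quick way to check whether the KubeSphere workloads have all reached the Running state (the exact set of namespaces depends on which components are enabled):

#All pods should eventually be Running or Completed; Pending or CrashLoopBackOff indicates a component that is still starting or failing
kubectl get pod --all-namespaces
kubectl get pod -n kubesphere-system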

Before enabling pluggable components, it is recommended to configure registry mirrors to speed up image pulls:

vi /etc/docker/daemon.json
{
  "log-opts": {
    "max-size": "5m",
    "max-file":"3"
  },
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": [
    "https://docker.mirrors.ustc.edu.cn",
    "https://hub-mirror.c.163.com",
    "https://registry.aliyuncs.com",
    "https://registry.docker-cn.com"
  ]
}

#Reload the daemon and restart Docker
systemctl daemon-reload
systemctl restart docker
systemctl status docker
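After Docker restarts, you can confirm that the mirrors took effect:

#The configured mirror addresses should appear under "Registry Mirrors"
docker info | grep -A 5 "Registry Mirrors"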