KubeSphere Install
Container cloud platform KubeSphere: installing KubeSphere
Overview
This guide installs KubeSphere v2.1, mainly following the official installation guide combined with some of my own hands-on experience; you can also use the official installation method directly. The approach here installs KubeSphere onto an existing Kubernetes cluster.
Official reference: https://kubesphere.com.cn/docs/v2.0/zh-CN/installation/install-on-k8s/
Prerequisites
1. The Kubernetes version must satisfy 1.13.0 ≤ K8s version < 1.16. KubeSphere relies on features introduced in Kubernetes 1.13.0; run kubectl version to confirm:
$ kubectl version | grep Server
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.5", GitCommit:"2166946f41b36dea2c4626f90a77706f426cdea2", GitTreeState:"clean", BuildDate:"2019-03-25T15:19:22Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
Note: look at the Server Version line of the output. If GitVersion is v1.13.0 or later, this Kubernetes version can host KubeSphere. If it is lower than v1.13.0, see "Upgrading kubeadm clusters from v1.12 to v1.13" and upgrade Kubernetes first.
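For kubeadm-managed clusters, the upgrade linked above boils down to the following rough sketch (v1.13.x stands for whichever patch release you pick; the kubeadm binary itself must be upgraded first, and each node drained and upgraded in turn per the official guide):
# on the master, after upgrading the kubeadm package: preview and apply the control-plane upgrade
$ kubeadm upgrade plan
$ kubeadm upgrade apply v1.13.x
# then upgrade the kubelet/kubectl packages on every node and restart kubelet
$ systemctl restart kubelet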
2. Confirm that Helm is installed and that its version is at least 2.10.0. Run helm version in a terminal; you should get output similar to:
[root@k8s ~]# helm version
Client: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
3. The cluster must have at least 10 GB of available memory. For an all-in-one installation, run free -g to check available resources:
[root@k8s ~]# free -g
              total        used        free      shared  buff/cache   available
Mem:            376          12         327           4          36         358
Swap:             0           0           0
4. (Optional) KubeSphere strongly recommends pairing it with persistent storage. Run kubectl get sc to check whether a default StorageClass is set:
[root@k8s ~]# kubectl get sc
NAME                    PROVISIONER      AGE
nfs-storage (default)   fuseim.pri/ifs   19h
Tip: if no persistent storage is configured, the installation defaults to hostpath. It will complete successfully, but Pod rescheduling can cause problems later; for production environments, configure a StorageClass and use persistent storage.
If your cluster does not meet all of the points above, install the missing components from scratch as described below. If it already meets them, skip straight to "3. Deploy KubeSphere".
Environment
| IP | Hostname | CPU | Memory | Disk |
|---|---|---|---|---|
| 192.168.181.100 | Master-001 | 2 | 2G | 40G |
| 192.168.181.101 | Node-001 | 2 | 2G | 40G |
| 192.168.181.102 | Node-002 | 2 | 2G | 40G |
| 192.168.181.103 | Node-003 | 2 | 2G | 40G |
1. Install Helm
Download
Helm ships as a single binary, so you can download it straight from the GitHub releases page:
https://github.com/helm/helm/releases
Install
(1) Download the package:
$ wget https://storage.googleapis.com/kubernetes-helm/helm-v2.14.3-linux-amd64.tar.gz
# or
$ wget https://get.helm.sh/helm-v2.14.3-linux-amd64.tar.gz
(2) Unpack it:
$ tar -zxvf helm-v2.14.3-linux-amd64.tar.gz
(3) Move the extracted binary into /usr/local/bin:
$ mv linux-amd64/helm /usr/local/bin
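Before moving on to Tiller, a quick sanity check that the client binary is on the PATH (the --client flag restricts the output to the client version, since the server side is not installed yet):
$ helm version --client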
Install Tiller
Pull the Tiller image:
$ docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.14.3
Tag the pulled image:
$ docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.14.3 gcr.io/kubernetes-helm/tiller:v2.14.3
Initialize Helm with that image:
$ helm init --upgrade --tiller-image registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.14.3 --stable-repo-url https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
Grant Tiller RBAC permissions
[root@k8s ~]# kubectl create serviceaccount --namespace kube-system tiller
serviceaccount/tiller created
[root@k8s ~]# kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
clusterrolebinding.rbac.authorization.k8s.io/tiller-cluster-rule created
[root@k8s ~]# kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
deployment.extensions/tiller-deploy patched
[root@k8s ~]# helm version
Client: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
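If helm version hangs or reports that it cannot connect to Tiller, check that the tiller-deploy Pod is actually Running (a quick sanity check; name=tiller is the label the stock Helm v2 deployment carries):
$ kubectl -n kube-system get pods -l name=tiller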
2. Install NFS
NFS is a simple and practical storage backend.
Install it on a node or on a separate machine, but not on the master; in my experience, an NFS server installed on the master node kept reporting connection failures.
Install
$ yum install nfs-utils rpcbind -y
Configure the exported directory
$ vim /etc/exports
# paste in:
/home/data *(insecure,rw,async,no_root_squash)
# after saving, create the directory
$ mkdir /home/data
# set permissions
$ chmod 777 /home/data
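As a side note: if you ever edit /etc/exports while the NFS server is already running, the export table can be reloaded without a restart (exportfs ships with nfs-utils):
$ exportfs -ra   # re-export everything from /etc/exports
$ exportfs -v    # list the current exports for verification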
Start the services
# start the rpc service first
$ systemctl start rpcbind
# enable it at boot
$ systemctl enable rpcbind
# start the nfs service
$ systemctl start nfs-server
# enable it at boot
$ systemctl enable nfs-server
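Before wiring this into Kubernetes, it is worth confirming the export is actually visible over the network (run the second command from any k8s node to rule out firewall issues; <nfs-server-ip> stands for your NFS server's address):
$ showmount -e localhost          # on the NFS server itself
$ showmount -e <nfs-server-ip>    # from a k8s node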
Install the StorageClass
storageclass.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services", "endpoints"]
    verbs: ["get", "create", "list", "watch", "update"]
  - apiGroups: ["extensions"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["nfs-provisioner"]
    verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  selector:
    matchLabels:
      app: nfs-client-provisioner
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: nfs-client
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: <nfs-server-ip>   # change this to your NFS server's IP
            - name: NFS_PATH
              value: /home/data
      volumes:
        - name: nfs-client
          nfs:
            server: <nfs-server-ip>   # change this to your NFS server's IP
            path: /home/data
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
provisioner: fuseim.pri/ifs
reclaimPolicy: Retain
Apply it:
$ kubectl apply -f storageclass.yaml
Set it as the default StorageClass
$ kubectl patch storageclass nfs-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
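To confirm dynamic provisioning actually works before installing KubeSphere, a quick smoke test is to create a small PVC and watch it bind (test-pvc is just an illustrative name; since nfs-storage is now the default StorageClass, no storageClassName is needed):
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Mi
EOF
$ kubectl get pvc test-pvc     # STATUS should become Bound
$ kubectl delete pvc test-pvc  # clean up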
3. Deploy KubeSphere
1. In the Kubernetes cluster, create the namespaces kubesphere-system and kubesphere-monitoring-system:
$ cat <<EOF | kubectl create -f -
---
apiVersion: v1
kind: Namespace
metadata:
  name: kubesphere-system
---
apiVersion: v1
kind: Namespace
metadata:
  name: kubesphere-monitoring-system
EOF
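A quick check that both namespaces were created:
$ kubectl get ns | grep kubesphere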
2. Create a Secret holding the Kubernetes cluster CA certificate.
Note: create it from your cluster's actual ca.crt and ca.key paths (for clusters created with Kubeadm, the certificates usually live under /etc/kubernetes/pki):
$ kubectl -n kubesphere-system create secret generic kubesphere-ca \
  --from-file=ca.crt=/etc/kubernetes/pki/ca.crt \
  --from-file=ca.key=/etc/kubernetes/pki/ca.key
3. Create the etcd certificate Secret.
Note: create it from wherever your cluster's etcd certificates actually live.
- If etcd is configured with certificates, create it along these lines:
$ kubectl -n kubesphere-monitoring-system create secret generic kube-etcd-client-certs \
  --from-file=etcd-client-ca.crt=/etc/kubernetes/pki/etcd/ca.crt \
  --from-file=etcd-client.crt=/etc/kubernetes/pki/etcd/healthcheck-client.crt \
  --from-file=etcd-client.key=/etc/kubernetes/pki/etcd/healthcheck-client.key
- If etcd has no certificates configured, create an empty Secret (the following command suits Kubeadm-created clusters):
$ kubectl -n kubesphere-monitoring-system create secret generic kube-etcd-client-certs
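Either way, confirm both Secrets exist before running the installer:
$ kubectl -n kubesphere-system get secret kubesphere-ca
$ kubectl -n kubesphere-monitoring-system get secret kube-etcd-client-certs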
4. Clone the kubesphere-installer repository:
$ git clone https://github.com/kubesphere/ks-installer.git
5. Enter ks-installer and edit the configuration according to the parameter reference below:
$ cd deploy
# edit the etcd settings
---
apiVersion: v1
data:
  ks-config.yaml: |
    ---
    persistence:
      storageClass: ""               # empty ("") means the default StorageClass is used
    etcd:
      monitoring: True
      endpointIps: 192.168.181.101   # etcd address; for an etcd cluster, separate addresses with commas
      port: 2379
      tlsEnable: False               # whether etcd TLS certificate authentication is enabled (True / False)
6. Save the file, then deploy KubeSphere to the Kubernetes cluster:
$ kubectl apply -f kubesphere-installer.yaml
$ kubectl apply -f cluster-configuration.yaml
7. Check the deployment:
[root@k8s deploy]# kubectl get pod -n kubesphere-system
NAME                                     READY   STATUS    RESTARTS   AGE
ks-apiserver-5f75464f45-49j8r            1/1     Running   0          22h
ks-console-58d788ff46-q4fg2              1/1     Running   0          22h
ks-controller-manager-797cf4d6bc-9rg7w   1/1     Running   0          22h
ks-installer-78659898-9sn8t              1/1     Running   0          22h
openldap-0                               1/1     Running   0          22h
redis-5d4844b947-gnq9t                   1/1     Running   0          22h
8. Once the Pods have started successfully, follow the deployment logs:
$ kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l job-name=kubesphere-installer -o jsonpath='{.items[0].metadata.name}') -f
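The installer prints a welcome banner with the console address once it finishes; grepping for it is a handy way to tell whether the run is complete (assuming the banner text has not changed between installer versions):
$ kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l job-name=kubesphere-installer -o jsonpath='{.items[0].metadata.name}') | grep -A 5 'Welcome'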
9. Look up the console service port and open IP:30880 to reach the KubeSphere UI. The default cluster administrator account is admin / P@88w0rd.
# check the ks-console service port; the default NodePort is 30880
[root@k8s deploy]# kubectl get svc -n kubesphere-system
NAME                    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
ks-apiserver            ClusterIP   10.68.171.36    <none>        80/TCP         22h
ks-console              NodePort    10.68.177.44    <none>        80:30880/TCP   22h
ks-controller-manager   ClusterIP   10.68.224.195   <none>        443/TCP        22h
openldap                ClusterIP   None            <none>        389/TCP        22h
redis                   ClusterIP   10.68.243.177   <none>        6379/TCP       22h
10. Finally, open http://192.168.181.103:30880/ in a browser.
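If the NodePort is not reachable from your workstation (for example because of firewalls or cloud security groups), one fallback is to port-forward the console service locally and browse to http://localhost:30880 instead:
$ kubectl -n kubesphere-system port-forward svc/ks-console 30880:80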
Parameter reference
| Parameter | Description | Default |
|---|---|---|
| kube_apiserver_host | Address of the cluster's kube-apiserver (ip:port) | |
| etcd_tls_enable | Whether etcd TLS certificate authentication is enabled (True / False) | True |
| etcd_endpoint_ips | etcd address; for an etcd cluster, separate addresses with commas (e.g. 192.168.0.7,192.168.0.8,192.168.0.9) | |
| etcd_port | etcd port (defaults to 2379; set this if you use a different port) | 2379 |
| disableMultiLogin | Whether to disable concurrent logins (True / False) | True |
| elk_prefix | Log index prefix | logstash |
| keep_log_days | Log retention period (days) | 7 |
| metrics_server_enable | Whether to install metrics-server (True / False) | True |
| istio_enable | Whether to install Istio (True / False) | True |
| persistence.enable | Whether to enable persistent storage (True / False) (recommended outside test environments) | |
| persistence.storageClass | Persistent storage requires an existing StorageClass in the cluster (empty means the default StorageClass is used) | "" |
| containersLogMountedPath (optional) | Container log mount path | "/var/lib/docker/containers" |
| external_es_url (optional) | External Elasticsearch address, for hooking up an external ES | |
| external_es_port (optional) | External Elasticsearch port, for hooking up an external ES | |
| local_registry (for offline deployment) | Local registry for offline deployment (to use this, import the installation images into the local registry with scripts/download-docker-images.sh) | |