Installing the Kubernetes Resource Management Platform (Ratel)
This article is adapted from Du Kuan's blog post https://www.cnblogs.com/dukuan/p/11883541.html and an article by the 51CTO blogger mb601cf713ef4ca.
1. What is Ratel?
Ratel is a Kubernetes resource management platform. It can manage Kubernetes Deployments, DaemonSets, StatefulSets, Services, Ingresses, Pods, Nodes, Roles, ClusterRoles, RoleBindings, ClusterRoleBindings, Secrets, ConfigMaps, PVs, PVCs, and more. Its main purpose is to manage these resources graphically, which improves the efficiency of day-to-day cluster maintenance and lowers the chance of operator error.
2. Installing Ratel
For the latest documentation, see: https://github.com/dotbalo/ratel-doc
2.1 Installation Notes
Installing Ratel requires two kinds of files: servers.yaml and the kubeconfig file(s) of the clusters to be managed. servers.yaml is Ratel's own configuration file, in the following format:

- serverName: 'xiqu'
  serverAddress: 'https://1.1.1.1:8443'
  #serverAdminUser: 'xxx'
  #serverAdminPassword: 'xxx#'
  serverAdminToken: 'null'
  serverDashboardUrl: "https://k8s.xxx.com.cn/#"
  production: 'false'
  kubeConfigPath: "/mnt/xxx.config"
  harborConfig: "HarborUrl, HarborUsername, HarborPassword, HarborEmail"

A cluster can be managed in one of two ways: with a username/password or with a kubeconfig file. Configure only one of them; kubeconfig takes precedence.

Parameters:
serverName: cluster alias
serverAddress: Kubernetes APIServer address
serverAdminUser: Kubernetes administrator account (requires basic auth to be configured)
serverAdminPassword: Kubernetes administrator password
serverAdminToken: Kubernetes administrator token
serverDashboardUrl: address of the official Kubernetes Dashboard; append /#! for 1.x versions and /# for 2.x
kubeConfigPath: path of the Kubernetes kube.config file (absolute path)
harborConfig: when managing multiple clusters that use different Harbor registries, this parameter lets Ratel replace the Harbor settings automatically when copying resources

The file referenced by kubeConfigPath is mounted into the container's /mnt directory (or another directory) via a Secret.
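A typo in servers.yaml usually only surfaces when the Ratel Pod starts, so a quick grep-based sanity check before building the Secret can catch a missing key early. This is only a sketch: the /tmp/servers.yaml path and the list of required keys are illustrative, based on the example above.

```shell
# Write an example servers.yaml (mirrors the format documented above)
cat > /tmp/servers.yaml <<'EOF'
- serverName: 'test1'
  serverAddress: 'https://192.168.1.246:8443'
  serverAdminToken: 'null'
  serverDashboardUrl: "https://k8s.test1.com.cn/#!"
  production: 'false'
  kubeConfigPath: "/mnt/test1.config"
EOF

# Check that every key we expect actually appears in the file
missing=0
for key in serverName serverAddress serverDashboardUrl kubeConfigPath; do
  grep -q "^[[:space:]-]*${key}:" /tmp/servers.yaml \
    || { echo "missing key: $key"; missing=1; }
done
echo "missing=$missing"
```

If any key is absent the loop reports it and sets `missing=1`, which a deployment script could use as an exit condition.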
2.2 Create the Secret
# 1. Check the cluster address
[root@k8s-master01 ~]# kubectl cluster-info
Kubernetes control plane is running at https://192.168.1.246:8443
CoreDNS is running at https://192.168.1.246:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
# 2. Create servers.yaml
[root@k8s-master01 app]# mkdir Ratel && cd Ratel
[root@k8s-master01 Ratel]# cat servers.yaml
- serverName: 'test1'
  serverAddress: 'https://192.168.1.246:8443'
  serverAdminToken: 'null'
  serverDashboardUrl: "https://k8s.test1.com.cn/#!"
  production: 'false'
  kubeConfigPath: "/mnt/test1.config"
# 3. Copy the kubeconfig file
[root@k8s-master01 Ratel]# cp /root/.kube/config test1.config
# 4. Create the Secret
[root@k8s-master01 Ratel]# kubectl create secret generic ratel-config --from-file=test1.config --from-file=servers.yaml -n kube-system
secret/ratel-config created
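Kubernetes stores each file in the Secret base64-encoded under `.data`. The kubectl command to inspect a stored key is shown as a comment (it needs a live cluster); the round trip below demonstrates the encoding itself locally.

```shell
# On a cluster you could decode a stored key like this:
#   kubectl get secret ratel-config -n kube-system \
#     -o jsonpath='{.data.servers\.yaml}' | base64 -d
# Local demonstration of the same base64 round trip:
encoded=$(printf 'serverName: test1' | base64)
decoded=$(printf '%s' "$encoded" | base64 -d)
echo "$decoded"
```

This is a handy way to confirm that the Secret really contains the servers.yaml and kubeconfig you intended before the Pod mounts it.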
2.3 Create RBAC
2.3.1 Create the namespace used for user permission management
[root@k8s-master01 Ratel]# kubectl create ns kube-users
namespace/kube-users created
2.3.2 Create the ClusterRoles
[root@k8s-master01 Ratel]# vim ratel-rbac.yaml
apiVersion: v1
items:
- apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    annotations:
      rbac.authorization.kubernetes.io/autoupdate: "true"
    labels:
      kubernetes.io/bootstrapping: rbac-defaults
      rbac.authorization.k8s.io/aggregate-to-edit: "true"
    name: ratel-namespace-readonly
  rules:
  - apiGroups:
    - ""
    resources:
    - namespaces
    verbs:
    - get
    - list
    - watch
  - apiGroups:
    - metrics.k8s.io
    resources:
    - pods
    verbs:
    - get
    - list
    - watch
- apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    name: ratel-pod-delete
  rules:
  - apiGroups:
    - ""
    resources:
    - pods
    verbs:
    - get
    - list
    - delete
- apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    name: ratel-pod-exec
  rules:
  - apiGroups:
    - ""
    resources:
    - pods
    - pods/log
    verbs:
    - get
    - list
  - apiGroups:
    - ""
    resources:
    - pods/exec
    verbs:
    - create
- apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    annotations:
      rbac.authorization.kubernetes.io/autoupdate: "true"
    name: ratel-resource-edit
  rules:
  - apiGroups:
    - ""
    resources:
    - configmaps
    - persistentvolumeclaims
    - services
    - services/proxy
    verbs:
    - patch
    - update
  - apiGroups:
    - apps
    resources:
    - daemonsets
    - deployments
    - deployments/rollback
    - deployments/scale
    - statefulsets
    - statefulsets/scale
    verbs:
    - patch
    - update
  - apiGroups:
    - autoscaling
    resources:
    - horizontalpodautoscalers
    verbs:
    - patch
    - update
  - apiGroups:
    - batch
    resources:
    - cronjobs
    - jobs
    verbs:
    - patch
    - update
  - apiGroups:
    - extensions
    resources:
    - daemonsets
    - deployments
    - deployments/rollback
    - deployments/scale
    - ingresses
    verbs:
    - patch
    - update
- apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    name: ratel-resource-readonly
  rules:
  - apiGroups:
    - ""
    resources:
    - configmaps
    - endpoints
    - persistentvolumeclaims
    - pods
    - replicationcontrollers
    - replicationcontrollers/scale
    - serviceaccounts
    - services
    verbs:
    - get
    - list
    - watch
  - apiGroups:
    - ""
    resources:
    - bindings
    - events
    - limitranges
    - namespaces/status
    - pods/log
    - pods/status
    - replicationcontrollers/status
    - resourcequotas
    - resourcequotas/status
    verbs:
    - get
    - list
    - watch
  - apiGroups:
    - ""
    resources:
    - namespaces
    verbs:
    - get
    - list
    - watch
  - apiGroups:
    - apps
    resources:
    - controllerrevisions
    - daemonsets
    - deployments
    - deployments/scale
    - replicasets
    - replicasets/scale
    - statefulsets
    - statefulsets/scale
    verbs:
    - get
    - list
    - watch
  - apiGroups:
    - autoscaling
    resources:
    - horizontalpodautoscalers
    verbs:
    - get
    - list
    - watch
  - apiGroups:
    - batch
    resources:
    - cronjobs
    - jobs
    verbs:
    - get
    - list
    - watch
  - apiGroups:
    - extensions
    resources:
    - daemonsets
    - deployments
    - deployments/scale
    - ingresses
    - networkpolicies
    - replicasets
    - replicasets/scale
    - replicationcontrollers/scale
    verbs:
    - get
    - list
    - watch
  - apiGroups:
    - policy
    resources:
    - poddisruptionbudgets
    verbs:
    - get
    - list
    - watch
  - apiGroups:
    - networking.k8s.io
    resources:
    - networkpolicies
    verbs:
    - get
    - list
    - watch
  - apiGroups:
    - metrics.k8s.io
    resources:
    - pods
    verbs:
    - get
    - list
    - watch
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
# Create the ClusterRoles
[root@k8s-master01 Ratel]# kubectl create -f ratel-rbac.yaml
clusterrole.rbac.authorization.k8s.io/ratel-namespace-readonly created
clusterrole.rbac.authorization.k8s.io/ratel-pod-delete created
clusterrole.rbac.authorization.k8s.io/ratel-pod-exec created
clusterrole.rbac.authorization.k8s.io/ratel-resource-edit created
clusterrole.rbac.authorization.k8s.io/ratel-resource-readonly created
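Note that the manifest above only defines the ClusterRoles; nothing is bound to them yet. As a sketch of how one of them could be granted to a user (the user name dev-user and the target namespace are purely illustrative), a standard RoleBinding scoped to a single namespace would look like this:

```yaml
# Hypothetical example: grant dev-user read-only access in the default namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-user-readonly
  namespace: default
subjects:
- kind: User
  name: dev-user               # illustrative user name
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: ratel-resource-readonly
  apiGroup: rbac.authorization.k8s.io
```

Referencing a ClusterRole from a namespaced RoleBinding is the usual pattern for reusing one permission set across many namespaces without duplicating the rules.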
2.4 Deploy Ratel
The Ratel deployment manifest is as follows:
[root@k8s-master01 Ratel]# vim ratel.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: ratel
  name: ratel
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ratel
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: ratel
    spec:
      containers:
      - command:
        - sh
        - -c
        - ./ratel -c /mnt/servers.yaml
        env:
        - name: TZ
          value: Asia/Shanghai
        - name: LANG
          value: C.UTF-8
        - name: ProRunMode
          value: prod
        - name: ADMIN_USERNAME
          value: admin
        - name: ADMIN_PASSWORD
          value: password
        image: registry.cn-beijing.aliyuncs.com/dotbalo/ratel:latest
        imagePullPolicy: Always
        livenessProbe:
          failureThreshold: 2
          initialDelaySeconds: 10
          periodSeconds: 60
          successThreshold: 1
          tcpSocket:
            port: 8888
          timeoutSeconds: 2
        name: ratel
        ports:
        - containerPort: 8888
          name: web
          protocol: TCP
        readinessProbe:
          failureThreshold: 2
          initialDelaySeconds: 10
          periodSeconds: 60
          successThreshold: 1
          tcpSocket:
            port: 8888
          timeoutSeconds: 2
        resources:
          limits:
            cpu: 500m
            memory: 512Mi
          requests:
            cpu: 500m
            memory: 512Mi
        volumeMounts:
        - mountPath: /mnt
          name: ratel-config
      dnsPolicy: ClusterFirst
      imagePullSecrets:
      - name: myregistrykey
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - name: ratel-config
        secret:
          defaultMode: 420
          secretName: ratel-config
The following values need to be changed:
ProRunMode: in dev mode Ratel prints debug-level logs; in any other mode it prints info-level logs. In real use it should be set to something other than dev.
ADMIN_USERNAME: Ratel's own administrator account
ADMIN_PASSWORD: Ratel's own administrator password
In real deployments the credentials should meet complexity requirements, because Ratel can operate directly on every configured resource.
Nothing else needs to be configured; changing the port is not currently supported.
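Since ADMIN_USERNAME and ADMIN_PASSWORD gate full access to every configured cluster, generating the password rather than hand-picking one is an easy way to satisfy the complexity requirement. A minimal sketch using openssl, which is commonly available on master nodes:

```shell
# Generate a random 24-character password for ADMIN_PASSWORD
# (18 random bytes encode to exactly 24 base64 characters)
ADMIN_PASSWORD=$(openssl rand -base64 18)
echo "length: ${#ADMIN_PASSWORD}"
```

The generated value would then be pasted into the ADMIN_PASSWORD env entry in ratel.yaml, or better, injected from a Secret rather than stored in the manifest.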
# Create the Deployment
[root@k8s-master01 Ratel]# kubectl create -f ratel.yaml
deployment.apps/ratel created
2.5 Service and Ingress Configuration
Note: if no Ingress controller is installed, change type: ClusterIP to type: NodePort and access Ratel via a node IP plus the assigned port.
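For that no-Ingress case, a NodePort variant of the Service might look like the sketch below. The nodePort value 30888 is an arbitrary example; it must fall within the cluster's NodePort range (30000-32767 by default), or the field can be omitted to let Kubernetes pick one.

```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: ratel
  name: ratel
  namespace: kube-system
spec:
  ports:
  - name: container-1-web-1
    port: 8888
    protocol: TCP
    targetPort: 8888
    nodePort: 30888      # example value; must be inside the NodePort range
  selector:
    app: ratel
  type: NodePort
```

Ratel would then be reachable at http://&lt;node-ip&gt;:30888 from outside the cluster.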
# Create the ratel Service and Ingress
[root@k8s-master01 Ratel]# vim ratel-svc.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: ratel
  name: ratel
  namespace: kube-system
spec:
  ports:
  - name: container-1-web-1
    port: 8888
    protocol: TCP
    targetPort: 8888
  selector:
    app: ratel
  type: ClusterIP
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ratel
  namespace: kube-system
spec:
  rules:
  - host: krm.test.com
    http:
      paths:
      - backend:
          serviceName: ratel
          servicePort: 8888
        path: /
[root@k8s-master01 Ratel]# kubectl create -f ratel-svc.yaml
service/ratel created
ingress.extensions/ratel created
# Check which node the Pod was scheduled on
[root@k8s-master01 ~]# kubectl get pod -n kube-system -owide