Setting up a local k8s environment with minikube
First, Docker needs to be installed; this part you have to set up yourself. The steps are as follows:
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# install docker
yum list docker-ce --showduplicates | sort -r    # list the available docker versions
#yum install docker-ce    # only the stable repo is enabled by default, so this installs the latest stable version
#yum install <FQPN>       # e.g. yum install docker-ce-18.06.0.ce -y
# the following version has been verified; installing it is recommended
yum install docker-ce-18.06.0.ce -y
systemctl start docker
systemctl enable docker
docker version    # since the k8s version installed here is 1.13.4, docker 18.06 is recommended
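If you want a quick sanity check that Docker is actually up before continuing (these extra commands are just a suggestion, not part of the original steps):

systemctl status docker --no-pager       # the service should be active (running)
docker info | grep -i 'server version'   # should report 18.06.0-ce
docker run --rm hello-world              # optional; needs access to Docker Hub, confirms images can be pulled and run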
Then install minikube using the build modified by Alibaba Cloud; otherwise initialization will hang because the required downloads are blocked by the GFW:
curl -Lo minikube http://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/releases/v0.35.0/minikube-linux-amd64
chmod +x minikube
mv minikube /usr/bin/minikube
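To confirm the binary was installed correctly, for example:

minikube version    # should print v0.35.0
which minikube      # should resolve to /usr/bin/minikube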
Note that swap must be turned off; the command is: swapoff -a
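swapoff -a only lasts until the next reboot. If you also want swap to stay off permanently, one common approach (an assumption on my side, adjust it to your own /etc/fstab layout) is to comment out the swap entry:

swapoff -a
sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab    # comments out the swap line; keeps a .bak backup of fstab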
Add the Alibaba Cloud Kubernetes yum repo and install the related command-line components:
cd /etc/yum.repos.d/
cat>>kubernetes.repo<<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
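You can verify the repo was written correctly before installing anything, for example:

yum repolist enabled | grep -i kubernetes    # the aliyun kubernetes repo should show up here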
yum install kubectl kubelet kubeadm -y
systemctl start kubelet && systemctl enable kubelet
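Since the cluster here targets Kubernetes 1.13, it may be safer to pin the package versions instead of taking whatever is newest in the mirror; the exact version strings below are an assumption, check what the mirror actually offers first:

yum list --showduplicates kubelet | sort -r                     # see which versions the aliyun mirror provides
yum install -y kubelet-1.13.4 kubeadm-1.13.4 kubectl-1.13.4     # assumed version strings, adjust to the list above
kubectl version --client                                        # verify the installed client version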
Use the default VirtualBox driver to create the local Kubernetes environment:
minikube start --registry-mirror=https://registry.docker-cn.com
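If you are running directly on a bare CentOS host without VirtualBox (as the script at the end of this post does), the same start command can use the none driver instead:

minikube start --vm-driver=none --registry-mirror=https://registry.docker-cn.com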
When output like the following appears:
- Verifying component health .....
+ kubectl is now configured to use "minikube"
= Done! Thank you for using minikube!
then the local minikube installation is complete. Of course it cannot be reached from outside the host; installing an ingress separately or setting up port forwarding takes care of that.
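A quick way to confirm the cluster is actually healthy, for example:

minikube status                     # host, kubelet and apiserver should all be Running
kubectl get nodes                   # the minikube node should be Ready
kubectl get pods -n kube-system     # the core components should all be Running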
############################################################################
---------------------------------- divider ----------------------------------
############################################################################
How to install the ingress:
Create the ingress:
Create deployment.yaml:
apiVersion: v1
kind: Namespace
metadata:
  name: kube-system
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: kube-system
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: kube-system
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: kube-system
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: kube-system
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses/status
    verbs:
      - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: kube-system
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: kube-system
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: kube-system
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.23.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 33
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
---
Then create svc.yaml:
Service:
apiVersion: v1
kind: Service
metadata:
  annotations:
    #service.beta.kubernetes.io/alicloud-loadbalancer-id: "lb-wz9du18pa4e7f93vetzww"
  labels:
    app: nginx-ingress
  name: nginx-ingress
  namespace: kube-system
spec:
  ports:
  - name: http
    nodePort: 30468
    port: 80
    protocol: TCP
    targetPort: 80
  - name: https
    nodePort: 30471
    port: 443
    protocol: TCP
    targetPort: 443
  selector:
    #app: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  #type: LoadBalancer
  type: NodePort
status:
  loadBalancer:
    ingress:
    - ip: 39.108.26.119    # change this to your own host IP
The command to create the pods from the yaml files above is:
kubectl apply -f xxxx.yaml
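After applying both files you can check whether the ingress controller came up and is reachable on its NodePorts, for example:

kubectl get pods -n kube-system -l app.kubernetes.io/name=ingress-nginx    # the controller pod should be Running
kubectl get svc nginx-ingress -n kube-system                               # shows the 30468/30471 NodePorts
curl -I http://$(minikube ip):30468    # expect a 404 from the default backend until an Ingress rule exists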
The application images can be pulled from GitLab. No ConfigMap is set up here, so you need to configure one yourself, and you also have to write your own yaml for the application workloads (a hypothetical sketch follows below).
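As a rough sketch of what "configure it yourself" could look like, here is a hypothetical example; the names my-app, my-app-config, the host my.example.com and the port are all made up, replace them with your own service:

# hypothetical ConfigMap for your application; the key/value is a placeholder
kubectl create configmap my-app-config -n default --from-literal=APP_ENV=prod

# hypothetical Ingress rule that routes my.example.com through the controller installed above
# to a Service named my-app on port 80 (extensions/v1beta1 matches Kubernetes 1.13)
kubectl apply -f - <<EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app-ingress
  namespace: default
spec:
  rules:
  - host: my.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-app
          servicePort: 80
EOF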
Below is a simple installation script.
#!/bin/bash
# Install Docker, used to pull the locally required images; docker-ce 18.06 is used, which supports Kubernetes 1.13
# Check whether the NIC is using a static IP
grep -rE "dhcp" /etc/sysconfig/network-scripts/ifcfg-*
if [ $? -eq 0 ]; then
    echo "The NIC is in DHCP mode, please change it to a static IP"
    exit
else
    echo "NIC is OK."
fi
yum clean all && yum repolist
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install docker-ce-18.06.0.ce -y
systemctl start docker
systemctl enable docker
VERSION=`docker version`
if [ $? -eq 0 ]; then
    echo "Docker version info: $VERSION"
else
    echo "Docker installation failed, please check the error log"
    exit
fi
echo "1" >/proc/sys/net/bridge/bridge-nf-call-iptables    # ensures iptables forwards traffic correctly when pulling images, otherwise DNS resolution errors occur
######## Download the minikube binary and add it as a system command ########
cd /data
curl -Lo minikube http://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/releases/v0.35.0/minikube-linux-amd64
chmod +x minikube
mv minikube /usr/bin/minikube
swapoff -a    # force swap off, otherwise initialization reports an error
cd /etc/yum.repos.d/
cat>>kubernetes.repo<<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum install kubectl kubelet kubeadm -y
systemctl start kubelet && systemctl enable kubelet
######## Start minikube ########
minikube start --vm-driver=none
if [ $? -eq 0 ]; then
    echo "minikube initialized successfully"
else
    echo "minikube initialization failed. Check the error output and re-run 'minikube start --vm-driver=none'; if it still fails, run the cleanup command 'minikube delete' and initialize again!"
    minikube delete
    exit
fi
# By default minikube uses the VirtualBox driver to create the local Kubernetes environment
#minikube start --registry-mirror=https://registry.docker-cn.com
STATUS=`kubectl get node | awk '{print$2}' | sed -n '2p'`
if [ "$STATUS" = "Ready" ]; then
    echo "Cluster status: $STATUS"
else
    echo "Cluster status is not Ready, please contact ops."
fi
#echo "Cluster status: $STATUS"
#echo "Cluster status is not Ready, please contact ops."
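If you save the script to a file, it is assumed here to be run as root on a fresh CentOS 7 host; the file name install-minikube.sh below is made up:

chmod +x install-minikube.sh
./install-minikube.sh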
you are the best!
