k8s cluster deployment steps (complete)

This document pairs well with the accompanying video: video link

Contents

  • [1: Installing the k8s cluster]
  • [2: What is k8s and what can it do?]
  • [3: Commonly used k8s resources]
  • [4: k8s add-on components]
  • [5: k8s autoscaling]
  • [6: Persistent storage]
  • [7: CI/CD with Jenkins]
    Link: https://pan.baidu.com/s/1gYt-Au_a1t-FBZ9GUUTeLg
    Extraction code: udzj
1: Installing the k8s cluster
1.1 Architecture of the k8s cluster
master node: etcd, api-server, scheduler, controller-manager
node node: kubelet, kube-proxy

etcd: the cluster database
api-server: the core service
controller-manager: manages the controllers, e.g. rc
scheduler: picks a suitable node for every new pod

kubelet: calls docker to create the containers
kube-proxy: entry point for user traffic from outside, load balancer inside the cluster

1.6: Configure the flannel network on all nodes
Provides communication between containers across nodes
a: install etcd
b: install, configure and start flannel
c: restart docker so the change takes effect

1.7: Configure the master as a private docker image registry
a: faster image pulls
b: keeps images private

2: What is k8s and what can it do?
2.1 Core features of k8s
self-healing
autoscaling
service discovery and load balancing
rolling upgrades and one-click rollback
secret and configuration management

2.2 History of k8s
July 2015: version 1.0

2.3 Ways to install k8s
yum
build from source
binaries   used in production
kubeadm    used in production
minikube

2.4 Use cases for k8s
microservices:
higher concurrency, higher availability, faster code releases
drawback: management complexity goes up
docker -- k8s -- autoscaling


3: Commonly used k8s resources
3.1 Creating a pod resource
The smallest unit of resource in k8s
A pod contains at least two containers: the infrastructure (pause) container plus the business container

3.2 The ReplicationController resource
Keeps the specified number of pods running
Pods and rc are associated through labels
rc supports rolling upgrades and one-click rollback

1: Installing the k8s cluster

  • docker ----> management platform ----> k8s; k8s sits at the PaaS layer

Code updates are where failures happen: the production environment drifts away from the test environment, and running the same docker image in both solves the problem.

1.1 k8s architecture

(k8s architecture diagram)

Besides the core components, there are a few recommended add-ons:

Component              Description
kube-dns               provides DNS for the whole cluster
Ingress Controller     provides an external entry point for services
Heapster               provides resource monitoring
Dashboard              provides a GUI
Federation             provides clusters spanning availability zones
Fluentd-elasticsearch  provides cluster log collection, storage and search

1.2: Set the IP addresses, hostnames and hosts entries

10.0.0.11 master
10.0.0.12 node-1
10.0.0.13 node-2

All nodes need these hosts entries.

1.3: Install etcd on the master node

### Configure the yum repo
[root@master ~]# rm -rf /etc/yum.repos.d/local.repo 
[root@master ~]# echo "192.168.37.200 mirrors.aliyun.com" >>/etc/hosts
[root@master ~]# curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
[root@master ~]# yum install etcd -y
[root@master ~]# vim /etc/hosts
[root@master ~]# vim /etc/hosts
10.0.0.11 master
10.0.0.12 node-1
10.0.0.13 node-2
[root@master ~]# systemctl restart network
[root@master ~]# vim /etc/etcd/etcd.conf
.......
line 6:  ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
line 21: ETCD_ADVERTISE_CLIENT_URLS="http://10.0.0.11:2379"
.......
[root@master ~]# systemctl start etcd.service
[root@master ~]# systemctl enable etcd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.
[root@master ~]# netstat -lntup
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 127.0.0.1:2380          0.0.0.0:*               LISTEN      7390/etcd           
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      6726/sshd           
tcp6       0      0 :::2379                 :::*                    LISTEN      7390/etcd           
tcp6       0      0 :::22                   :::*                    LISTEN      6726/sshd           
udp        0      0 127.0.0.1:323           0.0.0.0:*                           5065/chronyd        
udp6       0      0 ::1:323                 :::*                                5065/chronyd        
### Test that etcd works
[root@master ~]# etcdctl set testdir/testkey0 0
0
[root@master ~]# etcdctl get testdir/testkey0
0
### Check the cluster health
[root@master ~]# etcdctl -C http://10.0.0.11:2379 cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://10.0.0.11:2379
cluster is healthy

etcd natively supports clustering.

Exercise 1: deploy an etcd cluster with three nodes (a config sketch follows)
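
A minimal sketch of the static-cluster settings in /etc/etcd/etcd.conf for the master (10.0.0.11); node-1 and node-2 use the same layout with their own member name and IP. The member names and cluster token are assumptions:

# /etc/etcd/etcd.conf on master; adjust ETCD_NAME and the IPs on node-1/node-2
ETCD_NAME="etcd-master"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://10.0.0.11:2380"
ETCD_LISTEN_CLIENT_URLS="http://10.0.0.11:2379,http://127.0.0.1:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.0.0.11:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://10.0.0.11:2379"
ETCD_INITIAL_CLUSTER="etcd-master=http://10.0.0.11:2380,etcd-node1=http://10.0.0.12:2380,etcd-node2=http://10.0.0.13:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-1"
ETCD_INITIAL_CLUSTER_STATE="new"

After starting etcd on all three nodes, etcdctl cluster-health should report three healthy members.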

1.4: Install kubernetes on the master node

[root@master ~]# yum install kubernetes-master.x86_64 -y
[root@master ~]# vim /etc/kubernetes/apiserver 
......
line 8:  KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
line 11: KUBE_API_PORT="--port=8080"
line 17: KUBE_ETCD_SERVERS="--etcd-servers=http://10.0.0.11:2379"
line 23: (the following is a single line)
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
......
[root@master ~]# vim /etc/kubernetes/config
......
line 22: KUBE_MASTER="--master=http://10.0.0.11:8080"
......
[root@master ~]# systemctl enable kube-apiserver.service
[root@master ~]# systemctl restart kube-apiserver.service
[root@master ~]# systemctl enable kube-controller-manager.service
[root@master ~]# systemctl restart kube-controller-manager.service
[root@master ~]# systemctl enable kube-scheduler.service
[root@master ~]# systemctl restart kube-scheduler.service

Check that the services came up properly

[root@k8s-master ~]# kubectl get componentstatus 
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-0               Healthy   {"health":"true"} 

1.5: Install kubernetes on the worker nodes

[root@node-1 ~]# rm -rf /etc/yum.repos.d/local.repo 
[root@node-1 ~]# echo "192.168.37.200 mirrors.aliyun.com" >>/etc/hosts
[root@node-1 ~]# curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
[root@node-1 ~]# yum install kubernetes-node.x86_64 -y
[root@node-1 ~]# vim /etc/kubernetes/config 
......
line 22: KUBE_MASTER="--master=http://10.0.0.11:8080"
......
[root@node-1 ~]# vim /etc/kubernetes/kubelet
......
line 5:  KUBELET_ADDRESS="--address=0.0.0.0"
line 8:  KUBELET_PORT="--port=10250"
line 11: KUBELET_HOSTNAME="--hostname-override=10.0.0.12"
line 14: KUBELET_API_SERVER="--api-servers=http://10.0.0.11:8080"


......
[root@node-1 ~]# systemctl enable kubelet.service
[root@node-1 ~]# systemctl start kubelet.service
[root@node-1 ~]# systemctl enable kube-proxy.service
[root@node-1 ~]# systemctl start kube-proxy.service

Check from the master node

[root@k8s-master ~]# kubectl get nodes
NAME        STATUS    AGE
10.0.0.12   Ready     6m
10.0.0.13   Ready     3s

1.6: Configure the flannel network on all nodes

### Install on all nodes
[root@master ~]# yum install flannel -y
[root@master ~]# sed -i 's#http://127.0.0.1:2379#http://10.0.0.11:2379#g' /etc/sysconfig/flanneld
[root@node-1 ~]# yum install flannel -y
[root@node-1 ~]# sed -i 's#http://127.0.0.1:2379#http://10.0.0.11:2379#g' /etc/sysconfig/flanneld
[root@node-2 ~]# yum install flannel -y
[root@node-2 ~]# sed -i 's#http://127.0.0.1:2379#http://10.0.0.11:2379#g' /etc/sysconfig/flanneld


## On the master node:
[root@master ~]# etcdctl mk /atomic.io/network/config   '{ "Network": "172.16.0.0/16" }'
[root@master ~]# yum install docker -y
[root@master ~]# systemctl enable flanneld.service 
[root@master ~]# systemctl restart flanneld.service 
[root@master ~]# systemctl restart docker
[root@master ~]# systemctl enable docker
[root@master ~]# systemctl restart kube-apiserver.service
[root@master ~]# systemctl restart kube-controller-manager.service
[root@master ~]# systemctl restart kube-scheduler.service
### Upload the image archive on all nodes
[root@master ~]# rz docker_busybox.tar.gz
[root@master ~]# docker load -i docker_busybox.tar.gz
adab5d09ba79: Loading layer [==================================================>] 1.416 MB/1.416 MB
Loaded image: docker.io/busybox:latest

### Run a docker container on every machine
[root@master ~]# docker run -it docker.io/busybox:latest 
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:10:43:02  
          inet addr:172.16.67.2  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::42:acff:fe10:4302/64 Scope:Link
/ # ping 172.16.67.2
64 bytes from 172.16.67.2: seq=0 ttl=64 time=0.127 ms
64 bytes from 172.16.67.2: seq=1 ttl=64 time=0.062 ms

## On the worker nodes: node-1 and node-2
[root@node-1 ~]# systemctl enable flanneld.service 
[root@node-1 ~]# systemctl restart flanneld.service 
[root@node-1 ~]# service docker restart
[root@node-1 ~]# systemctl restart kubelet.service
[root@node-1 ~]# systemctl restart kube-proxy.service
### On every docker node (node-1 and node-2):
[root@node-1 ~]# vim /usr/lib/systemd/system/docker.service
# add one line in the [Service] section
......
[Service]
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT
......

systemctl daemon-reload 
systemctl restart docker
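
A quick way to confirm flannel handed a subnet to each node (assumes the /atomic.io/network etcd prefix configured above and the default udp backend):

cat /run/flannel/subnet.env                        # the subnet leased to this node
etcdctl -C http://10.0.0.11:2379 ls /atomic.io/network/subnets
ip addr show flannel0                              # the flannel interface created by the udp backend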

1.7: Configure the master as an image registry

# on all nodes
[root@master ~]# vi /etc/docker/daemon.json
{
"registry-mirrors": ["https://registry.docker-cn.com"],
"insecure-registries": ["10.0.0.11:5000"]
}
[root@master ~]# systemctl restart docker

### Upload the package to the master node
[root@master ~]# rz
[root@node-1 ~]# ls
anaconda-ks.cfg  docker_busybox.tar.gz  registry.tar.gz
[root@node-1 ~]# docker load -i registry.tar.gz 
ef763da74d91: Loading layer [==================================================>] 5.058 MB/5.058 MB
7683d4fcdf4e: Loading layer [==================================================>] 7.894 MB/7.894 MB
656c7684d0bd: Loading layer [==================================================>] 22.79 MB/22.79 MB
a2717186d7dd: Loading layer [==================================================>] 3.584 kB/3.584 kB
3c133a51bc00: Loading layer [==================================================>] 2.048 kB/2.048 kB
Loaded image: registry:latest
# on the master node
[root@node-1 ~]# docker run -d -p 5000:5000 --restart=always --name registry -v /opt/myregistry:/var/lib/registry  registry
68ea32e2ecec3a0fb8a9223e1cc5e22b10b1c64080be020852c557dcc317526b
### Test
[root@node-2 ~]# docker tag docker.io/busybox:latest 10.0.0.11:5000/busybox:latest
[root@node-2 ~]# docker push 10.0.0.11:5000/busybox:latest 
The push refers to a repository [10.0.0.11:5000/busybox]
adab5d09ba79: Pushed 
latest: digest: sha256:4415a904b1aca178c2450fd54928ab362825e863c0ad5452fd020e92f7a6a47e size: 527

### Seeing the following on the master node means the private registry works
[root@master ~]# ls /opt/myregistry/docker/registry/v2/repositories/
busybox

systemctl daemon-reload 
systemctl restart docker
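
Two quick checks from any node that the private registry is usable (the second one uses the registry v2 API):

docker pull 10.0.0.11:5000/busybox:latest
curl http://10.0.0.11:5000/v2/_catalog             # should list "busybox"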

2: What is k8s and what can it do?

k8s is a management tool for docker clusters.

2.1 Core features of k8s

Self-healing:

Restarts failed containers; replaces and reschedules containers when their node becomes unavailable; kills containers that fail a user-defined health check; and does not advertise a container to clients until it is ready to serve.

Autoscaling:

Monitors the containers' CPU load: if the average rises above 80%, more containers are added; if it drops below 10%, containers are removed.

Service discovery and load balancing:

There is no need to modify your application to use an unfamiliar service discovery mechanism. Kubernetes gives each container its own IP address, gives a group of containers a single DNS name, and load-balances traffic across them.

Rolling upgrades and one-click rollback:

Kubernetes rolls out changes to an application or its configuration gradually while monitoring its health, making sure it never kills all instances at once. If something goes wrong, Kubernetes rolls the change back for you, drawing on a growing ecosystem of deployment solutions.

Secret management

2.2 History of k8s

2014: the docker container orchestration project is started

July 2015: kubernetes 1.0 is released and the project joins the CNCF

2016: kubernetes sees off its two rivals, docker swarm and mesos; version 1.2

2017

2018: k8s graduates from the CNCF

2019: 1.13, 1.14, 1.15

CNCF: Cloud Native Computing Foundation

kubernetes (k8s): Greek for helmsman or pilot; it leads the container orchestration field

Google drew on 16 years of container experience with its internal borg platform; kubernetes is essentially borg rewritten in golang.

2.3 Installing k8s

yum: installs version 1.5; the easiest to get working and the best choice for learning

build from source: the hardest; can install the latest version

binary install: many manual steps; can install the latest version; usually automated with shell, ansible or saltstack

kubeadm: the easiest installer; needs internet access; can install the latest version

minikube: for developers who want to try k8s out; needs internet access

2.4 Use cases for k8s

k8s is best suited to running microservice projects!

The traditional monolithic website:
MVC architecture, one main domain, one shared database; as users grow, the database is the first thing to give out
1 architecture stack: 2 load balancers, 3-4 web servers, 1 cache server
environments: development, test, staging, production
reserved resources: about 20 servers

Microservices:
SOA-style development, a microservice architecture:
lots of small services, each with its own database, its own domain name and its own web service

They give higher concurrency, higher availability and shorter release cycles,
at the cost of hundreds of architecture stacks and thousands of servers; ansible, automated code deployment and monitoring become a shared piece of common infrastructure.
docker deployment ----> k8s management ---> autoscaling; k8s is a natural fit for microservices

docker solved the deployment problem for microservices; k8s solved the clustering problem for docker.

small scale ---> large scale

3: Commonly used k8s resources

3.1 Creating a pod resource (the smallest resource)

The main parts of a k8s yaml file:

apiVersion: v1  # API version
kind: Pod       # resource type
metadata:       # attributes
spec:           # details

k8s_pod.yaml

[root@master ~]# mkdir k8s_yaml
[root@master ~]# cd k8s_yaml/
[root@master k8s_yaml]# mkdir pod
[root@master k8s_yaml]# cd pod/
[root@master pod]# vi k8s_pod.yaml
[root@master pod]# cat k8s_pod.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: web
spec:
  containers:
    - name: nginx
      image: 10.0.0.11:5000/nginx:1.13
      ports:
        - containerPort: 80
# On node-2 (10.0.0.13)
[root@node-2 ~]# rz 
[root@node-2 ~]# ls
anaconda-ks.cfg  docker_busybox.tar.gz  docker_nginx1.13.tar.gz
[root@node-2 ~]# docker load -i docker_nginx1.13.tar.gz 
d626a8ad97a1: Loading layer 58.46 MB/58.46 MB
82b81d779f83: Loading layer 54.21 MB/54.21 MB
7ab428981537: Loading layer 3.584 kB/3.584 kB
Loaded image: docker.io/nginx:1.13
[root@node-2 ~]# docker tag docker.io/nginx:1.13 10.0.0.11:5000/nginx:1.13
[root@node-2 ~]# docker push 10.0.0.11:5000/nginx:1.13
The push refers to a repository [10.0.0.11:5000/nginx]
7ab428981537: Pushed 
82b81d779f83: Pushed 
d626a8ad97a1: Pushed 
1.13: digest: sha256:e4f0474a75c510f40b37b6b7dc2516241ffa8bde5a442bde3d372c9519c84d90 size: 948
### Verify on the master
[root@master pod]# kubectl create -f k8s_pod.yaml 
pod "nginx" created
[root@master pod]# kubectl get pod
NAME      READY     STATUS              RESTARTS   AGE
nginx     0/1       ContainerCreating   0          48s
[root@master pod]# kubectl get pod -o wide
NAME      READY     STATUS              RESTARTS   AGE       IP        NODE
nginx     0/1       ContainerCreating   0          1m        <none>    10.0.0.13

### Upload the pod-infrastructure image on node-2 (10.0.0.13)
[root@node-2 ~]# rz
[root@node-2 ~]# ls
 docker_busybox.tar.gz  docker_nginx1.13.tar.gz  pod-infrastructure-latest.tar.gz
[root@node-2 ~]# docker load -i pod-infrastructure-latest.tar.gz 
df9d2808b9a9: Loading layer 202.3 MB/202.3 MB
0a081b45cb84: Loading layer 10.24 kB/10.24 kB
ba3d4cbbb261: Loading layer 12.51 MB/12.51 MB
Loaded image: docker.io/tianyebj/pod-infrastructure:latest
[root@node-2 ~]# docker tag docker.io/tianyebj/pod-infrastructure:latest 10.0.0.11:5000/pod-infrastructure:latest
[root@node-2 ~]# docker push 10.0.0.11:5000/pod-infrastructure:latest 
The push refers to a repository [10.0.0.11:5000/pod-infrastructure]
ba3d4cbbb261: Preparing 
0a081b45cb84: Preparing 
df9d2808b9a9: Pushed 
latest: digest: sha256:a378b2d7a92231ffb07fdd9dbd2a52c3c439f19c8d675a0d8d9ab74950b15a1b size: 948

### Add this on both node-1 and node-2
[root@node-1 ~]# vim /etc/kubernetes/kubelet   ### point the pod-infra image at the private registry to avoid pull errors against the Red Hat registry
......
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=10.0.0.11:5000/pod-infrastructure:latest"
......
[root@node-2 ~]# vim /etc/kubernetes/kubelet 
[root@node-1 ~]# systemctl restart kubelet.service   ## restart kubelet on both node-1 and node-2
[root@node-2 ~]# systemctl restart kubelet.service 

### Verify
[root@master pod]# kubectl get pod -o wide
NAME      READY     STATUS    RESTARTS   AGE       IP            NODE
nginx     1/1       Running   0          20m       172.16.83.2   10.0.0.13
[root@master pod]# curl -I 172.16.83.2
HTTP/1.1 200 OK
Server: nginx/1.13.12
Date: Mon, 09 Dec 2019 14:26:33 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Mon, 09 Apr 2018 16:01:09 GMT
Connection: keep-alive
ETag: "5acb8e45-264"
Accept-Ranges: bytes

### Verification succeeded
----------------------------------
[root@master pod]# cp k8s_pod.yaml k8s_pod2.yaml 
[root@master pod]# vim k8s_pod2.yaml 
[root@master pod]# kubectl create -f k8s_pod2.yaml 
pod "nginx2" created
[root@master pod]# kubectl get pod
NAME      READY     STATUS              RESTARTS   AGE
nginx     1/1       Running             0          27m
nginx2    0/1       ContainerCreating   0          17s
[root@master pod]# kubectl get pod -o wide
NAME      READY     STATUS    RESTARTS   AGE       IP            NODE
nginx     1/1       Running   0          29m       172.16.83.2   10.0.0.13
nginx2    1/1       Running   0          2m        172.16.77.2   10.0.0.12
[root@master pod]# curl -I 172.16.77.2
HTTP/1.1 200 OK
Server: nginx/1.13.12
Date: Mon, 09 Dec 2019 14:35:18 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Mon, 09 Apr 2018 16:01:09 GMT
Connection: keep-alive
ETag: "5acb8e45-264"
Accept-Ranges: bytes

[root@master pod]# 

Why does creating a single pod resource make k8s start two containers?

A pod resource consists of at least two containers:

the infrastructure (pause) container, which provides the shared plumbing,

and the business container, nginx in this case.

Pod config file 2:

[root@master pod]# vim k8s_pod3.yaml
apiVersion: v1
kind: Pod
metadata:
  name: test
  labels:
    app: web
spec:
  containers:
    - name: nginx
      image: 10.0.0.11:5000/nginx:1.13
      ports:
        - containerPort: 80
    - name: busybox
      image: 10.0.0.11:5000/busybox:latest
      command: ["sleep","10000"]
[root@master pod]# kubectl create -f k8s_pod3.yaml 
pod "test" created
[root@master pod]# kubectl get pod -o wide
NAME      READY     STATUS    RESTARTS   AGE       IP            NODE
nginx     1/1       Running   0          33m       172.16.83.2   10.0.0.13
nginx2    1/1       Running   0          6m        172.16.77.2   10.0.0.12
test      2/2       Running   0          16s       172.16.83.3   10.0.0.13
[root@node-2 ~]# docker ps
CONTAINER ID        IMAGE                                      COMMAND                  CREATED              STATUS              PORTS               NAMES
2434a399924a        10.0.0.11:5000/busybox:latest              "sleep 10000"            About a minute ago   Up 59 seconds                           k8s_busybox.7e7ae56a_test_default_997268fb-1a91-11ea-b30c-000c290c9463_c101f9b4
a71099237d7b        10.0.0.11:5000/nginx:1.13                  "nginx -g 'daemon ..."   About a minute ago   Up About a minute                       k8s_nginx.91390390_test_default_997268fb-1a91-11ea-b30c-000c290c9463_f35f0c75
deca3d444e40        10.0.0.11:5000/pod-infrastructure:latest   "/pod"                   About a minute ago   Up About a minute                       k8s_POD.177f01b0_test_default_997268fb-1a91-11ea-b30c-000c290c9463_e201879a
eaf35aa1a6ca        10.0.0.11:5000/nginx:1.13                  "nginx -g 'daemon ..."   15 minutes ago       Up 15 minutes                           k8s_nginx.91390390_nginx_default_ffb61ebf-1a8c-11ea-b30c-000c290c9463_96956214
2f0a3f968c06        10.0.0.11:5000/pod-infrastructure:latest   "/pod"                   15 minutes ago       Up 15 minutes                           k8s_POD.177f01b0_nginx_default_ffb61ebf-1a8c-11ea-b30c-000c290c9463_ea168898
[root@node-2 ~]# 

The pod is the smallest unit of resource in k8s.

Eviction and replacement:
kubelet watches the docker containers on its own node and starts new containers when needed;
at the cluster level, when the pod count drops, the controller-manager starts new pods.

rc is associated with its pods through a label selector.

3.2 The ReplicationController resource (rc)

rc: keeps the specified number of pods alive at all times; rc is tied to its pods through a label selector.

Common operations on k8s resources (create, delete, update, query):
kubectl create -f xxx.yaml                                   # create a resource
kubectl get pod|rc                                           # list resources
kubectl describe pod nginx                                   # show the details of a resource
kubectl delete pod nginx   (or kubectl delete -f xxx.yaml)   # delete a resource
kubectl edit pod nginx                                       # edit a resource's definition
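
A few more queries that come in handy in the rest of this section (the names are the ones used in this document):

kubectl get pod -o wide            # shows which node each pod landed on and its pod IP
kubectl get pod -l app=myweb       # selects pods through the rc's label selector
kubectl logs nginx                 # container logs of the nginx pod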

Create an rc (rc is short for ReplicationController)

[root@master k8s_yaml]# mkdir rc
[root@master k8s_yaml]# cd rc/
[root@master rc]# vim k8s_rc.yml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 5
  selector:
    app: myweb
  template:
    metadata:
      labels:
        app: myweb
    spec:
      containers:
      - name: myweb
        image: 10.0.0.11:5000/nginx:1.13
        ports:
        - containerPort: 80
[root@master rc]# kubectl create -f k8s_rc.yml 
replicationcontroller "nginx" created
[root@master rc]# kubectl get rc
NAME      DESIRED   CURRENT   READY     AGE
nginx     5         5         5         19s
[root@master rc]# kubectl get pod
NAME          READY     STATUS    RESTARTS   AGE
nginx         1/1       Running   0          39m
nginx-3243v   1/1       Running   0          31s
nginx-9fzgc   1/1       Running   0          31s
nginx-ppgdv   1/1       Running   0          31s
nginx-sxtp0   1/1       Running   0          31s
nginx-x5mkk   1/1       Running   0          31s
nginx2        1/1       Running   0          12m
test          2/2       Running   0          6m
[root@master rc]# 

rc rolling upgrade
Create a new nginx-rc2.yml (shown further below).

Upgrade:
kubectl rolling-update nginx -f nginx-rc2.yml --update-period=10s

Roll back:
kubectl rolling-update nginx2 -f nginx2-rc.yml --update-period=1s

[root@master rc]# cp k8s_rc.yml k8s_rc2.yml 
[root@master rc]# vim k8s_rc2
[root@master rc]# vim k8s_rc2.yml 
[root@master rc]# cat k8s_rc2.yml 
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx2
spec:
  replicas: 5
  selector:
    app: myweb2
  template:
    metadata:
      labels:
        app: myweb2
    spec:
      containers:
      - name: myweb
        image: 10.0.0.11:5000/nginx:1.15
        ports:
        - containerPort: 80
### Upload the nginx 1.15 image on node-2
[root@node-2 ~]# rz 
[root@node-2 ~]# docker load -i docker_nginx1.15.tar.gz 
Loaded image: docker.io/nginx:latest
[root@node-2 ~]# docker tag docker.io/nginx:latest 10.0.0.11:5000/nginx:1.15
[root@node-2 ~]# docker push 10.0.0.11:5000/nginx:1.15
The push refers to a repository [10.0.0.11:5000/nginx]
92b86b4e7957: Pushed 
94ad191a291b: Pushed 
8b15606a9e3e: Pushed 
1.15: digest: sha256:204a9a8e65061b10b92ad361dd6f406248404fe60efd5d6a8f2595f18bb37aad size: 948


[root@master rc]# kubectl rolling-update nginx -f k8s_rc2.yml --update-period=5s
[root@master rc]# kubectl rolling-update nginx2 -f k8s_rc.yml --update-period=1s
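
To watch the rolling update and confirm the image change (a small check, assuming the two rc files above):

kubectl get pod -o wide -w                       # pods are replaced one at a time
kubectl get rc nginx2 -o yaml | grep image:      # should show 10.0.0.11:5000/nginx:1.15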

3.3 The service resource

A service exposes a pod's port to the outside.

Create a service

apiVersion: v1
kind: Service
metadata:
  name: myweb
spec:
  type: NodePort        # or ClusterIP
  ports:
    - port: 80          # the port on the cluster IP
      nodePort: 30000   # the port exposed on every node
      targetPort: 80    # the port on the pod
  selector:
    app: myweb2
    
##### Expose a port automatically
Generate a svc from the command line: kubectl expose deployment nginx --target-port=80 --type=NodePort --port=80


Change the replica count:  kubectl scale rc nginx1 --replicas=2
Enter a container:         kubectl exec -it nginx1-1frnf /bin/bash

修改nodePort范围

vim  /etc/kubernetes/apiserver
KUBE_API_ARGS="--service-node-port-range=3000-50000"
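
After changing the range, restart the api-server so it takes effect, then create and test the service (a quick check; the file name k8s_svc.yml for the Service above is an assumption):

systemctl restart kube-apiserver.service
kubectl create -f k8s_svc.yml
kubectl get svc myweb                  # confirm 80:30000/TCP is listed
curl -I http://10.0.0.12:30000/        # any node IP works for a NodePort service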

By default a service load-balances with iptables; from k8s 1.8 on, lvs/ipvs (layer-4 load balancing) is recommended instead.

Automatic service discovery

The three kinds of IP: node IPs (known to the api-server), the VIP / cluster IP range (allocated by the api-server), and pod IPs (recorded by flannel in etcd under /atomic.io).

3.4 The deployment resource

A rolling upgrade done with rc interrupts access to the service (the pod labels change, so the service loses its endpoints), which is why k8s introduced the deployment resource.

Create a deployment

[root@master k8s_yaml]# mkdir deploy
[root@master k8s_yaml]# cd deploy/
[root@master deploy]# vim k8s_deploy.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: 10.0.0.11:5000/nginx:1.13
        ports:
        - containerPort: 80
        resources:
          limits:
            cpu: 100m
          requests:
            cpu: 100m
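
A quick way to create and inspect it (the file name matches the shell prompt above):

kubectl create -f k8s_deploy.yml
kubectl get deployment nginx-deployment
kubectl get rs,pod -l app=nginx       # the deployment manages its pods through a ReplicaSet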

deployment upgrade and rollback

Create a deployment from the command line

kubectl run nginx --image=10.0.0.11:5000/nginx:1.13 --replicas=3 --record   ## the image is pulled from wherever --image points

Upgrade the image version from the command line

kubectl set image deploy nginx nginx=10.0.0.11:5000/nginx:1.15

List all historical revisions of a deployment

kubectl rollout history deployment nginx

Roll a deployment back to the previous revision

kubectl rollout undo deployment nginx

Roll a deployment back to a specific revision

kubectl rollout undo deployment nginx --to-revision=2

ReplicaSets (RS) support set-based, wildcard-style label selectors


3.5 Exercise: tomcat + mysql

Inside k8s, containers reach each other through the service VIP (cluster IP)!

Start all the services

systemctl restart flanneld.service 
systemctl restart docker
systemctl restart kubelet.service
systemctl restart kube-proxy.service 


rc and svc for mysql

[root@k8s-master tomcat_daemon]# cat mysql-rc.yml 
apiVersion: v1
kind: ReplicationController
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: 10.0.0.11:5000/mysql:5.7
          ports:
          - containerPort: 3306
          env:
          - name: MYSQL_ROOT_PASSWORD
            value: '123456'

svc

[root@k8s-master tomcat_daemon]# cat mysql-svc.yml 
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
    - port: 3306
      targetPort: 3306
  selector:
    app: mysql

rc for tomcat

[root@k8s-master tomcat_daemon]# cat tomcat-rc.yml 
apiVersion: v1
kind: ReplicationController
metadata:
  name: myweb
spec:
  replicas: 1
  selector:
    app: myweb
  template:
    metadata:
      labels:
        app: myweb
    spec:
      containers:
        - name: myweb
          image: 10.0.0.11:5000/tomcat-app:v2
          ports:
          - containerPort: 8080
          env:
          - name: MYSQL_SERVICE_HOST
            value: '10.254.36.202'
          - name: MYSQL_SERVICE_PORT
            value: '3306'

svc for tomcat

[root@k8s-master tomcat_daemon]# cat tomcat-svc.yml 
apiVersion: v1
kind: Service
metadata:
  name: myweb
spec:
  type: NodePort
  ports:
    - port: 8080
      nodePort: 30008
  selector:
    app: myweb
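
A sketch of the creation order, assuming the four files above: create mysql first, read its cluster IP, put it into MYSQL_SERVICE_HOST in tomcat-rc.yml, then create tomcat:

kubectl create -f mysql-rc.yml -f mysql-svc.yml
kubectl get svc mysql                    # note the CLUSTER-IP, e.g. 10.254.36.202
# edit tomcat-rc.yml so MYSQL_SERVICE_HOST matches that IP, then:
kubectl create -f tomcat-rc.yml -f tomcat-svc.yml
kubectl get pod -o wide                  # tomcat reaches mysql through the service VIP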

3.6 wordpress + mysql

The wordpress manifests

[root@k8s-master worepress_daemon]# cat wordpress-rc.yml 
apiVersion: v1
kind: ReplicationController
metadata:
  name: mywordpress
spec:
  replicas: 1
  selector:
    app: mywordpress
  template:
    metadata:
      labels:
        app: mywordpress
    spec:
      containers:
        - name: mywordpress
          image: 10.0.0.11:5000/wordpress:v1
          ports:
          - containerPort: 80
          env:
          - name: WORDPRESS_DB_HOST
            value: '10.254.112.209'
          - name: WORDPRESS_DB_USER
            value: 'wordpress'
          - name: WORDPRESS_DB_PASSWORD
            value: 'wordpress'



[root@k8s-master worepress_daemon]# cat wordpress-svc.yml 
apiVersion: v1
kind: Service
metadata:
  name: mywordpress
spec:
  type: NodePort
  ports:
    - port: 80
      nodePort: 30010
  selector:
    app: mywordpress

The mysql manifests

[root@k8s-master worepress_daemon]# cat mysql-rc.yml
apiVersion: v1
kind: ReplicationController
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: 10.0.0.11:5000/mysql:5.7
          ports:
          - containerPort: 3306
          env:
          - name: MYSQL_ROOT_PASSWORD
            value: 'somewordpress'
          - name: MYSQL_DATABASE
            value: 'wordpress'
          - name: MYSQL_USER
            value: 'wordpress'
          - name: MYSQL_PASSWORD
            value: 'wordpress'





[root@k8s-master worepress_daemon]# cat mysql-svc.yml 
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
    - port: 3306
      targetPort: 3306
  selector:
    app: mysql

3.7 Deployment-based wordpress + mysql

[root@k8s-master wordpress_deploy]# cat wp-rc.yml 
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: wordpress-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: wp
    spec:
      containers:
      - name: wp
        image: 10.0.0.11:5000/wordpress:v1
        ports:
        - containerPort: 80
        env:
        - name: WORDPRESS_DB_HOST
          value: '10.254.235.122'
        - name: WORDPRESS_DB_USER
          value: 'wordpress'
        - name: WORDPRESS_DB_PASSWORD
          value: 'wordpress'
        resources:
          limits:
            cpu: 100m
          requests:
            cpu: 100m


        
[root@k8s-master wordpress_deploy]# cat wp-svc.yml 
apiVersion: v1
kind: Service
metadata:
  name: wp
spec:
  type: NodePort
  ports:
    - port: 80
      nodePort: 30011
  selector:
    app: wp         
            
            
[root@k8s-master wordpress_deploy]# cat mysql-wp-rc.yml 
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mysql-wp
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mysql-wp
    spec:
      containers:
        - name: mysql-wp
          image: 10.0.0.11:5000/mysql:5.7
          ports:
          - containerPort: 3306
          env:
          - name: MYSQL_ROOT_PASSWORD
            value: 'somewordpress'
          - name: MYSQL_DATABASE
            value: 'wordpress'
          - name: MYSQL_USER
            value: 'wordpress'
          - name: MYSQL_PASSWORD
            value: 'wordpress'


[root@k8s-master wordpress_deploy]# cat mysql-wp-svc.yml 
apiVersion: v1
kind: Service
metadata:
  name: mysql-wp
spec:
  ports:
    - port: 3306
      targetPort: 3306
  selector:
    app: mysql-wp

4: k8s add-on components

4.1 The DNS service

Install the DNS service

1: Download the dns docker image bundle

wget http://192.168.37.200/docker_image/docker_k8s_dns.tar.gz

2: Load the dns docker images (on node-2)

3: Edit skydns-rc.yaml

spec:
  nodeSelector:
    kubernetes.io/hostname: 10.0.0.13
  containers:   

4: Create the DNS service

kubectl  create  -f   skydns-rc.yaml

5: Check

kubectl get all --namespace=kube-system

6: Edit the kubelet config file on every node

vim  /etc/kubernetes/kubelet

KUBELET_ARGS="--cluster_dns=10.254.230.254 --cluster_domain=cluster.local"

systemctl   restart kubelet
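
A quick resolution test: recreate a busybox pod so it picks up the new DNS settings, then resolve a service name (a sketch; the test pod and k8s_pod3.yaml are the ones from section 3.1):

kubectl delete -f k8s_pod3.yaml
kubectl create -f k8s_pod3.yaml
kubectl exec -it test -c busybox -- nslookup kubernetes.default.svc.cluster.local
# the answer should come from the kube-dns service IP 10.254.230.254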

4.2 Namespaces

A namespace provides resource isolation.

[root@master ~]# kubectl create namespace oldqiang            ### create a namespace
In production there is usually one namespace per application. Taking tomcat as the example, add a namespace:
[root@master ~]# cd k8s_yaml/tomcat_demo/   ### go to the tomcat manifests
[root@master tomcat_demo]# ls                ### list the yml files
mysql-rc.yml  mysql-svc.yml  tomcat-rc.yml  tomcat-svc.yml
#### insert "  namespace: tomcat" after line 3 of every file
[root@master tomcat_demo]# sed -i '3a \ \ namespace: tomcat' *.yml
[root@master tomcat_demo]# kubectl create namespace tomcat   ### create the tomcat namespace
[root@master tomcat_demo]# kubectl create -f .       # create all the resources
[root@master tomcat_demo]# kubectl get all -n tomcat
#### creating zabbix or any other application works the same way

4.3 Health checks

4.3.1 Kinds of probes

livenessProbe: liveness check; periodically checks whether the service is still alive, and restarts the container when the check fails

readinessProbe: readiness check; periodically checks whether the service is usable, and removes the pod from the service's endpoints when it is not

4.3.2 Probe methods

- exec: runs a command and inspects its exit code; 0 means healthy, non-zero means unhealthy
- httpGet: checks the status code of an http request; 2xx and 3xx are healthy, 4xx and 5xx are failures
- tcpSocket: tests whether a TCP port accepts connections

4.3.3 Using the exec liveness probe

[root@master health]# vi  nginx_pod_exec.yml 
apiVersion: v1
kind: Pod
metadata:
  name: exec
spec:
  containers:                       ### the containers
    - name: nginx
      image: 10.0.0.11:5000/nginx:1.13
      ports:
        - containerPort: 80
      args:                        ### command the container runs
        - /bin/sh
        - -c
        - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
      livenessProbe:
        exec:
          command:
            - cat
            - /tmp/healthy
        initialDelaySeconds: 5
        periodSeconds: 5
        timeoutSeconds: 5
        successThreshold: 1
        failureThreshold: 1

### (in vi, press i to enter insert mode)
[root@master health]# kubectl describe pod exec

4.3.4 Using the httpGet liveness probe

[root@master health]# vi nginx_pod_httpGet.yml 
apiVersion: v1
kind: Pod
metadata:
  name: httpget
spec:
  containers:
    - name: nginx
      image: 10.0.0.11:5000/nginx:1.13
      ports:
        - containerPort: 80
      livenessProbe:
        httpGet:
          path: /index.html
          port: 80
        initialDelaySeconds: 3
        periodSeconds: 3

4.3.5 Using the tcpSocket liveness probe

[root@master health]#  vim nginx_pod_tcpSocket.yaml
apiVersion: v1
kind: Pod
metadata:
  name: tcpsocket
spec:
  containers:
    - name: nginx
      image: 10.0.0.11:5000/nginx:1.13
      ports:
        - containerPort: 80
      livenessProbe:
        tcpSocket:
          port: 80
        initialDelaySeconds: 3
        periodSeconds: 3

4.3.6 Using the httpGet readiness probe

[root@master health]#  vim nginx-rc-httpGet.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: readiness
spec:
  replicas: 2
  selector:
    app: readiness
  template:
    metadata:
      labels:
        app: readiness
    spec:
      containers:
      - name: readiness
        image: 10.0.0.11:5000/nginx:1.13
        ports:
        - containerPort: 80
        readinessProbe:
          httpGet:
            path: /qiangge.html
            port: 80
          initialDelaySeconds: 3
          periodSeconds: 3
          
          
[root@master health]# kubectl create -f nginx-rc-httpGet.yaml 
[root@master health]# kubectl expose rc readiness --port=80 --target-port=80 --type=NodePort
[root@master health]# kubectl describe svc readiness ### no endpoints yet: the readiness check is failing
[root@master health]# kubectl get all
......
po/readiness-1mj49      0/1       Running             0          18m
po/readiness-s0m9s      0/1       Running             0          18m
......
[root@master health]# kubectl exec -it readiness-1mj49 /bin/bash
root@readiness-1mj49:/# echo 'ewrf' >/usr/share/nginx/html/qiangge.html
[root@master health]# kubectl describe svc readiness 
......
Endpoints:		172.16.83.9:80
......

4.4 The dashboard service

1: Upload and load the image, then tag it
2: Create the dashboard deployment and service
3: Open http://10.0.0.11:8080/ui/


### on node-2
[root@node-2 opt]# ls
kubernetes-dashboard-amd64_v1.4.1.tar.gz
[root@node-2 opt]# docker load -i kubernetes-dashboard-amd64_v1.4.1.tar.gz 

### on the master node
[root@master health]# mkdir dashboard
[root@master health]# cd dashboard/
[root@master dashboard]# cat dashboard-svc.yaml 
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 80
    targetPort: 9090
[root@master dashboard]# cat dashboard.yaml 
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
# Keep the name in sync with image version and
# gce/coreos/kube-manifests/addons/dashboard counterparts
  name: kubernetes-dashboard-latest
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
        version: latest
        kubernetes.io/cluster-service: "true"
    spec:
      nodeName: 10.0.0.13    ### adding nodeName here pins the pod to the 10.0.0.13 node
      containers:
      - name: kubernetes-dashboard
        image: index.tenxcloud.com/google_containers/kubernetes-dashboard-amd64:v1.4.1
        #### the image reference is written directly into the yml file
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
            memory: 50Mi
          requests:
            cpu: 100m
            memory: 50Mi
        ports:
        - containerPort: 9090
        args:
         -  --apiserver-host=http://10.0.0.11:8080
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
[root@master dashboard]# kubectl create -f .

## Open in a browser: http://10.0.0.11:8080/ui/
dockerfile:
CMD 
ENTRYPOINT

Other resources:
DaemonSet: "cattle" applications, stateless, with no data of their own, can be killed freely
PetSet (StatefulSet): "pet" applications, which keep their own data

Job: a one-off container
CronJob: scheduled tasks


4.5 Accessing a service through the api-server reverse proxy

Option 1: NodePort type   ## also gets a VIP
type: NodePort
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30008

Option 2: ClusterIP type
 type: ClusterIP
  ports:
    - port: 80
      targetPort: 80
      
Option 3: http://10.0.0.11:8080/api/v1/proxy/namespaces/<namespace>/services/<service name>
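
For example, with the readiness service created in section 4.3.6 (default namespace), option 3 looks like this (adjust namespace and service name to your own):

curl http://10.0.0.11:8080/api/v1/proxy/namespaces/default/services/readiness/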

5: k8s autoscaling

k8s autoscaling needs the heapster monitoring add-on.

5.1 Install heapster monitoring

1: Upload and load the images, then tag them

kubelet's cadvisor listens on 10.0.0.12:4194; heapster collects the metrics through the api-server and stores them in influxdb; grafana draws the graphs, and the dashboard embeds grafana.
ls *.tar.gz
for n in `ls *.tar.gz`;do docker load -i $n ;done
docker tag docker.io/kubernetes/heapster_grafana:v2.6.0 10.0.0.11:5000/heapster_grafana:v2.6.0
docker tag  docker.io/kubernetes/heapster_influxdb:v0.5 10.0.0.11:5000/heapster_influxdb:v0.5
docker tag docker.io/kubernetes/heapster:canary 10.0.0.11:5000/heapster:canary

2: Upload the config files and run kubectl create -f .

3: Open the dashboard to verify



The svc fronts the pods and the deployment controls them; the hpa (the autoscaling rule) is created on the deployment, and the hpa only works with monitoring in place.

5.2 Autoscaling

[root@master monitor]#  kubectl delete pod --all  ## delete all pods
[root@master monitor]# cd ../deploy/
[root@master deploy]# ls
k8s_deploy.yml
[root@master deploy]# cat k8s_deploy.yml 
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: 10.0.0.11:5000/nginx:1.13
        ports:
        - containerPort: 80
        resources:
          limits:
            cpu: 100m
          requests:
            cpu: 100m
[root@master deploy]# kubectl create -f k8s_deploy.yml 
[root@master deploy]# kubectl expose deployment nginx-deployment --port=80 --target-port=80 --type=NodePort
[root@master deploy]# kubectl get all
svc/nginx-deployment   10.254.174.57    <nodes>       80:20591/TCP   1d
[root@master deploy]# curl -I http://10.0.0.12:20591
HTTP/1.1 200 OK
Server: nginx/1.13.12
Date: Wed, 11 Dec 2019 14:23:50 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Mon, 09 Apr 2018 16:01:09 GMT
Connection: keep-alive
ETag: "5acb8e45-264"
Accept-Ranges: bytes

[root@master deploy]# yum install httpd -y
[root@master deploy]# ab -n 1000000 -c 40 http://10.0.0.12:20591/index.html

1: Edit the rc config file (add the resources section)

  containers:
  - name: myweb
    image: 10.0.0.11:5000/nginx:1.13
    ports:
    - containerPort: 80
    resources:
      limits:
        cpu: 100m
      requests:
        cpu: 100m

2: Create the autoscaling rule

kubectl autoscale  -n qiangge replicationcontroller myweb --max=8 --min=1 --cpu-percent=8

3: Test

 ab -n 1000000 -c 40 http://172.16.28.6/index.html
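
While ab is running, watch the hpa and the pod count grow (assuming the myweb rc in the qiangge namespace from step 2):

kubectl get hpa -n qiangge
kubectl get pod -n qiangge -o wide
kubectl describe hpa myweb -n qiangge     # shows current vs target CPU and scaling events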

Scale-up: (screenshot)

Scale-down: (screenshot)

6: Persistent storage

pv: persistent volume
pvc: persistent volume  claim

Why persistent storage: data uploaded by users has to be kept.

Types of data persistence:

6.1 emptyDir:

    spec:
      nodeName: 10.0.0.13
      volumes:
      - name: mysql
        emptyDir: {}
      containers:
        - name: wp-mysql
          image: 10.0.0.11:5000/mysql:5.7
          imagePullPolicy: IfNotPresent
          ports:
          - containerPort: 3306
          volumeMounts:
          - mountPath: /var/lib/mysql
            name: mysql

6.2 HostPath:

    spec:
      nodeName: 10.0.0.13
      volumes:
      - name: mysql
        hostPath:
          path: /data/wp_mysql
      containers:
        - name: wp-mysql
          image: 10.0.0.11:5000/mysql:5.7
          imagePullPolicy: IfNotPresent
          ports:
          - containerPort: 3306
          volumeMounts:
          - mountPath: /var/lib/mysql
            name: mysql

6.3 nfs:

      volumes:
      - name: mysql
        nfs:
          path: /data/wp_mysql
          server: 10.0.0.11

6.4 pvc:

pv: persistent volume, a global resource shared by the whole k8s cluster

pvc: persistent volume claim, a namespaced resource that belongs to one namespace

[root@master ~]# kubectl explain pod.spec.volumes ## look up the field syntax

6.4.1: Install the NFS server (master, 10.0.0.11)

yum install nfs-utils.x86_64 -y
mkdir /data
vim /etc/exports
/data  10.0.0.0/24(rw,async,no_root_squash,no_all_squash)
systemctl start rpcbind
systemctl start nfs

6.4.2: Install the NFS client on the nodes (10.0.0.12 and 10.0.0.13)

yum install nfs-utils.x86_64 -y
showmount -e 10.0.0.11

6.4.3: Create the pv and pvc

Upload the yaml config files and create the pv and pvc; a minimal sketch is shown below.
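
A minimal sketch of the two objects, assuming the NFS export /data on 10.0.0.11 set up above, a 10Gi size, and the claim name tomcat-mysql referenced by the rc below; the sub-directory and the tomcat namespace are assumptions, adjust to your own:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: tomcat-mysql
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: /data/tomcat_mysql     # create this directory under the exported /data first
    server: 10.0.0.11
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: tomcat-mysql
  namespace: tomcat
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi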

6.4.4: Create mysql-rc and use the volume in the pod template

          volumeMounts:
          - name: mysql
            mountPath: /var/lib/mysql
      volumes:
      - name: mysql
        persistentVolumeClaim:
          claimName: tomcat-mysql

6.4.5: Verify the persistence

Verification method 1: delete the mysql pod; the database data must survive

kubectl delete pod mysql-gt054

Verification method 2: check whether mysql's data files appear on the NFS server


6.6: Distributed storage with glusterfs

a: What is glusterfs

Glusterfs is an open-source distributed file system with strong horizontal scalability: it can grow to several PB of capacity and thousands of clients, joining storage over the network into one parallel network file system. It is scalable, high-performance and highly available.

b: Install glusterfs

On all nodes:
yum install  centos-release-gluster -y
yum install  glusterfs-server -y
systemctl start glusterd.service
systemctl enable glusterd.service
# create storage units (bricks) for the gluster cluster
mkdir -p /gfs/test1
mkdir -p /gfs/test2
mkdir -p /gfs/test3

### add three 10G disks to every node, then rescan the SCSI bus
echo '- - -' >/sys/class/scsi_host/host0/scan 
echo '- - -' >/sys/class/scsi_host/host1/scan 
echo '- - -' >/sys/class/scsi_host/host2/scan 
fdisk -l
mkfs.xfs /dev/sdb 
mkfs.xfs /dev/sdc 
mkfs.xfs /dev/sdd
mount /dev/sdb /gfs/test1
mount /dev/sdc /gfs/test2
mount /dev/sdd /gfs/test3

c: Add nodes to the storage pool

On the master node:
gluster pool list
gluster peer probe k8s-node1
gluster peer probe k8s-node2
gluster pool list

d: Managing glusterfs volumes

Create a distributed replicated volume:
gluster volume create qiangge replica 2 k8s-master:/gfs/test1 k8s-master:/gfs/test2 k8s-node1:/gfs/test1 k8s-node1:/gfs/test2 force
Start the volume:
gluster volume start qiangge
Inspect the volume:
gluster volume info qiangge 
Mount the volume:
mount -t glusterfs 10.0.0.11:/qiangge /mnt

e: How a distributed replicated volume works

(diagram)

f: Growing a distributed replicated volume

Capacity before the expansion:
df   -h

Expansion command:
gluster volume add-brick qiangge k8s-node2:/gfs/test1 k8s-node2:/gfs/test2 force

Capacity after the expansion:
df   -h
### full walk-through
[root@master ~]# gluster volume create oldxu master:/gfs/test1 node-1:/gfs/test1 node-2:/gfs/test1 force
volume create: oldxu: success: please start the volume to access data
[root@master ~]# gluster volume info oldxu
 
Volume Name: oldxu
Type: Distribute
Volume ID: 3359e285-95ae-41a6-8791-70e4b6e0e52c
Status: Created
Snapshot Count: 0
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: master:/gfs/test1
Brick2: node-1:/gfs/test1
Brick3: node-2:/gfs/test1
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
[root@master ~]# gluster volume start oldxu
volume start: oldxu: success
[root@master ~]# mount -t glusterfs 127.0.0.1:/oldxu /mnt
[root@master ~]# df -h
Filesystem        Size  Used Avail Use% Mounted on
/dev/sda2          48G  4.0G   45G   9% /
devtmpfs          476M     0  476M   0% /dev
tmpfs             487M     0  487M   0% /dev/shm
tmpfs             487M   51M  437M  11% /run
tmpfs             487M     0  487M   0% /sys/fs/cgroup
tmpfs              98M     0   98M   0% /run/user/0
/dev/sdb           10G   33M   10G   1% /gfs/test1
/dev/sdc           10G   33M   10G   1% /gfs/test2
/dev/sdd           10G   33M   10G   1% /gfs/test3
overlay            48G  4.0G   45G   9% /var/lib/docker/overlay2/b912d985d96128c79652e7b93b67db57dab3131c586362bf91967424949db051/merged
shm                64M     0   64M   0% /var/lib/docker/containers/ca26dee7a7055b1fcb8201cb6c0f737130221c9607a6096b5615404b0d4d9a2b/shm
127.0.0.1:/oldxu   30G  404M   30G   2% /mnt
[root@master ~]# cp /data/wordpress/web/*.php /mnt
cp: cannot stat ‘/data/wordpress/web/*.php’: No such file or directory
[root@master ~]# gluster volume add-brick oldxu replica 2 master:/gfs/test2 node-1:/gfs/test2 node-2:/gfs/test2 force
volume add-brick: success
[root@master ~]# gluster volume add-brick oldxu  master:/gfs/test3 node-1:/gfs/test3  force
volume add-brick: success
[root@master ~]# tree /gfs/test1
/gfs/test1

0 directories, 0 files
[root@master ~]# gluster volume info oldxu
 
Volume Name: oldxu
Type: Distributed-Replicate
Volume ID: 3359e285-95ae-41a6-8791-70e4b6e0e52c
Status: Started
Snapshot Count: 0
Number of Bricks: 4 x 2 = 8
Transport-type: tcp
Bricks:
Brick1: master:/gfs/test1
Brick2: master:/gfs/test2
Brick3: node-1:/gfs/test1
Brick4: node-1:/gfs/test2
Brick5: node-2:/gfs/test1
Brick6: node-2:/gfs/test2
Brick7: master:/gfs/test3
Brick8: node-1:/gfs/test3
Options Reconfigured:
performance.client-io-threads: off
transport.address-family: inet
nfs.disable: on
[root@master ~]# gluster volume rebalance oldxu start
volume rebalance: oldxu: success: Rebalance on oldxu has been started successfully. Use rebalance status command to check status of the rebalance process.
ID: a287b4c7-755b-46a7-b22e-8b1a3bff3d39
[root@master ~]# tree /gfs
/gfs
├── test1
├── test2
└── test3

3 directories, 0 files

6.7 Using glusterfs storage from k8s

a: Create the endpoints

vi  glusterfs-ep.yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs
  namespace: tomcat
subsets:
- addresses:
  - ip: 10.0.0.11
  - ip: 10.0.0.12
  - ip: 10.0.0.13
  ports:
  - port: 49152
    protocol: TCP

b: Create the service

vi  glusterfs-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: glusterfs
  namespace: tomcat
spec:
  ports:
  - port: 49152
    protocol: TCP
    targetPort: 49152
  sessionAffinity: None
  type: ClusterIP

c: Create a glusterfs-backed pv

apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster
  labels:
    type: glusterfs
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: "glusterfs"
    path: "qiangge"
    readOnly: false

d: Create the pvc; a minimal sketch follows
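
A minimal sketch of the claim, matching the 50Gi pv above and the claimName gluster used in the pod below; putting it in the tomcat namespace (like the endpoints and service) is an assumption:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster
  namespace: tomcat
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50Gi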

e: Use the gluster volume in a pod

vi  nginx_pod.yaml
…… 
volumeMounts:
        - name: nfs-vol2
          mountPath: /usr/share/nginx/html
      volumes:
      - name: nfs-vol2
        persistentVolumeClaim:
          claimName: gluster


7: CI/CD with Jenkins

IP address   Service                        Memory
10.0.0.11    kube-apiserver :8080           1G
10.0.0.12    jenkins (tomcat + jdk) :8080   1G
10.0.0.13    gitlab :8080, :80              2G


7.1: Install gitlab and push the code

#a: install
wget https://mirrors.tuna.tsinghua.edu.cn/gitlab-ce/yum/el7/gitlab-ce-11.9.11-ce.0.el7.x86_64.rpm
yum localinstall gitlab-ce-11.9.11-ce.0.el7.x86_64.rpm -y
#b: configure
vim /etc/gitlab/gitlab.rb
external_url 'http://10.0.0.13'
prometheus_monitoring['enable'] = false
#c: apply the configuration and start the service
gitlab-ctl reconfigure

# open http://10.0.0.13 in a browser, change the root user's password and create a project

# push the code to the git repository
cd /srv/
rz -E
unzip xiaoniaofeifei.zip 
rm -fr xiaoniaofeifei.zip 

git config --global user.name "Administrator"
git config --global user.email "admin@example.com"
git init
git remote add origin http://10.0.0.13/root/xiaoniao.git
git add .
git commit -m "Initial commit"
git push -u origin master

7.2 Install jenkins and build docker images automatically

7.2.1: Install jenkins

cd /opt/
rz -E
rpm -ivh jdk-8u102-linux-x64.rpm 
mkdir /app
tar xf apache-tomcat-8.0.27.tar.gz -C /app
rm -fr /app/apache-tomcat-8.0.27/webapps/*
mv jenkins.war /app/apache-tomcat-8.0.27/webapps/ROOT.war
tar xf jenkin-data.tar.gz -C /root
/app/apache-tomcat-8.0.27/bin/startup.sh 
netstat -lntup

7.2.2: Open jenkins

Open http://10.0.0.12:8080/; the default account and password are admin:123456

7.2.3: Configure the jenkins credential for pulling code from gitlab

a: Generate a key pair on the jenkins host

ssh-keygen -t rsa

b: Paste the public key into gitlab


c: Create a global credential in jenkins


7.2.4: Test pulling the code


7.2.5: Write the dockerfile and test it

#vim dockerfile
FROM 10.0.0.11:5000/nginx:1.13
add .  /usr/share/nginx/html

Files that docker build should not ADD go into .dockerignore:
vim .dockerignore
dockerfile

docker build -t xiaoniao:v1 .
docker run -d -p 88:80 xiaoniao:v1

Open the xiaoniaofeifei project in a browser to test it.

7.2.6: Push the dockerfile and .dockerignore to the repository

git add dockerfile .dockerignore
git commit -m "first commit"
git push -u origin master

7.2.7: Click "Build Now" in jenkins; the docker image is built and pushed to the private registry automatically

Edit the jenkins job configuration


docker build -t 10.0.0.11:5000/test:v$BUILD_ID .
docker push 10.0.0.11:5000/test:v$BUILD_ID

7.3 Deploying to k8s automatically from jenkins

kubectl -s 10.0.0.11:8080 get nodes

if [ -f /tmp/xiaoniao.lock ];then
    docker  build  -t  10.0.0.11:5000/xiaoniao:v$BUILD_ID  .
    docker  push 10.0.0.11:5000/xiaoniao:v$BUILD_ID
    kubectl -s 10.0.0.11:8080 set image  -n xiaoniao deploy xiaoniao xiaoniao=10.0.0.11:5000/xiaoniao:v$BUILD_ID
    echo "更新成功"
else
    docker  build  -t  10.0.0.11:5000/xiaoniao:v$BUILD_ID  .
    docker  push 10.0.0.11:5000/xiaoniao:v$BUILD_ID
    kubectl  -s 10.0.0.11:8080  create  namespace  xiaoniao
    kubectl  -s 10.0.0.11:8080  run   xiaoniao  -n xiaoniao  --image=10.0.0.11:5000/xiaoniao:v$BUILD_ID --replicas=3 --record
    kubectl  -s 10.0.0.11:8080   expose -n xiaoniao deployment xiaoniao --port=80 --type=NodePort
    port=`kubectl -s 10.0.0.11:8080  get svc -n xiaoniao|grep -oP '(?<=80:)\d+'`
    echo "你的项目地址访问是http://10.0.0.13:$port"
    touch /tmp/xiaoniao.lock
fi

Jenkins one-click rollback

kubectl -s 10.0.0.11:8080 rollout undo -n xiaoniao deployment xiaoniao
