docker2
Drawbacks of Docker
1. A single-host tool; no effective clustering
2. Management overhead grows as the number of containers rises
3. No effective disaster-recovery / self-healing mechanism
4. No built-in orchestration templates, so fast, large-scale container scheduling is impossible
5. No unified configuration-management center
6. No tools for managing the container lifecycle
7. No graphical operations and management tools, etc.
Benefits of Docker
Docker unifies the infrastructure environment
1. operating-system versions
2. heterogeneous runtime environments, etc.
Docker unifies application packaging
1. Java programs
2. Python programs, etc.
Docker unifies how programs are run
1. java -jar > docker run
2. python test.py > docker run, etc.
A container-orchestration tool is therefore needed, for example:
1. docker compose, docker swarm
2. mesosphere + marathon
3. kubernetes (k8s)
Advantages of Kubernetes
1. Automatic bin packing, horizontal scaling, self-healing
2. Service discovery and load balancing
3. Automated rollouts (rolling update by default)
4. Centralized configuration and secret management
5. Storage orchestration
6. Batch job execution, etc.
Getting started with Kubernetes
Four groups of basic concepts
1. Pod / Pod controller
2. Name / Namespace
3. Label / Label selector
4. Service / Ingress
Pod
The smallest logical unit that K8S can run
One Pod can run multiple containers (the sidecar pattern)
Containers in a Pod share the UTS/NET/IPC namespaces
Think of a Pod and its containers as a pea pod and the peas inside it
Pod controller
A Pod controller is a template for launching Pods; it ensures that Pods started in K8S always run as expected (replica count, lifecycle, health checks, etc.)
Common Pod controllers:
Deployment
Manages stateless applications; supports rolling updates and rollbacks
DaemonSet
Ensures exactly one replica of a specific Pod runs on every node in the cluster
ReplicaSet
Ensures the number of Pod replicas matches the user's desired state
StatefulSet
Manages stateful applications
Job
Runs a one-off task to completion
CronJob
Runs tasks on a schedule (periodic tasks)
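To make the controller idea concrete, a minimal Deployment manifest might look like the sketch below; the names are hypothetical placeholders, and the image reuses the one pushed to Harbor later in these notes:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-dp                 # hypothetical name
  namespace: default
spec:
  replicas: 2                    # the controller keeps 2 Pod replicas running
  selector:
    matchLabels:
      app: nginx-dp              # must match the Pod template's labels
  template:                      # the Pod template this controller stamps out
    metadata:
      labels:
        app: nginx-dp
    spec:
      containers:
      - name: nginx
        image: harbor.od.com/public/nginx:v1.7.9
        ports:
        - containerPort: 80
```

Applied with `kubectl apply -f`, the Deployment creates a ReplicaSet, which in turn creates and supervises the Pods; deleting a Pod makes the controller start a replacement.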
Name
Inside K8S, every logical concept (feature) is defined as a "resource", so each resource has its own name
A "resource" carries configuration such as an API version (apiVersion), kind, metadata, spec (the desired-state manifest), and status
The "name" is usually defined in the resource's metadata
Namespace
As projects multiply, staff grow, and the cluster expands, a way to isolate the various "resources" inside K8S is needed: the namespace
A namespace can be thought of as a virtual cluster group inside K8S
"Resources" in different namespaces may share the same name; resources of the same kind within one namespace must have unique "names"
Used well, namespaces let cluster administrators categorize, manage, and browse the services delivered into K8S
The namespaces present by default are: default, kube-system, kube-public
Queries for a namespaced "resource" must specify the namespace
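A namespace is itself just a resource; a sketch of a manifest for one (the name is a hypothetical placeholder):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: app-test        # hypothetical namespace name
```

Once created, namespaced queries carry `-n`, e.g. `kubectl get pods -n kube-system`; a Deployment named `nginx-dp` in `app-test` does not clash with one of the same name in `default`.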
Label
Labels are K8S's hallmark management mechanism, making it easy to classify resource objects
One label can be attached to many resources, and one resource can carry many labels: a many-to-many relationship
Giving one resource several labels enables management along different dimensions
A label is a key=value pair; similar to labels, there are also "annotations"
Label selector
Once resources are labeled, a label selector can filter on those labels
Two kinds of selectors exist today: equality-based (equal / not equal) and set-based (in / not in / exists)
Many resources embed label-selector fields:
matchLabels
matchExpressions
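A sketch of both selector styles as they might appear inside a controller's spec (the label keys and values are hypothetical):

```yaml
selector:
  matchLabels:                  # equality-based: every key=value pair must match
    app: nginx-dp
  matchExpressions:             # set-based: richer operators
  - {key: env, operator: In, values: [dev, test]}   # env must be dev or test
  - {key: canary, operator: Exists}                 # key must be present, any value
```

A Pod matches only if all conditions from both fields hold, so the two styles can be combined to express one precise selection.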
Service
In the K8S world every Pod is assigned its own IP address, but that IP disappears when the Pod is destroyed
The Service is the core concept that solves this problem
A Service can be seen as the external access endpoint for a group of Pods providing the same service
Which Pods a Service targets is defined through label selectors
Service implementation types:
ClusterIP: provides a cluster-internal virtual IP for Pod access (the default mode)
NodePort: opens a port on each node for external access
LoadBalancer: access through an external load balancer
ClusterIP is the default mode; LoadBalancer requires an extra component to provide the load balancer
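A sketch of a NodePort Service fronting the Pods of the hypothetical `nginx-dp` Deployment above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc         # hypothetical name
spec:
  type: NodePort          # omit this line to get the default ClusterIP mode
  selector:
    app: nginx-dp         # targets all Pods carrying this label
  ports:
  - port: 80              # port on the Service's cluster-internal virtual IP
    targetPort: 80        # container port on the selected Pods
    nodePort: 30080       # port opened on every node (NodePort mode; range 30000-32767)
```

The Service IP stays stable while Pods behind it come and go; traffic to it is spread across the currently matching Pods.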
Ingress
Ingress is the externally exposed entry point of a K8S cluster, working at layer 7 of the OSI reference model
A Service can only schedule L4 traffic, expressed as ip+port
An Ingress can route traffic for different business domains and different URL access paths
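A sketch of an Ingress routing by host and path to the hypothetical Service above; the `extensions/v1beta1` API group is an assumption matching the older cluster versions and traefik setup these notes target:

```yaml
apiVersion: extensions/v1beta1   # assumed older API group; newer clusters use networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress            # hypothetical name
spec:
  rules:
  - host: nginx.od.com           # route by business domain...
    http:
      paths:
      - path: /                  # ...and by URL path
        backend:
          serviceName: nginx-svc # hand L7 traffic to the L4 Service
          servicePort: 80
```

An Ingress controller (traefik in these notes) watches such rules and does the actual L7 forwarding; the Ingress object itself only declares the routing.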
Kubernetes core components
1. Configuration store -> the etcd service (similar to ZooKeeper; stores cluster metadata, state, resource quotas, and the like; effectively the K8S database)
2. Master (control-plane) nodes
●kube-apiserver service
Apiserver (the brain of the K8S cluster)
Provides the REST API for cluster management (including authentication/authorization, data validation, and cluster state changes)
Handles data exchange between the other modules, acting as the communication hub
Is the entry point for resource-quota control
Provides a complete cluster security mechanism
●kube-controller-manager service
controller-manager (the controller manager)
Composed of a series of controllers that watch the cluster state through the apiserver and keep the cluster in its desired working state
Node Controller
Deployment Controller
Service Controller
Volume Controller
Endpoint Controller
Garbage Collector
Namespace Controller
Job Controller
ResourceQuota Controller
●kube-scheduler service
Scheduler (the scheduling program; watches node resource status)
Its main job is to assign incoming Pods to suitable worker nodes
Filtering strategy (predicates)
Scoring strategy (priorities)
3. Worker (node) nodes
●kubelet service
Kubelet (starts and destroys containers; responsible for the Pod lifecycle; runs on each node as the container daemon)
Simply put, kubelet's main job is to periodically fetch the desired state of the Pods on its node (which containers to run, replica counts, network and storage configuration, etc.) and call the container-runtime interface to reach that state
It periodically reports the current node status to the apiserver for use during scheduling
It cleans up images and containers, so images do not fill the node's disk and exited containers do not hold too many resources
●kube-proxy service
kube-proxy (service-discovery mechanism; runs on each node; originally used iptables, now ipvs is popular; a simple network proxy and load balancer)
The network proxy K8S runs on every node; the carrier of Service resources
Establishes the relationship between the Pod network and the cluster network (clusterIP -> podIP)
Three common traffic-scheduling modes:
Userspace (deprecated)
Iptables (legacy, still widely used)
Ipvs (recommended)
Responsible for creating, deleting, and updating scheduling rules, notifying the apiserver of its own updates, and fetching rule changes of other kube-proxy instances from the apiserver to update itself
The Endpoint Controller maintains the mapping between Services and Pods
kube-proxy implements the Service, i.e. in-cluster access from Pod to Service and external access from node port to Service
4. CLI client
●kubectl
5. Core add-ons
●CNI network plugin -> flannel / calico
●Service-discovery plugin -> coredns
●Service-exposure plugin -> traefik
●GUI management plugin -> Dashboard
Pre-installation preparation for K8S
yum install epel-release
yum install wget net-tools telnet tree nmap sysstat lrzsz dos2unix bind-utils
yum install bind -y
Configure BIND 9 as follows
vi /etc/named.conf
listen-on port 53 { 192.168.17.51; };
directory "/var/named";
dump-file "/var/named/data/cache_dump.db";
statistics-file "/var/named/data/named_stats.txt";
memstatistics-file "/var/named/data/named_mem_stats.txt";
recursing-file "/var/named/data/named.recursing";
secroots-file "/var/named/data/named.secroots";
allow-query { any; };
forwarders { 192.168.17.254; };
dnssec-enable no;
dnssec-validation no;
vi /etc/named.rfc1912.zones
zone "host.com" IN {
type master;
file "host.com.zone";
allow-update { 192.168.17.51; };
};
zone "od.com" IN {
type master;
file "od.com.zone";
allow-update { 192.168.17.51; };
};
vi /var/named/host.com.zone
$ORIGIN host.com.
$TTL 600 ; 10 minutes
@ IN SOA dns.host.com. dnsadmin.host.com. (
2020113001 ; serial
10800 ; refresh (3 hours)
900 ; retry (15 minutes)
604800 ; expire (1 week)
86400 ; minimum (1 day)
)
NS dns.host.com.
$TTL 60 ; 1 minute
dns A 192.168.17.51
k8s1 A 192.168.17.51
k8s2 A 192.168.17.52
k8s3 A 192.168.17.53
k8s4 A 192.168.17.54
docker A 192.168.17.200
vi /var/named/od.com.zone
$ORIGIN od.com.
$TTL 600 ; 10 minutes
@ IN SOA dns.od.com. dnsadmin.od.com. (
2020113001 ; serial
10800 ; refresh (3 hours)
900 ; retry (15 minutes)
604800 ; expire (1 week)
86400 ; minimum (1 day)
)
NS dns.od.com.
$TTL 60 ; 1 minute
dns A 192.168.17.51
systemctl start named
dig -t A k8s2.host.com @192.168.17.51 +short
Test resolution with the command above, then set each host's NIC DNS server to 192.168.17.51
Prepare self-signed certificates for K8S
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
mkdir -p /opt/certs
cd /opt/certs
vi ca-csr.json
{
"CN": "k8stest",
"hosts": [
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "beijing",
"L": "beijing",
"O": "od",
"OU": "ops"
}
],
"ca": {
"expiry": "175200h"
}
}
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
chmod +x /usr/local/bin/cfssl*
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
Deploy the Docker environment
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install docker-ce
Deploy the Harbor image registry
tar -xf harbor-offline-installer-v1.8.3.tgz -C /opt
vi harbor.yml
hostname: harbor.od.com
port: 180
data_volume: /data/harbor
yum install docker-compose
./install.sh
docker-compose ps
yum install nginx -y
vi /etc/nginx/conf.d/harbor.od.com.conf
server{
listen 80;
server_name harbor.od.com;
client_max_body_size 1000m;
location / {
proxy_pass http://127.0.0.1:180;
}
}
nginx -t
systemctl start nginx
Pull an image and push it to Harbor
docker pull nginx:1.7.9
docker tag 84581e99d807 harbor.od.com/public/nginx:v1.7.9
docker login harbor.od.com
docker push harbor.od.com/public/nginx:v1.7.9
Install K8S: start with the etcd service, on three servers (192.168.17.52-54)
First sign the etcd certificates
vim /opt/certs/ca-config.json
{
"signing": {
"default": {
"expiry": "175200h"
},
"profiles": {
"server": {
"expiry": "175200h",
"usages": [
"signing",
"key encipherment",
"server auth"
]
},
"client": {
"expiry": "175200h",
"usages": [
"signing",
"key encipherment",
"client auth"
]
},
"peer": {
"expiry": "175200h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
}
}
}
}
vim /opt/certs/etcd-peer-csr.json
{
"CN": "k8s-etcd",
"hosts": [
"192.168.17.51",
"192.168.17.52",
"192.168.17.53",
"192.168.17.54"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "beijing",
"L": "beijing",
"O": "od",
"OU": "ops"
}
]
}
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=peer etcd-peer-csr.json |cfssljson -bare etcd-peer
Create the etcd user
useradd -s /sbin/nologin -M etcd
Create the directories
mkdir -p /opt/etcd/certs /data/etcd /data/logs/etcd-server
Copy the etcd certificates over
scp ca.pem etcd-peer.pem etcd-peer-key.pem 192.168.17.52:/opt/etcd/certs
Download and extract the etcd release
tar -xf etcd-v3.1.20-linux-amd64.tar.gz -C /opt/
Create the etcd startup script, on 192.168.17.52
vim /opt/etcd/etcd-server-startup.sh
#!/bin/sh
./etcd --name etcd-server-17-52 \
--data-dir /data/etcd/etcd-server \
--listen-peer-urls https://192.168.17.52:2380 \
--listen-client-urls https://192.168.17.52:2379,http://127.0.0.1:2379 \
--quota-backend-bytes 8000000000 \
--initial-advertise-peer-urls https://192.168.17.52:2380 \
--advertise-client-urls https://192.168.17.52:2379,http://127.0.0.1:2379 \
--initial-cluster etcd-server-17-52=https://192.168.17.52:2380,etcd-server-17-53=https://192.168.17.53:2380,etcd-server-17-54=https://192.168.17.54:2380 \
--ca-file ./certs/ca.pem \
--cert-file ./certs/etcd-peer.pem \
--key-file ./certs/etcd-peer-key.pem \
--client-cert-auth \
--trusted-ca-file ./certs/ca.pem \
--peer-ca-file ./certs/ca.pem \
--peer-cert-file ./certs/etcd-peer.pem \
--peer-key-file ./certs/etcd-peer-key.pem \
--peer-client-cert-auth \
--peer-trusted-ca-file ./certs/ca.pem \
--log-output stdout
Make the startup script executable and fix ownership
chmod +x /opt/etcd/etcd-server-startup.sh
chown -R etcd.etcd /opt/etcd/
chown -R etcd.etcd /data/etcd/
chown -R etcd.etcd /data/logs/etcd-server/
Install and run supervisord so the etcd startup script is restarted automatically if it stops
yum install supervisor -y
systemctl start supervisord.service
systemctl enable supervisord.service
vim /etc/supervisord.d/etcd-server.ini
[program:etcd-server-17-52]
command=/opt/etcd/etcd-server-startup.sh ; the program (relative uses PATH, can take args)
numprocs=1 ; number of processes copies to start (def 1)
directory=/opt/etcd ; directory to cwd to before exec (def no cwd)
autostart=true ; start at supervisord start (default: true)
autorestart=true ; retstart at unexpected quit (default: true)
startsecs=30 ; number of secs prog must stay running (def. 1)
startretries=3 ; max # of serial start failures (default 3)
exitcodes=0,2 ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT ; signal used to kill process (default TERM)
stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10)
user=etcd ; setuid to this UNIX account to run the program
redirect_stderr=true ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/etcd-server/etcd.stdout.log ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4 ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false ; emit events on stdout writes (default false)
supervisorctl update
supervisorctl status    (check that the process is RUNNING)
Install the etcd service on the other two servers
Copy the etcd tarball, extract it, create the user, copy the certificates, install supervisor, and so on, exactly as on the first node
Startup script on the second node (192.168.17.53)
#!/bin/sh
./etcd --name etcd-server-17-53 \
--data-dir /data/etcd/etcd-server \
--listen-peer-urls https://192.168.17.53:2380 \
--listen-client-urls https://192.168.17.53:2379,http://127.0.0.1:2379 \
--quota-backend-bytes 8000000000 \
--initial-advertise-peer-urls https://192.168.17.53:2380 \
--advertise-client-urls https://192.168.17.53:2379,http://127.0.0.1:2379 \
--initial-cluster etcd-server-17-52=https://192.168.17.52:2380,etcd-server-17-53=https://192.168.17.53:2380,etcd-server-17-54=https://192.168.17.54:2380 \
--ca-file ./certs/ca.pem \
--cert-file ./certs/etcd-peer.pem \
--key-file ./certs/etcd-peer-key.pem \
--client-cert-auth \
--trusted-ca-file ./certs/ca.pem \
--peer-ca-file ./certs/ca.pem \
--peer-cert-file ./certs/etcd-peer.pem \
--peer-key-file ./certs/etcd-peer-key.pem \
--peer-client-cert-auth \
--peer-trusted-ca-file ./certs/ca.pem \
--log-output stdout
supervisord config file
vim /etc/supervisord.d/etcd-server.ini
[program:etcd-server-17-53]
command=/opt/etcd/etcd-server-startup.sh ; the program (relative uses PATH, can take args)
numprocs=1 ; number of processes copies to start (def 1)
directory=/opt/etcd ; directory to cwd to before exec (def no cwd)
autostart=true ; start at supervisord start (default: true)
autorestart=true ; retstart at unexpected quit (default: true)
startsecs=30 ; number of secs prog must stay running (def. 1)
startretries=3 ; max # of serial start failures (default 3)
exitcodes=0,2 ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT ; signal used to kill process (default TERM)
stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10)
user=etcd ; setuid to this UNIX account to run the program
redirect_stderr=true ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/etcd-server/etcd.stdout.log ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4 ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false ; emit events on stdout writes (default false)
supervisorctl update
supervisorctl status    (check that the process is RUNNING)
Startup script on the third node (192.168.17.54)
#!/bin/sh
./etcd --name etcd-server-17-54 \
--data-dir /data/etcd/etcd-server \
--listen-peer-urls https://192.168.17.54:2380 \
--listen-client-urls https://192.168.17.54:2379,http://127.0.0.1:2379 \
--quota-backend-bytes 8000000000 \
--initial-advertise-peer-urls https://192.168.17.54:2380 \
--advertise-client-urls https://192.168.17.54:2379,http://127.0.0.1:2379 \
--initial-cluster etcd-server-17-52=https://192.168.17.52:2380,etcd-server-17-53=https://192.168.17.53:2380,etcd-server-17-54=https://192.168.17.54:2380 \
--ca-file ./certs/ca.pem \
--cert-file ./certs/etcd-peer.pem \
--key-file ./certs/etcd-peer-key.pem \
--client-cert-auth \
--trusted-ca-file ./certs/ca.pem \
--peer-ca-file ./certs/ca.pem \
--peer-cert-file ./certs/etcd-peer.pem \
--peer-key-file ./certs/etcd-peer-key.pem \
--peer-client-cert-auth \
--peer-trusted-ca-file ./certs/ca.pem \
--log-output stdout
supervisord config file
vim /etc/supervisord.d/etcd-server.ini
[program:etcd-server-17-54]
command=/opt/etcd/etcd-server-startup.sh ; the program (relative uses PATH, can take args)
numprocs=1 ; number of processes copies to start (def 1)
directory=/opt/etcd ; directory to cwd to before exec (def no cwd)
autostart=true ; start at supervisord start (default: true)
autorestart=true ; retstart at unexpected quit (default: true)
startsecs=30 ; number of secs prog must stay running (def. 1)
startretries=3 ; max # of serial start failures (default 3)
exitcodes=0,2 ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT ; signal used to kill process (default TERM)
stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10)
user=etcd ; setuid to this UNIX account to run the program
redirect_stderr=true ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/etcd-server/etcd.stdout.log ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4 ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false ; emit events on stdout writes (default false)
supervisorctl update
supervisorctl status    (check that the process is RUNNING)
Check cluster health (the http://127.0.0.1:2379 listener allows local access without client certificates):
./etcdctl cluster-health
./etcdctl member list