Learning K8s Step by Step (Part 1)
1. The current situation with Docker
Drawbacks of using Docker on its own:
- Drawbacks of packaging applications only with Docker containers
- Single-host usage; no effective way to form a cluster
- As the number of containers grows, the management cost climbs
- No effective disaster-recovery / self-healing mechanism
- No built-in orchestration templates, so fast, large-scale container scheduling is not possible
- No unified configuration management center
- No tool for managing the container lifecycle
- No graphical operations and management tool
Benefits of packaging applications with Docker containers
- The Docker engine unifies the infrastructure environment - the docker environment
- hardware configuration
- operating system versions
- heterogeneous runtime environments
- The Docker engine unifies how programs are packaged (boxed) - the docker image
- java programs
- python programs
- nodejs programs
- The Docker engine unifies how programs are deployed (run) - the docker container
- java -jar ... → docker run ...
- python manage.py runserver → docker run ...
- npm run dev → docker run ...
2. So we need a container orchestration tool
That tool is: Kubernetes (K8S)
Official site: https://kubernetes.io
GitHub: https://github.com/kubernetes/kubernetes
Role: an open-source container orchestration framework (with a very rich ecosystem)
Significance: it solves the shortcomings of standalone Docker listed above
Advantages:
- Automatic bin packing, horizontal scaling, self-healing
- Service discovery and load balancing
- Automated rollouts (rolling update by default) and rollbacks
- Centralized configuration management and secret management
- Storage orchestration
- Batch job execution
3. Kubernetes quick start
3.1. Four groups of basic concepts
- Pod / Pod controller
- Name / Namespace
- Label / Label selector
- Service / Ingress
Pod:
- A Pod is the smallest logical unit (atomic unit) that can be run in K8S
- A single Pod can run multiple containers, which share the UTS + NET + IPC namespaces
- Think of a Pod as a pea pod: each container inside the same Pod is one of the peas
- Running multiple containers in one Pod is also called the sidecar pattern (a small example follows this list)
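To make the idea concrete, here is a minimal sketch of a two-container (sidecar) Pod. It is illustrative only and not part of the cluster built later; the Pod name and the busybox sidecar are placeholders:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-with-sidecar          # hypothetical Pod name
  namespace: default
spec:
  containers:
  - name: main                      # the main business container
    image: harbor.od.com/public/nginx:v1.7.9
    ports:
    - containerPort: 80
  - name: sidecar                   # sidecar sharing UTS/NET/IPC with "main"
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
# kubectl create -f pod-sidecar.yaml   (hypothetical file name)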
Pod controllers
- A Pod controller is a template for starting Pods; it guarantees that Pods started in K8S keep running as people expect (replica count, lifecycle, health checks)
- K8S provides many Pod controllers; the commonly used ones are listed below (a minimal Deployment sketch follows the list)
- Deployment
- DaemonSet
- ReplicaSet
- StatefulSet
- Job
- CronJob
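As a sketch of what such a template looks like, here is a minimal Deployment. The name and replica count are illustrative, not taken from the cluster built later in this article:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-dp                    # hypothetical name
  namespace: default
spec:
  replicas: 2                       # desired number of Pod replicas
  selector:
    matchLabels:
      app: nginx-dp
  template:                         # the Pod template this controller keeps running
    metadata:
      labels:
        app: nginx-dp
    spec:
      containers:
      - name: nginx
        image: harbor.od.com/public/nginx:v1.7.9
        ports:
        - containerPort: 80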
Name
- Inside K8S every logical concept (feature) is defined as a "resource", so every "resource" must have its own "name"
- A "resource" carries configuration such as the API version (apiVersion), kind, metadata, spec and status
- The "name" is normally defined in the resource's "metadata" (see the sketch below)
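A sketch of that common shape, using a ConfigMap purely as an example (the name and data below are made up):
apiVersion: v1                      # API version of the resource
kind: ConfigMap                     # resource kind
metadata:
  name: demo-config                 # the resource "name" lives in metadata
  namespace: default
data:                               # the payload section varies per kind (spec, data, ...)
  demo.key: demo-value
# status is not written by hand; K8S fills it in at runtime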
Namespace
- As projects, people and the cluster itself grow, we need a way to isolate the various "resources" inside K8S; that is what namespaces are for
- A namespace can be thought of as a virtual cluster group inside K8S
- Resources in different namespaces may use the same name; two resources of the same kind in the same namespace may not
- Using namespaces sensibly lets cluster administrators classify, manage and browse the services delivered into K8S
- The namespaces that exist in K8S by default are: default, kube-system, kube-public
- Queries for a specific "resource" must carry the corresponding namespace (a small example follows)
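For example, a namespace is itself just another resource, and queries are scoped with -n (the app namespace below is hypothetical):
apiVersion: v1
kind: Namespace
metadata:
  name: app
# kubectl create -f app-ns.yaml            # create the namespace
# kubectl get pods -n kube-system          # query Pods in a specific namespace
# kubectl get pods --all-namespaces        # query across all namespaces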
Label
- Labels are a characteristically K8S way of managing things; they make it easy to classify and manage resource objects
- One label can be attached to many resources, and one resource can carry many labels; it is a many-to-many relationship
- A resource with several labels can be managed along several different dimensions
- A label is made up of key = value
- Similar to labels, there are also "annotations"
Label selectors
- Once resources are labelled, label selectors can be used to filter for specific labels
- There are currently two kinds of label selector: equality-based (equals / not equals) and set-based (in / not in / exists)
- Many resources support embedded label-selector fields (a sketch follows this list)
- matchLabels
- matchExpressions
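A sketch of the two embedded selector fields as they would appear inside a controller spec; the label keys and values here are invented for illustration:
selector:
  matchLabels:                      # equality-based: app == nginx-dp
    app: nginx-dp
  matchExpressions:                 # set-based: env in (dev, test) AND the tier key exists
  - {key: env, operator: In, values: [dev, test]}
  - {key: tier, operator: Exists}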
Service
- In the K8S world every Pod gets its own IP address, but that IP disappears when the Pod is destroyed
- Service is the core concept introduced to solve exactly that problem
- A Service can be seen as the external access point for a group of Pods that provide the same service
- Which Pods a Service applies to is defined by a label selector (see the sketch below)
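A minimal Service sketch that selects the Pods of the hypothetical Deployment above (names are illustrative):
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc                   # hypothetical name
  namespace: default
spec:
  selector:
    app: nginx-dp                   # picks Pods by label
  ports:
  - port: 80                        # stable port on the ClusterIP
    targetPort: 80                  # port on the selected Pods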
Ingress
- Ingress is the layer-7 (OSI reference model) component of a K8S cluster that exposes services to the outside
- A Service can only do L4 traffic scheduling, expressed as ip + port
- An Ingress can schedule traffic for different business domains and different URL paths (a sketch follows)
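A rough sketch of an Ingress routing one hypothetical business domain to the Service sketched above; the apiVersion differs between K8S releases, and the networking.k8s.io/v1 form is shown here:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress               # hypothetical name
  namespace: default
spec:
  rules:
  - host: demo.od.com               # hypothetical business domain under od.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-svc         # the Service sketched above
            port:
              number: 80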
4. The three K8S networks and the core components
Core components
- Configuration store → the etcd service
- Master node
- the kube-apiserver service
- the kube-controller-manager service
- the kube-scheduler service
- Worker (node) node
- the kubelet service
- the kube-proxy service
- CLI client
- kubectl (the command-line tool)
Core add-ons
- CNI network plugin → flannel/calico
- Service-discovery plugin → coredns
- Service-exposure plugin → traefik
- GUI management plugin → Dashboard
A brief introduction
- apiserver
- Provides the REST API for cluster management (including authentication/authorization, data validation and cluster state changes)
- Handles the data exchange between the other modules, acting as the communication hub
- Is the entry point for resource quota control
- Provides a complete cluster security mechanism
- controller-manager
- Consists of a series of controllers that watch the whole cluster state through the apiserver and keep the cluster in the expected working state
- Node Controller
- Deployment Controller
- Service Controller
- Volume Controller
- Endpoint Controller
- Garbage Controller
- Namespace Controller
- Job Controller
- Resource Quota Controller
- scheduler
- Its main job is to take Pods and schedule them onto suitable worker nodes
- predicates (filtering policies)
- priorities (scoring policies)
- kubelet
- Put simply, kubelet periodically fetches the desired state of the Pods on its node (which containers to run, how many replicas, how networking and storage should be configured, and so on) and calls the container runtime interface to reach that state
- Periodically reports the current state of its node to the apiserver, for use during scheduling
- Cleans up images and containers, so images do not fill up the node's disk and exited containers do not hold on to too many resources
- kube-proxy
- The network proxy that runs on every node; it is the carrier of the Service resource
- Establishes the relationship between the Pod network and the cluster network (clusterip → podip)
- Three commonly used traffic-scheduling modes
- Userspace (abandoned)
- Iptables (close to being abandoned)
- Ipvs (recommended)
- Responsible for creating, deleting and updating scheduling rules, notifying the apiserver of its own updates, and pulling the scheduling-rule changes of other kube-proxy instances from the apiserver to update itself
K8S architecture diagram

Course hosts: 5 machines in total; the password for all of them is think999!

5. Installation and deployment
Common installation methods:
- Minikube: single-node micro K8S (for learning/preview), official link: https://kubernetes.io/docs/tutorials/hello-minikube/
- Binary installation (first choice for production, recommended)
- Deployment with kubeadm, the K8S deployment tool that itself runs inside K8S (relatively simple; recommended once you are familiar with K8S)
5.1. Environment checks
Turn off the firewall
systemctl stop firewalld
systemctl disable firewalld
Disable selinux
setenforce 0
sed -i 's@SELINUX=enforcing@SELINUX=disabled@' /etc/selinux/config
5.2. Adjusting the operating system
Install epel-release
yum -y install epel-release
Install the necessary tools
yum install -y wget net-tools telnet tree nmap sysstat lrzsz dos2unix bind-utils vim less
Initialize the DNS service
Install bind9 on the 10.4.7.11 machine
yum -y install bind
Main configuration file
[root@hdss7-11 ~]# vim /etc/named.conf
# change the following two lines
listen-on port 53 { 10.4.7.11; };
allow-query { any; };
# add one line
forwarders { 10.4.7.254; };
# delete one line
listen-on-v6 port 53 { ::1; };
# make sure the following is set
recursion yes; # recursive queries
# to save resources, the next two lines can be set to no
dnssec-enable no;
dnssec-validation no;
[root@hdss7-11 ~]# named-checkconf # check the configuration; no output means it is correct
Configure the zone declarations on 10.4.7.11
Add two zone entries: od.com is the business domain and host.com is the host domain
[root@hdss7-11 ~]# vim /etc/named.rfc1912.zones
# append to the end of the file
zone "host.com" IN {
type master;
file "host.com.zone";
allow-update { 10.4.7.11; };
};
zone "od.com" IN {
type master;
file "od.com.zone";
allow-update { 10.4.7.11; };
};
Configure the host-domain zone file on 10.4.7.11
[root@hdss7-11 ~]# vim /var/named/host.com.zone
$ORIGIN host.com.
$TTL 600 ; 10 minutes (the TTL; everything after the semicolon is a comment)
@ IN SOA dns.host.com. dnsadmin.host.com. (
2021032923 ; serial
10800 ; refresh (3 hours), SOA parameters
900 ; retry (15 minutes)
604800 ; expire (1 week)
86400 ; minimum (1 day)
)
NS dns.host.com.
$TTL 60 ; 1 minute
dns A 10.4.7.11
HDSS7-11 A 10.4.7.11
HDSS7-12 A 10.4.7.12
HDSS7-21 A 10.4.7.21
HDSS7-22 A 10.4.7.22
HDSS7-200 A 10.4.7.200
Configure the business-domain zone file on 10.4.7.11
[root@hdss7-11 ~]# vim /var/named/od.com.zone
$ORIGIN od.com.
$TTL 600 ; 10 minutes
@ IN SOA dns.od.com. dnsadmin.od.com. (
2021032901 ; serial
10800 ; refresh (3 hours)
900 ; retry (15 minutes)
604800 ; expire (1 week)
86400 ; minimum (1 day)
)
NS dns.od.com.
$TTL 60 ; 1 minute
dns A 10.4.7.11
[root@hdss7-11 ~]# named-checkconf # check the configuration; no output means it is correct
Start the service
[root@hdss7-11 ~]# systemctl start named
[root@hdss7-11 ~]# dig -t A hdss7-21.host.com @10.4.7.11 +short # verify that resolution works
10.4.7.21
Change the DNS server address on all machines
sed -i 's@nameserver 114.114.114.114@nameserver 10.4.7.11@' /etc/resolv.conf
5.3. Preparing the certificate-signing environment
On the ops host HDSS7-200.host.com:
Install CFSSL
Certificate-signing tool CFSSL: R1.2
[root@hdss7-200 ~]# wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -O /usr/local/bin/cfssl
[root@hdss7-200 ~]# wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -O /usr/local/bin/cfssl-json
[root@hdss7-200 ~]# wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -O /usr/local/bin/cfssl-certinfo
[root@hdss7-200 ~]# chmod u+x /usr/local/bin/cfssl*
Create the JSON config file used to generate the CA certificate signing request (CSR)
[root@hdss7-200 ~]# mkdir -p /opt/certs
[root@hdss7-200 ~]# vim /opt/certs/ca-csr.json
{
"CN": "OldboyEdu",
"hosts": [
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "beijing",
"L": "beijing",
"O": "od",
"OU": "ops"
}
],
"ca": {
"expiry": "175200h"
}
}
CN: Common Name; browsers use this field to verify whether a website is legitimate, and it is usually set to the domain name. Very important.
C: Country
ST: State or province
L: Locality, i.e. the city
O: Organization Name, the company name
OU: Organization Unit Name, the department within the company
Generate the CA certificate and private key
[root@hdss7-200 ~]# cfssl gencert -initca ca-csr.json | cfssl-json -bare ca
[root@hdss7-200 certs]# ll
total 16
-rw-r--r-- 1 root root 993 Apr 7 05:30 ca.csr
-rw-r--r-- 1 root root 328 Apr 7 05:29 ca-csr.json
-rw------- 1 root root 1679 Apr 7 05:30 ca-key.pem
-rw-r--r-- 1 root root 1346 Apr 7 05:30 ca.pem
5.4. Preparing the docker environment
Machines that need docker installed: hdss7-21, hdss7-22, hdss7-200; hdss7-21 is used as the example (the bip below must match the host, e.g. 172.7.22.1/24 on hdss7-22 and 172.7.200.1/24 on hdss7-200)
[root@hdss7-21 ~]# curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun
[root@hdss7-21 ~]# mkdir /data/docker
[root@hdss7-21 ~]# mkdir /etc/docker/
[root@hdss7-21 ~]# vim /etc/docker/daemon.json
{
"graph": "/data/docker",
"storage-driver": "overlay2",
"insecure-registries": ["registry.access.redhat.com","quay.io","harbor.od.com"],
"registry-mirrors": ["https://registry.docker-cn.com"],
"bip": "172.7.21.1/24",
"exec-opts": ["native.cgroupdriver=systemd"],
"live-restore": true
}
Optionally, configure the Aliyun registry mirror for faster pulls
"registry-mirrors": ["https://srn2r5um.mirror.aliyuncs.com"],
Deploy the private docker image registry: harbor
On HDSS7-200.host.com
Install harbor
Official site: https://goharbor.io/
Download: https://github.com/goharbor/harbor/releases
Note: do not choose a version below 1.7.5, those have known vulnerabilities
Download the harbor-offline-installer-vx.x.x.tgz package (the offline installer)
Install harbor
On HDSS7-200.host.com
[root@hdss7-200 ~]# cd /opt && mkdir src
[root@hdss7-200 ~]# scp -r harbor-offline-installer-v2.2.1.tgz root@10.4.7.200:/opt/src/
[root@hdss7-200 ~]# tar zxf harbor-offline-installer-v2.2.1.tgz -C /opt/
[root@hdss7-200 ~]# cd /opt/
[root@hdss7-200 ~]# mv harbor harbor-v2.2.1
[root@hdss7-200 ~]# ln -s harbor-v2.2.1 harbor
[root@hdss7-200 ~]# cd harbor
[root@hdss7-200 ~]# vim harbor.yml
Modify a few items in the configuration file
hostname: harbor.od.com
http:
  # port for http, default is 80. If https enabled, this port will redirect to https port
  port: 180
harbor_admin_password: Harbor12345
data_volume: /home/harbor
log:
  location: /home/harbor/logs
Comment out the https-related settings, otherwise the installer reports:
ERROR:root:Error: The protocol is https but attribute ssl_cert is not set
Continue with the installation
[root@hdss7-200 ~]# yum install -y docker-compose
[root@hdss7-200 ~]# cd /opt/harbor
[root@hdss7-200 ~]# ./install.sh
[root@hdss7-200 ~]# docker ps -a
Install nginx as a reverse proxy in front of harbor
[root@hdss7-200 ~]# yum install -y nginx
[root@hdss7-200 ~]# vim /etc/nginx/conf.d/harbor.od.com.conf
# nginx configuration file
server {
listen 80;
server_name harbor.od.com;
# avoid failures when uploading large image layers
client_max_body_size 1000m;
location / {
proxy_pass http://127.0.0.1:180;
}
}
[root@hdss7-200 ~]# systemctl start nginx
[root@hdss7-200 ~]# systemctl enable nginx
Configure DNS resolution for harbor
On HDSS7-11.host.com
[root@hdss7-11 ~]# vim /var/named/od.com.zone # increase the serial number by 1
$ORIGIN od.com.
$TTL 600 ; 10 minutes
@ IN SOA dns.od.com. dnsadmin.od.com. (
2021032902 ; serial
10800 ; refresh (3 hours)
900 ; retry (15 minutes)
604800 ; expire (1 week)
86400 ; minimum (1 day)
)
NS dns.od.com.
$TTL 60 ; 1 minute
dns A 10.4.7.11
harbor A 10.4.7.200 ; add this new record
[root@hdss7-11 ~]# systemctl restart named
[root@hdss7-11 ~]# dig -t A harbor.od.com +short
10.4.7.200
[root@hdss7-11 ~]#
Configure harbor in the browser
Open harbor.od.com in a browser
Username: admin
Password: Harbor12345

1. Create a new project (the public project used below)

2. Test harbor
On HDSS7-200.host.com
[root@hdss7-200 ~]# docker pull nginx:1.7.9
[root@hdss7-200 ~]# docker tag 84581e99d807 harbor.od.com/public/nginx:v1.7.9
[root@hdss7-200 ~]# docker login -u admin harbor.od.com
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
[root@hdss7-200 ~]# docker push harbor.od.com/public/nginx:v1.7.9
[root@hdss7-200 ~]#
3. Check it in the browser

6. Deploying the master node services
6.1. Deploying the etcd cluster
Cluster plan
etcd's leader-election mechanism requires at least 3 nodes (an odd number). The machines involved in this installation: hdss7-12, hdss7-21, hdss7-22
| Hostname | Role | IP |
|---|---|---|
| hdss7-12.host.com | etcd leader | 10.4.7.12 |
| hdss7-21.host.com | etcd follower | 10.4.7.21 |
| hdss7-22.host.com | etcd follower | 10.4.7.22 |
Sign certificates for etcd
On the certificate-signing server hdss7-200:
[root@hdss7-200 ~]# cd /opt/certs/
[root@hdss7-200 ~]# vim ca-config.json
File contents below:
- server: the certificate the server presents when a client connects, used by the client to verify the server's identity
- client: the certificate the client presents when connecting to a server, used by the server to verify the client's identity
- peer: the certificate used for mutual connections, e.g. verification between etcd nodes
"expiry": "175200h" is the certificate validity, i.e. ten years; if it were only one year, the cluster would go down the moment the certificates expire
{
"signing": {
"default": {
"expiry": "175200h"
},
"profiles": {
"server": {
"expiry": "175200h",
"usages": [
"signing",
"key encipherment",
"server auth"
]
},
"client": {
"expiry": "175200h",
"usages": [
"signing",
"key encipherment",
"client auth"
]
},
"peer": {
"expiry": "175200h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
}
}
}
}
Create the etcd certificate request config
[root@hdss7-200 ~]# cd /opt/certs/
[root@hdss7-200 ~]# vim etcd-peer-csr.json
File contents below:
The key part is hosts: every server that might ever run etcd must be added to the hosts list. Network ranges cannot be used, so adding a new etcd server later means re-issuing the certificate
{
"CN": "k8s-etcd",
"hosts": [
"10.4.7.11",
"10.4.7.12",
"10.4.7.21",
"10.4.7.22"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "beijing",
"L": "beijing",
"O": "od",
"OU": "ops"
}
]
}
Sign the certificate
[root@hdss7-200 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=peer etcd-peer-csr.json |cfssl-json -bare etcd-peer
[root@hdss7-200 certs]# ll etcd-peer*
-rw-r--r-- 1 root root 1062 Apr 12 22:16 etcd-peer.csr
-rw-r--r-- 1 root root 363 Apr 12 22:10 etcd-peer-csr.json
-rw------- 1 root root 1679 Apr 12 22:16 etcd-peer-key.pem
-rw-r--r-- 1 root root 1428 Apr 12 22:16 etcd-peer.pem
Install etcd
The machines involved in this installation: hdss7-12, hdss7-21, hdss7-22
hdss7-12 is used as the example
[root@hdss7-12 ~]# mkdir /opt/src && cd /opt/src/
[root@hdss7-12 ~]# useradd -s /sbin/nologin -M etcd
etcd releases: https://github.com/etcd-io/etcd/
Version used here: etcd-v3.1.20-linux-amd64.tar.gz
[root@hdss7-12 src]# wget https://github.com/etcd-io/etcd/releases/download/v3.1.20/etcd-v3.1.20-linux-amd64.tar.gz
[root@hdss7-12 src]# tar -zxf etcd-v3.1.20-linux-amd64.tar.gz -C /opt
[root@hdss7-12 src]# cd ../
[root@hdss7-12 opt]# mv etcd-v3.1.20-linux-amd64 etcd-v3.1.20
[root@hdss7-12 opt]# ln -s /opt/etcd-v3.1.20 /opt/etcd
[root@hdss7-12 opt]# cd etcd
Create the directories
[root@hdss7-12 opt]# mkdir -p /opt/etcd/certs /home/etcd /home/logs/etcd-server
Distribute the certificates to each etcd node
Log in to hdss7-200:
cd /opt/certs
scp ca.pem etcd-peer.pem etcd-peer-key.pem hdss7-12:/opt/etcd/certs/
scp ca.pem etcd-peer.pem etcd-peer-key.pem hdss7-21:/opt/etcd/certs/
scp ca.pem etcd-peer.pem etcd-peer-key.pem hdss7-22:/opt/etcd/certs/
Back on hdss7-12
[root@hdss7-12 certs]# ll
total 12
-rw-r--r-- 1 root root 1346 Apr 12 22:48 ca.pem
-rw------- 1 root root 1679 Apr 12 22:48 etcd-peer-key.pem
-rw-r--r-- 1 root root 1428 Apr 12 22:48 etcd-peer.pem
Create the startup script etcd-server-startup.sh (some parameters differ per machine)
- listen-peer-urls: the URL/port used for communication between etcd nodes
- listen-client-urls: the URL/port used by clients to talk to etcd
- quota-backend-bytes: the backend storage quota
- Parameters that must be changed per node: name, listen-peer-urls, listen-client-urls, initial-advertise-peer-urls, advertise-client-urls
[root@hdss7-12 etcd]# vim /opt/etcd/etcd-server-startup.sh
#!/bin/sh
WORK_DIR=$(dirname $(readlink -f $0))
[ $? -eq 0 ] && cd $WORK_DIR || exit
/opt/etcd/etcd --name etcd-server-7-12 \
--data-dir /home/etcd/etcd-server \
--listen-peer-urls https://10.4.7.12:2380 \
--listen-client-urls https://10.4.7.12:2379,http://127.0.0.1:2379 \
--quota-backend-bytes 8000000000 \
--initial-advertise-peer-urls https://10.4.7.12:2380 \
--advertise-client-urls https://10.4.7.12:2379,http://127.0.0.1:2379 \
--initial-cluster etcd-server-7-12=https://10.4.7.12:2380,etcd-server-7-21=https://10.4.7.21:2380,etcd-server-7-22=https://10.4.7.22:2380 \
--ca-file ./certs/ca.pem \
--cert-file ./certs/etcd-peer.pem \
--key-file ./certs/etcd-peer-key.pem \
--client-cert-auth \
--trusted-ca-file ./certs/ca.pem \
--peer-ca-file ./certs/ca.pem \
--peer-cert-file ./certs/etcd-peer.pem \
--peer-key-file ./certs/etcd-peer-key.pem \
--peer-client-cert-auth \
--peer-trusted-ca-file ./certs/ca.pem \
--log-output stdout
Add execute permission and change the owner and group
[root@hdss7-12 etcd]# chmod +x etcd-server-startup.sh
[root@hdss7-12 etcd]# chown -R etcd:etcd /opt/etcd-v3.1.20/
[root@hdss7-12 etcd]# chown -R etcd:etcd /home/etcd/
[root@hdss7-12 etcd]# chown -R etcd:etcd /home/logs/etcd-server/
Start etcd
These processes have to run as background daemons: either start them by hand or use a process-supervision tool. This walkthrough uses such a tool, supervisor
[root@hdss7-12 etcd]# yum install -y supervisor
[root@hdss7-12 etcd]# cd /etc/supervisord.d/
[root@hdss7-12 etcd]# vim etcd-server-7-12.ini
[program:etcd-server-7-12]
command=/opt/etcd/etcd-server-startup.sh ; the program (relative uses PATH, can take args)
numprocs=1 ; number of processes copies to start (def 1)
directory=/opt/etcd ; directory to cwd to before exec (def no cwd)
autostart=true ; start at supervisord start (default: true)
autorestart=true ; retstart at unexpected quit (default: true)
startsecs=30 ; number of secs prog must stay running (def. 1)
startretries=3 ; max # of serial start failures (default 3)
exitcodes=0,2 ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT ; signal used to kill process (default TERM)
stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10)
user=etcd ; setuid to this UNIX account to run the program
redirect_stderr=true ; redirect proc stderr to stdout (default false)
stdout_logfile=/home/logs/etcd-server/etcd.stdout.log ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=5 ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false ; emit events on stdout writes (default false)
Start supervisor and check it
[root@hdss7-12 etc]# systemctl start supervisord
[root@hdss7-12 etc]# systemctl enable supervisord
[root@hdss7-12 etc]# ps -fe |grep super
[root@hdss7-12 etc]# netstat -lntp |grep 9001
[root@hdss7-12 etc]# netstat -lntp |grep etcd
tcp 0 0 10.4.7.12:2379 0.0.0.0:* LISTEN 6403/etcd
tcp 0 0 127.0.0.1:2379 0.0.0.0:* LISTEN 6403/etcd
tcp 0 0 10.4.7.12:2380 0.0.0.0:* LISTEN 6403/etcd
Once all three are up, run the following on any of them to check cluster health
Method 1:
[root@hdss7-12 etc]# cd /opt/etcd
[root@hdss7-12 etc]# ./etcdctl cluster-health
member 988139385f78284 is healthy: got healthy result from http://127.0.0.1:2379
member 5a0ef2a004fc4349 is healthy: got healthy result from http://127.0.0.1:2379
member f4a0cb0a765574a8 is healthy: got healthy result from http://127.0.0.1:2379
cluster is healthy
Output like the above means the cluster is healthy
Method 2:
[root@hdss7-12 etcd]# ./etcdctl member list
988139385f78284: name=etcd-server-7-22 peerURLs=https://10.4.7.22:2380 clientURLs=http://127.0.0.1:2379,https://10.4.7.22:2379 isLeader=false
5a0ef2a004fc4349: name=etcd-server-7-21 peerURLs=https://10.4.7.21:2380 clientURLs=http://127.0.0.1:2379,https://10.4.7.21:2379 isLeader=false
f4a0cb0a765574a8: name=etcd-server-7-12 peerURLs=https://10.4.7.12:2380 clientURLs=http://127.0.0.1:2379,https://10.4.7.12:2379 isLeader=true
6.2. Deploying the kube-apiserver cluster
Cluster plan
| Hostname | Role | IP |
|---|---|---|
| hdss7-21.host.com | kube-apiserver | 10.4.7.21 |
| hdss7-22.host.com | kube-apiserver | 10.4.7.22 |
| hdss7-11.host.com | L4 load balancer | 10.4.7.11 |
| hdss7-12.host.com | L4 load balancer | 10.4.7.12 |
Note: 10.4.7.11 and 10.4.7.12 run nginx as an L4 load balancer, with keepalived providing the VIP 10.4.7.10 that fronts the two kube-apiservers for high availability
hdss7-21.host.com is deployed below as the example; do exactly the same on the other host, 10.4.7.22
Download the Kubernetes server package
1. Download site: https://github.com/kubernetes/kubernetes
2. Go to the tags page: https://github.com/kubernetes/kubernetes/tags
3. Pick the version to download: https://github.com/kubernetes/kubernetes/releases/tag/v1.21.0
4. Click "the CHANGELOG": https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.21.md
5. Download the first item under Server Binaries: kubernetes-server-linux-amd64.tar.gz
(a copy is also kept on Baidu Netdisk)
[root@hdss7-21 src]# cd /opt/src
[root@hdss7-21 src]# tar zxf kubernetes-server-linux-amd64.tar.gz -C /opt/
[root@hdss7-21 src]# cd /opt/
[root@hdss7-21 opt]# mv kubernetes kubernetes-v1.21.0
[root@hdss7-21 opt]# ln -s kubernetes-v1.21.0 kubernetes
[root@hdss7-21 opt]# cd kubernetes/server/bin
[root@hdss7-21 bin]# rm -rf *.tar # we are not installing from container images here,
[root@hdss7-21 bin]# rm -rf *_tag # so delete the image tarballs and tag files
[root@hdss7-21 bin]# mkdir certs
Sign the certificates
Log in to the 10.4.7.200 server
Sign the client certificate (used by the apiserver when talking to etcd)
[root@hdss7-200 ~]# cd /opt/certs/
[root@hdss7-200 certs]# vim client-csr.json
{
"CN": "k8s-node",
"hosts": [
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "beijing",
"L": "beijing",
"O": "od",
"OU": "ops"
}
]
}
[root@hdss7-200 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client client-csr.json |cfssl-json -bare client
[root@hdss7-200 certs]# ll cli*
-rw-r--r-- 1 root root 993 Apr 15 00:43 client.csr
-rw-r--r-- 1 root root 280 Apr 15 00:41 client-csr.json
-rw------- 1 root root 1679 Apr 15 00:43 client-key.pem
-rw-r--r-- 1 root root 1363 Apr 15 00:43 client.pem
Sign the server certificate (used when the other k8s components talk to the apiserver)
Note: add every IP that might ever act as an apiserver to hosts; the VIP 10.4.7.10 must be included as well
[root@hdss7-200 certs]# vim /opt/certs/apiserver-csr.json
{
"CN": "k8s-apiserver",
"hosts": [
"127.0.0.1",
"192.168.0.1",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local",
"10.4.7.10",
"10.4.7.21",
"10.4.7.22",
"10.4.7.23"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "beijing",
"L": "beijing",
"O": "od",
"OU": "ops"
}
]
}
[root@hdss7-200 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server apiserver-csr.json |cfssl-json -bare apiserver
[root@hdss7-200 certs]# ll api*
-rw-r--r-- 1 root root 1249 Apr 15 00:49 apiserver.csr
-rw-r--r-- 1 root root 566 Apr 15 00:48 apiserver-csr.json
-rw------- 1 root root 1675 Apr 15 00:49 apiserver-key.pem
-rw-r--r-- 1 root root 1598 Apr 15 00:49 apiserver.pem
Distribute the certificates
[root@hdss7-200 certs]# scp -r apiserver-key.pem apiserver.pem client-key.pem client.pem ca-key.pem ca.pem root@10.4.7.21:/opt/kubernetes/server/bin/certs
[root@hdss7-200 certs]# scp -r apiserver-key.pem apiserver.pem client-key.pem client.pem ca-key.pem ca.pem root@10.4.7.22:/opt/kubernetes/server/bin/certs
Do the same on 10.4.7.22
Configure apiserver audit logging
Servers involved for the apiserver: hdss7-21, hdss7-22
[root@hdss7-21 bin]# cd /opt/kubernetes/server/bin/ && mkdir conf
[root@hdss7-21 bin]# cd conf
[root@hdss7-21 conf]# vim audit.yaml # after opening the file, run :set paste to avoid auto-indentation
File contents:
apiVersion: audit.k8s.io/v1beta1 # This is required.
kind: Policy
# Don't generate audit events for all requests in RequestReceived stage.
omitStages:
  - "RequestReceived"
rules:
  # Log pod changes at RequestResponse level
  - level: RequestResponse
    resources:
    - group: ""
      # Resource "pods" doesn't match requests to any subresource of pods,
      # which is consistent with the RBAC policy.
      resources: ["pods"]
  # Log "pods/log", "pods/status" at Metadata level
  - level: Metadata
    resources:
    - group: ""
      resources: ["pods/log", "pods/status"]
  # Don't log requests to a configmap called "controller-leader"
  - level: None
    resources:
    - group: ""
      resources: ["configmaps"]
      resourceNames: ["controller-leader"]
  # Don't log watch requests by the "system:kube-proxy" on endpoints or services
  - level: None
    users: ["system:kube-proxy"]
    verbs: ["watch"]
    resources:
    - group: "" # core API group
      resources: ["endpoints", "services"]
  # Don't log authenticated requests to certain non-resource URL paths.
  - level: None
    userGroups: ["system:authenticated"]
    nonResourceURLs:
    - "/api*" # Wildcard matching.
    - "/version"
  # Log the request body of configmap changes in kube-system.
  - level: Request
    resources:
    - group: "" # core API group
      resources: ["configmaps"]
    # This rule only applies to resources in the "kube-system" namespace.
    # The empty string "" can be used to select non-namespaced resources.
    namespaces: ["kube-system"]
  # Log configmap and secret changes in all other namespaces at the Metadata level.
  - level: Metadata
    resources:
    - group: "" # core API group
      resources: ["secrets", "configmaps"]
  # Log all other resources in core and extensions at the Request level.
  - level: Request
    resources:
    - group: "" # core API group
    - group: "extensions" # Version of group should NOT be included.
  # A catch-all rule to log all other requests at the Metadata level.
  - level: Metadata
    # Long-running requests like watches that fall under this rule will not
    # generate an audit event in RequestReceived.
    omitStages:
      - "RequestReceived"
Configure the startup script
[root@hdss7-21 bin]# cd /opt/kubernetes/server/bin
[root@hdss7-21 bin]# vim kube-apiserver-startup.sh # note the two log paths below; adjust them if yours differ
#!/bin/bash
WORK_DIR=$(dirname $(readlink -f $0))
[ $? -eq 0 ] && cd $WORK_DIR || exit
/opt/kubernetes/server/bin/kube-apiserver \
--apiserver-count 2 \
--audit-log-path /home/logs/kubernetes/kube-apiserver/audit-log \
--audit-policy-file ./conf/audit.yaml \
--authorization-mode RBAC \
--client-ca-file ./certs/ca.pem \
--requestheader-client-ca-file ./certs/ca.pem \
--enable-admission-plugins NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota \
--etcd-cafile ./certs/ca.pem \
--etcd-certfile ./certs/client.pem \
--etcd-keyfile ./certs/client-key.pem \
--etcd-servers https://10.4.7.12:2379,https://10.4.7.21:2379,https://10.4.7.22:2379 \
--service-account-key-file ./certs/ca-key.pem \
--service-cluster-ip-range 192.168.0.0/16 \
--service-node-port-range 3000-29999 \
--target-ram-mb=1024 \
--kubelet-client-certificate ./certs/client.pem \
--kubelet-client-key ./certs/client-key.pem \
--log-dir /home/logs/kubernetes/kube-apiserver \
--tls-cert-file ./certs/apiserver.pem \
--tls-private-key-file ./certs/apiserver-key.pem \
--v 2
Create the log directory; note that it matches the paths used above (the audit-log file itself will be created by the apiserver inside this directory)
[root@hdss7-21 bin]# mkdir -p /home/logs/kubernetes/kube-apiserver
Configure supervisor to start it
Note: the [program:] name differs between 10.4.7.21 and 10.4.7.22
[root@hdss7-21 bin]# vim /etc/supervisord.d/kube-apiserver.ini
[program:kube-apiserver-7-21]
command=/opt/kubernetes/server/bin/kube-apiserver-startup.sh
numprocs=1
directory=/opt/kubernetes/server/bin
autostart=true
autorestart=true
startsecs=30
startretries=3
exitcodes=0,2
stopsignal=QUIT
stopwaitsecs=10
user=root
redirect_stderr=true
stdout_logfile=/home/logs/kubernetes/kube-apiserver/apiserver.stdout.log
stdout_logfile_maxbytes=64MB
stdout_logfile_backups=5
stdout_capture_maxbytes=1MB
stdout_events_enabled=false
6.3. Configuring the L4 proxy for the apiserver
nginx configuration
Install nginx and keepalived on both 10.4.7.11 and 10.4.7.12
[root@hdss7-11 ~]# yum -y install nginx
[root@hdss7-11 ~]# vim /etc/nginx/nginx.conf
# append the following at the end of the file; the stream block may only appear in the main context
# this is only a minimal nginx setup; in real production, configure it more carefully
stream {
log_format proxy '$time_local|$remote_addr|$upstream_addr|$protocol|$status|'
'$session_time|$upstream_connect_time|$bytes_sent|$bytes_received|'
'$upstream_bytes_sent|$upstream_bytes_received' ;
upstream kube-apiserver {
server 10.4.7.21:6443 max_fails=3 fail_timeout=30s;
server 10.4.7.22:6443 max_fails=3 fail_timeout=30s;
}
server {
listen 7443;
proxy_connect_timeout 2s;
proxy_timeout 900s;
proxy_pass kube-apiserver;
access_log /var/log/nginx/proxy.log proxy;
}
}
[root@hdss7-11 nginx]# nginx -t # test the nginx.conf syntax
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
[root@hdss7-11 ~]# systemctl start nginx # start nginx
[root@hdss7-11 ~]# systemctl enable nginx # enable it at boot
Created symlink from /etc/systemd/system/multi-user.target.wants/nginx.service to /usr/lib/systemd/system/nginx.service.
keepalived configuration
[root@hdss7-11 ~]# yum -y install keepalived
[root@hdss7-11 ~]# vim /etc/keepalived/check_port.sh
#!/bin/bash
# keepalived port-monitoring script
# Usage: reference it from the keepalived configuration file:
# vrrp_script check_port {                         # define a vrrp_script
#     script "/etc/keepalived/check_port.sh 6379"  # the port to monitor
#     interval 2                                    # how often to run the script, in seconds
# }
CHK_PORT=$1
if [ -n "$CHK_PORT" ];then
    PORT_PROCESS=`ss -lnt | grep $CHK_PORT | wc -l`
    if [ $PORT_PROCESS -eq 0 ];then
        echo "PORT $CHK_PORT IS NOT Used,End.."
        exit 1   # non-zero exit so keepalived applies the weight penalty when the port is down
    fi
else
    echo "Check Port Can't Be Empty!"
    exit 1
fi
[root@hdss7-11 ~]# chmod +x /etc/keepalived/check_port.sh
Master keepalived configuration file (10.4.7.11)
! Configuration File for keepalived
global_defs {
router_id 10.4.7.11
}
vrrp_script chk_nginx {
script "/etc/keepalived/check_port.sh 7443"
interval 2
weight -20
}
vrrp_instance VI_1 {
state MASTER
interface ens33
virtual_router_id 251
priority 100
advert_int 1
mcast_src_ip 10.4.7.11
nopreempt # no preemption: once the backup has taken over the VIP, this node will not take it back automatically
authentication {
auth_type PASS
auth_pass 1111
}
track_script {
chk_nginx
}
virtual_ipaddress {
10.4.7.10
}
}
Backup keepalived configuration file (10.4.7.12)
! Configuration File for keepalived
global_defs {
router_id 10.4.7.12
}
vrrp_script chk_nginx {
script "/etc/keepalived/check_port.sh 7443"
interval 2
weight -20
}
vrrp_instance VI_1 {
state BACKUP
interface ens33
virtual_router_id 251
priority 90
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
track_script {
chk_nginx
}
virtual_ipaddress {
10.4.7.10
}
}
Start the services
[root@hdss7-11 keepalived]# systemctl start keepalived
[root@hdss7-11 keepalived]# systemctl enable keepalived
[root@hdss7-11 keepalived]# ip addr # check: 10.4.7.11 now holds the VIP 10.4.7.10
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:67:9b:99 brd ff:ff:ff:ff:ff:ff
inet 10.4.7.11/24 brd 10.4.7.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet 10.4.7.10/32 scope global ens33
valid_lft forever preferred_lft forever
inet6 fe80::5a61:aba0:fdb7:ef30/64 scope link noprefixroute
valid_lft forever preferred_lft forever
6.4. Deploying controller-manager
Cluster plan
| Hostname | Role | IP |
|---|---|---|
| hdss7-21.host.com | controller-manager | 10.4.7.21 |
| hdss7-22.host.com | controller-manager | 10.4.7.22 |
Note: 10.4.7.21 is used as the example; install 10.4.7.22 the same way
controller-manager is configured to call only the apiserver on the local machine over 127.0.0.1, so no SSL certificate is configured for it
Configure the startup script
On hdss7-21.host.com
[root@hdss7-21 bin]# cd /opt/kubernetes/server/bin
[root@hdss7-21 bin]# vim kube-controller-manager-startup.sh
#!/bin/sh
WORK_DIR=$(dirname $(readlink -f $0))
[ $? -eq 0 ] && cd $WORK_DIR || exit
/opt/kubernetes/server/bin/kube-controller-manager \
--cluster-cidr 172.7.0.0/16 \
--leader-elect true \
--log-dir /home/logs/kubernetes/kube-controller-manager \
--master http://127.0.0.1:8080 \
--service-account-private-key-file ./certs/ca-key.pem \
--service-cluster-ip-range 192.168.0.0/16 \
--root-ca-file ./certs/ca.pem \
--v 2
[root@hdss7-21 bin]# chmod +x kube-controller-manager-startup.sh
[root@hdss7-21 bin]# mkdir -p /home/logs/kubernetes/kube-controller-manager
Configure supervisor to start it
[root@hdss7-21 bin]# vim /etc/supervisord.d/kube-controller-manager.ini
[program:kube-controller-manager-7-21]
command=/opt/kubernetes/server/bin/kube-controller-manager-startup.sh ; the program (relative uses PATH, can take args)
numprocs=1 ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin ; directory to cwd to before exec (def no cwd)
autostart=true ; start at supervisord start (default: true)
autorestart=true ; retstart at unexpected quit (default: true)
startsecs=30 ; number of secs prog must stay running (def. 1)
startretries=3 ; max # of serial start failures (default 3)
exitcodes=0,2 ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT ; signal used to kill process (default TERM)
stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10)
user=root ; setuid to this UNIX account to run the program
redirect_stderr=true ; redirect proc stderr to stdout (default false)
stdout_logfile=/home/logs/kubernetes/kube-controller-manager/controller.stdout.log ; stderr log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4 ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false ; emit events on stdout writes (default false)
Start it
[root@hdss7-21 bin]# supervisorctl update
kube-controller-manager-7-21: added process group
[root@hdss7-21 bin]# supervisorctl status
etcd-server-7-21 RUNNING pid 6364, uptime 23:01:48
kube-apiserver-7-21 RUNNING pid 7930, uptime 18:24:32
kube-controller-manager-7-21 RUNNING pid 8987, uptime 0:02:48
6.5. Deploying kube-scheduler
Cluster plan
| Hostname | Role | IP |
|---|---|---|
| hdss7-21.host.com | kube-scheduler | 10.4.7.21 |
| hdss7-22.host.com | kube-scheduler | 10.4.7.22 |
kube-scheduler is configured to call only the local apiserver over 127.0.0.1, so no SSL certificate is configured for it
hdss7-21 is used as the example
Create the startup script
[root@hdss7-21 bin]# vim kube-scheduler-startup.sh
#!/bin/sh
WORK_DIR=$(dirname $(readlink -f $0))
[ $? -eq 0 ] && cd $WORK_DIR || exit
/opt/kubernetes/server/bin/kube-scheduler \
--leader-elect \
--log-dir /home/logs/kubernetes/kube-scheduler \
--master http://127.0.0.1:8080 \
--v 2
[root@hdss7-21 bin]# mkdir -p /home/logs/kubernetes/kube-scheduler
[root@hdss7-21 bin]# chmod +x kube-scheduler-startup.sh
Configure supervisor to start it
[root@hdss7-21 bin]# vim /etc/supervisord.d/kube-scheduler.ini
[program:kube-scheduler-7-21]
command=/opt/kubernetes/server/bin/kube-scheduler-startup.sh
numprocs=1
directory=/opt/kubernetes/server/bin
autostart=true
autorestart=true
startsecs=30
startretries=3
exitcodes=0,2
stopsignal=QUIT
stopwaitsecs=10
user=root
redirect_stderr=true
stdout_logfile=/home/logs/kubernetes/kube-scheduler/scheduler.stdout.log
stdout_logfile_maxbytes=64MB
stdout_logfile_backups=4
stdout_capture_maxbytes=1MB
stdout_events_enabled=false
Start it
[root@hdss7-21 bin]# supervisorctl update
kube-scheduler-7-21: added process group
[root@hdss7-21 bin]# supervisorctl status
etcd-server-7-21 RUNNING pid 6364, uptime 23:12:57
kube-apiserver-7-21 RUNNING pid 7930, uptime 18:35:41
kube-controller-manager-7-21 RUNNING pid 8987, uptime 0:13:57
kube-scheduler-7-21 RUNNING pid 9015, uptime 0:02:17
6.6. Checking the status of the services on the master node
[root@hdss7-21 bin]# ln -s /opt/kubernetes/server/bin/kubectl /usr/bin/kubectl
[root@hdss7-21 bin]# kubectl get cs # check component status
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health": "true"}
etcd-1 Healthy {"health": "true"}
etcd-2 Healthy {"health": "true"}
7. Deploying the node services
7.1. Deploying kubelet
Cluster plan
| Hostname | Role | IP |
|---|---|---|
| HDSS7-21.host.com | kubelet | 10.4.7.21 |
| HDSS7-22.host.com | kubelet | 10.4.7.22 |
10.4.7.21 is used as the example; do the same on the other host, 10.4.7.22
Sign the certificate
Log in to host 10.4.7.200
[root@hdss7-200 ~]# cd /opt/certs/
[root@hdss7-200 certs]# vim kubelet-csr.json
# add every IP that might ever run kubelet to hosts
{
"CN": "k8s-kubelet",
"hosts": [
"127.0.0.1",
"10.4.7.10",
"10.4.7.21",
"10.4.7.22",
"10.4.7.23",
"10.4.7.24",
"10.4.7.25",
"10.4.7.26",
"10.4.7.27",
"10.4.7.28"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "beijing",
"L": "beijing",
"O": "od",
"OU": "ops"
}
]
}
[root@hdss7-200 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server kubelet-csr.json |cfssl-json -bare kubelet
[root@hdss7-200 certs]# ll kubelet*
-rw-r--r-- 1 root root 1115 Apr 19 21:48 kubelet.csr
-rw-r--r-- 1 root root 452 Apr 19 21:46 kubelet-csr.json
-rw------- 1 root root 1679 Apr 19 21:48 kubelet-key.pem
-rw-r--r-- 1 root root 1468 Apr 19 21:48 kubelet.pem
[root@hdss7-200 certs]# scp -r kubelet.pem kubelet-key.pem 10.4.7.21:/opt/kubernetes/server/bin/certs/
[root@hdss7-200 certs]# scp -r kubelet.pem kubelet-key.pem 10.4.7.22:/opt/kubernetes/server/bin/certs/
Create the kubelet kubeconfig (4 steps)
The kubelet configuration is done on hdss7-21 and hdss7-22
Work inside the conf directory
set-cluster
Create the cluster information to connect to; entries for multiple k8s clusters can be created
[root@hdss7-21 conf]# cd /opt/kubernetes/server/bin/conf
[root@hdss7-21 conf]# kubectl config set-cluster myk8s \
--certificate-authority=/opt/kubernetes/server/bin/certs/ca.pem \
--embed-certs=true \
--server=https://10.4.7.10:7443 \
--kubeconfig=kubelet.kubeconfig
set-credentials
Create the user account, i.e. the client private key and certificate used to log in; multiple credentials can be created
[root@hdss7-21 conf]# kubectl config set-credentials k8s-node \
--client-certificate=/opt/kubernetes/server/bin/certs/client.pem \
--client-key=/opt/kubernetes/server/bin/certs/client-key.pem \
--embed-certs=true \
--kubeconfig=kubelet.kubeconfig
set-context
Set the context, i.e. bind the account to the cluster
[root@hdss7-21 conf]# kubectl config set-context myk8s-context \
--cluster=myk8s \
--user=k8s-node \
--kubeconfig=kubelet.kubeconfig
use-context
Select which context is currently in use
[root@hdss7-21 conf]# kubectl config use-context myk8s-context --kubeconfig=kubelet.kubeconfig
Copy this kubeconfig to the other node so the four steps above do not have to be repeated there
[root@hdss7-21 conf]# scp -r kubelet.kubeconfig 10.4.7.22:/opt/kubernetes/server/bin/conf/
[root@hdss7-21 conf]# scp -r k8s-node.yaml 10.4.7.22:/opt/kubernetes/server/bin/conf/
Authorize the k8s-node user
This step only needs to be executed on one master node
Bind the k8s-node user to the cluster role system:node so that k8s-node gets the permissions of a worker node.
[root@hdss7-21 conf]# vim k8s-node.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: k8s-node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: k8s-node
[root@hdss7-21 conf]# kubectl create -f k8s-node.yaml
kubectl get clusterrolebinding k8s-node
NAME AGE
k8s-node 79s
[root@hdss7-21 conf]# kubectl get clusterrolebinding k8s-node -o yaml
Prepare the pause image
On host 10.4.7.200
[root@hdss7-200 certs]# docker pull kubernetes/pause
[root@hdss7-200 certs]# docker tag f9d5de079539 harbor.od.com/public/pause:latest
[root@hdss7-200 certs]# docker login harbor.od.com
[root@hdss7-200 certs]# docker push harbor.od.com/public/pause:latest
Create the startup script
Create the script and start kubelet on the node servers involved: hdss7-21 and hdss7-22
[root@hdss7-21 bin]# vim kubelet-startup.sh
#!/bin/sh
WORK_DIR=$(dirname $(readlink -f $0))
[ $? -eq 0 ] && cd $WORK_DIR || exit
/opt/kubernetes/server/bin/kubelet \
--anonymous-auth=false \
--cgroup-driver systemd \
--cluster-dns 192.168.0.2 \
--cluster-domain cluster.local \
--runtime-cgroups=/systemd/system.slice \
--kubelet-cgroups=/systemd/system.slice \
--fail-swap-on="false" \
--client-ca-file ./certs/ca.pem \
--tls-cert-file ./certs/kubelet.pem \
--tls-private-key-file ./certs/kubelet-key.pem \
--hostname-override hdss7-21.host.com \
--image-gc-high-threshold 20 \
--image-gc-low-threshold 10 \
--kubeconfig ./conf/kubelet.kubeconfig \
--log-dir /home/logs/kubernetes/kube-kubelet \
--pod-infra-container-image harbor.od.com/public/pause:latest \
--root-dir /home/kubelet
Create the directories and add execute permission
[root@hdss7-21 bin]# mkdir -p /home/logs/kubernetes/kube-kubelet /home/kubelet
[root@hdss7-21 bin]# chmod +x kubelet-startup.sh
Use supervisor to start it
[root@hdss7-21 supervisord.d]# vim /etc/supervisord.d/kube-kubelet.ini
[program:kube-kubelet-7-21]
command=/opt/kubernetes/server/bin/kubelet-startup.sh
numprocs=1
directory=/opt/kubernetes/server/bin
autostart=true
autorestart=true
startsecs=30
startretries=3
exitcodes=0,2
stopsignal=QUIT
stopwaitsecs=10
user=root
redirect_stderr=true
stdout_logfile=/home/logs/kubernetes/kube-kubelet/kubelet.stdout.log
stdout_logfile_maxbytes=64MB
stdout_logfile_backups=5
stdout_capture_maxbytes=1MB
stdout_events_enabled=false
[root@hdss7-21 supervisord.d]# supervisorctl update
[root@hdss7-21 supervisord.d]# supervisorctl status
etcd-server-7-22 RUNNING pid 6542, uptime 1:37:51
kube-apiserver-7-22 RUNNING pid 6534, uptime 1:37:51
kube-controller-manager-7-22 RUNNING pid 6523, uptime 1:37:51
kube-kubelet-7-22 RUNNING pid 7102, uptime 0:01:26
kube-scheduler-7-22 RUNNING pid 6528, uptime 1:37:51
Check that kubelet is working
[root@hdss7-21 bin]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
hdss7-21.host.com Ready <none> 3m26s v1.15.11
hdss7-22.host.com Ready <none> 2m15s v1.15.11
Modify the node roles
The ROLES column returned by kubectl get nodes is empty (<none>); it can be set as follows
Here each host acts as both a master node and a worker node
[root@hdss7-21 bin]# kubectl label node hdss7-21.host.com node-role.kubernetes.io/master=
node/hdss7-21.host.com labeled
[root@hdss7-21 bin]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
hdss7-21.host.com Ready master 7m20s v1.15.11
hdss7-22.host.com Ready <none> 6m9s v1.15.11
[root@hdss7-21 bin]# kubectl label node hdss7-21.host.com node-role.kubernetes.io/node=
node/hdss7-21.host.com labeled
[root@hdss7-21 bin]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
hdss7-21.host.com Ready master,node 8m17s v1.15.11
hdss7-22.host.com Ready <none> 7m6s v1.15.11
[root@hdss7-21 bin]# kubectl label node hdss7-22.host.com node-role.kubernetes.io/master=
[root@hdss7-21 bin]# kubectl label node hdss7-22.host.com node-role.kubernetes.io/node=
[root@hdss7-21 bin]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
hdss7-21.host.com Ready master,node 10m v1.15.11
hdss7-22.host.com Ready master,node 8m49s v1.15.11
7.2. Deploying kube-proxy
Cluster plan
| Hostname | Role | IP |
|---|---|---|
| HDSS7-21.host.com | kube-proxy | 10.4.7.21 |
| HDSS7-22.host.com | kube-proxy | 10.4.7.22 |
10.4.7.21 is used as the example; do the same on the other host, 10.4.7.22
Sign the certificate
Log in to the 10.4.7.200 host
[root@hdss7-200 certs]# cd /opt/certs
[root@hdss7-200 certs]# vim kube-proxy-csr.json
# the CN here is effectively a role inside k8s
{
"CN": "system:kube-proxy",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "beijing",
"L": "beijing",
"O": "od",
"OU": "ops"
}
]
}
[root@hdss7-200 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client kube-proxy-csr.json |cfssl-json -bare kube-proxy-client
[root@hdss7-200 certs]# ll kube-proxy*
-rw-r--r-- 1 root root 1005 Apr 19 23:19 kube-proxy-client.csr
-rw------- 1 root root 1675 Apr 19 23:19 kube-proxy-client-key.pem
-rw-r--r-- 1 root root 1375 Apr 19 23:19 kube-proxy-client.pem
-rw-r--r-- 1 root root 267 Apr 19 23:17 kube-proxy-csr.json
[root@hdss7-200 certs]# scp -r kube-proxy-client.pem kube-proxy-client-key.pem 10.4.7.21:/opt/kubernetes/server/bin/certs/
[root@hdss7-200 certs]# scp -r kube-proxy-client.pem kube-proxy-client-key.pem 10.4.7.22:/opt/kubernetes/server/bin/certs/
Create the kube-proxy kubeconfig (4 steps)
Do this on all node servers involved: hdss7-21, hdss7-22
set-cluster
[root@hdss7-21 conf]# cd /opt/kubernetes/server/bin/conf
[root@hdss7-21 conf]# kubectl config set-cluster myk8s \
--certificate-authority=/opt/kubernetes/server/bin/certs/ca.pem \
--embed-certs=true \
--server=https://10.4.7.10:7443 \
--kubeconfig=kube-proxy.kubeconfig
set-credentials
[root@hdss7-21 conf]# kubectl config set-credentials kube-proxy \
--client-certificate=/opt/kubernetes/server/bin/certs/kube-proxy-client.pem \
--client-key=/opt/kubernetes/server/bin/certs/kube-proxy-client-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
set-context
[root@hdss7-21 conf]# kubectl config set-context myk8s-context \
--cluster=myk8s \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
use-context
[root@hdss7-21 conf]# kubectl config use-context myk8s-context --kubeconfig=kube-proxy.kubeconfig
Copy the generated kubeconfig to the other machine so the four steps above do not have to be repeated there
[root@hdss7-21 conf]# scp kube-proxy.kubeconfig hdss7-22:/opt/kubernetes/server/bin/conf/
Load the ipvs kernel modules
[root@hdss7-21 ~]# lsmod |grep ip_vs
[root@hdss7-21 ~]# vim ipvs.sh
#!/bin/bash
ipvs_mods_dir="/usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs"
for i in $(ls $ipvs_mods_dir | grep -o "^[^.]*")
do
/sbin/modinfo -F filename $i >/dev/null 2>&1
if [ $? -eq 0 ];then
/sbin/modprobe $i
fi
done
[root@hdss7-21 ~]# sh ipvs.sh
[root@hdss7-21 ~]# lsmod |grep ip_vs
ip_vs_wrr 12697 0
ip_vs_wlc 12519 0
ip_vs_sh 12688 0
ip_vs_sed 12519 0
ip_vs_rr 12600 0
ip_vs_pe_sip 12740 0
nf_conntrack_sip 33860 1 ip_vs_pe_sip
ip_vs_nq 12516 0
ip_vs_lc 12516 0
ip_vs_lblcr 12922 0
ip_vs_lblc 12819 0
ip_vs_ftp 13079 0
ip_vs_dh 12688 0
ip_vs 145497 24 ip_vs_dh,ip_vs_lc,ip_vs_nq,ip_vs_rr,ip_vs_sh,ip_vs_ftp,ip_vs_sed,ip_vs_wlc,ip_vs_wrr,ip_vs_pe_sip,ip_vs_lblcr,ip_vs_lblc
nf_nat 26787 3 ip_vs_ftp,nf_nat_ipv4,nf_nat_masquerade_ipv4
nf_conntrack 133095 8 ip_vs,nf_nat,nf_nat_ipv4,xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_netlink,nf_conntrack_sip,nf_conntrack_ipv4
libcrc32c 12644 3 ip_vs,nf_nat,nf_conntrack
Create the startup script
[root@hdss7-21 ~]# cd /opt/kubernetes/server/bin/
[root@hdss7-21 bin]# vim kube-proxy-startup.sh
#!/bin/sh
WORK_DIR=$(dirname $(readlink -f $0))
[ $? -eq 0 ] && cd $WORK_DIR || exit
/opt/kubernetes/server/bin/kube-proxy \
--cluster-cidr 172.7.0.0/16 \
--hostname-override hdss7-21.host.com \
--proxy-mode=ipvs \
--ipvs-scheduler=nq \
--kubeconfig ./conf/kube-proxy.kubeconfig
Create the directory and add execute permission
[root@hdss7-22 bin]# chmod +x kube-proxy-startup.sh
[root@hdss7-21 bin]# mkdir -p /home/logs/kubernetes/kube-proxy
Use supervisor to start it
[root@hdss7-21 bin]# vim /etc/supervisord.d/kube-proxy.ini
[program:kube-proxy-7-21]
command=/opt/kubernetes/server/bin/kube-proxy-startup.sh
numprocs=1
directory=/opt/kubernetes/server/bin
autostart=true
autorestart=true
startsecs=30
startretries=3
exitcodes=0,2
stopsignal=QUIT
stopwaitsecs=10
user=root
redirect_stderr=true
stdout_logfile=/home/logs/kubernetes/kube-proxy/proxy.stdout.log
stdout_logfile_maxbytes=64MB
stdout_logfile_backups=5
stdout_capture_maxbytes=1MB
stdout_events_enabled=false
[root@hdss7-22 bin]# supervisorctl update
[root@hdss7-22 bin]# supervisorctl status
etcd-server-7-22 RUNNING pid 6542, uptime 2:44:01
kube-apiserver-7-22 RUNNING pid 6534, uptime 2:44:01
kube-controller-manager-7-22 RUNNING pid 6523, uptime 2:44:01
kube-kubelet-7-22 RUNNING pid 7102, uptime 1:07:36
kube-proxy-7-22 STARTING
kube-scheduler-7-22 RUNNING pid 6528, uptime 2:44:01
Verify the cluster
[root@hdss7-21 bin]# yum -y install ipvsadm
[root@hdss7-21 bin]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.0.1:443 nq
-> 10.4.7.21:6443 Masq 1 0 0
-> 10.4.7.22:6443 Masq 1 0 0
[root@hdss7-21 bin]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 192.168.0.1 <none> 443/TCP 4d20h
8. Verifying the kubernetes cluster
Create a resource manifest on any node host
Host 10.4.7.21 is used here
[root@hdss7-21 ~]# vim nginx-ds.yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ds
spec:
  template:
    metadata:
      labels:
        app: nginx-ds
    spec:
      containers:
      - name: my-nginx
        image: harbor.od.com/public/nginx:v1.7.9
        ports:
        - containerPort: 80
Create the resource
[root@hdss7-21 ~]# kubectl create -f nginx-ds.yaml
daemonset.extensions/nginx-ds created
[root@hdss7-21 ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-ds-t8mwg 1/1 Running 0 55s
nginx-ds-tjkt2 1/1 Running 0 55s
[root@hdss7-21 ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-ds-t8mwg 1/1 Running 0 85s 172.7.21.2 hdss7-21.host.com <none> <none>
nginx-ds-tjkt2 1/1 Running 0 85s 172.7.22.2 hdss7-22.host.com <none> <none>
[root@hdss7-21 ~]# kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health": "true"}
etcd-1 Healthy {"health": "true"}
etcd-2 Healthy {"health": "true"}
[root@hdss7-21 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
hdss7-21.host.com Ready master,node 83m v1.15.11
hdss7-22.host.com Ready master,node 82m v1.15.11
Check the certificate validity period
On the 10.4.7.200 machine
[root@hdss7-200 certs]# cfssl-certinfo -cert apiserver.pem

