Reference: https://github.com/liuyi01/kubernetes-starter

I. Environment preparation

192.168.87.130 master  server01

192.168.87.131 worker  server02

192.168.87.132 worker  server03

II. Install Docker (all nodes)

Install Docker first (see the Docker installation guide), then make the following configuration change:

Accept packet forwarding from all IPs

$ vi /lib/systemd/system/docker.service
# Find the line ExecStart=xxx and insert the following line above it (required by the k8s network):
ExecStartPost=/sbin/iptables -I FORWARD -s 0.0.0.0/0 -j ACCEPT

Restart the service:

$ systemctl daemon-reload
$ service docker restart
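The manual edit above can be scripted when you have several nodes. A minimal sketch, assuming GNU sed; the helper name `add_forward_rule` is made up for this example:

```shell
# add_forward_rule: insert the ExecStartPost iptables line above the
# ExecStart= line of a unit file; skip the edit if it is already there.
add_forward_rule() {
  unit="$1"
  # already patched? then do nothing (keeps the edit idempotent)
  grep -q '^ExecStartPost=/sbin/iptables -I FORWARD' "$unit" && return 0
  # GNU sed: insert the rule line immediately before the ExecStart= line
  sed -i '/^ExecStart=/i ExecStartPost=/sbin/iptables -I FORWARD -s 0.0.0.0/0 -j ACCEPT' "$unit"
}
# usage: add_forward_rule /lib/systemd/system/docker.service
```

Run `systemctl daemon-reload && service docker restart` afterwards, as above.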

III. System settings (all nodes)

1. Stop and disable the firewall (firewalld)

Start:           systemctl start firewalld
Stop:            systemctl stop firewalld
Status:          systemctl status firewalld
Disable on boot: systemctl disable firewalld
Enable on boot:  systemctl enable firewalld

2. Set kernel parameters: allow IP forwarding and make bridged traffic visible to iptables

# Write the config file
$ cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

# Apply the config file (if the two bridge keys are reported as unknown, load the module first: modprobe br_netfilter)
$ sysctl -p /etc/sysctl.d/k8s.conf

3. Configure hosts

# Add hosts entries so every node can resolve the others by name
$ vi /etc/hosts
# Append the following (replace the IPs and server names with your own):
192.168.87.130 server01
192.168.87.131 server02
192.168.87.132 server03
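Editing /etc/hosts by hand on every node is error-prone. A sketch of an idempotent alternative; the helper name `append_hosts` and the `HOSTS` override are assumptions of this example:

```shell
# append_hosts: read "ip name" pairs on stdin and append each one to the
# hosts file only if the name is not already present (safe to re-run).
# HOSTS can be overridden for testing; it defaults to /etc/hosts.
append_hosts() {
  while read -r ip name; do
    grep -qw "$name" "${HOSTS:-/etc/hosts}" \
      || printf '%s %s\n' "$ip" "$name" >> "${HOSTS:-/etc/hosts}"
  done
}
# usage:
# append_hosts <<'EOF'
# 192.168.87.130 server01
# 192.168.87.131 server02
# 192.168.87.132 server03
# EOF
```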

4. Prepare the kubernetes binaries (all nodes) and add them to PATH so the commands can be run directly

Path: /usr/local/kubernetes/bin
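One way to add the directory to PATH without duplicating it on repeated runs (a sketch; put the same lines in /etc/profile or ~/.bashrc to make it permanent):

```shell
# Append the kubernetes bin directory to PATH only if it is not already there.
K8S_BIN=/usr/local/kubernetes/bin
case ":$PATH:" in
  *":$K8S_BIN:"*) ;;                       # already on PATH; nothing to do
  *) PATH="$PATH:$K8S_BIN"; export PATH ;;
esac
```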

5. Prepare the configuration and generate the config files

Path: /usr/local/kubernetes-starter

To generate:

# cd into the git repo downloaded earlier
$ cd ~/kubernetes-starter
# Edit the properties file (fill in each key-value according to the comments in the file)
$ vi config.properties
# Generate the config files; make sure the script runs without errors
$ ./gen-config.sh simple
# List the generated files to confirm the script succeeded
$ find target/ -type f
target/all-node/kube-calico.service
target/master-node/kube-controller-manager.service
target/master-node/kube-apiserver.service
target/master-node/etcd.service
target/master-node/kube-scheduler.service
target/worker-node/kube-proxy.kubeconfig
target/worker-node/kubelet.service
target/worker-node/10-calico.conf
target/worker-node/kubelet.kubeconfig
target/worker-node/kube-proxy.service
target/services/kube-dns.yaml

6. Deploy etcd (master node). In production this should be a multi-member cluster; for this walkthrough a single node is enough.

# Copy the service config file into the systemd directory
$ cp ~/kubernetes-starter/target/master-node/etcd.service /lib/systemd/system/
# Enable the service
$ systemctl enable etcd.service
# Create the working directory (where the data is stored)
$ mkdir -p /var/lib/etcd
# Start the service
$ service etcd start
# Follow the service log and check that there are no errors
$ journalctl -f -u etcd.service

7. Deploy the APIServer (master node), same pattern as above

$ cp target/master-node/kube-apiserver.service /lib/systemd/system/
$ systemctl enable kube-apiserver.service       # start on boot
$ service kube-apiserver start
$ journalctl -f -u kube-apiserver

8. Deploy the ControllerManager (master node)

$ cp target/master-node/kube-controller-manager.service /lib/systemd/system/
$ systemctl enable kube-controller-manager.service
$ service kube-controller-manager start
$ journalctl -f -u kube-controller-manager

9. Deploy the Scheduler (master node)

$ cp target/master-node/kube-scheduler.service /lib/systemd/system/
$ systemctl enable kube-scheduler.service
$ service kube-scheduler start
$ journalctl -f -u kube-scheduler
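Steps 6 through 9 all repeat the same copy/enable/start pattern, so the sequence can be wrapped in a small helper. A sketch; the `deploy_unit` name and the `RUN` dry-run variable are inventions of this example, not part of kubernetes-starter:

```shell
# deploy_unit: install a generated systemd unit file and start the service.
# Set RUN=echo for a dry run that only prints the commands it would execute.
deploy_unit() {
  unit="$1"; src="$2"
  ${RUN:-} cp "$src/$unit" /lib/systemd/system/
  ${RUN:-} systemctl daemon-reload
  ${RUN:-} systemctl enable "$unit"
  ${RUN:-} systemctl start "$unit"
}
# usage: deploy_unit kube-scheduler.service ~/kubernetes-starter/target/master-node
```

Follow each deployment with `journalctl -f -u <unit>` as above to confirm the service came up cleanly.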

10. Deploy the Calico node (all nodes)

Calico runs as a systemd service that starts a Docker container.

$ cp target/all-node/kube-calico.service /lib/systemd/system/
$ systemctl enable kube-calico.service
$ service kube-calico start
$ journalctl -f -u kube-calico

Verify that Calico is working

Check the container:

$ docker ps
CONTAINER ID   IMAGE                COMMAND        CREATED ...
4d371b58928b   calico/node:v2.6.2   "start_runit"  3 hours ago...

Check node status:

$ calicoctl node status
Calico process is running.
IPv4 BGP status
+---------------+-------------------+-------+----------+-------------+
| PEER ADDRESS  |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+---------------+-------------------+-------+----------+-------------+
| 192.168.1.103 | node-to-node mesh | up    | 13:13:13 | Established |
+---------------+-------------------+-------+----------+-------------+
IPv6 BGP status
No IPv6 peers found.

Check the port: BGP establishes its neighbours over TCP (port 179), so netstat can be used to verify the BGP peers:

$ netstat -natp|grep ESTABLISHED|grep 179
tcp        0      0 192.168.1.102:60959     192.168.1.103:179       ESTABLISHED 29680/bird

Check the cluster's ipPool:

$ calicoctl get ipPool -o yaml
- apiVersion: v1
  kind: ipPool
  metadata:
    cidr: 172.20.0.0/16
  spec:
    nat-outgoing: true

11. Configure the kubectl command (any node)

# Point kubectl at the apiserver (replace the IP with your own api-server address)
kubectl config set-cluster kubernetes  --server=http://192.168.87.130:8080
# Create a context that uses this cluster
kubectl config set-context kubernetes --cluster=kubernetes
# Select it as the default context
kubectl config use-context kubernetes
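The three commands above write ~/.kube/config; the result has roughly this shape (abbreviated, using the example apiserver address):

```yaml
# ~/.kube/config (abbreviated) after the three kubectl config commands above
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: http://192.168.87.130:8080
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
  name: kubernetes
current-context: kubernetes
```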

12. Configure kubelet (worker nodes)

# Make sure the required directories exist
$ mkdir -p /var/lib/kubelet
$ mkdir -p /etc/kubernetes
$ mkdir -p /etc/cni/net.d

# Copy the kubelet service config file
$ cp target/worker-node/kubelet.service /lib/systemd/system/
# Copy the kubeconfig that kubelet depends on
$ cp target/worker-node/kubelet.kubeconfig /etc/kubernetes/
# Copy the cni plugin config used by kubelet
$ cp target/worker-node/10-calico.conf /etc/cni/net.d/

$ systemctl enable kubelet.service
$ service kubelet start
$ journalctl -f -u kubelet

13. First try (on a node with kubectl installed)

kubectl version      # show the version
kubectl get nodes    # list all nodes
kubectl get pods     # list all pods
kubectl get --help   # show help
kubectl run kubernetes-bootcamp --image=jocatalin/kubernetes-bootcamp:v1 --port=8080   # run the official sample image

kubectl get deployments  # list all deployments in the cluster
kubectl get pods -o wide # show pod details
journalctl -f            # follow the system log
kubectl delete deployments kubernetes-bootcamp # delete the kubernetes-bootcamp deployment
kubectl describe deploy kubernetes-bootcamp    # describe the deployment
kubectl describe pods ....                     # describe pods
# Use kubectl proxy to test access to a service running in the cluster
kubectl proxy
...
curl http://localhost:8001/api/v1/proxy/namespaces/default/pods/.../

# Scale out / in
kubectl scale deploy kubernetes-bootcamp --replicas=4 # scale out to 4 replicas
kubectl scale deploy kubernetes-bootcamp --replicas=2 # scale in to 2 replicas
# Update the image
kubectl set image deploy kubernetes-bootcamp kubernetes-bootcamp=jocatalin/kubernetes-bootcamp:v2
kubectl rollout status deploy kubernetes-bootcamp # watch the rollout finish
kubectl rollout undo deploy kubernetes-bootcamp   # roll the deploy back

14. Second try

kubectl get pods -o wide | grep xxx           # show pod details
xxxx-f45989c69-clcps   1/1   Running   0   5h   172.26.117.30   192.168.115.137
kubectl logs xxxx-f45989c69-clcps -f          # show a pod's container log; -f follows it
kubectl describe pods xxxx-f45989c69-clcps    # show pod details
kubectl exec -it aep-ecloud-agadminserver-f45989c69-clcps -- /bin/bash   # enter the pod's container; exit to leave
kubectl apply -f nginx-pod.yaml               # create a pod from a manifest
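The last command assumes a manifest named nginx-pod.yaml. A minimal example of what such a file could contain (the pod name and image tag are chosen for illustration):

```yaml
# nginx-pod.yaml — a minimal pod manifest
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.15
    ports:
    - containerPort: 80
```

Once the image has been pulled, `kubectl get pods` should show the nginx pod as Running.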