
Lab Notes on 《Kubernetes权威指南:从Docker到Kubernetes实践全接触》 (The Definitive Guide to Kubernetes): Cross-Host Networking

2019-01-15 00:03  it长青

Kubernetes networking

1. Communication between multiple containers inside one Pod:

Every Pod has its own dedicated pause network container.
The containers inside a Pod share a network, which is similar to Docker's link networking; but with plain linking, as soon as the container holding the network fails, the whole network fails with it. To remove that single point of failure, Kubernetes introduces a pause container into every Pod.
Network sharing among a Pod's containers therefore hinges on the pause container: the application containers simply join its network namespace, as sketched below.
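To see what kubelet does under the hood, here is a minimal docker-level sketch using the images from this lab's private registry; the pod-demo-* names are made up for illustration. A pause container is started first, and the application container then joins its network namespace:

docker run -d --name pod-demo-pause 172.16.111.200:5000/pod-infrastructure:latest
docker run -d --name pod-demo-httpd --net=container:pod-demo-pause 172.16.111.200:5000/httpd:v1
#both containers now share one IP and port space; the namespace is held by
#the pause container, so it survives an application container restart
docker exec pod-demo-httpd ip addr show eth0   #same address as the pause container (assumes the image ships iproute2)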

2. Pod-to-Pod network communication:

Pods on the same host communicate through the docker0 bridge.
Pods on different hosts are the hard case: by default every host has its own docker0 with the IP 172.17.0.1, and the Pods attached to it are assigned addresses starting at 172.17.0.2, so Pods on different hosts can easily end up with identical IPs. Three things are needed to fix this:
1. For Pods on different hosts to reach each other, we first need to know the IP of the node each Pod runs on.
2. A tunnel (overlay) network between the hosts.
3. Non-overlapping docker0 subnets across the hosts.

For point 1, the records are already in etcd; on the master, inspect a Pod to see its node:
kubectl describe pods centos
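A quicker way (standard kubectl, not specific to the book) to list every Pod's IP together with the node it was scheduled on:

kubectl get pods -o wide
#the wide output adds an IP column and a NODE column to the listing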

Points 2 and 3 are solved by flannel.
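flannel's model in a nutshell: the cluster-wide Pod network (here 172.17.0.0/16) is stored in etcd, every node leases a non-overlapping /24 from it for its docker0, and cross-node traffic is encapsulated through a tunnel device. After setup, the etcd layout should look roughly like this (the lease entries are assumed; they match the subnets seen later on node1/node2):

/coreos.com/network/config                    # { "Network": "172.17.0.0/16" }
/coreos.com/network/subnets/172.17.24.0-24    # node1's lease
/coreos.com/network/subnets/172.17.13.0-24    # node2's lease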

Installing and configuring flannel

yum install -y flannel # install on the master and on every node

Configuration on the nodes

vim /usr/lib/systemd/system/flanneld.service
#inspect flannel's systemd startup unit (its full contents are listed further down)
 
vim /etc/sysconfig/flanneld   # append the two parameters below to this config file
FLANNEL_ETCD="http://172.16.111.200:2379"
#tell flannel where to reach etcd
FLANNEL_ETCD_KEY="/coreos.com/network"
#the etcd key prefix flannel reads its configuration from
#in other words, a key has to be created under that prefix in etcd (done on the master next)
#note: newer flannel packages name these FLANNEL_ETCD_ENDPOINTS and FLANNEL_ETCD_PREFIX, which is what the actual file on node1 (listed below) uses

Configuration on the master

etcdctl -C http://172.16.111.200:2379 set /coreos.com/network/config '{ "Network": "172.17.0.0/16" }'
#-C, the etcd endpoint to talk to (etcdctl defaults to 127.0.0.1)
#set, write a key into etcd
#Network, the address pool that per-node docker0 subnets will be allocated from
#note: the key must be named config
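To confirm the key landed, read it back; it should echo the JSON that was just set:

etcdctl -C http://172.16.111.200:2379 get /coreos.com/network/config
#{ "Network": "172.17.0.0/16" }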
 
systemctl stop docker
#if docker is already running on the host, stop it before starting flannel, so that docker comes back up with the flannel-provided network options
systemctl enable flanneld
systemctl start flanneld
systemctl start docker

[root@k8s_master ~]# cat /usr/lib/systemd/system/flanneld.service 
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/etc/sysconfig/flanneld
EnvironmentFile=-/etc/sysconfig/docker-network
ExecStart=/usr/bin/flanneld-start $FLANNEL_OPTIONS
ExecStartPost=/usr/libexec/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure

[Install]
WantedBy=multi-user.target
WantedBy=docker.service
[root@k8s_master ~]#
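The ExecStartPost line in the unit above is what glues flannel and docker together: flanneld writes its subnet lease to /run/flannel/subnet.env, and mk-docker-opts.sh converts it into DOCKER_NETWORK_OPTIONS in /run/flannel/docker, from which the docker daemon takes its --bip and --mtu. On node1 (whose lease is shown further down) the contents should look roughly like:

cat /run/flannel/subnet.env
#FLANNEL_NETWORK=172.17.0.0/16
#FLANNEL_SUBNET=172.17.24.1/24
#FLANNEL_MTU=1472
cat /run/flannel/docker
#DOCKER_NETWORK_OPTIONS=" --bip=172.17.24.1/24 --mtu=1472 ..."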
[root@k8s_master ~]# etcdctl -C http://172.16.111.200:2379 set /coreos.com/network/config '{ "Network": "172.17.0.0/16" }'
{ "Network": "172.17.0.0/16" }
[root@k8s_master ~]# 

====
If flanneld fails to start, check the journal:
journalctl -xe

"Failed to start Flanneld overlay address etcd agent." usually means the etcd side is misconfigured (wrong endpoint, or the config key is missing); see the checks below.
====
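Checks worth running in that situation (standard tooling, not from the book):

systemctl status flanneld -l
#full unit status plus the most recent log lines
curl http://172.16.111.200:2379/version
#is etcd reachable from this node at all?
etcdctl -C http://172.16.111.200:2379 get /coreos.com/network/config
#does the config key actually exist?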

[root@k8s_node1 ~]# cat /etc/sysconfig/flanneld 
# Flanneld configuration options  

# etcd url location. Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://master:2379"

# etcd config key. This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/coreos.com/network"

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""
 
[root@k8s_node1 ~]# 
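Note that this file points at http://master:2379 rather than an IP, so the hostname master has to resolve on every node, presumably via an /etc/hosts entry such as the following (an assumed mapping, matching the master's IP used elsewhere in this lab):

echo "172.16.111.200 master" >> /etc/hosts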
[root@k8s_node1 ~]# systemctl stop docker
[root@k8s_node1 ~]# 
[root@k8s_node1 ~]# systemctl enable flanneld
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
Created symlink from /etc/systemd/system/docker.service.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
[root@k8s_node1 ~]# systemctl start flanneld
[root@k8s_node1 ~]# 
[root@k8s_node1 ~]# systemctl start docker
[root@k8s_node1 ~]# 
[root@k8s_node1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:98:a6:eb brd ff:ff:ff:ff:ff:ff
    inet 172.16.111.201/24 brd 172.16.111.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::de34:614f:537e:2674/64 scope link 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
    link/ether 02:42:28:ac:c4:ce brd ff:ff:ff:ff:ff:ff
    inet 172.17.24.1/24 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:28ff:feac:c4ce/64 scope link 
       valid_lft forever preferred_lft forever
46: flannel0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1472 qdisc pfifo_fast state UNKNOWN qlen 500
    link/none 
    inet 172.17.24.0/16 scope global flannel0
       valid_lft forever preferred_lft forever
[root@k8s_node1 ~]# 
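flannel0 is the TUN device of flannel's default UDP backend; its MTU of 1472 leaves headroom for the encapsulation header. The kernel routing table shows how traffic is split between local and remote Pods; the entries below are what one would expect on node1 (assumed, not captured in this lab):

ip route show
#172.17.0.0/16 dev flannel0                                            -> other nodes' Pod subnets enter the tunnel
#172.17.24.0/24 dev docker0 proto kernel scope link src 172.17.24.1    -> local Pods stay on the bridge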

After repeating the same steps on node2:
[root@k8s_node2 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:1f:7b:29 brd ff:ff:ff:ff:ff:ff
    inet 172.16.111.202/24 brd 172.16.111.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::252a:fd9d:9f65:9f0c/64 scope link tentative dadfailed 
       valid_lft forever preferred_lft forever
    inet6 fe80::7d8:2f20:bc1a:5753/64 scope link tentative dadfailed 
       valid_lft forever preferred_lft forever
    inet6 fe80::de34:614f:537e:2674/64 scope link tentative dadfailed 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
    link/ether 02:42:bd:28:89:34 brd ff:ff:ff:ff:ff:ff
    inet 172.17.13.1/24 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:bdff:fe28:8934/64 scope link 
       valid_lft forever preferred_lft forever
16: flannel0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1472 qdisc pfifo_fast state UNKNOWN qlen 500
    link/none 
    inet 172.17.13.0/16 scope global flannel0
       valid_lft forever preferred_lft forever
[root@k8s_node2 ~]# 
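Each node now holds its own /24 out of 172.17.0.0/16 (172.17.24.0/24 on node1, 172.17.13.0/24 on node2), so the leases and the tunnel can be verified; the lease key names are assumed to follow flannel's subnet-with-prefix-length format:

etcdctl -C http://172.16.111.200:2379 ls /coreos.com/network/subnets
#expect one entry per node, e.g. /coreos.com/network/subnets/172.17.24.0-24
ping -c 3 172.17.13.1
#run from node1: node2's docker0 should answer through the tunnel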

Then restart kubelet and kube-proxy; otherwise the containers on the node all remain stopped:

[root@k8s_node2 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
[root@k8s_node2 ~]# 
[root@k8s_node2 ~]# 
[root@k8s_node2 ~]# systemctl restart kubelet
[root@k8s_node2 ~]# systemctl restart kube-proxy
[root@k8s_node2 ~]# 
[root@k8s_node2 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
be5c5cbd30ae 172.16.111.200:5000/httpd:v1 "/usr/sbin/httpd -..." 7 seconds ago Up 6 seconds k8s_pod1.2d3905b1_web_default_3dc45f27-fba8-11e8-9a1d-000c29bd7985_53887829
e5222185aec4 172.16.111.200:5000/pod-infrastructure:latest "/pod" 7 seconds ago Up 6 seconds 0.0.0.0:8080->80/tcp k8s_POD.2872049d_web_default_3dc45f27-fba8-11e8-9a1d-000c29bd7985_15f47471
[root@k8s_node2 ~]# 
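As a final end-to-end check, the httpd Pod on node2 can be fetched from node1 over the overlay by its Pod IP (look it up with kubectl get pods -o wide on the master; the address below is only an assumed example inside node2's 172.17.13.0/24 lease):

curl http://172.17.13.2/
#substitute the Pod's real IP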

With this, the essential building blocks of Kubernetes have all been studied and stood up in this lab.

Resource usage

[root@k8s_master ~]# free -h
              total        used        free      shared  buff/cache   available
Mem:           4.0G        436M        1.6G        124M        2.0G        3.1G
Swap:          2.0G          0B        2.0G
[root@k8s_master ~]# 
[root@k8s_master ~]# top

top - 09:15:01 up 3 days, 2:40, 1 user, load average: 0.01, 0.09, 0.07
Tasks: 101 total, 1 running, 100 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.7 us, 0.7 sy, 0.0 ni, 98.6 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 4158388 total, 1643740 free, 446856 used, 2067792 buff/cache
KiB Swap: 2097148 total, 2097148 free, 0 used. 3261676 avail Mem 

   PID USER  PR  NI    VIRT    RES   SHR S %CPU %MEM    TIME+ COMMAND
   616 kube  20   0  356048  58312 23796 S  1.0  1.4 44:27.28 kube-controller
  2206 kube  20   0  180080 122252 21068 S  1.0  2.9 44:06.67 kube-apiserver
   978 etcd  20   0 10.437g 135324 26580 S  0.7  3.3 39:11.21 etcd
 30409 root  20   0  157696   2280  1616 R  0.7  0.1  0:00.03 top
   466 root  20   0   65912  22080 21676 S  0.3  0.5  1:37.61 systemd-journal
   615 kube  20   0  411300  39600 22068 S  0.3  1.0  7:11.58 kube-scheduler
     1 root  20   0  125224   3788  2440 S  0.0  0.1  0:30.53 systemd


[root@k8s_node1 ~]# free -h
              total        used        free      shared  buff/cache   available
Mem:           3.7G        242M        2.1G         32M        1.3G        3.1G
Swap:          2.0G          0B        2.0G
[root@k8s_node1 ~]#

[root@k8s_node2 ~]# free -h
              total        used        free      shared  buff/cache   available
Mem:           3.7G        259M        2.1G         32M        1.3G        3.1G
Swap:          2.0G          0B        2.0G
[root@k8s_node2 ~]#

The master uses 436M of memory; the nodes use roughly 250M each.