A Hands-on Calico + Docker Example
In the previous article, "Building a cluster from the quay.io/coreos/etcd Docker image", we covered setting up an etcd cluster. Building on that foundation, this article walks through a practical Calico + Docker deployment.
Network requirements for a PaaS platform:
While building a PaaS platform on Docker, the first problem we ran into was choosing a network model that satisfies our requirements:
1) each container gets its own network stack, in particular its own IP address;
2) containers on different servers can communicate with each other, without depending on special network hardware;
3) access control is available, so unrelated applications are isolated from each other while applications with call relationships can still communicate.
We evaluated several mainstream network models:
1) Docker's native bridge model: NAT makes it impossible to use container IPs for cross-server communication;
2) Docker's native host model: every container shares the server's IP, and port conflicts quickly become painful;
3) Tunnel-based models such as Weave and OVS: packets are encapsulated and decapsulated in user space, which costs significant performance, and packet captures for troubleshooting become much less convenient.
None of these models was satisfactory, which led us to a new project that had not yet drawn much attention: Project Calico. Project Calico is a pure layer-3 SDN implementation built on the BGP protocol and Linux's own routing and forwarding machinery; it requires no special hardware and uses neither NAT nor tunneling. It deploys easily on physical servers, on virtual machines (e.g. OpenStack), or in container environments. Its built-in iptables-based ACL management component is also very flexible and can satisfy fairly complex security-isolation requirements.
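The pure layer-3 idea can be illustrated with a toy sketch (this is not Calico code, just an assumption-laden model using Python's stdlib): each container gets a /32 host route in the kernel routing table, remote container routes are learned over BGP, and forwarding is plain longest-prefix-match lookup with no encapsulation. The addresses mirror the pool used later in this article.

```python
import ipaddress

# Toy routing table (hypothetical entries for illustration): a /32 host route
# for a local container, a /24 learned from a BGP peer, and a default route.
routes = {
    ipaddress.ip_network("10.0.238.1/32"): "local veth",         # container on this host
    ipaddress.ip_network("10.0.238.0/24"): "via 192.168.7.170",  # peer learned over BGP
    ipaddress.ip_network("0.0.0.0/0"):     "via 192.168.7.1",    # default gateway
}

def lookup(dst: str) -> str:
    """Longest-prefix match: the most specific route containing dst wins."""
    dst_ip = ipaddress.ip_address(dst)
    matches = [net for net in routes if dst_ip in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return routes[best]

print(lookup("10.0.238.1"))   # local veth
print(lookup("10.0.238.2"))   # via 192.168.7.170
print(lookup("8.8.8.8"))      # via 192.168.7.1
```

Because forwarding is just ordinary kernel routing, traffic between containers is visible to standard tools like tcpdump, which is what makes debugging easier than with tunnel-based models.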
Traditional overlay network architecture
The network solution provided by Calico
Environment for this setup:
- Host OS: CentOS 7
- Docker version: 1.8.2-el7.centos
- Server A: 192.168.7.168
- Server B: 192.168.7.170
- Server C: 192.168.7.172
A Docker-based etcd cluster runs across the three machines; see the previous article, "Building a cluster from the quay.io/coreos/etcd Docker image".
Detailed steps (note: read the command prompts carefully — `[root@AAA ~]# calicoctl node` means the command is run on host A, and likewise for B and C):
1. Download calicoctl and the docker.io/calico/node image (on all three machines)
Download calicoctl from the link below, grant the file execute permission, and copy it to /usr/bin/. Link: http://pan.baidu.com/s/1nuHn5hB (password: 7yce). Then pull the calico-node image:

```
[root@AAA ~]# docker pull docker.io/calico/node
```
2. Start calico-node
```
[root@AAA ~]# calicoctl node
No IP provided. Using detected IP: 192.168.7.168
Calico node is running with id: 6e754df308342753b259e89850f51b3e002780958bbc3f7c0803436548666560
[root@AAA ~]# docker ps -a
CONTAINER ID  IMAGE                COMMAND                 CREATED             STATUS             PORTS                                                 NAMES
6e754df30834  calico/node:latest   "/sbin/start_runit"     About a minute ago  Up About a minute                                                        calico-node
0b5f487c20ae  quay.io/coreos/etcd  "/etcd -name qf2200-c"  4 minutes ago       Up 4 minutes       4001/tcp, 0.0.0.0:2379-2380->2379-2380/tcp, 7001/tcp  etcd
```
```
[root@BBB ~]# calicoctl node
No IP provided. Using detected IP: 192.168.7.170
Calico node is running with id: 836bb8208dd992333c4ebc81d6312d1c0e53acffeca1b2ab3942a9483744fdf0
[root@BBB ~]# docker ps -a
CONTAINER ID  IMAGE                COMMAND                 CREATED         STATUS         PORTS                                                 NAMES
836bb8208dd9  calico/node:latest   "/sbin/start_runit"     19 seconds ago  Up 19 seconds                                                        calico-node
fa52ef61ccee  quay.io/coreos/etcd  "/etcd -name qf2200-c"  2 minutes ago   Up 2 minutes   4001/tcp, 7001/tcp, 0.0.0.0:2379-2380->2379-2380/tcp  etcd
```
```
[root@CCC ~]# calicoctl node
No IP provided. Using detected IP: 192.168.7.172
Calico node is running with id: ff71c5939b119e724fca59e24039c7bbbc2adba9078f0b6c5ffa89359df92e2d
[root@CCC ~]# docker ps -a
CONTAINER ID  IMAGE                COMMAND                 CREATED         STATUS         PORTS                                                 NAMES
ff71c5939b11  calico/node:latest   "/sbin/start_runit"     21 seconds ago  Up 20 seconds                                                        calico-node
eb29998e8e92  quay.io/coreos/etcd  "/etcd -name qf2200-c"  2 minutes ago   Up 2 minutes   4001/tcp, 0.0.0.0:2379-2380->2379-2380/tcp, 7001/tcp  etcd
```
3. Configure the Calico IP pool
```
[root@AAA ~]# calicoctl pool show
+----------------+---------+
|   IPv4 CIDR    | Options |
+----------------+---------+
| 192.168.0.0/16 |         |
+----------------+---------+
+--------------------------+---------+
|        IPv6 CIDR         | Options |
+--------------------------+---------+
| fd80:24e2:f998:72d6::/64 |         |
+--------------------------+---------+
[root@AAA ~]# calicoctl pool remove 192.168.0.0/16
[root@AAA ~]# calicoctl pool show
+-----------+---------+
| IPv4 CIDR | Options |
+-----------+---------+
+-----------+---------+
+--------------------------+---------+
|        IPv6 CIDR         | Options |
+--------------------------+---------+
| fd80:24e2:f998:72d6::/64 |         |
+--------------------------+---------+
[root@AAA ~]# calicoctl pool add 10.0.238.0/24 --nat-outgoing --ipip
[root@AAA ~]# calicoctl pool show
+---------------+-------------------+
|   IPv4 CIDR   |      Options      |
+---------------+-------------------+
| 10.0.238.0/24 | ipip,nat-outgoing |
+---------------+-------------------+
+--------------------------+---------+
|        IPv6 CIDR         | Options |
+--------------------------+---------+
| fd80:24e2:f998:72d6::/64 |         |
+--------------------------+---------+
```

(The `--ipip` flag is needed for container-to-container traffic between hosts on different subnets; `--nat-outgoing` is needed if containers should be able to reach external networks.)
```
[root@BBB ~]# calicoctl pool show
+---------------+-------------------+
|   IPv4 CIDR   |      Options      |
+---------------+-------------------+
| 10.0.238.0/24 | ipip,nat-outgoing |
+---------------+-------------------+
+--------------------------+---------+
|        IPv6 CIDR         | Options |
+--------------------------+---------+
| fd80:24e2:f998:72d6::/64 |         |
+--------------------------+---------+
```
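As a quick sanity check of the new pool's boundaries (a sketch using Python's stdlib `ipaddress` module, not a Calico tool): the /24 pool gives us 256 addresses, and the three container IPs assigned later in this article all fall inside it.

```python
import ipaddress

# The pool added with `calicoctl pool add` above.
pool = ipaddress.ip_network("10.0.238.0/24")
print(pool.num_addresses)  # 256

# Container IPs assigned in step 6 of this article must belong to the pool.
for ip in ("10.0.238.1", "10.0.238.2", "10.0.238.3"):
    assert ipaddress.ip_address(ip) in pool
```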
4. Create Calico profiles (analogous to VLANs)
```
[root@AAA ~]# calicoctl profile add p1
Created profile p1
[root@AAA ~]# calicoctl profile add p2
Created profile p2
[root@AAA ~]# calicoctl profile show
+------+
| Name |
+------+
|  p1  |
|  p2  |
+------+
```
```
[root@CCC ~]# calicoctl profile show
+------+
| Name |
+------+
|  p1  |
|  p2  |
+------+
```
5. Start containers with --net=none
```
[root@AAA ~]# docker run -tid --name redis --restart=always --log-driver=none --net=none redis /run.sh
b6d894f4cfcf36f5d19f3798447825730c80e95d1a9f98f326b77fae0ed85277
[root@AAA ~]# docker ps -a
CONTAINER ID  IMAGE                COMMAND                 CREATED         STATUS         PORTS                                                 NAMES
b6d894f4cfcf  redis                "/run.sh"               3 seconds ago   Up 2 seconds                                                         redis
6e754df30834  calico/node:latest   "/sbin/start_runit"     14 minutes ago  Up 14 minutes                                                        calico-node
0b5f487c20ae  quay.io/coreos/etcd  "/etcd -name qf2200-c"  16 minutes ago  Up 16 minutes  4001/tcp, 0.0.0.0:2379-2380->2379-2380/tcp, 7001/tcp  etcd
```
```
[root@BBB ~]# docker run -tid --name redis --restart=always --log-driver=none --net=none redis /run.sh
4de1a0e2b2af5ad6c7f33138161105d46a07ce70d0b90b513125b28390a6a185
[root@BBB ~]# docker ps -a
CONTAINER ID  IMAGE                COMMAND                 CREATED         STATUS         PORTS                                                 NAMES
4de1a0e2b2af  redis                "/run.sh"               2 seconds ago   Up 1 seconds                                                         redis
836bb8208dd9  calico/node:latest   "/sbin/start_runit"     15 minutes ago  Up 15 minutes                                                        calico-node
fa52ef61ccee  quay.io/coreos/etcd  "/etcd -name qf2200-c"  17 minutes ago  Up 17 minutes  4001/tcp, 0.0.0.0:2379-2380->2379-2380/tcp, 7001/tcp  etcd
```
```
[root@CCC ~]# docker run -tid --name redis --restart=always --log-driver=none --net=none redis /run.sh
b6801f99494ada054a8ef00fc5b74ff4aba4e156e506d94c0b781fa20f8b6f50
[root@CCC ~]# docker ps -a
CONTAINER ID  IMAGE                COMMAND                 CREATED         STATUS         PORTS                                                 NAMES
b6801f99494a  redis                "/run.sh"               2 seconds ago   Up 1 seconds                                                         redis
ff71c5939b11  calico/node:latest   "/sbin/start_runit"     15 minutes ago  Up 15 minutes                                                        calico-node
eb29998e8e92  quay.io/coreos/etcd  "/etcd -name qf2200-c"  17 minutes ago  Up 17 minutes  4001/tcp, 0.0.0.0:2379-2380->2379-2380/tcp, 7001/tcp  etcd
```
6. Assign IPs and VLANs (profiles) to the containers
```
[root@AAA ~]# calicoctl container add redis 10.0.238.1
IP 10.0.238.1 added to redis
[root@AAA ~]# docker exec -ti redis ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: tunl0: <NOARP> mtu 0 qdisc noop state DOWN
    link/ipip 0.0.0.0 brd 0.0.0.0
39: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 12:b7:51:07:81:10 brd ff:ff:ff:ff:ff:ff
    inet 10.0.238.1/32 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::10b7:51ff:fe07:8110/64 scope link
       valid_lft forever preferred_lft forever
[root@AAA ~]# calicoctl container redis profile append p1
Profile(s) p1 appended.
```
```
[root@BBB ~]# calicoctl container add redis 10.0.238.2
IP 10.0.238.2 added to redis
[root@BBB ~]# docker exec -ti redis ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: tunl0: <NOARP> mtu 0 qdisc noop state DOWN
    link/ipip 0.0.0.0 brd 0.0.0.0
29: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether ca:19:45:10:19:80 brd ff:ff:ff:ff:ff:ff
    inet 10.0.238.2/32 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::c819:45ff:fe10:1980/64 scope link
       valid_lft forever preferred_lft forever
[root@BBB ~]# calicoctl container redis profile append p1
Profile(s) p1 appended.
[root@BBB ~]# calicoctl container redis profile append p2
Profile(s) p2 appended.
```
```
[root@CCC ~]# calicoctl container add redis 10.0.238.3
IP 10.0.238.3 added to redis
[root@CCC ~]# docker exec -ti redis ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: tunl0: <NOARP> mtu 0 qdisc noop state DOWN
    link/ipip 0.0.0.0 brd 0.0.0.0
25: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether ca:6b:e2:63:44:28 brd ff:ff:ff:ff:ff:ff
    inet 10.0.238.3/32 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::c86b:e2ff:fe63:4428/64 scope link
       valid_lft forever preferred_lft forever
[root@CCC ~]# calicoctl container redis profile append p2
Profile(s) p2 appended.
```
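The container on A is in profile p1, the one on B in p1 and p2, and the one on C in p2. Under the default profile rules, our understanding (an assumption, not taken from Calico source) is that two endpoints can reach each other exactly when they share at least one profile. A small model of that expectation:

```python
# Profile assignments as configured above (names like "redis@A" are just
# labels for this sketch, not Calico identifiers).
profiles = {
    "redis@A": {"p1"},
    "redis@B": {"p1", "p2"},
    "redis@C": {"p2"},
}

def reachable(src: str, dst: str) -> bool:
    """Assumed rule: reachable iff same endpoint or a shared profile."""
    return src == dst or bool(profiles[src] & profiles[dst])

print(reachable("redis@A", "redis@B"))  # True  (share p1)
print(reachable("redis@B", "redis@C"))  # True  (share p2)
print(reachable("redis@A", "redis@C"))  # False (no shared profile)
```

The ping tests in step 8 below match this predicted reachability matrix.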
7. Host and container network topology
8. Testing
```
[root@AAA ~]# docker exec -ti redis /bin/bash
[root@b6d894f4cfcf /]# ping 10.0.238.1    (local container: reachable)
PING 10.0.238.1 (10.0.238.1) 56(84) bytes of data.
64 bytes from 10.0.238.1: icmp_seq=1 ttl=64 time=0.113 ms
64 bytes from 10.0.238.1: icmp_seq=2 ttl=64 time=0.052 ms
^C
--- 10.0.238.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.052/0.082/0.113/0.031 ms
[root@b6d894f4cfcf /]# ping 10.0.238.2    (same VLAN: reachable)
PING 10.0.238.2 (10.0.238.2) 56(84) bytes of data.
64 bytes from 10.0.238.2: icmp_seq=1 ttl=62 time=1.02 ms
64 bytes from 10.0.238.2: icmp_seq=2 ttl=62 time=0.533 ms
^C
--- 10.0.238.2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 0.533/0.776/1.020/0.245 ms
[root@b6d894f4cfcf /]# ping 10.0.238.3    (different VLAN: unreachable)
PING 10.0.238.3 (10.0.238.3) 56(84) bytes of data.
^C
--- 10.0.238.3 ping statistics ---
4 packets transmitted, 0 received, 100% packet loss, time 3022ms
```
```
[root@BBB ~]# docker exec -ti redis /bin/bash
[root@4de1a0e2b2af /]# ping 10.0.238.1    (same VLAN: reachable)
PING 10.0.238.1 (10.0.238.1) 56(84) bytes of data.
64 bytes from 10.0.238.1: icmp_seq=1 ttl=62 time=2.08 ms
64 bytes from 10.0.238.1: icmp_seq=2 ttl=62 time=1.02 ms
^C
--- 10.0.238.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 1.027/1.555/2.084/0.529 ms
[root@4de1a0e2b2af /]# ping 10.0.238.2    (local container: reachable)
PING 10.0.238.2 (10.0.238.2) 56(84) bytes of data.
64 bytes from 10.0.238.2: icmp_seq=1 ttl=64 time=0.154 ms
64 bytes from 10.0.238.2: icmp_seq=2 ttl=64 time=0.066 ms
^C
--- 10.0.238.2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.066/0.110/0.154/0.044 ms
[root@4de1a0e2b2af /]# ping 10.0.238.3    (same VLAN: reachable)
PING 10.0.238.3 (10.0.238.3) 56(84) bytes of data.
64 bytes from 10.0.238.3: icmp_seq=1 ttl=62 time=1.06 ms
64 bytes from 10.0.238.3: icmp_seq=2 ttl=62 time=0.442 ms
^C
--- 10.0.238.3 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.442/0.752/1.062/0.310 ms
```
```
[root@CCC ~]# docker exec -ti redis /bin/bash
[root@b6801f99494a /]# ping 10.0.238.1    (different VLAN: unreachable)
PING 10.0.238.1 (10.0.238.1) 56(84) bytes of data.
^C
--- 10.0.238.1 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2001ms
[root@b6801f99494a /]# ping 10.0.238.2    (same VLAN: reachable)
PING 10.0.238.2 (10.0.238.2) 56(84) bytes of data.
64 bytes from 10.0.238.2: icmp_seq=1 ttl=62 time=0.384 ms
64 bytes from 10.0.238.2: icmp_seq=2 ttl=62 time=0.460 ms
^C
--- 10.0.238.2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1016ms
rtt min/avg/max/mdev = 0.384/0.422/0.460/0.038 ms
[root@b6801f99494a /]# ping 10.0.238.3    (local container: reachable)
PING 10.0.238.3 (10.0.238.3) 56(84) bytes of data.
64 bytes from 10.0.238.3: icmp_seq=1 ttl=64 time=0.055 ms
64 bytes from 10.0.238.3: icmp_seq=2 ttl=64 time=0.054 ms
^C
--- 10.0.238.3 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.054/0.054/0.055/0.007 ms
```