The following steps are performed on node3:
node3# docker run -d -p 8500:8500 --name consul progrium/consul -server -bootstrap

node3# netstat -tnlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      949/master
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      866/sshd
tcp6       0      0 ::1:25                  :::*                    LISTEN      949/master
tcp6       0      0 :::8500                 :::*                    LISTEN      2790/docker-proxy
tcp6       0      0 :::22                   :::*                    LISTEN      866/sshd

Port 8500 is now listening. Open a browser and visit:
http://192.168.56.13:8500
If the Consul UI loads, the work on node3 is done for now.

Next, move on to node1.

node1# vi /etc/docker/daemon.json  

Change it to the following:

{
  "registry-mirrors": ["https://a14c78qe.mirror.aliyuncs.com"],
  "dns": ["192.168.56.2", "8.8.4.4"],
  "data-root": "/data/docker",
  "cluster-store": "consul://192.168.56.13:8500",
  "cluster-advertise": "192.168.56.11:2375"
}
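A malformed daemon.json will stop dockerd from starting at all, so it can be worth validating the file before restarting. A minimal sketch, writing a copy to /tmp purely for illustration (on the node the real file is /etc/docker/daemon.json):

```shell
# Sketch: validate daemon.json syntax before restarting Docker.
# /tmp/daemon.json is used here only for illustration.
cat > /tmp/daemon.json <<'EOF'
{
  "registry-mirrors": ["https://a14c78qe.mirror.aliyuncs.com"],
  "dns": ["192.168.56.2", "8.8.4.4"],
  "data-root": "/data/docker",
  "cluster-store": "consul://192.168.56.13:8500",
  "cluster-advertise": "192.168.56.11:2375"
}
EOF
python3 -m json.tool /tmp/daemon.json >/dev/null && echo "daemon.json OK"
```

The same check applies to node2's file, where only cluster-advertise differs.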

Then remember to edit
vi /usr/lib/systemd/system/docker.service
and change line 14 to: ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock -H tcp://192.168.56.11
This change must be made on both node1 and node2, and the Docker service must be restarted afterwards for it to take effect. (In the last experiment of the networking article, pinging between containers by name on a private network, the failure coincided with this setting having been changed; whether it was actually the cause will be tested later when there is time.)

dockerd --help | grep cluster   # use this command to list the cluster-related options

Then restart the service:

node1# systemctl daemon-reload
node1# systemctl restart docker
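After the restart, `docker info` can confirm that dockerd actually picked up the cluster options. A hedged sketch (the fallback echo only fires on a machine without Docker, or where no cluster options are set):

```shell
# Sketch: check that cluster-store / cluster-advertise took effect.
# On node1 this should print the Cluster Store and Cluster Advertise lines.
docker info 2>/dev/null | grep -i cluster || echo "docker info unavailable"
```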

node1 is now done; switch to node2.

node2# vi /etc/docker/daemon.json  

Change it to the following:

{
  "registry-mirrors": ["https://a14c78qe.mirror.aliyuncs.com"],
  "dns": ["192.168.56.2", "8.8.4.4"],
  "data-root": "/data/docker",
  "cluster-store": "consul://192.168.56.13:8500",
  "cluster-advertise": "192.168.56.12:2375"
}

Then remember to edit
vi /usr/lib/systemd/system/docker.service
and change line 14 to: ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock -H tcp://192.168.56.12


node2# systemctl daemon-reload
node2# systemctl restart docker

node2 is now done.

Next, open a browser and visit http://192.168.56.13:8500/ui/#/dc1/kv/docker/nodes/
If two nodes, 192.168.56.11 and 192.168.56.12, appear on the left, everything is working.
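Besides the UI, the same information can be queried through Consul's HTTP KV API. A sketch assuming the Consul address from this lab (the fallback echo is for machines that cannot reach it):

```shell
# Sketch: list the keys Docker registered under docker/nodes in Consul.
# 192.168.56.13:8500 is this lab's Consul server; adjust as needed.
curl -s --connect-timeout 2 \
  "http://192.168.56.13:8500/v1/kv/docker/nodes/?keys" \
  || echo "consul unreachable from this machine"
```

On a working setup this returns a JSON array containing entries for 192.168.56.11:2375 and 192.168.56.12:2375.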

Now switch back to node1.

node1# docker network create -d overlay ov_net1   # create a new overlay-type network
dcb8e33bb80fe8cde9212af8c2b9171ac8003d08f020f9dcff5c253de802be5d

node1# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
a068acefb7f1        bridge              bridge              local
b3f8fb0d9f71        host                host                local
202ad8a28eb5        my_net2             bridge              local
2ce830695bd5        none                null                local
dcb8e33bb80f        ov_net1             overlay             global   # a new global-scope network appears

Now switch to node2.

node2# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
41368417f49e        bridge              bridge              local
f5eb9cdc223d        host                host                local
55280c8579d3        none                null                local
dcb8e33bb80f        ov_net1             overlay             global   # the global overlay network created a moment ago is visible here too

Now switch back to node1 and start a container.

node1# docker run  -it --rm --network ov_net1 busybox   
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
21: eth0@if22: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue
    link/ether 02:42:0a:00:00:02 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.2/24 brd 10.0.0.255 scope global eth0
       valid_lft forever preferred_lft forever
24: eth1@if25: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 02:42:ac:12:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.18.0.2/16 brd 172.18.255.255 scope global eth1
       valid_lft forever preferred_lft forever
A new interface on the 10.x network has appeared.

Now switch to node2.

node2# docker run  -it --rm --network ov_net1 busybox
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
6: eth0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue
    link/ether 02:42:0a:00:00:03 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.3/24 brd 10.0.0.255 scope global eth0
       valid_lft forever preferred_lft forever
9: eth1@if10: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 02:42:ac:12:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.18.0.2/16 brd 172.18.255.255 scope global eth1
       valid_lft forever preferred_lft forever

The same kind of 10.x interface appears here as well.

At this point the two containers can ping each other's 10.x addresses.

But where does the 172.18.x interface come from?
Open another session on node1 and check:


node1# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
a068acefb7f1        bridge              bridge              local
005a2bc2da66        docker_gwbridge     bridge              local
b3f8fb0d9f71        host                host                local
202ad8a28eb5        my_net2             bridge              local
2ce830695bd5        none                null                local
dcb8e33bb80f        ov_net1             overlay             global


A new network, docker_gwbridge, has appeared. Whenever Docker creates an overlay network, it automatically creates a bridge network called docker_gwbridge (the name may vary) so that containers can communicate with the outside world.
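To see which subnet the gateway bridge uses and which containers are attached to it, the network can be inspected. A sketch (the fallback echo is for machines without Docker):

```shell
# Sketch: inspect the automatically created gateway bridge.
# On node1 this shows its subnet (172.18.0.0/16 in this run)
# and the attached containers.
docker network inspect docker_gwbridge 2>/dev/null \
  || echo "docker not available on this machine"
```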

Inside the busybox container:
/ # route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.18.0.1      0.0.0.0         UG    0      0        0 eth1
10.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 eth0
172.18.0.0      0.0.0.0         255.255.0.0     U     0      0        0 eth1

The container's routing table confirms this: the default route goes out through eth1, the docker_gwbridge side.


One more thing: when creating this kind of overlay network you can also specify the subnet, and when starting a container you can assign it a fixed IP.

# docker network create -d overlay --subnet 10.10.0.0/16 ov_net2
5a7ade4e6aa4b79ff11f6a179a4a8baa9fa3783e4ab497648202a6e2499cb48e



# docker run -it --rm --network ov_net2 --ip 10.10.0.10 busybox
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
27: eth0@if28: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue
    link/ether 02:42:0a:0a:00:0a brd ff:ff:ff:ff:ff:ff
    inet 10.10.0.10/16 brd 10.10.255.255 scope global eth0
       valid_lft forever preferred_lft forever
29: eth1@if30: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 02:42:ac:12:00:03 brd ff:ff:ff:ff:ff:ff
    inet 172.18.0.3/16 brd 172.18.255.255 scope global eth1
       valid_lft forever preferred_lft forever



posted on 2020-02-24 17:05 by wilson'blog