1. Network types
1.1 link (no longer recommended)
- link: --link <name or ID of the container to link>
- With the link mechanism, a container can talk to the target container by a given name; under the hood this is implemented by adding a name-to-IP resolution entry to the container's /etc/hosts.
# Create two containers
[root@docker-node01 ~]# docker run -dit --env FLAG=01 --name busybox01 busybox # create busybox01 with the env var FLAG=01
[root@docker-node01 ~]# docker run -dit --name busybox02 --link busybox01 busybox # create busybox02 linked to busybox01
# busybox02 can reach busybox01 (one-way)
[root@docker-node01 ~]# docker exec -it busybox02 ping busybox01
PING busybox01 (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.084 ms
# The name-to-IP entry was written to /etc/hosts automatically
[root@docker-node01 ~]# docker exec -it busybox02 cat /etc/hosts
...
172.17.0.2 busybox01 c21093078f69
# busybox01's environment variables are exposed to busybox02
[root@docker-node01 ~]# docker exec -it busybox02 env
...
BUSYBOX01_NAME=/busybox02/busybox01
BUSYBOX01_ENV_FLAG=01
# busybox01 cannot reach busybox02 (the link is one-way)
[root@docker-node01 ~]# docker exec -it busybox01 ping busybox02
ping: bad address 'busybox02'
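--link also accepts a name:alias form, in which the alias becomes an additional resolvable name. A minimal sketch (the container name busybox03 and the alias bb1 below are made up for illustration):
# Link busybox01 under the alias bb1; busybox03 can then reach it by that alias
[root@docker-node01 ~]# docker run -dit --name busybox03 --link busybox01:bb1 busybox
[root@docker-node01 ~]# docker exec -it busybox03 ping -c 1 bb1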
1.2 container (no longer recommended)
- container: --net container:<name or ID of the container to join>
- The new container shares a Network Namespace with an existing container rather than with the host. It does not create its own NIC or configure its own IP; instead it shares the specified container's IP, port range, and so on.
- Apart from networking, the two containers are still isolated in other respects such as the filesystem and process list. Processes in the two containers can communicate over the lo device.
# Create one container on the default network and one using container networking
[root@docker-node01 ~]# docker run -dit --name busybox01 busybox
[root@docker-node01 ~]# docker run -dit --name busybox02 --net container:busybox01 busybox
# The two containers share one IP
[root@docker-node01 ~]# for num in {01..02};do docker exec -it busybox$num ifconfig | grep 'inet addr' | head -1 ;done
inet addr:172.17.0.2 Bcast:172.17.255.255 Mask:255.255.0.0
inet addr:172.17.0.2 Bcast:172.17.255.255 Mask:255.255.0.0
# The two containers share the port range
[root@docker-node01 ~]# docker exec -it busybox01 nohup nc -lp 666 &
[root@docker-node01 ~]# docker exec -it busybox01 netstat -anpt | grep 666
tcp 0 0 :::666 :::* LISTEN
[root@docker-node01 ~]# docker exec -it busybox02 netstat -ant | grep 666
tcp 0 0 :::666 :::* LISTEN
# Their process lists are still isolated
[root@docker-node01 ~]# docker exec -it busybox01 ps -ef | grep nc
28 root 0:00 nc -lp 666
[root@docker-node01 ~]# docker exec -it busybox02 ps -ef | grep nc | wc -l
0
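Since the two containers share one network namespace, a process listening in busybox01 is reachable from busybox02 over the loopback device; a minimal sketch (port 8080 is an arbitrary choice for this example):
# Start a listener in busybox01, then connect to it from busybox02 via 127.0.0.1
[root@docker-node01 ~]# docker exec -d busybox01 nc -lp 8080
[root@docker-node01 ~]# docker exec -it busybox02 sh -c 'echo hello | nc 127.0.0.1 8080'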
1.3 none
- none: --net none
- The container has its own Network Namespace, but Docker performs no network configuration at all: the container has no NIC, no IP, and no routes. You must add a NIC and configure an IP for the container yourself.
- In this mode the container has only the lo loopback interface and no other NIC, so it cannot reach the outside network at all; this closed network is a good way to guarantee the container's security.
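A quick way to confirm what a none-network container actually gets (the name busybox-none is chosen just for this sketch):
# A none-network container should list only the lo loopback device
[root@docker-node01 ~]# docker run -dit --net none --name busybox-none busybox
[root@docker-node01 ~]# docker exec -it busybox-none ip a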
1.4 host
- host: --net host
- Shares a Network Namespace with the host.
- The container does not get a virtual NIC of its own or configure its own IP; it uses the host's IP and ports directly. Other aspects of the container, such as the filesystem and process list, remain isolated from the host.
# Create two host-network containers
[root@docker-node01 ~]# docker run -dit --net host --name busybox01 busybox
[root@docker-node01 ~]# docker run -dit --net host --name busybox02 busybox
# They share the host's ports and the host's IP
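A way to see this sharing in action: a listener started inside a host-network container shows up in the host's own socket table (port 8888 below is an arbitrary choice for this sketch):
# Start a listener inside the container, then look for it in the HOST's sockets
[root@docker-node01 ~]# docker exec -d busybox01 nc -lp 8888
[root@docker-node01 ~]# ss -lnt | grep 8888 # visible on the host with no port mapping involved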
1.5 bridge / custom networks (recommended)
- bridge: the default mode; a container started without a network option uses bridge
- Docker creates a virtual bridge named docker0 on the host, and every container started on this host is connected to it (Docker creates a veth pair on the host, places one end inside the new container as its eth0 NIC, and leaves the other end on the host under a name like vethxxx, attached to the docker0 bridge; this can be inspected with brctl show)
- The virtual bridge behaves like a physical switch, so all containers on the host are joined into one layer-2 network through it
- Each container is assigned an IP from docker0's subnet, and docker0's IP address is set as the container's default gateway
[root@docker-node01 ~]# yum install bridge-utils -y
...
[root@docker-node01 ~]# brctl show
bridge name bridge id STP enabled interfaces
docker0 8000.0242d2e9b418 no vethd939d3e
vethef340a1
# Create two containers on the default network
[root@docker-node01 ~]# docker run -dit --name=busybox01 busybox
[root@docker-node01 ~]# docker run -dit --name=busybox02 busybox
# Check the IPs of busybox01 and busybox02
[root@docker-node01 ~]# for num in {01..02};do docker exec -it busybox$num ifconfig | grep 'inet addr' | head -1 ;done
inet addr:172.17.0.2 Bcast:172.17.255.255 Mask:255.255.0.0 # busybox01's IP
inet addr:172.17.0.3 Bcast:172.17.255.255 Mask:255.255.0.0 # busybox02's IP
# The two containers can reach each other by IP
[root@docker-node01 ~]# docker exec -it busybox01 ping 172.17.0.3
PING 172.17.0.3 (172.17.0.3): 56 data bytes
64 bytes from 172.17.0.3: seq=0 ttl=64 time=0.094 ms
...
[root@docker-node01 ~]# docker exec -it busybox02 ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.082 ms
...
# But they cannot communicate by container name
[root@docker-node01 ~]# docker exec -it busybox01 ping busybox02
ping: bad address 'busybox02'
[root@docker-node01 ~]# docker exec -it busybox02 ping busybox01
ping: bad address 'busybox01'
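The docker0 bridge's subnet, gateway, and attached containers can also be viewed directly with a built-in command:
# Inspect the default bridge network; its Containers section lists busybox01 and
# busybox02 with the 172.17.0.x addresses seen above
[root@docker-node01 ~]# docker network inspect bridge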
2. Custom-subnet networks
Custom networks are based on the bridge driver.
- Usage: docker network create [OPTIONS] NETWORK
- Compared with the default bridge network, they add the ability for containers to communicate by container name.
# Create a custom network
[root@docker-node01 ~]# docker network create --driver bridge --subnet 172.18.0.0/16 --gateway 172.18.0.1 mybridge # --driver bridge can be omitted; bridge is the default driver
# Inspect the newly created network
[root@docker-node01 ~]# docker network ls |grep mybridge
3f4358fcb4f9 mybridge bridge local
[root@docker-node01 ~]# ip a
...
inet 172.18.0.1/16 brd 172.18.255.255 scope
...
# Create two containers on the custom network
[root@docker-node01 ~]# docker run -dit --net mybridge --name busybox01 busybox
[root@docker-node01 ~]# docker run -dit --net mybridge --name busybox02 busybox
# Check the containers' IPs
[root@docker-node01 ~]# for num in {01..02};do docker exec -it busybox$num ifconfig | grep 'inet addr' | head -1 ;done
inet addr:172.18.0.2 Bcast:172.18.255.255 Mask:255.255.0.0
inet addr:172.18.0.3 Bcast:172.18.255.255 Mask:255.255.0.0
# On a network you created yourself, containers can reach each other both by container name and by IP
[root@docker-node01 ~]# docker exec -it busybox01 ping 172.18.0.3
PING 172.18.0.3 (172.18.0.3): 56 data bytes
64 bytes from 172.18.0.3: seq=0 ttl=64 time=0.127 ms
...
[root@docker-node01 ~]# docker exec -it busybox01 ping busybox02
PING busybox02 (172.18.0.3): 56 data bytes
64 bytes from 172.18.0.3: seq=0 ttl=64 time=0.063 ms
...
[root@docker-node01 ~]# docker exec -it busybox02 ping busybox01
PING busybox01 (172.18.0.2): 56 data bytes
64 bytes from 172.18.0.2: seq=0 ttl=64 time=0.054 ms
...
[root@docker-node01 ~]# docker exec -it busybox02 ping 172.18.0.2
PING 172.18.0.2 (172.18.0.2): 56 data bytes
64 bytes from 172.18.0.2: seq=0 ttl=64 time=0.112 ms
...
# The container can also reach the custom network's gateway (172.18.0.1)
[root@docker-node01 ~]# docker exec -it busybox01 ping 172.18.0.1
PING 172.18.0.1 (172.18.0.1): 56 data bytes
64 bytes from 172.18.0.1: seq=0 ttl=64 time=0.065 ms
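The name resolution shown above is provided by Docker's embedded DNS server, which containers on user-defined networks are pointed at through /etc/resolv.conf:
# Containers on user-defined networks resolve each other via Docker's embedded
# DNS server (127.0.0.11), not via static /etc/hosts entries
[root@docker-node01 ~]# docker exec -it busybox01 cat /etc/resolv.conf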
3. Communication between the default bridge and a custom network
- Connect a container to a given network:
Usage: docker network connect [OPTIONS] NETWORK CONTAINER
- Disconnect a container from a given network:
Usage: docker network disconnect [OPTIONS] NETWORK CONTAINER
# Create two containers on the default bridge network
[root@docker-node01 ~]# docker run -dit --name=busybox01 busybox
[root@docker-node01 ~]# docker run -dit --name=busybox02 busybox
# Create two containers on a custom bridge network
[root@docker-node01 ~]# docker network create --driver bridge --subnet 172.18.0.0/16 --gateway 172.18.0.1 mybridge
[root@docker-node01 ~]# docker run -dit --net mybridge --name busybox03 busybox
[root@docker-node01 ~]# docker run -dit --net mybridge --name busybox04 busybox
# Attach busybox01 (on the default bridge network) to the custom network mybridge
[root@docker-node01 ~]# docker network connect mybridge busybox01
[root@docker-node01 ~]# docker exec -it busybox01 ip a |grep inet
inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0 # busybox01's original IP
inet 172.18.0.4/16 brd 172.18.255.255 scope global eth1 # busybox01's new IP on the custom network mybridge
# busybox01 can now communicate by IP with busybox02 on its original default bridge network, and by both IP and container name with busybox03 and busybox04 on the mybridge network
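The attachment can be undone with the matching disconnect subcommand, which removes the extra interface again:
# Detach busybox01 from mybridge; its eth1 interface (172.18.0.4) disappears
[root@docker-node01 ~]# docker network disconnect mybridge busybox01
[root@docker-node01 ~]# docker exec -it busybox01 ip a |grep inet # only the eth0 address remains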
4. Example: deploying a Redis cluster
[root@docker-node01 ~]# docker network create redis --subnet 172.19.0.0/16
# Script to generate the redis config files
"""
#!/bin/bash
DATA_PATH="/data/redis"
for port in {1..6};do
mkdir -p $DATA_PATH/node-${port}/conf
touch $DATA_PATH/node-${port}/conf/redis.conf
cat >> $DATA_PATH/node-${port}/conf/redis.conf << EOF
port 6379
bind 0.0.0.0
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
cluster-announce-ip 172.19.0.1${port}
cluster-announce-port 6379
cluster-announce-bus-port 16379
appendonly yes
EOF
done
"""
# Script to start six redis-server containers
"""
#!/bin/bash
for num in {1..6};do
docker run -p 637${num}:6379 -p 1637${num}:16379 --name redis-${num} \
-v /data/redis/node-${num}/data:/data \
-v /data/redis/node-${num}/conf/redis.conf:/etc/redis/redis.conf \
-d --net redis --ip 172.19.0.1${num} \
redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf
done
"""
# Create the redis cluster
"""
[root@docker-node01 ~]# docker exec -it redis-1 /bin/sh
/data # redis-cli --cluster create 172.19.0.11:6379 172.19.0.12:6379 172.19.0.13:6379 172.19.0.14:6379 172.19.0.15:6379 172.19.0.16:6379 --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 172.19.0.15:6379 to 172.19.0.11:6379
Adding replica 172.19.0.16:6379 to 172.19.0.12:6379
Adding replica 172.19.0.14:6379 to 172.19.0.13:6379
M: 9f81174241206871ac1fbb8d6d527b78998d45ad 172.19.0.11:6379
slots:[0-5460] (5461 slots) master
M: 1e027e88b2a381694a22707cb3c490119b85a1b2 172.19.0.12:6379
slots:[5461-10922] (5462 slots) master
M: 94a4a409108c7c07db4374adb5a9644383da9a67 172.19.0.13:6379
slots:[10923-16383] (5461 slots) master
S: 707485cde3fcec45f6b2c274a064596a179c68ec 172.19.0.14:6379
replicates 94a4a409108c7c07db4374adb5a9644383da9a67
S: 0b05e6dcac5a7e040ca8b9e888200a3621343a38 172.19.0.15:6379
replicates 9f81174241206871ac1fbb8d6d527b78998d45ad
S: a9434352947043771af7771fd41ac54a010768c0 172.19.0.16:6379
replicates 1e027e88b2a381694a22707cb3c490119b85a1b2
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
..
>>> Performing Cluster Check (using node 172.19.0.11:6379)
M: 9f81174241206871ac1fbb8d6d527b78998d45ad 172.19.0.11:6379
slots:[0-5460] (5461 slots) master
1 additional replica(s)
M: 94a4a409108c7c07db4374adb5a9644383da9a67 172.19.0.13:6379
slots:[10923-16383] (5461 slots) master
1 additional replica(s)
S: 707485cde3fcec45f6b2c274a064596a179c68ec 172.19.0.14:6379
slots: (0 slots) slave
replicates 94a4a409108c7c07db4374adb5a9644383da9a67
S: 0b05e6dcac5a7e040ca8b9e888200a3621343a38 172.19.0.15:6379
slots: (0 slots) slave
replicates 9f81174241206871ac1fbb8d6d527b78998d45ad
M: 1e027e88b2a381694a22707cb3c490119b85a1b2 172.19.0.12:6379
slots:[5461-10922] (5462 slots) master
1 additional replica(s)
S: a9434352947043771af7771fd41ac54a010768c0 172.19.0.16:6379
slots: (0 slots) slave
replicates 1e027e88b2a381694a22707cb3c490119b85a1b2
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
"""
# Inspect the cluster
"""
/data # redis-cli -c
127.0.0.1:6379> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:145
cluster_stats_messages_pong_sent:135
cluster_stats_messages_sent:280
cluster_stats_messages_ping_received:130
cluster_stats_messages_pong_received:145
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:280
127.0.0.1:6379> cluster nodes
94a4a409108c7c07db4374adb5a9644383da9a67 172.19.0.13:6379@16379 master - 0 1650702253568 3 connected 10923-16383
707485cde3fcec45f6b2c274a064596a179c68ec 172.19.0.14:6379@16379 slave 94a4a409108c7c07db4374adb5a9644383da9a67 0 1650702253000 4 connected
0b05e6dcac5a7e040ca8b9e888200a3621343a38 172.19.0.15:6379@16379 slave 9f81174241206871ac1fbb8d6d527b78998d45ad 0 1650702252560 5 connected
1e027e88b2a381694a22707cb3c490119b85a1b2 172.19.0.12:6379@16379 master - 0 1650702253467 2 connected 5461-10922
9f81174241206871ac1fbb8d6d527b78998d45ad 172.19.0.11:6379@16379 myself,master - 0 1650702253000 1 connected 0-5460
a9434352947043771af7771fd41ac54a010768c0 172.19.0.16:6379@16379 slave 1e027e88b2a381694a22707cb3c490119b85a1b2 0 1650702252460 6 connected
"""
# Verify the cluster's high availability
"""
127.0.0.1:6379> set a b # set a key
-> Redirected to slot [15495] located at 172.19.0.13:6379 # the key landed on 172.19.0.13:6379 (redis-3)
OK
[root@docker-node01 ~]# docker stop redis-3 # stop redis-3, the master holding the key
redis-3
[root@docker-node01 ~]# docker exec -it redis-2 /bin/sh
/data # redis-cli -c
127.0.0.1:6379> get a
-> Redirected to slot [15495] located at 172.19.0.14:6379 # the value is still served by the cluster, now from the replica
"b"
"""