SwarmKit Testing
Official site: https://github.com/docker/swarmkit
Environment:
go-1.7
docker-1.12.0
CentOS 7.2
Manager1: 192.168.8.201
Worker1: 192.168.8.101
Worker2: 192.168.8.102
Worker3: 192.168.8.103

SwarmKit already ranks third among Docker's projects by development investment, which shows how seriously Docker takes it.
The easiest way to try swarmkit is the swarm mode introduced with docker-1.12.0: swarmkit is embedded directly in Docker from 1.12.0 onwards. Interested readers can refer to Docker-1.12 swarm mode. swarmd and swarmctl are mainly development and debugging tools.
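For a quick taste you do not even need the swarmkit binaries, since the embedded swarm mode ships with the standard docker CLI in 1.12+. A minimal sketch, using this article's Manager1 address:

# On Manager1 (192.168.8.201): create the swarm
docker swarm init --advertise-addr 192.168.8.201
# On each Worker: join with the worker token printed by `docker swarm init`
docker swarm join --token <worker-token> 192.168.8.201:2377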
Anyone who has played with k8s will notice that swarmkit borrows many orchestration concepts from k8s, so moving from kubectl to swarmctl is painless, and deployment is far easier than with k8s. Its feature set is not yet as complete as k8s (the project is still young), but with continuous improvement from Docker and the community its prospects look bright; once swarmkit exposes enterprise-grade APIs, I expect quite a few companies to switch to it.
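For readers coming from kubectl, a rough command mapping (the swarmctl side all appears later in this article; the kubectl side assumes a Deployment named redis):

kubectl get nodes                            ->  swarmctl node ls
kubectl get pods                             ->  swarmctl task ls
kubectl describe service redis               ->  swarmctl service inspect redis
kubectl scale deployment redis --replicas=3  ->  swarmctl service update redis --replicas=3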
docker
Note: in theory docker only needs to be installed on the Worker nodes; a Manager is purely a control node and could do without it. But to make it easy to flip nodes between Worker and Manager with swarmctl node demote|promote, I recommend installing docker on every node.
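For example (a sketch; the node names follow the cluster built later in this article):

swarmctl node promote node1.example.com   # Worker -> Manager
swarmctl node demote node1.example.com    # Manager -> Worker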
swarmkit
I. Configure the Go environment
mkdir /var/tmp/go
sudo tee -a /etc/profile <<'HERE'
export GOROOT=/opt/go
export GOPATH=/var/tmp/go
export PATH=$GOROOT/bin:$PATH
HERE
source /etc/profile
Note: Go is installed under /opt/go here. The key variables are GOROOT (the Go install path) and GOPATH (where your Go projects live; pick any location you like).
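If Go is not yet installed, a minimal sketch of unpacking go-1.7 under /opt (the tarball name follows Google's download naming for Go 1.7; verify the URL before use):

curl -LO https://storage.googleapis.com/golang/go1.7.linux-amd64.tar.gz
tar -C /opt -xzf go1.7.linux-amd64.tar.gz   # creates /opt/go, matching GOROOT above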
root@router:swarmkit#go version
go version go1.7 linux/amd64
II. Install swarmkit
A. Binary packages
B. Build from source
Automatic build:
go get github.com/docker/swarmkit
cd $GOPATH/src/github.com/docker/swarmkit
make binaries
Or build manually (see https://golang.org/doc/code.html):
mkdir -p $GOPATH/src/github.com/docker
git clone https://github.com/docker/swarmkit.git
mv swarmkit $GOPATH/src/github.com/docker
cd $GOPATH/src/github.com/docker/swarmkit
make binaries
root@router:swarmkit#pwd
/var/tmp/go/src/github.com/docker/swarmkit
root@router:swarmkit#ls
agent/  api/  BUILDING.md  ca/  circle.yml  cli/  cmd/  codecov.yml
CONTRIBUTING.md  design/  doc.go  Godeps/  identity/  ioutils/  LICENSE  log/
MAINTAINERS  Makefile  manager/  protobuf/  README.md  remotes/  vendor/  version/
root@router:swarmkit#make binaries
🐳 bin/swarmd
🐳 bin/swarmctl
🐳 bin/swarm-bench
🐳 bin/protoc-gen-gogoswarm
🐳 binaries
After the build finishes, swarmd, swarmctl and the other binaries land in the bin directory.
root@router:swarmkit#ls bin/
protoc-gen-gogoswarm*  swarm-bench*  swarmctl*  swarmd*
root@router:swarmkit#cp -a bin/* /usr/local/bin/
root@router:swarmkit#swarmd -v
swarmd github.com/docker/swarmkit v1.12.0-381-g3be4c3f
Sync these binaries to a directory on each node's PATH; here I put them in /usr/local/bin on every node.
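For example, a small sketch that pushes the two binaries to the three Workers over ssh (IPs from the environment above; adjust the user and target path as needed):

for n in 192.168.8.101 192.168.8.102 192.168.8.103; do
    scp bin/swarmd bin/swarmctl root@${n}:/usr/local/bin/
done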
III. Configure the swarmkit cluster
Manager
Manager1: 192.168.8.201
swarmd -d /tmp/${HOSTNAME} --listen-control-api /tmp/${HOSTNAME}/swarm.sock --hostname ${HOSTNAME}
Check the join tokens: a node joining as a Worker uses the Worker token, and a node joining as a Manager uses the Manager token. In my actual tests, though, adding additional Manager nodes to the swarm failed: whether joined directly as Managers or promoted from Workers, their status ended up UNKNOWN. This needs further investigation.
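For reference, a Manager join would take the same form as the Worker join shown further below, only with the Manager token printed by swarmctl cluster inspect (a sketch; <Manager-Token> is a placeholder):

swarmd -d /tmp/${HOSTNAME} --hostname ${HOSTNAME} --join-addr 192.168.8.254:4242 --join-token <Manager-Token>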
[root@node4 ~]# netstat -tunlp|grep swarmd
tcp6       0      0 :::4242      :::*      LISTEN      2617/swarmd
[root@node4 ~]# export SWARM_SOCKET=/tmp/${HOSTNAME}/swarm.sock
[root@node4 ~]# swarmctl cluster inspect default
ID          : 7xq6gmnrupvulamznbikk2vu2
Name        : default
Orchestration settings:
  Task history entries: 5
Dispatcher settings:
  Dispatcher heartbeat period: 5s
Certificate Authority settings:
  Certificate Validity Duration: 2160h0m0s
  Join Tokens:
    Worker: SWMTKN-1-2p2zwgpu4v6qxhqugwevbbctaj3ody14cla5pufrggs4fne7wt-0tyu8kedqevjol14z0vl9mjp5
    Manager: SWMTKN-1-2p2zwgpu4v6qxhqugwevbbctaj3ody14cla5pufrggs4fne7wt-650ehhda8yzpdb1x3duw6fxia
Worker
Worker1: 192.168.8.101
Worker2: 192.168.8.102
Worker3: 192.168.8.103
swarmd -d /tmp/${HOSTNAME} --hostname ${HOSTNAME} --join-addr 192.168.8.254:4242 --join-token SWMTKN-1-31hzks4sz09wkqher45qg4zugxfjgenwa4xg2g9kcr59eflgui-4ai7b8o6dlybycd9ke8cx930a
Once the Worker nodes come up, you can see every node's status:
[root@node4 ~]# swarmctl node ls
ID                         Name               Membership  Status  Availability  Manager Status
--                         ----               ----------  ------  ------------  --------------
03h6uy6tv2mugqq4imwx7jdrw  node1.example.com  ACCEPTED    READY   ACTIVE
3xzh1g4pu6fge3r4v4e5d7x8k  node4.example.com  ACCEPTED    READY   ACTIVE        REACHABLE *
brb7n7u1zs0l7c0iepnp6cr5t  node3.example.com  ACCEPTED    READY   ACTIVE
c8ff0a68y545th5zq3bdke70e  node2.example.com  ACCEPTED    READY   ACTIVE
IV. Manage the swarm
1. Create a service
[root@node4 ~]# swarmctl service create --name redis --image 192.168.8.254:5000/redis
4aaw6z00fp1e31z2ju7bp0r94
[root@node4 ~]# swarmctl service ls
ID                         Name   Image                     Replicas
--                         ----   -----                     --------
4aaw6z00fp1e31z2ju7bp0r94  redis  192.168.8.254:5000/redis  1/1
2. Update (scale up or down) a service
[root@node4 ~]# swarmctl service update redis --replicas=3
4aaw6z00fp1e31z2ju7bp0r94
[root@node4 ~]# swarmctl service inspect redis
ID       : 4aaw6z00fp1e31z2ju7bp0r94
Name     : redis
Replicas : 3/3
Template
 Container
  Image  : 192.168.8.254:5000/redis

Task ID                    Service  Slot  Image                     Desired State  Last State             Node
-------                    -------  ----  -----                     -------------  ----------             ----
81z5ajocn0icz0qmypt192ohj  redis    1     192.168.8.254:5000/redis  RUNNING        RUNNING 1 minute ago   node4.example.com
8l2q9mqc4zu0dzgiv0vga0567  redis    2     192.168.8.254:5000/redis  RUNNING        RUNNING 3 seconds ago  node2.example.com
duvetwbzsw9u9cjsdcz6h23xv  redis    3     192.168.8.254:5000/redis  RUNNING        RUNNING 3 seconds ago  node1.example.com
3. Zero-downtime node maintenance
[root@node4 ~]# swarmctl task ls
ID                         Service  Desired State  Last State             Node
--                         -------  -------------  ----------             ----
81z5ajocn0icz0qmypt192ohj  redis.1  RUNNING        RUNNING 2 minutes ago  node4.example.com
8l2q9mqc4zu0dzgiv0vga0567  redis.2  RUNNING        RUNNING 1 minute ago   node2.example.com
duvetwbzsw9u9cjsdcz6h23xv  redis.3  RUNNING        RUNNING 1 minute ago   node1.example.com
[root@node4 ~]# swarmctl node pause node1.example.com
[root@node4 ~]# swarmctl node drain node1.example.com
pause  # marks the node as not accepting new tasks; when new containers need to run, they will not be assigned to a paused node
drain  # on top of refusing new tasks, live-migrates the containers already on the node to other available nodes
[root@node4 ~]# swarmctl node ls
ID                         Name               Membership  Status  Availability  Manager Status
--                         ----               ----------  ------  ------------  --------------
03h6uy6tv2mugqq4imwx7jdrw  node1.example.com  ACCEPTED    READY   DRAIN
3xzh1g4pu6fge3r4v4e5d7x8k  node4.example.com  ACCEPTED    READY   ACTIVE        REACHABLE *
brb7n7u1zs0l7c0iepnp6cr5t  node3.example.com  ACCEPTED    READY   ACTIVE
c8ff0a68y545th5zq3bdke70e  node2.example.com  ACCEPTED    READY   ACTIVE
[root@node4 ~]# swarmctl task ls
ID                         Service  Desired State  Last State              Node
--                         -------  -------------  ----------              ----
81z5ajocn0icz0qmypt192ohj  redis.1  RUNNING        RUNNING 4 minutes ago   node4.example.com
8l2q9mqc4zu0dzgiv0vga0567  redis.2  RUNNING        RUNNING 2 minutes ago   node2.example.com
cc3577waq1n9z4hglyn1k55by  redis.3  RUNNING        RUNNING 54 seconds ago  node3.example.com
All the containers on node1 have been migrated live to other swarm nodes. node activate brings a drained node back as a schedulable cluster member:
[root@node4 ~]# swarmctl node activate node1.example.com
[root@node4 ~]# swarmctl node ls
ID                         Name               Membership  Status  Availability  Manager Status
--                         ----               ----------  ------  ------------  --------------
03h6uy6tv2mugqq4imwx7jdrw  node1.example.com  ACCEPTED    READY   ACTIVE
3xzh1g4pu6fge3r4v4e5d7x8k  node4.example.com  ACCEPTED    READY   ACTIVE        REACHABLE *
brb7n7u1zs0l7c0iepnp6cr5t  node3.example.com  ACCEPTED    READY   ACTIVE
c8ff0a68y545th5zq3bdke70e  node2.example.com  ACCEPTED    READY   ACTIVE