Kubernetes (k8s): the GlusterFS distributed file system

1. Why use the GlusterFS distributed file system?

A: NFS is a very simple way to provide persistent shared storage, but it becomes inconvenient to scale and manage later on. Production environments generally use a distributed file system instead; here we use GlusterFS.

2. What is the GlusterFS distributed file system?

A: GlusterFS is an open-source distributed file system with strong horizontal scalability: it can aggregate petabytes of storage capacity and serve thousands of clients, joining machines over the network into a single parallel network file system. It is scalable, high-performance, and highly available.

3. Installing GlusterFS. It must be installed on every node, as follows.

First, install the GlusterFS yum repository on all three nodes:

[root@k8s-master ~]# yum install centos-release-gluster -y
Loaded plugins: fastestmirror, langpacks, product-id, search-disabled-repos, subscription-manager

This system is not registered with an entitlement server. You can use subscription-manager to register.

Loading mirror speeds from cached hostfile
 * base: mirrors.bfsu.edu.cn
 * extras: mirrors.bfsu.edu.cn
 * updates: mirrors.bfsu.edu.cn
base                                                                     | 3.6 kB  00:00:00
extras                                                                   | 2.9 kB  00:00:00
updates                                                                  | 2.9 kB  00:00:00
Resolving Dependencies
--> Running transaction check
---> Package centos-release-gluster7.noarch 0:1.0-2.el7.centos will be installed
--> Processing Dependency: centos-release-storage-common for package: centos-release-gluster7-1.0-2.el7.centos.noarch
--> Running transaction check
---> Package centos-release-storage-common.noarch 0:2-2.el7.centos will be installed
--> Finished Dependency Resolution

Dependencies Resolved

=============================================================================================
 Package                              Arch        Version               Repository      Size
=============================================================================================
Installing:
 centos-release-gluster7              noarch      1.0-2.el7.centos      extras         5.2 k
Installing for dependencies:
 centos-release-storage-common        noarch      2-2.el7.centos        extras         5.1 k

Transaction Summary
=============================================================================================
Install  1 Package (+1 Dependent package)

Total download size: 10 k
Installed size: 2.4 k
Downloading packages:
(1/2): centos-release-gluster7-1.0-2.el7.centos.noarch.rpm               | 5.2 kB  00:00:00
(2/2): centos-release-storage-common-2-2.el7.centos.noarch.rpm           | 5.1 kB  00:00:00
---------------------------------------------------------------------------------------------
Total                                                           13 kB/s |  10 kB  00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : centos-release-storage-common-2-2.el7.centos.noarch                       1/2
  Installing : centos-release-gluster7-1.0-2.el7.centos.noarch                           2/2
  Verifying  : centos-release-gluster7-1.0-2.el7.centos.noarch                           1/2
  Verifying  : centos-release-storage-common-2-2.el7.centos.noarch                       2/2

Installed:
  centos-release-gluster7.noarch 0:1.0-2.el7.centos

Dependency Installed:
  centos-release-storage-common.noarch 0:2-2.el7.centos

Complete!
[root@k8s-master ~]# 

The repo directory now contains some new files; the default is CentOS-Gluster-7.repo, i.e. the 7.x series:

[root@k8s-master ~]# ls /etc/yum.repos.d/
CentOS-Base.repo  CentOS-CR.repo  CentOS-Debuginfo.repo  CentOS-fasttrack.repo  CentOS-Gluster-7.repo  CentOS-Media.repo  CentOS-Sources.repo  CentOS-Storage-common.repo  CentOS-Vault.repo
[root@k8s-master ~]# 

Then install the glusterfs-server package on all three nodes; this is the key package:

[root@k8s-master ~]# yum install glusterfs-server -y
Loaded plugins: fastestmirror, langpacks, product-id, search-disabled-repos, subscription-manager

This system is not registered with an entitlement server. You can use subscription-manager to register.

Loading mirror speeds from cached hostfile
 * base: mirrors.bfsu.edu.cn
 * centos-gluster7: mirrors.tuna.tsinghua.edu.cn
 * extras: mirrors.bfsu.edu.cn
 * updates: mirrors.bfsu.edu.cn
centos-gluster7                                                          | 3.0 kB  00:00:00
centos-gluster7/7/x86_64/primary_db                                      |  69 kB  00:00:00
Resolving Dependencies
--> Running transaction check
---> Package glusterfs-server.x86_64 0:7.6-1.el7 will be installed
--> Processing Dependency: glusterfs = 7.6-1.el7 for package: glusterfs-server-7.6-1.el7.x86_64
--> Processing Dependency: glusterfs-api = 7.6-1.el7 for package: glusterfs-server-7.6-1.el7.x86_64
--> Processing Dependency: glusterfs-cli = 7.6-1.el7 for package: glusterfs-server-7.6-1.el7.x86_64
--> Processing Dependency: glusterfs-client-xlators = 7.6-1.el7 for package: glusterfs-server-7.6-1.el7.x86_64
--> Processing Dependency: glusterfs-fuse = 7.6-1.el7 for package: glusterfs-server-7.6-1.el7.x86_64
--> Processing Dependency: glusterfs-libs = 7.6-1.el7 for package: glusterfs-server-7.6-1.el7.x86_64
--> Processing Dependency: libgfapi.so.0(GFAPI_6.0)(64bit) for package: glusterfs-server-7.6-1.el7.x86_64
--> Processing Dependency: libgfapi.so.0(GFAPI_PRIVATE_6.0)(64bit) for package: glusterfs-server-7.6-1.el7.x86_64
--> Processing Dependency: libgfapi.so.0(GFAPI_PRIVATE_6.1)(64bit) for package: glusterfs-server-7.6-1.el7.x86_64
--> Processing Dependency: liburcu-bp.so.6()(64bit) for package: glusterfs-server-7.6-1.el7.x86_64
--> Processing Dependency: liburcu-cds.so.6()(64bit) for package: glusterfs-server-7.6-1.el7.x86_64
--> Running transaction check
---> Package glusterfs.x86_64 0:3.12.2-18.el7 will be updated
---> Package glusterfs.x86_64 0:7.6-1.el7 will be an update
---> Package glusterfs-api.x86_64 0:3.12.2-18.el7 will be updated
---> Package glusterfs-api.x86_64 0:7.6-1.el7 will be an update
---> Package glusterfs-cli.x86_64 0:3.12.2-18.el7 will be updated
---> Package glusterfs-cli.x86_64 0:7.6-1.el7 will be an update
---> Package glusterfs-client-xlators.x86_64 0:3.12.2-18.el7 will be updated
---> Package glusterfs-client-xlators.x86_64 0:7.6-1.el7 will be an update
---> Package glusterfs-fuse.x86_64 0:7.6-1.el7 will be installed
---> Package glusterfs-libs.x86_64 0:3.12.2-18.el7 will be updated
---> Package glusterfs-libs.x86_64 0:7.6-1.el7 will be an update
---> Package userspace-rcu.x86_64 0:0.10.0-3.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

=============================================================================================
 Package                          Arch        Version           Repository             Size
=============================================================================================
Installing:
 glusterfs-server                 x86_64      7.6-1.el7         centos-gluster7       1.3 M
Installing for dependencies:
 glusterfs-fuse                   x86_64      7.6-1.el7         centos-gluster7       156 k
 userspace-rcu                    x86_64      0.10.0-3.el7      centos-gluster7        93 k
Updating for dependencies:
 glusterfs                        x86_64      7.6-1.el7         centos-gluster7       640 k
 glusterfs-api                    x86_64      7.6-1.el7         centos-gluster7       114 k
 glusterfs-cli                    x86_64      7.6-1.el7         centos-gluster7       198 k
 glusterfs-client-xlators         x86_64      7.6-1.el7         centos-gluster7       850 k
 glusterfs-libs                   x86_64      7.6-1.el7         centos-gluster7       425 k

Transaction Summary
=============================================================================================
Install  1 Package  (+2 Dependent packages)
Upgrade             ( 5 Dependent packages)

Total download size: 3.7 M
Downloading packages:
No Presto metadata available for centos-gluster7
warning: /var/cache/yum/x86_64/7/centos-gluster7/packages/glusterfs-cli-7.6-1.el7.x86_64.rpm: Header V4 RSA/SHA1 Signature, key ID e451e5b5: NOKEY
Public key for glusterfs-cli-7.6-1.el7.x86_64.rpm is not installed
(1/8): glusterfs-cli-7.6-1.el7.x86_64.rpm                                | 198 kB  00:00:00
(2/8): glusterfs-api-7.6-1.el7.x86_64.rpm                                | 114 kB  00:00:00
(3/8): glusterfs-libs-7.6-1.el7.x86_64.rpm                               | 425 kB  00:00:00
(4/8): glusterfs-fuse-7.6-1.el7.x86_64.rpm                               | 156 kB  00:00:01
(5/8): glusterfs-client-xlators-7.6-1.el7.x86_64.rpm                     | 850 kB  00:00:03
(6/8): glusterfs-7.6-1.el7.x86_64.rpm                                    | 640 kB  00:00:03
(7/8): glusterfs-server-7.6-1.el7.x86_64.rpm                             | 1.3 MB  00:00:04
(8/8): userspace-rcu-0.10.0-3.el7.x86_64.rpm                             |  93 kB  00:00:13
---------------------------------------------------------------------------------------------
Total                                                          242 kB/s | 3.7 MB  00:00:15
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Storage
Importing GPG key 0xE451E5B5:
 Userid     : "CentOS Storage SIG (http://wiki.centos.org/SpecialInterestGroup/Storage) <security@centos.org>"
 Fingerprint: 7412 9c0b 173b 071a 3775 951a d4a2 e50b e451 e5b5
 Package    : centos-release-storage-common-2-2.el7.centos.noarch (@extras)
 From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Storage
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Updating   : glusterfs-libs-7.6-1.el7.x86_64                                           1/13
  Updating   : glusterfs-client-xlators-7.6-1.el7.x86_64                                 2/13
  Updating   : glusterfs-7.6-1.el7.x86_64                                                3/13
  Updating   : glusterfs-api-7.6-1.el7.x86_64                                            4/13
  Installing : glusterfs-fuse-7.6-1.el7.x86_64                                           5/13
  Updating   : glusterfs-cli-7.6-1.el7.x86_64                                            6/13
  Installing : userspace-rcu-0.10.0-3.el7.x86_64                                         7/13
  Installing : glusterfs-server-7.6-1.el7.x86_64                                         8/13
  Cleanup    : glusterfs-api-3.12.2-18.el7.x86_64                                        9/13
  Cleanup    : glusterfs-3.12.2-18.el7.x86_64                                           10/13
  Cleanup    : glusterfs-client-xlators-3.12.2-18.el7.x86_64                            11/13
  Cleanup    : glusterfs-cli-3.12.2-18.el7.x86_64                                       12/13
  Cleanup    : glusterfs-libs-3.12.2-18.el7.x86_64                                      13/13
  Verifying  : glusterfs-libs-7.6-1.el7.x86_64                                           1/13
  Verifying  : glusterfs-api-7.6-1.el7.x86_64                                            2/13
  Verifying  : glusterfs-cli-7.6-1.el7.x86_64                                            3/13
  Verifying  : glusterfs-fuse-7.6-1.el7.x86_64                                           4/13
  Verifying  : glusterfs-client-xlators-7.6-1.el7.x86_64                                 5/13
  Verifying  : glusterfs-server-7.6-1.el7.x86_64                                         6/13
  Verifying  : glusterfs-7.6-1.el7.x86_64                                                7/13
  Verifying  : userspace-rcu-0.10.0-3.el7.x86_64                                         8/13
  Verifying  : glusterfs-3.12.2-18.el7.x86_64                                            9/13
  Verifying  : glusterfs-cli-3.12.2-18.el7.x86_64                                       10/13
  Verifying  : glusterfs-client-xlators-3.12.2-18.el7.x86_64                            11/13
  Verifying  : glusterfs-libs-3.12.2-18.el7.x86_64                                      12/13
  Verifying  : glusterfs-api-3.12.2-18.el7.x86_64                                       13/13

Installed:
  glusterfs-server.x86_64 0:7.6-1.el7

Dependency Installed:
  glusterfs-fuse.x86_64 0:7.6-1.el7          userspace-rcu.x86_64 0:0.10.0-3.el7

Dependency Updated:
  glusterfs.x86_64 0:7.6-1.el7        glusterfs-api.x86_64 0:7.6-1.el7        glusterfs-cli.x86_64 0:7.6-1.el7
  glusterfs-client-xlators.x86_64 0:7.6-1.el7        glusterfs-libs.x86_64 0:7.6-1.el7

Complete!
[root@k8s-master ~]# 

Then start glusterd and enable it to start at boot:

[root@k8s-master ~]# systemctl start glusterd.service
[root@k8s-master ~]# systemctl enable glusterd.service
[root@k8s-master ~]# 
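Before moving on, it is worth confirming the daemon actually came up; a quick sanity check using standard systemd and gluster commands:

systemctl status glusterd.service    # should report active (running)
gluster --version                    # should print the installed 7.x version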

In production you would typically add a dedicated disk such as /dev/sdb or /dev/sdc and mount it on a directory for GlusterFS to use (by default a machine only has /dev/sda). Here we simply use plain directories instead. Run the following on all three machines:

[root@k8s-master ~]# mkdir -p /gfs/test1
[root@k8s-master ~]# mkdir -p /gfs/test2
[root@k8s-master ~]# 
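For reference, a production-style brick would sit on its own disk rather than on the root filesystem. A minimal sketch, assuming a spare disk /dev/sdb dedicated to GlusterFS:

mkfs.xfs /dev/sdb                                          # format the dedicated disk
mkdir -p /gfs/test1                                        # the mount point doubles as the brick directory
echo '/dev/sdb /gfs/test1 xfs defaults 0 0' >> /etc/fstab  # remount on every boot
mount -a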

Now build the storage resource pool, i.e. join the nodes together. Work on the master node; you can list the gluster pool first (if the gluster command is not yet available in your shell, exit Xshell and log back in):

[root@k8s-master ~]# gluster pool list
UUID                    Hostname     State
d2ea56bf-6402-49c5-93a2-0d784110b231    localhost    Connected 
[root@k8s-master ~]# gluster pool 
unrecognized command

 Usage: gluster [options] <help> <peer> <pool> <volume>
 Options:
 --help  Shows the help information
 --version  Shows the version
 --print-logdir  Shows the log directory
 --print-statedumpdir Shows the state dump directory

[root@k8s-master ~]# 

By default the pool only knows about the local node, so the other two nodes must be added. They are joined by hostname, which requires working hostname resolution on every machine (a sample /etc/hosts sketch follows the probe output below):

[root@k8s-master ~]# gluster peer 
detach  probe   status  
[root@k8s-master ~]# gluster peer probe k8s-node2
peer probe: success. 
[root@k8s-master ~]# gluster peer probe k8s-node3
peer probe: success. 
[root@k8s-master ~]# 
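The hostname resolution assumed above can be supplied by /etc/hosts on every node; a sketch, with the node-to-IP mapping assumed from the addresses used later in this article:

cat >> /etc/hosts <<'EOF'
192.168.110.133 k8s-master
192.168.110.134 k8s-node2
192.168.110.135 k8s-node3
EOF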

Checking the pool again, all three nodes are now present:

[root@k8s-master ~]# gluster pool list 
UUID                    Hostname     State
977fb392-500c-4353-b03c-0ee9e53cee29    k8s-node2    Connected 
3b7c328f-2eff-4098-b3ac-ebc730975d22    k8s-node3    Connected 
d2ea56bf-6402-49c5-93a2-0d784110b231    localhost    Connected 
[root@k8s-master ~]# 

With the resource pool assembled, the next step is GlusterFS volume management: creating a distributed-replicated volume.

A volume is carved out of the resource pool. GlusterFS supports seven volume types; in production the distributed-replicated volume is used most, because some of the other types are less stable and can lose data, and distributed-replicate is the most robust. A distributed-replicated volume requires at least four storage units, which are the directories we just created. Normally each such directory would have an empty disk mounted on it, so one disk (equivalently, one directory) is one storage unit, also called a brick. The minimum replica count is 2, and since the volume is also distributed, at least four bricks are needed.

[root@k8s-master ~]# gluster volume create biehl replica 2 k8s-master:/gfs/test1 k8s-master:/gfs/test2 k8s-node2:/gfs/test1 k8s-node2:/gfs/test2 force
volume create: biehl: success: please start the volume to access data
[root@k8s-master ~]# 

The volume is created, but it must be started before its data can be accessed. Start it and inspect it:

[root@k8s-master ~]# gluster volume start biehl 
volume start: biehl: success
[root@k8s-master ~]# gluster volume info biehl 

Volume Name: biehl
Type: Distributed-Replicate
Volume ID: 5729192c-627f-4444-92eb-32a245086f43
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: k8s-master:/gfs/test1
Brick2: k8s-master:/gfs/test2
Brick3: k8s-node2:/gfs/test1
Brick4: k8s-node2:/gfs/test2
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off
[root@k8s-master ~]# 
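Besides gluster volume info, you can confirm each brick process is online, and see the TCP port every brick listens on, with gluster volume status (output omitted here since it varies by environment):

gluster volume status biehl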

Now mount the volume; this can be done on any of the three nodes:

[root@k8s-master ~]# mount -t glusterfs 192.168.110.133:/biehl /mnt
[root@k8s-master ~]# 

This mounts the biehl volume exported by 192.168.110.133 onto /mnt. Check the result:

[root@k8s-master ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   18G   14G  3.9G  79% /
devtmpfs                 1.2G     0  1.2G   0% /dev
tmpfs                    1.2G     0  1.2G   0% /dev/shm
tmpfs                    1.2G   40M  1.1G   4% /run
tmpfs                    1.2G     0  1.2G   0% /sys/fs/cgroup
/dev/sda1                197M  157M   41M  80% /boot
tmpfs                    229M   12K  229M   1% /run/user/42
overlay                   18G   14G  3.9G  79% /var/lib/docker/overlay2/fbf4133aed53d4e17a706be46fadcbea9fe6eb1e615379bfaa157570f0de7bc6/merged
shm                       64M     0   64M   0% /var/lib/docker/containers/5e72b0961647003c757fd41ffe824027c857d1f3938b9a53535645f4f372a40a/shm
overlay                   18G   14G  3.9G  79% /var/lib/docker/overlay2/80967f34f76b4afe42411f1b1dc6514b2b19d997644dfa4687a3a0924df06a08/merged
overlay                   18G   14G  3.9G  79% /var/lib/docker/overlay2/e085a4fba182d19139db32c664fcc1f08895ca603db3b1f9855183fc071c1adc/merged
shm                       64M     0   64M   0% /var/lib/docker/containers/276a98c9d5ccd61f42a0a1ef55c30f76beb2977483ed2d79281dfcec79922029/shm
shm                       64M     0   64M   0% /var/lib/docker/containers/0f89914f64c09c0fbd777e637d8f987414420861628d7b78f8a1bafc9054df59/shm
overlay                   18G   14G  3.9G  79% /var/lib/docker/overlay2/23afbc15307f6e5b397db4b1fd0e124c943bdb95312234e8d41e4f80a104b0cd/merged
shm                       64M     0   64M   0% /var/lib/docker/containers/f686976b5743236b368b8b70e222d230b737612acadc716ca4cbabcd9a187011/shm
overlay                   18G   14G  3.9G  79% /var/lib/docker/overlay2/15696b3573a066b5d3fd7aa202086b2bcc4145d9d0bb46e54c7d8d7787d1cc73/merged
overlay                   18G   14G  3.9G  79% /var/lib/docker/overlay2/6d9b6978705654a815dd1e72d5501408cead42493c25bdd33e4faa0c9decdcd0/merged
overlay                   18G   14G  3.9G  79% /var/lib/docker/overlay2/534837088c8ba6021c80d2ebb4ddb87a075106a784c499cc9ffca541d2e86dce/merged
overlay                   18G   14G  3.9G  79% /var/lib/docker/overlay2/8cdf1249e1823f24caf1d14ca9c05caa4e8e569d81292d06c642e618bc90b82b/merged
overlay                   18G   14G  3.9G  79% /var/lib/docker/overlay2/3ee0d48c04a32a2f0938af17898697042423d7d147df3ee8c1178bfda6e5fd9a/merged
overlay                   18G   14G  3.9G  79% /var/lib/docker/overlay2/31f5db66511cc7fd15cf1e2a23b7178d0eafbf1d7ddbdef1e0f82a254b6f6fc5/merged
overlay                   18G   14G  3.9G  79% /var/lib/docker/overlay2/16288e087e06b2c1fc369d19528eeb933cde44b8aa785f35b6756ec4da3a0f60/merged
tmpfs                    229M     0  229M   0% /run/user/0
192.168.110.133:/biehl    18G   11G  7.1G  61% /mnt
[root@k8s-master ~]# df -h /mnt/
Filesystem              Size  Used Avail Use% Mounted on
192.168.110.133:/biehl   18G   11G  7.1G  61% /mnt
[root@k8s-master ~]# 
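Note that a mount made by hand like this does not survive a reboot. A sketch of the matching /etc/fstab entry (_netdev delays mounting until the network is up):

echo '192.168.110.133:/biehl /mnt glusterfs defaults,_netdev 0 0' >> /etc/fstab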

Expanding a distributed-replicated volume: run df -h before and after to compare capacity. Expansion can be done while the volume is online:

[root@k8s-master ~]# df -h /mnt/
Filesystem              Size  Used Avail Use% Mounted on
192.168.110.133:/biehl   18G   11G  7.1G  61% /mnt
[root@k8s-master ~]# gluster volume add-brick biehl k8s-node3:/gfs/test1 k8s-node3:/gfs/test2 force
volume add-brick: success
[root@k8s-master ~]# df -h /mnt/
Filesystem              Size  Used Avail Use% Mounted on
192.168.110.133:/biehl   23G   15G  8.8G  62% /mnt
[root@k8s-master ~]# 
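add-brick only adds capacity; files already written are not moved onto the new bricks automatically. To spread existing data across all six bricks, a rebalance can be run afterwards:

gluster volume rebalance biehl start     # kick off the rebalance
gluster volume rebalance biehl status    # watch its progress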

Next, upload some files into /mnt and observe where they land:

[root@k8s-master ~]# cd /mnt/
[root@k8s-master mnt]# rz -E
rz waiting to receive.
[root@k8s-master mnt]# ls
book-master6.0.war
[root@k8s-master mnt]# unzip book-master6.0.war 

Unpacking produces a pile of files, which get spread across the different storage units. First install tree and inspect the bricks on the master node:

[root@k8s-master mnt]# yum install tree
Loaded plugins: fastestmirror, langpacks, product-id, search-disabled-repos, subscription-manager

This system is not registered with an entitlement server. You can use subscription-manager to register.

Loading mirror speeds from cached hostfile
 * base: mirrors.bfsu.edu.cn
 * centos-gluster7: mirrors.tuna.tsinghua.edu.cn
 * extras: mirrors.bfsu.edu.cn
 * updates: mirrors.bfsu.edu.cn
Resolving Dependencies
--> Running transaction check
---> Package tree.x86_64 0:1.6.0-10.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

=============================================================================================
 Package             Arch              Version                   Repository            Size
=============================================================================================
Installing:
 tree                x86_64            1.6.0-10.el7              base                  46 k

Transaction Summary
=============================================================================================
Install  1 Package

Total download size: 46 k
Installed size: 87 k
Is this ok [y/d/N]: y
Downloading packages:
tree-1.6.0-10.el7.x86_64.rpm                                             |  46 kB  00:00:05
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : tree-1.6.0-10.el7.x86_64                                                  1/1
  Verifying  : tree-1.6.0-10.el7.x86_64                                                  1/1

Installed:
  tree.x86_64 0:1.6.0-10.el7

Complete!
[root@k8s-master mnt]# tree /gfs/

My file list is too long to show here; run tree /gfs on each of the three machines to inspect how the files are distributed.
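A quick way to see the replica-2 placement at work: since each file is replicated to exactly one brick pair, a newly created file should show up under /gfs on exactly two of the six bricks (a sketch; the test filename is made up):

touch /mnt/placement-test.txt
ls /gfs/test1 /gfs/test2    # run on each of the three nodes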


4. GlusterFS as a Kubernetes storage backend. You can list the backend storage types k8s supports with kubectl explain:

[root@k8s-master test1]# kubectl explain pv.spec
RESOURCE: spec <Object>

DESCRIPTION:
     Spec defines a specification of a persistent volume owned by the cluster.
     Provisioned by an administrator. More info:
     http://kubernetes.io/docs/user-guide/persistent-volumes#persistent-volumes

    PersistentVolumeSpec is the specification of a persistent volume.

FIELDS:
   awsElasticBlockStore    <Object>
     AWSElasticBlockStore represents an AWS Disk resource that is attached to a
     kubelet's host machine and then exposed to the pod. More info:
     http://kubernetes.io/docs/user-guide/volumes#awselasticblockstore

   cephfs    <Object>
     CephFS represents a Ceph FS mount on the host that shares a pod's lifetime

   fc    <Object>
     FC represents a Fibre Channel resource that is attached to a kubelet's host
     machine and then exposed to the pod.

   rbd    <Object>
     RBD represents a Rados Block Device mount on the host that shares a pod's
     lifetime. More info:
     http://releases.k8s.io/HEAD/examples/volumes/rbd/README.md

   accessModes    <[]Object>
     AccessModes contains all ways the volume can be mounted. More info:
     http://kubernetes.io/docs/user-guide/persistent-volumes#access-modes

   azureFile    <Object>
     AzureFile represents an Azure File Service mount on the host and bind mount
     to the pod.

   capacity    <Object>
     A description of the persistent volume's resources and capacity. More info:
     http://kubernetes.io/docs/user-guide/persistent-volumes#capacity

   cinder    <Object>
     Cinder represents a cinder volume attached and mounted on kubelets host
     machine More info:
     http://releases.k8s.io/HEAD/examples/mysql-cinder-pd/README.md

   flocker    <Object>
     Flocker represents a Flocker volume attached to a kubelet's host machine
     and exposed to the pod for its usage. This depends on the Flocker control
     service being running

   nfs    <Object>
     NFS represents an NFS mount on the host. Provisioned by an admin. More
     info: http://kubernetes.io/docs/user-guide/volumes#nfs

   azureDisk    <Object>
     AzureDisk represents an Azure Data Disk mount on the host and bind mount to
     the pod.

   claimRef    <Object>
     ClaimRef is part of a bi-directional binding between PersistentVolume and
     PersistentVolumeClaim. Expected to be non-nil when bound. claim.VolumeName
     is the authoritative bind between PV and PVC. More info:
     http://kubernetes.io/docs/user-guide/persistent-volumes#binding

   gcePersistentDisk    <Object>
     GCEPersistentDisk represents a GCE Disk resource that is attached to a
     kubelet's host machine and then exposed to the pod. Provisioned by an admin.
     More info: http://kubernetes.io/docs/user-guide/volumes#gcepersistentdisk

   persistentVolumeReclaimPolicy    <string>
     What happens to a persistent volume when released from its claim. Valid
     options are Retain (default) and Recycle. Recycling must be supported by the
     volume plugin underlying this persistent volume. More info:
     http://kubernetes.io/docs/user-guide/persistent-volumes#recycling-policy

   photonPersistentDisk    <Object>
     PhotonPersistentDisk represents a PhotonController persistent disk attached
     and mounted on kubelets host machine

   vsphereVolume    <Object>
     VsphereVolume represents a vSphere volume attached and mounted on kubelets
     host machine

   flexVolume    <Object>
     FlexVolume represents a generic volume resource that is
     provisioned/attached using an exec based plugin. This is an alpha feature
     and may change in future.

   glusterfs    <Object>
     Glusterfs represents a Glusterfs volume that is attached to a host and
     exposed to the pod. Provisioned by an admin. More info:
     http://releases.k8s.io/HEAD/examples/volumes/glusterfs/README.md

   hostPath    <Object>
     HostPath represents a directory on the host. Provisioned by a developer or
     tester. This is useful for single-node development and testing only! On-host
     storage is not supported in any way and WILL NOT WORK in a multi-node
     cluster. More info: http://kubernetes.io/docs/user-guide/volumes#hostpath

   iscsi    <Object>
     ISCSI represents an ISCSI Disk resource that is attached to a kubelet's
     host machine and then exposed to the pod. Provisioned by an admin.

   quobyte    <Object>
     Quobyte represents a Quobyte mount on the host that shares a pod's lifetime


[root@k8s-master test1]# 

You can also drill into exactly what a glusterfs PV requires: endpoints names an Endpoints resource (the object that normally backs a Service, holding a list of IP + port pairs), path is the name of the glusterfs volume, and readOnly mounts it read-only:

[root@k8s-master test1]# kubectl explain pv.spec.glusterfs
RESOURCE: glusterfs <Object>

DESCRIPTION:
     Glusterfs represents a Glusterfs volume that is attached to a host and
     exposed to the pod. Provisioned by an admin. More info:
     http://releases.k8s.io/HEAD/examples/volumes/glusterfs/README.md

    Represents a Glusterfs mount that lasts the lifetime of a pod. Glusterfs
    volumes do not support ownership management or SELinux relabeling.

FIELDS:
   endpoints    <string> -required-
     EndpointsName is the endpoint name that details Glusterfs topology. More
     info:
     http://releases.k8s.io/HEAD/examples/volumes/glusterfs/README.md#create-a-pod

   path    <string> -required-
     Path is the Glusterfs volume path. More info:
     http://releases.k8s.io/HEAD/examples/volumes/glusterfs/README.md#create-a-pod

   readOnly    <boolean>
     ReadOnly here will force the Glusterfs volume to be mounted with read-only
     permissions. Defaults to false. More info:
     http://releases.k8s.io/HEAD/examples/volumes/glusterfs/README.md#create-a-pod


[root@k8s-master test1]# 

Start by creating glusterfs-ep.yaml, the configuration file for the Endpoints:

[root@k8s-master k8s]# ls
book-master.war  dashboard  dashboard.zip  deploy  health  heapster  hpa  metrics  namespace  pod  rc  skydns  skydns.zip  svc  tomcat_demo  tomcat_demo.zip  volume
[root@k8s-master k8s]# mkdir glusterfs
[root@k8s-master k8s]# cd glusterfs/
[root@k8s-master glusterfs]# ls
[root@k8s-master glusterfs]# vim glusterfs-ep.yaml
[root@k8s-master glusterfs]# 

Its contents are as follows:

apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs
  namespace: default
subsets:
# the IP addresses and port through which k8s reaches glusterfs
- addresses:
  - ip: 192.168.110.133
  - ip: 192.168.110.134
  - ip: 192.168.110.135
  ports:
  - port: 49152
    protocol: TCP

Once glusterd is running on a node and a brick is active, port 49152 is listening (glusterfsd brick processes are assigned ports starting at 49152):

[root@k8s-node3 test1]# netstat -lntup
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:24007           0.0.0.0:*               LISTEN      47623/glusterd      
tcp        0      0 127.0.0.1:10248         0.0.0.0:*               LISTEN      14515/kubelet       
tcp        0      0 127.0.0.1:10249         0.0.0.0:*               LISTEN      6779/kube-proxy     
tcp        0      0 0.0.0.0:40842           0.0.0.0:*               LISTEN      -                   
tcp        0      0 192.168.110.135:10250   0.0.0.0:*               LISTEN      14515/kubelet       
tcp        0      0 192.168.110.135:10255   0.0.0.0:*               LISTEN      14515/kubelet       
tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN      1/systemd           
tcp        0      0 0.0.0.0:20048           0.0.0.0:*               LISTEN      90097/rpc.mountd    
tcp        0      0 0.0.0.0:6000            0.0.0.0:*               LISTEN      7122/X              
tcp        0      0 192.168.122.1:53        0.0.0.0:*               LISTEN      7304/dnsmasq        
tcp        0      0 0.0.0.0:55318           0.0.0.0:*               LISTEN      52362/rpc.statd     
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      6776/sshd           
tcp        0      0 127.0.0.1:631           0.0.0.0:*               LISTEN      6778/cupsd          
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      7094/master         
tcp        0      0 0.0.0.0:49152           0.0.0.0:*               LISTEN      71495/glusterfsd    

Now create the Endpoints object:

[root@k8s-master glusterfs]# kubectl create -f glusterfs-ep.yaml 
endpoints "glusterfs" created
[root@k8s-master glusterfs]# 

After creation, verify it:

[root@k8s-master glusterfs]# kubectl get ep
NAME         ENDPOINTS                                                           AGE
glusterfs    192.168.110.133:49152,192.168.110.134:49152,192.168.110.135:49152   34s
kubernetes   192.168.110.133:6443                                                25d
mysql        172.16.66.6:3306                                                    5h
myweb        172.16.66.5:8080                                                    6h
[root@k8s-master glusterfs]# kubectl get endpoints 
NAME         ENDPOINTS                                                           AGE
glusterfs    192.168.110.133:49152,192.168.110.134:49152,192.168.110.135:49152   1m
kubernetes   192.168.110.133:6443                                                25d
mysql        172.16.66.6:3306                                                    5h
myweb        172.16.66.5:8080                                                    6h
[root@k8s-master glusterfs]# 

Next, create a Service for this Endpoints object. A Service and an Endpoints object are associated purely by name, so the two names must match exactly; and because this Service defines no selector, Kubernetes will not manage its endpoints itself, leaving the manually created Endpoints object to supply the backends:

[root@k8s-master glusterfs]# vim glusterfs-svc.yaml 
[root@k8s-master glusterfs]# 

The configuration is as follows:

apiVersion: v1
kind: Service
metadata:
  name: glusterfs
  namespace: default
spec:
  ports:
  - port: 49152
    protocol: TCP
    targetPort: 49152
  sessionAffinity: None
  type: ClusterIP

Create the Service and check it:

[root@k8s-master glusterfs]# vim glusterfs-svc.yaml 
[root@k8s-master glusterfs]# kubectl create -f glusterfs-svc.yaml 
service "glusterfs" created
[root@k8s-master glusterfs]# kubectl get svc 
NAME         CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
glusterfs    10.254.182.41    <none>        49152/TCP        10s
kubernetes   10.254.0.1       <none>        443/TCP          25d
mysql        10.254.126.11    <none>        3306/TCP         5h
myweb        10.254.188.155   <nodes>       8080:30008/TCP   6h
[root@k8s-master glusterfs]# 

Use kubectl describe to confirm the Service has picked up the endpoints:

[root@k8s-master glusterfs]# kubectl describe svc glusterfs 
Name:              glusterfs
Namespace:         default
Labels:            <none>
Selector:          <none>
Type:              ClusterIP
IP:                10.254.182.41
Port:              <unset>  49152/TCP
Endpoints:         192.168.110.133:49152,192.168.110.134:49152,192.168.110.135:49152
Session Affinity:  None
No events.
[root@k8s-master glusterfs]# 

Next, create a glusterfs-type PV. When wiring k8s to any storage backend, creating the PV is the crucial step:

[root@k8s-master glusterfs]# vim glusterfs-pv.yaml
[root@k8s-master glusterfs]# 

The configuration is as follows:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster
  labels:
    type: glusterfs
spec:
  capacity:
    storage: 50Gi
  accessModes:
  - ReadWriteMany
  glusterfs:
    endpoints: "glusterfs"
    path: "biehl"
    readOnly: false

Create the PV and check it:

[root@k8s-master glusterfs]# gluster volume list 
biehl
[root@k8s-master glusterfs]# kubectl create -f glusterfs-pv.yaml 
persistentvolume "gluster" created
[root@k8s-master glusterfs]# kubectl get pv
NAME      CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM           REASON    AGE
gluster   50Gi       RWX           Retain          Available                             9s
mysql     5Gi        RWX           Recycle         Bound       default/mysql             5h
[root@k8s-master glusterfs]# 

A PVC must be created before the PV can be used:

[root@k8s-master glusterfs]# cp ../tomcat_demo/mysql-pvc.yaml .
[root@k8s-master glusterfs]# ls
glusterfs-ep.yaml  glusterfs-pv.yaml  glusterfs-svc.yaml  mysql-pvc.yaml
[root@k8s-master glusterfs]# mv mysql-pvc.yaml glusterfs-pvc.yaml
[root@k8s-master glusterfs]# ls
glusterfs-ep.yaml  glusterfs-pvc.yaml  glusterfs-pv.yaml  glusterfs-svc.yaml
[root@k8s-master glusterfs]# 

The configuration is as follows:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: gluster
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi

Create the PVC, then inspect both the PVC and the PV:

[root@k8s-master glusterfs]# kubectl create -f glusterfs-pvc.yaml 
persistentvolumeclaim "gluster" created
[root@k8s-master glusterfs]# kubectl get pvc 
NAME      STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
gluster   Bound     gluster   50Gi       RWX           4s
mysql     Bound     mysql     5Gi        RWX           6h
[root@k8s-master glusterfs]# kubectl get pv
NAME      CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM             REASON    AGE
gluster   50Gi       RWX           Retain          Bound     default/gluster             5m
mysql     5Gi        RWX           Recycle         Bound     default/mysql               6h
[root@k8s-master glusterfs]# 

Once the PVC is bound to the PV it can be consumed by pods. (Note that the gluster PVC reports 50Gi even though it requested only 5Gi: a claim binds to an entire PV that satisfies its request, so it takes on the PV's full capacity.)

[root@k8s-master glusterfs]# ls
glusterfs-ep.yaml  glusterfs-pvc.yaml  glusterfs-pv.yaml  glusterfs-svc.yaml
[root@k8s-master glusterfs]# cp ../pod/nginx_pod.yaml .
[root@k8s-master glusterfs]# ls
glusterfs-ep.yaml  glusterfs-pvc.yaml  glusterfs-pv.yaml  glusterfs-svc.yaml  nginx_pod.yaml
[root@k8s-master glusterfs]# vim nginx_pod.yaml 
[root@k8s-master glusterfs]# 

Configure both volumeMounts and volumes:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: web
    env: nginx
spec:
  containers:
    - name: nginx
      image: 192.168.110.133:5000/nginx:1.13
      ports:
        - containerPort: 80
      volumeMounts:
        - name: nfs-vol2
          mountPath: /usr/share/nginx/html
  volumes:
    - name: nfs-vol2
      persistentVolumeClaim:
        claimName: gluster

Create the nginx pod and check it; it is up and running:

[root@k8s-master glusterfs]# kubectl create -f nginx_pod.yaml 
pod "nginx" created
[root@k8s-master glusterfs]# kubectl get pods -o wide
NAME          READY     STATUS    RESTARTS   AGE       IP            NODE
mysql-wldks   1/1       Running   0          5h        172.16.66.6   k8s-node3
myweb-c8sf6   1/1       Running   1          7h        172.16.66.5   k8s-node3
nginx         1/1       Running   0          3s        172.16.66.3   k8s-node3
[root@k8s-master glusterfs]# 

The Pod can then be accessed:

[root@k8s-master glusterfs]# kubectl get pods -o wide
NAME          READY     STATUS    RESTARTS   AGE       IP            NODE
mysql-wldks   1/1       Running   0          5h        172.16.66.6   k8s-node3
myweb-c8sf6   1/1       Running   1          7h        172.16.66.5   k8s-node3
nginx         1/1       Running   0          3s        172.16.66.3   k8s-node3
[root@k8s-master glusterfs]# curl 172.16.66.3

Besides reaching containers through a Service's reverse proxying and port mapping, a Pod can also be accessed without going through a Service by adding a hostPort to the container's port definition.
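For reference, a minimal sketch of what that change looks like in nginx_pod.yaml; only the ports section changes, and 8889 is the hostPort this article ends up using (the exact value is just an example):

ports:
  - containerPort: 80
    hostPort: 8889

With a hostPort, the container's port 80 is published on the node's own IP (here k8s-node3), which is why the page is later reachable at http://192.168.110.135:8889/.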

Delete the previous nginx pod and create it again:

[root@k8s-master glusterfs]# kubectl get pods -o wide
NAME          READY     STATUS    RESTARTS   AGE       IP            NODE
mysql-wldks   1/1       Running   0          7h        172.16.66.6   k8s-node3
myweb-c8sf6   1/1       Running   1          9h        172.16.66.5   k8s-node3
nginx         1/1       Running   0          2h        172.16.66.3   k8s-node3
[root@k8s-master glusterfs]# kubectl delete pod nginx 
pod "nginx" deleted
[root@k8s-master glusterfs]# kubectl get pods -o wide
NAME          READY     STATUS    RESTARTS   AGE       IP            NODE
mysql-wldks   1/1       Running   0          7h        172.16.66.6   k8s-node3
myweb-c8sf6   1/1       Running   1          9h        172.16.66.5   k8s-node3
[root@k8s-master glusterfs]# kubectl create -f nginx_pod.yaml 
pod "nginx" created
[root@k8s-master glusterfs]# kubectl get pods -o wide
NAME          READY     STATUS    RESTARTS   AGE       IP            NODE
mysql-wldks   1/1       Running   0          7h        172.16.66.6   k8s-node3
myweb-c8sf6   1/1       Running   1          9h        172.16.66.5   k8s-node3
nginx         1/1       Running   0          2s        172.16.66.3   k8s-node3
[root@k8s-master glusterfs]# 

The newly created nginx pod is again on k8s-node3, but at this point both curl and browser access returned a 403 error, which was rather frustrating:

[root@k8s-master glusterfs]# curl -I 172.16.66.3
HTTP/1.1 403 Forbidden
Server: nginx/1.15.12
Date: Tue, 30 Jun 2020 14:03:56 GMT
Content-Type: text/html
Content-Length: 154
Connection: keep-alive

After careful analysis, the 403 turned out to be genuine: there was nothing to serve. The /mnt directory, i.e. the gluster volume backing the pod's web root, must contain an index page nginx can serve directly:

[root@k8s-master glusterfs]# cd /mnt/
[root@k8s-master mnt]# ls
css  img  index.html  js

Delete the previous nginx pod once more and create a fresh one:

[root@k8s-master glusterfs]# kubectl get pod -o wide
NAME          READY     STATUS    RESTARTS   AGE       IP            NODE
mysql-wldks   1/1       Running   0          8h        172.16.66.6   k8s-node3
myweb-c8sf6   1/1       Running   1          10h       172.16.66.5   k8s-node3
nginx         1/1       Running   0          6m        172.16.66.3   k8s-node3
nginx2        1/1       Running   0          17m       172.16.74.5   k8s-node2
[root@k8s-master glusterfs]# kubectl delete pod nginx
pod "nginx" deleted
[root@k8s-master glusterfs]# kubectl get pod -o wide
NAME          READY     STATUS    RESTARTS   AGE       IP            NODE
mysql-wldks   1/1       Running   0          8h        172.16.66.6   k8s-node3
myweb-c8sf6   1/1       Running   1          10h       172.16.66.5   k8s-node3
nginx2        1/1       Running   0          17m       172.16.74.5   k8s-node2
[root@k8s-master glusterfs]# kubectl create -f nginx_pod.yaml 
pod "nginx" created
[root@k8s-master glusterfs]# kubectl get pod -o wide
NAME          READY     STATUS    RESTARTS   AGE       IP            NODE
mysql-wldks   1/1       Running   0          8h        172.16.66.6   k8s-node3
myweb-c8sf6   1/1       Running   1          10h       172.16.66.5   k8s-node3
nginx         1/1       Running   0          2s        172.16.66.3   k8s-node3
nginx2        1/1       Running   0          18m       172.16.74.5   k8s-node2
[root@k8s-master glusterfs]# curl 172.16.66.3
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta http-equiv="X-UA-Compatible" content="ie=edge">
<title>HTML5ȫı·ɻ񴕽СԎϷ - Դë֮¼м/title>

<style>
    html {
        width: 100%;
        height: 100%;
        margin: 0;
        padding: 0;
        position: relative;
        background-image: linear-gradient(#2C3E50,#4CA1AF);

    }

    .canvasbig {
        position: absolute;
        left: calc(50% - 260px);
        top: calc(50% - 400px);
        width: 520px;
        height: 800px;

    }

    .canvasdiv {
        position: absolute;
        cursor: pointer;
        left: 160px;
        top: 500px;
        width: 200px;
        height: 53px;
        background-image: url(img/starting.png);
    }

    .none {
        display: none;
    }
</style>

</head>
<body>

<!-- <script src="js/circle.js"></script> -->

<div class="canvasbig">
    <div class="canvasdiv"></div>
    <canvas width="520" height="800" id="canvas1"></canvas>
</div>

<script src="js/index.js"></script>
<div style="text-align:center;">
<p>¸񐢓Ϗ·£º<a href="http://www.mycodes.net/" target="_blank">Դë֮¼м/a></p>
</div>
</body>
</html>[root@k8s-master glusterfs]# 

Then visit http://192.168.110.135:8889/. During the troubleshooting I had set hostPort to 8889 in nginx_pod.yaml, hence that port. (The garbled characters in the curl output above are just the page's Chinese text rendered in a different encoding than the terminal's.)

The game is playable, so the whole setup works end to end.

posted on 2020-07-02 21:49 by 别先生