Steps of Configuring GFS2 Filesystem In A Cluster

The following are the steps I wrote while at work for building a GFS2 filesystem on Red Hat's RHEL 6.3. For confidentiality reasons, some addresses have been replaced or omitted.

Since my workplace is a foreign company, the material was written entirely in English. I have been a bit lazy lately, so I will leave it untranslated for now and come back to rewrite it in Chinese when I am less busy.

Steps of Configuring GFS2 Filesystem In A Cluster
@Author: zsun
@date: Jan 21, 2013
#################################################
Topology:
Node1 192.168.79.40 node1.zsun.com
Node2 192.168.79.41 node2.zsun.com
Storage 192.168.78.43 iscsitarget.zsun.com
=================================================
Prepare:
1.Install the three VMs with RHEL 6.3.
2.Configure the yum repos, adding the RHEL, Server, ResilientStorage, HighAvailability, and ScalableFileSystem repos on all three nodes.
3.Install luci ricci rgmanager cman openais iscsi-initiator-utils gfs2-utils cluster-glue-libs on all three VMs, and install scsi-target-utils on the Storage VM.
4.Make luci, ricci, rgmanager, and iscsi start at boot on the two nodes with chkconfig.
5.Configure /etc/hosts with the correct IPs and hostnames (as in the topology above); see the sketch after this list. If you use DHCP for your network, also configure /etc/dhcp/dhclient-eth0.conf by adding lines like the following:
supersede host-name "node2";
supersede domain-name "zsun.com";
6.Run chkconfig NetworkManager off and set NM_CONTROLLED=no in the interface configuration files (e.g. /etc/sysconfig/network-scripts/ifcfg-eth0) on all three VMs, then run service NetworkManager stop on all three VMs.
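For reference, a minimal /etc/hosts matching the topology above (the short aliases are my addition and optional):
192.168.79.40   node1.zsun.com        node1
192.168.79.41   node2.zsun.com        node2
192.168.78.43   iscsitarget.zsun.com  iscsitarget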
=================================================
Configure the iSCSI target (the Storage VM)

1.Create the partitions for your iSCSI storage.
(I will use /dev/vdb as the iSCSI storage in the following description)
2.Make sure tgtd is running, then configure the iSCSI target (tgtadm talks to the running tgtd daemon).
#service tgtd start
#chkconfig tgtd on
#tgtadm --lld iscsi --op new --mode target --tid 1 -T iqn.2013-01.com.zsun.disk1
#tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /dev/vdb1
#tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL
3.Show the running target config to make sure it is right.
#tgt-admin -s
Then write the running config to the config file so that it survives a service restart.
#tgt-admin --dump >/etc/tgt/targets.conf
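The dumped /etc/tgt/targets.conf should then contain roughly the following (a sketch; tgt-admin --dump may emit additional defaults):
default-driver iscsi

<target iqn.2013-01.com.zsun.disk1>
        backing-store /dev/vdb1
</target>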
4.Set your firewall properly
#iptables -I INPUT -p tcp --dport 3260 -j ACCEPT
#service iptables save
5.Start the clustering services IN THE FOLLOWING ORDER
#service luci start
#service ricci start
#service rgmanager start
#chkconfig luci on
#chkconfig ricci on
#chkconfig rgmanager on
6.Configure the firewall to allow access to the luci web console.
#iptables -I INPUT -p tcp --dport 8084 -j ACCEPT
#service iptables save
=================================================
Configure the nodes

1.Switch off the NetworkManager
#chkconfig NetworkManager off
#service NetworkManager stop
2.Install the clustering packages if they are not already installed.
#yum install -y cman luci ricci rgmanager iscsi-initiator-utils gfs2-utils cluster-glue-libs
3.Configure the firewall to allow remote connections to ricci.
#iptables -I INPUT -p tcp --dport 11111 -j ACCEPT
#service iptables save
4.Start the clustering services IN THE FOLLOWING ORDER
#service luci start
#service ricci start
#service rgmanager start
#chkconfig luci on
#chkconfig ricci on
#chkconfig rgmanager on
5.Set a password for the ricci user (luci will use it to authenticate to each node).
#passwd ricci
=================================================
Creating the cluster environment

1.Open https://192.168.78.43:8084 in a browser. (The IP I used is the Storage VM's, where luci is running.)
2.Log in as root with its password.
3.Click on Manage Clusters.
4.Click on "Create" on the page that appears.
5.Enter the cluster name, node names, ricci passwords, and other cluster information for the two nodes. Select "Use Locally Installed Packages", then click Create Cluster.
6.Wait a moment; the two nodes should then be shown as cluster members on the web page. You can also verify membership from the command line, as shown below.
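As a sketch (the cluster name mycluster here is an assumption; use whatever you entered above), clustat on either node should report both members online, with output roughly like this:
#clustat
Cluster Status for mycluster
Member Status: Quorate
 Member Name                 ID   Status
 node1.zsun.com               1   Online
 node2.zsun.com               2   Online, Local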
=================================================
Configure iSCSI and GFS2 on the two nodes

1.Configure the iSCSI initiator on both nodes.
#iscsiadm -m discovery -t sendtargets -p 192.168.78.43

192.168.78.43:3260,1 iqn.2013-01.com.zsun.disk1

2.Log in to the iSCSI target.
#iscsiadm -m node -T iqn.2013-01.com.zsun.disk1 -p 192.168.78.43 -l
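The login output should look roughly like this:
Logging in to [iface: default, target: iqn.2013-01.com.zsun.disk1, portal: 192.168.78.43,3260]
Login to [iface: default, target: iqn.2013-01.com.zsun.disk1, portal: 192.168.78.43,3260] successful.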
3.Enable clustered LVM (clvmd requires the cluster manager, cman, to be running).
#lvmconf --enable-cluster
#chkconfig clvmd on
#service clvmd start
4.Create a partition on the iSCSI disk (I will use /dev/sda in the following).
#fdisk -l
#fdisk -cu /dev/sda
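A sketch of the interactive fdisk session, creating one primary partition spanning the whole disk (adjust if you want a different layout):
n        (new partition)
p        (primary)
1        (partition number)
<Enter>  (accept default first sector)
<Enter>  (accept default last sector)
w        (write the table and exit)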
5.On all nodes, update the kernel's view of the partition table, and the multipath mapping if necessary.
#partprobe
#multipath -r
(If you don't have the multipath command, you can yum install -y device-mapper-multipath)
6.Create a PV on the partition.
#pvcreate /dev/sda1
7.Create a clustered VG (-cy marks the VG as clustered so clvmd manages it).
#vgcreate -cy vg_gfs2 /dev/sda1
8.Create a logical volume.
#lvcreate -n lv_gfs2 -L 10G vg_gfs2
9.Create the GFS2 filesystem.
#mkfs.gfs2 -t <Cluster name>:<Filesystem name> -j 3 -J 64 /dev/vg_gfs2/lv_gfs2
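For example, assuming the cluster created above was named mycluster (an assumed name; use whatever you entered in luci) and the filesystem is to be called gfs2_disk1:
#mkfs.gfs2 -t mycluster:gfs2_disk1 -j 3 -J 64 /dev/vg_gfs2/lv_gfs2
-j 3 creates three journals (one per node that will mount the filesystem, plus one spare) and -J 64 sets the journal size to 64MB. The cluster name part of -t must match the name in /etc/cluster/cluster.conf, or the nodes will refuse to mount the filesystem.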
10.Mount it on both nodes.
#mount /dev/vg_gfs2/lv_gfs2 /mnt

You will then see that any file modified in /mnt on node1 is also visible on the other node. To keep the mount across reboots, see the sketch below.
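As a sketch for making the mount persistent (the mount point /mnt and the noatime option are my assumptions), add a line like this to /etc/fstab on both nodes:
/dev/vg_gfs2/lv_gfs2  /mnt  gfs2  defaults,noatime  0 0
Then enable the gfs2 init script, which mounts the GFS2 entries from /etc/fstab at boot:
#chkconfig gfs2 on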

 
