Homework 5

1. If the master node has been running for some time and already holds a large amount of data, how do you configure and start a new slave node? (Write out the steps.)

  (1) Take a full backup on the master server

  (2) Restore the full backup onto the new slave node

  (3) Configure the slave to start replicating from the position recorded in the backup (a minimal sketch follows)
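
A minimal sketch of these steps using mysqldump on MySQL 5.7; the paths, the choice of 10.0.0.27 as the new slave, and the repluser account (as in question 3) are illustrative assumptions:

[root@master ~]#mysqldump -A -F --single-transaction --master-data=1 > /backup/full.sql
[root@master ~]#scp /backup/full.sql 10.0.0.27:/data/

[root@slave1 ~]#vim /etc/my.cnf                        # make sure server-id is unique, then restart mysqld
[root@slave1 ~]#grep '^CHANGE MASTER' /data/full.sql   # note the MASTER_LOG_FILE / MASTER_LOG_POS values
[root@slave1 ~]#mysql
mysql> set sql_log_bin=0;                              # keep the restore out of the slave's own binlog
mysql> source /data/full.sql;
mysql> set sql_log_bin=1;
mysql> CHANGE MASTER TO MASTER_HOST='10.0.0.17', MASTER_USER='repluser',
    -> MASTER_PASSWORD='magedu', MASTER_LOG_FILE='<file from the dump>', MASTER_LOG_POS=<pos>;
mysql> start slave;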


2. When the master server goes down, promote one slave to be the new master. (Write out the steps.)

  (1) Find the slave whose data is most up to date and make it the new master

  (2) On the new master, edit the configuration file and turn off the read-only setting

  (3) Clear the replication metadata that points at the old master

  (4) Take a full backup on the new master

  (5) Restore that backup on all other slaves and point them at the new master (a minimal sketch follows)
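
A minimal sketch with MySQL 5.7 commands; the choice of 10.0.0.27 as the most current slave and the repluser account are assumptions:

# on each surviving slave, compare how far replication got
mysql> show slave status\G                 # compare Relay_Master_Log_File / Exec_Master_Log_Pos

# on the most up-to-date slave (assume 10.0.0.27), promote it
mysql> stop slave;
mysql> reset slave all;                    # drop the metadata pointing at the dead master
mysql> set global read_only=0;             # also remove read_only from my.cnf

# back up the new master, restore it on every other slave, then on each of them:
mysql> CHANGE MASTER TO MASTER_HOST='10.0.0.27', MASTER_USER='repluser',
    -> MASTER_PASSWORD='magedu', MASTER_LOG_FILE='<file from backup>', MASTER_LOG_POS=<pos>;
mysql> start slave;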


3. Build a database cluster with MHA 0.58

  Environment: four hosts
  10.0.0.7 CentOS7 MHA manager
  10.0.0.17 CentOS7 Master
  10.0.0.27 CentOS7 Slave1
  10.0.0.37 CentOS7 Slave2

  (1) Install both packages, mha4mysql-manager and mha4mysql-node, on the manager node

    [root@manager ~]#yum -y install mha4mysql-node-0.58-0.el7.centos.noarch.rpm 

   [root@manager ~]#yum -y install mha4mysql-manager-0.58-0.el7.centos.noarch.rpm

  (2) Install the mha4mysql-node package on all MySQL servers (shown on the master; run the same command on slave1 and slave2)

    [root@master ~]#yum -y install mha4mysql-node-0.58-0.el7.centos.noarch.rpm 

  (3) Set up mutual SSH key authentication among all nodes

    [root@manager ~]#ssh-keygen

    [root@manager ~]#ssh-copy-id 10.0.0.7

   [root@manager ~]#rsync -av .ssh 10.0.0.17:/root/

   [root@manager ~]#rsync -av .ssh 10.0.0.27:/root/

   [root@manager ~]#rsync -av .ssh 10.0.0.37:/root/
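
Because the same key pair and authorized_keys file are pushed to every host, all four nodes can now reach one another over SSH without a password; a quick spot check from the manager:

[root@manager ~]#ssh 10.0.0.17 hostname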

  (4) Create the configuration file on the manager node

    [root@manager ~]#mkdir /etc/mastermha

   [root@manager ~]#vim /etc/mastermha/app1.cnf

[server default]
user=mhauser                # MySQL administrative account used by MHA
password=magedu
manager_workdir=/data/mastermha/app1/
manager_log=/data/mastermha/app1/manager.log
remote_workdir=/data/mastermha/app1/
ssh_user=root               # OS account used for SSH between nodes
repl_user=repluser          # replication account
repl_password=magedu
ping_interval=1             # master health-check interval in seconds
master_ip_failover_script=/usr/local/bin/master_ip_failover
report_script=/usr/local/bin/sendmail.sh
check_repl_delay=0          # ignore replication delay when choosing the new master
master_binlog_dir=/data/mysql/
[server1]
hostname=10.0.0.17
candidate_master=1          # preferred candidate for promotion
[server2]
hostname=10.0.0.27
candidate_master=1
[server3]
hostname=10.0.0.37

  (5) Supporting scripts

[root@manager ~]#cat /usr/local/bin/sendmail.sh
#!/bin/bash
echo "MySQL is down" | mail -s "MHA Warning" 987678498@qq.com
[root@manager ~]#chmod +x /usr/local/bin/sendmail.sh
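
The mail command needs a working local MTA (the mailx package and a configured relay are assumed here); the script can be tested by hand before relying on it for alerts:

[root@manager ~]#/usr/local/bin/sendmail.sh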

[root@manager ~]#cat /usr/local/bin/master_ip_failover
#!/usr/bin/env perl
use strict;
use warnings FATAL => 'all';
use Getopt::Long;
my (
    $command, $ssh_user, $orig_master_host, $orig_master_ip,
    $orig_master_port, $new_master_host, $new_master_ip, $new_master_port
);
my $vip = '10.0.0.100/24';
my $gateway = '10.0.0.2';
my $interface = 'eth0';
my $key = "1";
my $ssh_start_vip = "/sbin/ifconfig $interface:$key $vip;/sbin/arping -I $interface -c 3 -s $vip $gateway >/dev/null 2>&1";
my $ssh_stop_vip = "/sbin/ifconfig $interface:$key down";
GetOptions(
    'command=s' => \$command,
    'ssh_user=s' => \$ssh_user,
    'orig_master_host=s' => \$orig_master_host,
    'orig_master_ip=s' => \$orig_master_ip,
    'orig_master_port=i' => \$orig_master_port,
    'new_master_host=s' => \$new_master_host,
    'new_master_ip=s' => \$new_master_ip,
    'new_master_port=i' => \$new_master_port,
);
exit &main();
sub main {
    print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";
    if ( $command eq "stop" || $command eq "stopssh" ) {
        # $orig_master_host, $orig_master_ip, $orig_master_port are passed.
        # If you manage master ip address at global catalog database,
        # invalidate orig_master_ip here.
        my $exit_code = 1;
        eval {
            print "Disabling the VIP on old master: $orig_master_host \n";
            &stop_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn "Got Error: $@\n";
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "start" ) {
        # all arguments are passed.
        # If you manage master ip address at global catalog database,
        # activate new_master_ip here.
        # You can also grant write access (create user, set read_only=0, etc) here.
        my $exit_code = 10;
        eval {
            print "Enabling the VIP - $vip on the new master - $new_master_host \n";
            &start_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn $@;
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "status" ) {
        print "Checking the Status of the script.. OK \n";
        `ssh $ssh_user\@$orig_master_host \" $ssh_start_vip \"`;
        exit 0;
    }
    else {
        &usage();
        exit 1;
    }
}
# A simple system call that enables the VIP on the new master
sub start_vip() {
    `ssh $ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}
# A simple system call that disables the VIP on the old master
sub stop_vip() {
    `ssh $ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}
sub usage {
    print
    "Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}
[root@manager ~]#chmod +x /usr/local/bin/master_ip_failover

 

  (6) Configure the master

[root@master ~]#cat /etc/my.cnf
[mysqld]
server-id=17
log-bin=/data/mysql/mysql-bin
skip_name_resolve=1
datadir=/data/mysql
socket=/data/mysql/mysql.sock
log-error=/data/mysql/mysql.log
pid-file=/data/mysql/mysql.pid
[client]
socket=/data/mysql/mysql.sock

[root@master ~]#mysql -pmagedu

mysql> show master logs;
+------------------+-----------+
| Log_name | File_size |
+------------------+-----------+
| mysql-bin.000001 | 154 |
+------------------+-----------+
1 row in set (0.00 sec)

mysql> grant replication slave on *.* to repluser@'10.0.0.%' identified by 'magedu';
Query OK, 0 rows affected, 1 warning (0.01 sec)

mysql> grant all on *.* to mhauser@'10.0.0.%' identified by 'magedu';
Query OK, 0 rows affected, 1 warning (0.00 sec)

[root@master ~]#ifconfig eth0:1 10.0.0.100/24    # bring the VIP up by hand on the initial master; MHA's failover script only moves it afterwards

  (7) Configure the slaves (shown on slave1; slave2 is set up the same way)

[root@slave1 ~]#mysql -pmagedu

mysql> CHANGE MASTER TO
-> MASTER_HOST='10.0.0.17',
-> MASTER_USER='repluser',
-> MASTER_PASSWORD='magedu',
-> MASTER_LOG_FILE='mysql-bin.000001',
-> MASTER_LOG_POS=154;
Query OK, 0 rows affected, 2 warnings (0.00 sec)

mysql> start slave;
Query OK, 0 rows affected (0.00 sec)
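
Before handing the topology over to MHA, confirm on each slave that both replication threads are running (Slave_IO_Running and Slave_SQL_Running should both report Yes):

mysql> show slave status\G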

  (8) Check the MHA environment

   

[root@manager ~]#masterha_check_ssh --conf=/etc/mastermha/app1.cnf
Wed Oct 14 20:36:37 2020 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Wed Oct 14 20:36:37 2020 - [info] Reading application default configuration from /etc/mastermha/app1.cnf..
Wed Oct 14 20:36:37 2020 - [info] Reading server configuration from /etc/mastermha/app1.cnf..
Wed Oct 14 20:36:37 2020 - [info] Starting SSH connection tests..
Wed Oct 14 20:36:38 2020 - [debug]
Wed Oct 14 20:36:37 2020 - [debug] Connecting via SSH from root@10.0.0.17(10.0.0.17:22) to root@10.0.0.27(10.0.0.27:22)..
Wed Oct 14 20:36:37 2020 - [debug] ok.
Wed Oct 14 20:36:37 2020 - [debug] Connecting via SSH from root@10.0.0.17(10.0.0.17:22) to root@10.0.0.37(10.0.0.37:22)..
Wed Oct 14 20:36:38 2020 - [debug] ok.
Wed Oct 14 20:36:39 2020 - [debug]
Wed Oct 14 20:36:38 2020 - [debug] Connecting via SSH from root@10.0.0.27(10.0.0.27:22) to root@10.0.0.17(10.0.0.17:22)..
Wed Oct 14 20:36:38 2020 - [debug] ok.
Wed Oct 14 20:36:38 2020 - [debug] Connecting via SSH from root@10.0.0.27(10.0.0.27:22) to root@10.0.0.37(10.0.0.37:22)..
Wed Oct 14 20:36:38 2020 - [debug] ok.
Wed Oct 14 20:36:40 2020 - [debug]
Wed Oct 14 20:36:38 2020 - [debug] Connecting via SSH from root@10.0.0.37(10.0.0.37:22) to root@10.0.0.17(10.0.0.17:22)..
Wed Oct 14 20:36:39 2020 - [debug] ok.
Wed Oct 14 20:36:39 2020 - [debug] Connecting via SSH from root@10.0.0.37(10.0.0.37:22) to root@10.0.0.27(10.0.0.27:22)..
Wed Oct 14 20:36:39 2020 - [debug] ok.
Wed Oct 14 20:36:40 2020 - [info] All SSH connection tests passed successfully.
[root@manager ~]#masterha_check_repl --conf=/etc/mastermha/app1.cnf
Wed Oct 14 20:36:46 2020 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Wed Oct 14 20:36:46 2020 - [info] Reading application default configuration from /etc/mastermha/app1.cnf..
Wed Oct 14 20:36:46 2020 - [info] Reading server configuration from /etc/mastermha/app1.cnf..
Wed Oct 14 20:36:46 2020 - [info] MHA::MasterMonitor version 0.58.
Wed Oct 14 20:36:47 2020 - [info] GTID failover mode = 0
Wed Oct 14 20:36:47 2020 - [info] Dead Servers:
Wed Oct 14 20:36:47 2020 - [info] Alive Servers:
Wed Oct 14 20:36:47 2020 - [info] 10.0.0.17(10.0.0.17:3306)
Wed Oct 14 20:36:47 2020 - [info] 10.0.0.27(10.0.0.27:3306)
Wed Oct 14 20:36:47 2020 - [info] 10.0.0.37(10.0.0.37:3306)
Wed Oct 14 20:36:47 2020 - [info] Alive Slaves:
Wed Oct 14 20:36:47 2020 - [info] 10.0.0.27(10.0.0.27:3306) Version=5.7.29-log (oldest major version between slaves) log-bin:enabled
Wed Oct 14 20:36:47 2020 - [info] Replicating from 10.0.0.17(10.0.0.17:3306)
Wed Oct 14 20:36:47 2020 - [info] Primary candidate for the new Master (candidate_master is set)
Wed Oct 14 20:36:47 2020 - [info] 10.0.0.37(10.0.0.37:3306) Version=5.7.29-log (oldest major version between slaves) log-bin:enabled
Wed Oct 14 20:36:47 2020 - [info] Replicating from 10.0.0.17(10.0.0.17:3306)
Wed Oct 14 20:36:47 2020 - [info] Current Alive Master: 10.0.0.17(10.0.0.17:3306)
Wed Oct 14 20:36:47 2020 - [info] Checking slave configurations..
Wed Oct 14 20:36:47 2020 - [info] Checking replication filtering settings..
Wed Oct 14 20:36:47 2020 - [info] binlog_do_db= , binlog_ignore_db=
Wed Oct 14 20:36:47 2020 - [info] Replication filtering check ok.
Wed Oct 14 20:36:47 2020 - [info] GTID (with auto-pos) is not supported
Wed Oct 14 20:36:47 2020 - [info] Starting SSH connection tests..
Wed Oct 14 20:36:50 2020 - [info] All SSH connection tests passed successfully.
Wed Oct 14 20:36:50 2020 - [info] Checking MHA Node version..
Wed Oct 14 20:36:50 2020 - [info] Version check ok.
Wed Oct 14 20:36:50 2020 - [info] Checking SSH publickey authentication settings on the current master..
Wed Oct 14 20:36:50 2020 - [info] HealthCheck: SSH to 10.0.0.17 is reachable.
Wed Oct 14 20:36:51 2020 - [info] Master MHA Node version is 0.58.
Wed Oct 14 20:36:51 2020 - [info] Checking recovery script configurations on 10.0.0.17(10.0.0.17:3306)..
Wed Oct 14 20:36:51 2020 - [info] Executing command: save_binary_logs --command=test --start_pos=4 --binlog_dir=/data/mysql/ --output_file=/data/mastermha/app1//save_binary_logs_test --manager_version=0.58 --start_file=mysql-bin.000001
Wed Oct 14 20:36:51 2020 - [info] Connecting to root@10.0.0.17(10.0.0.17:22)..
Creating /data/mastermha/app1 if not exists.. ok.
Checking output directory is accessible or not..
ok.
Binlog found at /data/mysql/, up to mysql-bin.000001
Wed Oct 14 20:36:51 2020 - [info] Binlog setting check done.
Wed Oct 14 20:36:51 2020 - [info] Checking SSH publickey authentication and checking recovery script configurations on all alive slave servers..
Wed Oct 14 20:36:51 2020 - [info] Executing command : apply_diff_relay_logs --command=test --slave_user='mhauser' --slave_host=10.0.0.27 --slave_ip=10.0.0.27 --slave_port=3306 --workdir=/data/mastermha/app1/ --target_version=5.7.29-log --manager_version=0.58 --relay_log_info=/data/mysql/relay-log.info --relay_dir=/data/mysql/ --slave_pass=xxx
Wed Oct 14 20:36:51 2020 - [info] Connecting to root@10.0.0.27(10.0.0.27:22)..
Checking slave recovery environment settings..
Opening /data/mysql/relay-log.info ... ok.
Relay log found at /data/mysql, up to slave1-relay-bin.000002
Temporary relay log file is /data/mysql/slave1-relay-bin.000002
Checking if super_read_only is defined and turned on.. not present or turned off, ignoring.
Testing mysql connection and privileges..
mysql: [Warning] Using a password on the command line interface can be insecure.
done.
Testing mysqlbinlog output.. done.
Cleaning up test file(s).. done.
Wed Oct 14 20:36:51 2020 - [info] Executing command : apply_diff_relay_logs --command=test --slave_user='mhauser' --slave_host=10.0.0.37 --slave_ip=10.0.0.37 --slave_port=3306 --workdir=/data/mastermha/app1/ --target_version=5.7.29-log --manager_version=0.58 --relay_log_info=/data/mysql/relay-log.info --relay_dir=/data/mysql/ --slave_pass=xxx
Wed Oct 14 20:36:51 2020 - [info] Connecting to root@10.0.0.37(10.0.0.37:22)..
Checking slave recovery environment settings..
Opening /data/mysql/relay-log.info ... ok.
Relay log found at /data/mysql, up to slave2-relay-bin.000002
Temporary relay log file is /data/mysql/slave2-relay-bin.000002
Checking if super_read_only is defined and turned on.. not present or turned off, ignoring.
Testing mysql connection and privileges..
mysql: [Warning] Using a password on the command line interface can be insecure.
done.
Testing mysqlbinlog output.. done.
Cleaning up test file(s).. done.
Wed Oct 14 20:36:51 2020 - [info] Slaves settings check done.
Wed Oct 14 20:36:51 2020 - [info]
10.0.0.17(10.0.0.17:3306) (current master)
+--10.0.0.27(10.0.0.27:3306)
+--10.0.0.37(10.0.0.37:3306)

Wed Oct 14 20:36:51 2020 - [info] Checking replication health on 10.0.0.27..
Wed Oct 14 20:36:51 2020 - [info] ok.
Wed Oct 14 20:36:51 2020 - [info] Checking replication health on 10.0.0.37..
Wed Oct 14 20:36:51 2020 - [info] ok.
Wed Oct 14 20:36:51 2020 - [info] Checking master_ip_failover_script status:
Wed Oct 14 20:36:51 2020 - [info] /usr/local/bin/master_ip_failover --command=status --ssh_user=root --orig_master_host=10.0.0.17 --orig_master_ip=10.0.0.17 --orig_master_port=3306


IN SCRIPT TEST====/sbin/ifconfig eth0:1 down==/sbin/ifconfig eth0:1 10.0.0.100/24;/sbin/arping -I
eth0 -c 3 -s 10.0.0.100/24 10.0.0.2 >/dev/null 2>&1===

Checking the Status of the script.. OK
/sbin/arping: option requires an argument -- 'I'
Usage: arping [-fqbDUAV] [-c count] [-w timeout] [-I device] [-s source] destination
-f : quit on first reply
-q : be quiet
-b : keep broadcasting, don't go unicast
-D : duplicate address detection mode
-U : Unsolicited ARP mode, update your neighbours
-A : ARP answer mode, update your neighbours
-V : print version and exit
-c count : how many packets to send
-w timeout : how long to wait for a reply
-I device : which ethernet device to use
-s source : source ip address
destination : ask for what ip address
Wed Oct 14 20:36:51 2020 - [info] OK.
Wed Oct 14 20:36:51 2020 - [warning] shutdown_script is not defined.
Wed Oct 14 20:36:51 2020 - [info] Got exit code 0 (Not master dead).

MySQL Replication Health is OK.
[root@manager ~]#masterha_check_status --conf=/etc/mastermha/app1.cnf
app1 is stopped(2:NOT_RUNNING).
(NOT_RUNNING is expected at this point: the manager process has not been started yet.)

  (9) Start MHA

[root@manager ~]#masterha_manager --conf=/etc/mastermha/app1.cnf
Wed Oct 14 20:45:15 2020 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Wed Oct 14 20:45:15 2020 - [info] Reading application default configuration from /etc/mastermha/app1.cnf..
Wed Oct 14 20:45:15 2020 - [info] Reading server configuration from /etc/mastermha/app1.cnf..

[root@manager ~]#masterha_check_status --conf=/etc/mastermha/app1.cnf
app1 (pid:3620) is running(0:PING_OK), master:10.0.0.17
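
Note that masterha_manager runs in the foreground and exits after performing one failover; in practice it is commonly kept in the background, for example:

[root@manager ~]#nohup masterha_manager --conf=/etc/mastermha/app1.cnf &> /dev/null &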

4. Hands-on case: Percona XtraDB Cluster (PXC 5.7)

  (1) Prepare the environment

  Four hosts:

  pxc1:10.0.0.7
  pxc2:10.0.0.17
  pxc3:10.0.0.27
  pxc4:10.0.0.37

  (2) Install Percona XtraDB Cluster 5.7

[root@pxc1 ~]#cat /etc/yum.repos.d/pxc.repo
[percona]
name=percona_repo
baseurl = https://mirrors.tuna.tsinghua.edu.cn/percona/release/$releasever/RPMS/$basearch
enabled = 1
gpgcheck = 0

[root@pxc1 ~]#scp /etc/yum.repos.d/pxc.repo 10.0.0.17:/etc/yum.repos.d

[root@pxc1 ~]#scp /etc/yum.repos.d/pxc.repo 10.0.0.27:/etc/yum.repos.d

[root@pxc1 ~]#yum install Percona-XtraDB-Cluster-57 -y
[root@pxc2 ~]#yum install Percona-XtraDB-Cluster-57 -y
[root@pxc3 ~]#yum install Percona-XtraDB-Cluster-57 -y

  (3) Configure MySQL and the cluster settings on each node

[root@pxc1 ~]#grep -Ev '^#|^$' /etc/percona-xtradb-cluster.conf.d/wsrep.cnf
[mysqld]
wsrep_provider=/usr/lib64/galera3/libgalera_smm.so
wsrep_cluster_address=gcomm://10.0.0.7,10.0.0.17,10.0.0.27    # all cluster members
binlog_format=ROW
default_storage_engine=InnoDB
wsrep_slave_threads=8
wsrep_log_conflicts
innodb_autoinc_lock_mode=2
wsrep_node_address=10.0.0.7                                   # this node's own IP
wsrep_cluster_name=pxc-cluster
wsrep_node_name=pxc-cluster-node-1
pxc_strict_mode=ENFORCING
wsrep_sst_method=xtrabackup-v2                                # state snapshot transfer via xtrabackup
wsrep_sst_auth="sstuser:s3cretPass"                           # SST account, created in step (4)

[root@pxc2 ~]#grep -Ev '^#|^$' /etc/percona-xtradb-cluster.conf.d/wsrep.cnf
[mysqld]
wsrep_provider=/usr/lib64/galera3/libgalera_smm.so
wsrep_cluster_address=gcomm://10.0.0.7,10.0.0.17,10.0.0.27
binlog_format=ROW
default_storage_engine=InnoDB
wsrep_slave_threads= 8
wsrep_log_conflicts
innodb_autoinc_lock_mode=2
wsrep_node_address=10.0.0.17
wsrep_cluster_name=pxc-cluster
wsrep_node_name=pxc-cluster-node-2
pxc_strict_mode=ENFORCING
wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth="sstuser:s3cretPass"

[root@pxc3 ~]#grep -Ev '^#|^$' /etc/percona-xtradb-cluster.conf.d/wsrep.cnf
[mysqld]
wsrep_provider=/usr/lib64/galera3/libgalera_smm.so
wsrep_cluster_address=gcomm://10.0.0.7,10.0.0.17,10.0.0.27
binlog_format=ROW
default_storage_engine=InnoDB
wsrep_slave_threads= 8
wsrep_log_conflicts
innodb_autoinc_lock_mode=2
wsrep_node_address=10.0.0.27
wsrep_cluster_name=pxc-cluster
wsrep_node_name=pxc-cluster-node-3
pxc_strict_mode=ENFORCING
wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth="sstuser:s3cretPass"

  (4) Bootstrap the first node of the PXC cluster

[root@pxc1 ~]#systemctl start mysql@bootstrap.service    # bootstrap mode forms a new cluster with this node as the first member
[root@pxc1 ~]#ss -nlt
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 *:22 *:*
LISTEN 0 128 *:4567 *:*
LISTEN 0 100 127.0.0.1:25 *:*
LISTEN 0 128 [::]:22 [::]:*
LISTEN 0 100 [::1]:25 [::]:*
LISTEN 0 80 [::]:3306 [::]:*
[root@pxc1 ~]#grep "temporary password" /var/log/mysqld.log
2020-10-14T12:05:38.147627Z 1 [Note] A temporary password is generated for root@localhost: zg0_*nM._uuh
[root@pxc1 ~]#mysql -uroot -p'zg0_*nM._uuh'
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 11
Server version: 5.7.31-34-57-log

Copyright (c) 2009-2020 Percona LLC and/or its affiliates
Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> alter user 'root'@'localhost' identified by 'magedu';
Query OK, 0 rows affected (0.00 sec)

mysql> CREATE USER 'sstuser'@'localhost' IDENTIFIED BY 's3cretPass';
Query OK, 0 rows affected (0.01 sec)

mysql> GRANT RELOAD, LOCK TABLES, PROCESS, REPLICATION CLIENT ON *.* TO 'sstuser'@'localhost';
Query OK, 0 rows affected (0.00 sec)

mysql> show status like 'wsrep%';
+----------------------------------+--------------------------------------+
| Variable_name | Value |
+----------------------------------+--------------------------------------+
| wsrep_local_state_uuid | 974b714c-0e15-11eb-8744-d3b35ad1bc1b |
| wsrep_protocol_version | 9 |
| wsrep_last_applied | 3 |
| wsrep_last_committed | 3 |
| wsrep_replicated | 3 |
| wsrep_replicated_bytes | 760 |
| wsrep_repl_keys | 3 |
| wsrep_repl_keys_bytes | 96 |
| wsrep_repl_data_bytes | 463 |
| wsrep_repl_other_bytes | 0 |
| wsrep_received | 2 |
| wsrep_received_bytes | 150 |
| wsrep_local_commits | 0 |
| wsrep_local_cert_failures | 0 |
| wsrep_local_replays | 0 |
| wsrep_local_send_queue | 0 |
| wsrep_local_send_queue_max | 1 |
| wsrep_local_send_queue_min | 0 |
| wsrep_local_send_queue_avg | 0.000000 |
| wsrep_local_recv_queue | 0 |
| wsrep_local_recv_queue_max | 2 |
| wsrep_local_recv_queue_min | 0 |
| wsrep_local_recv_queue_avg | 0.500000 |
| wsrep_local_cached_downto | 1 |
| wsrep_flow_control_paused_ns | 0 |
| wsrep_flow_control_paused | 0.000000 |
| wsrep_flow_control_sent | 0 |
| wsrep_flow_control_recv | 0 |
| wsrep_flow_control_interval | [ 100, 100 ] |
| wsrep_flow_control_interval_low | 100 |
| wsrep_flow_control_interval_high | 100 |
| wsrep_flow_control_status | OFF |
| wsrep_cert_deps_distance | 1.000000 |
| wsrep_apply_oooe | 0.000000 |
| wsrep_apply_oool | 0.000000 |
| wsrep_apply_window | 1.000000 |
| wsrep_commit_oooe | 0.000000 |
| wsrep_commit_oool | 0.000000 |
| wsrep_commit_window | 1.000000 |
| wsrep_local_state | 4 |
| wsrep_local_state_comment | Synced |
| wsrep_cert_index_size | 1 |
| wsrep_cert_bucket_count | 22 |
| wsrep_gcache_pool_size | 2200 |
| wsrep_causal_reads | 0 |
| wsrep_cert_interval | 0.000000 |
| wsrep_open_transactions | 0 |
| wsrep_open_connections | 0 |
| wsrep_ist_receive_status | |
| wsrep_ist_receive_seqno_start | 0 |
| wsrep_ist_receive_seqno_current | 0 |
| wsrep_ist_receive_seqno_end | 0 |
| wsrep_incoming_addresses | 10.0.0.7:3306 |
| wsrep_cluster_weight | 1 |
| wsrep_desync_count | 0 |
| wsrep_evs_delayed | |
| wsrep_evs_evict_list | |
| wsrep_evs_repl_latency | 0/0/0/0/0 |
| wsrep_evs_state | OPERATIONAL |
| wsrep_gcomm_uuid | 974acaaa-0e15-11eb-99c5-573de782c041 |
| wsrep_cluster_conf_id | 1 |
| wsrep_cluster_size | 1 |
| wsrep_cluster_state_uuid | 974b714c-0e15-11eb-8744-d3b35ad1bc1b |
| wsrep_cluster_status | Primary |
| wsrep_connected | ON |
| wsrep_local_bf_aborts | 0 |
| wsrep_local_index | 0 |
| wsrep_provider_name | Galera |
| wsrep_provider_vendor | Codership Oy <info@codership.com> |
| wsrep_provider_version | 3.45(ra60e019) |
| wsrep_ready | ON |
+----------------------------------+--------------------------------------+
71 rows in set (0.01 sec)

  (5) Start all the other nodes of the PXC cluster (shown on pxc2; pxc3 is started the same way)

[root@pxc2 ~]#systemctl start mysql    # a normal start joins the existing cluster (first join syncs state from a donor)
[root@pxc2 ~]#ss -nlt
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 *:38631 *:*
LISTEN 0 128 *:111 *:*
LISTEN 0 5 192.168.122.1:53 *:*
LISTEN 0 128 *:22 *:*
LISTEN 0 128 *:4567 *:*
LISTEN 0 128 127.0.0.1:631 *:*
LISTEN 0 100 127.0.0.1:25 *:*
LISTEN 0 128 127.0.0.1:6010 *:*
LISTEN 0 80 [::]:3306 [::]:*
LISTEN 0 128 [::]:111 [::]:*
LISTEN 0 128 [::]:22 [::]:*
LISTEN 0 128 [::1]:631 [::]:*
LISTEN 0 100 [::1]:25 [::]:*
LISTEN 0 128 [::]:44153 [::]:*
LISTEN 0 128 [::1]:6010 [::]:*

  (6) Check the cluster status to verify the cluster is working

[root@pxc1 ~]#mysql -uroot -pmagedu

mysql> SHOW VARIABLES LIKE 'wsrep_node_name';
+-----------------+--------------------+
| Variable_name | Value |
+-----------------+--------------------+
| wsrep_node_name | pxc-cluster-node-1 |
+-----------------+--------------------+
1 row in set (0.00 sec)

mysql> SHOW VARIABLES LIKE 'wsrep_node_address';
+--------------------+----------+
| Variable_name | Value |
+--------------------+----------+
| wsrep_node_address | 10.0.0.7 |
+--------------------+----------+
1 row in set (0.00 sec)

mysql> SHOW VARIABLES LIKE 'wsrep_on';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| wsrep_on | ON |
+---------------+-------+
1 row in set (0.01 sec)

mysql> SHOW STATUS LIKE 'wsrep_cluster_size';
+--------------------+-------+
| Variable_name | Value |
+--------------------+-------+
| wsrep_cluster_size | 3 |
+--------------------+-------+
1 row in set (0.00 sec)

mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| sys |
+--------------------+
4 rows in set (0.00 sec)

mysql> create database testdb1;
Query OK, 1 row affected (0.00 sec)

mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| sys |
| testdb1 |
+--------------------+
5 rows in set (0.00 sec)

[root@pxc2 ~]#mysql -uroot -pmagedu

mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| sys |
| testdb1 |
+--------------------+
5 rows in set (0.00 sec)

  (7) Add a new node to the PXC cluster

[root@pxc4 ~]#yum install Percona-XtraDB-Cluster-57 -y

[root@pxc4 ~]#vim /etc/percona-xtradb-cluster.conf.d/wsrep.cnf
[root@pxc4 ~]#grep -Ev "^#|^$" /etc/percona-xtradb-cluster.conf.d/wsrep.cnf
[mysqld]
wsrep_provider=/usr/lib64/galera3/libgalera_smm.so
wsrep_cluster_address=gcomm://10.0.0.7,10.0.0.17,10.0.0.27,10.0.0.37
binlog_format=ROW
default_storage_engine=InnoDB
wsrep_slave_threads= 8
wsrep_log_conflicts
innodb_autoinc_lock_mode=2
wsrep_node_address=10.0.0.37
wsrep_cluster_name=pxc-cluster
wsrep_node_name=pxc-cluster-node-4
pxc_strict_mode=ENFORCING
wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth="sstuser:s3cretPass"
[root@pxc4 ~]#systemctl start mysql
[root@pxc4 ~]#mysql -uroot -pmagedu

mysql> SHOW STATUS LIKE 'wsrep_cluster_size';
+--------------------+-------+
| Variable_name | Value |
+--------------------+-------+
| wsrep_cluster_size | 4 |
+--------------------+-------+
1 row in set (0.00 sec)

mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| sys |
| testdb1 |
+--------------------+
5 rows in set (0.00 sec)

[root@pxc1 ~]#vim /etc/percona-xtradb-cluster.conf.d/wsrep.cnf
wsrep_cluster_address=gcomm://10.0.0.7,10.0.0.17,10.0.0.27,10.0.0.37
[root@pxc2 ~]#vim /etc/percona-xtradb-cluster.conf.d/wsrep.cnf
[root@pxc3 ~]#vim /etc/percona-xtradb-cluster.conf.d/wsrep.cnf

Make the same wsrep_cluster_address change on pxc2 and pxc3; the member list is only read at startup, so the running nodes keep working and pick it up on their next restart.
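
If you prefer not to edit each file by hand, the same change can be scripted; a sketch:

[root@pxc2 ~]#sed -i 's#^wsrep_cluster_address=.*#wsrep_cluster_address=gcomm://10.0.0.7,10.0.0.17,10.0.0.27,10.0.0.37#' /etc/percona-xtradb-cluster.conf.d/wsrep.cnf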

  (8) Repair a failed node in the PXC cluster (a node that rejoins syncs the missed changes automatically, as shown below)

[root@pxc4 ~]#systemctl stop mysql

[root@pxc1 ~]#mysql -uroot -pmagedu

mysql> SHOW STATUS LIKE 'wsrep_cluster_size';
+--------------------+-------+
| Variable_name | Value |
+--------------------+-------+
| wsrep_cluster_size | 3 |
+--------------------+-------+
1 row in set (0.00 sec)

mysql> create database testdb2;
Query OK, 1 row affected (0.00 sec)

[root@pxc4 ~]#systemctl start mysql
[root@pxc4 ~]#mysql -uroot -pmagedu

mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| sys |
| testdb1 |
| testdb2 |
+--------------------+
6 rows in set (0.00 sec)

mysql> SHOW STATUS LIKE 'wsrep_cluster_size';
+--------------------+-------+
| Variable_name | Value |
+--------------------+-------+
| wsrep_cluster_size | 4 |
+--------------------+-------+
1 row in set (0.01 sec)

5. Deploy binary MySQL 8 with Ansible

[root@ansible ~]#ls -l /data/ansible/files/mysql-8.0.19-linux-glibc2.12-x86_64.tar.xz
-rw-r--r-- 1 root root 485074552 Oct 14 19:37 /data/ansible/files/mysql-8.0.19-linux-glibc2.12-x86_64.tar.xz
[root@ansible ~]#cat /data/ansible/files/my.cnf
[mysqld]
server-id=1
log-bin
datadir=/data/mysql
socket=/data/mysql/mysql.sock                         
log-error=/data/mysql/mysql.log
pid-file=/data/mysql/mysql.pid
[client]
socket=/data/mysql/mysql.sock
[root@ansible ~]#cat /data/ansible/install_mysql.yml
---
- hosts: 10.0.0.17
  remote_user: root
  tasks:
    - name: install packages
      yum: name=libaio,numactl-libs
    - name: create mysql group
      group: name=mysql gid=306
    - name: create mysql user
      user: name=mysql uid=306 group=mysql shell=/sbin/nologin system=yes create_home=no home=/data/mysql
    - name: copy tarball to remote host and unpack it
      unarchive: src=/data/ansible/files/mysql-8.0.19-linux-glibc2.12-x86_64.tar.xz dest=/usr/local/ owner=root group=root
    - name: link /usr/local/mysql to the unpacked directory
      file: src=/usr/local/mysql-8.0.19-linux-glibc2.12-x86_64 dest=/usr/local/mysql state=link
    - name: initialize the data directory
      shell: chdir=/usr/local/mysql/bin mysqld --initialize --user=mysql --datadir=/data/mysql
    - name: config my.cnf
      copy: src=/data/ansible/files/my.cnf dest=/etc/my.cnf
    - name: service script
      shell: /bin/cp /usr/local/mysql/support-files/mysql.server /etc/init.d/mysqld
    - name: enable service
      shell: /etc/init.d/mysqld start;chkconfig --add mysqld;chkconfig mysqld on
    - name: change password
      shell: chdir=/usr/local/mysql/bin mysqladmin -uroot -p`awk '/temporary password/{print $NF}' /data/mysql/mysql.log` password magedu
    - name: PATH variable
      copy: content='PATH=/usr/local/mysql/bin:$PATH' dest=/etc/profile.d/mysql.sh
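
Run the playbook from the control node (10.0.0.17 is assumed to be in the Ansible inventory):

[root@ansible ~]#ansible-playbook /data/ansible/install_mysql.yml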

 
