Failover and Read/Write Splitting for MySQL Group Replication with ProxySQL (Part 2)

In the previous article, "Failover and Read/Write Splitting for MySQL Group Replication with ProxySQL (Part 1)", we finished building the MGR + ProxySQL cluster and tested application-level failover through ProxySQL. Now let's continue with read/write splitting.


Master Wang Guowei famously described three realms of life:

The first realm: "Last night the west wind withered the green trees; alone I climbed the high tower, gazing down the road that stretches to the horizon."
The second realm: "My clothes hang ever looser, yet I have no regrets; for her I would gladly waste away."
The third realm: "A thousand times I searched for her in the crowd; then, turning my head, I found her standing where the lantern light was dim."

As a single-minded, mediocre programmer, I still haven't fully grasped those. But I do have some understanding of the three realms of database read/write splitting, so let's look at those instead.

The first realm: manual read/write splitting. Reads and writes are separated by IP and port; the application layer manually classifies each statement as a read or a write and sends it to the corresponding host.
The second realm: regex-based read/write splitting. A routing middleware inspects each SQL statement, matches it against regular expressions, and dispatches it to a host according to the match result.
The third realm: identifying the top SQL and dispatching the high-load statements to different hosts.


(I) The first realm: manual read/write splitting

Reads and writes are separated by IP and port: the application manually classifies each statement as a read or a write, then uses different connection settings to send it to the appropriate host. In ProxySQL, we implement this with two listener ports. The steps are as follows:

STEP1: Configure ProxySQL to listen on two ports, then restart ProxySQL

mysql -uadmin -padmin -h127.0.0.1 -P6032
mysql> SET mysql-interfaces='0.0.0.0:6401;0.0.0.0:6402';
-- save it on disk and restart proxysql
mysql> SAVE MYSQL VARIABLES TO DISK;
mysql> PROXYSQL RESTART;

STEP2: Configure routing rules that dispatch requests to different hostgroups by port

mysql> INSERT INTO mysql_query_rules (rule_id,active,proxy_port,destination_hostgroup,apply) VALUES (1,1,6401,1,1), (3,1,6402,3,1);
mysql> LOAD MYSQL QUERY RULES TO RUNTIME;
mysql> SAVE MYSQL QUERY RULES TO DISK;

With this in place, requests arriving on port 6401 are forwarded to hostgroup 1 (the writer group), and requests arriving on port 6402 are forwarded to hostgroup 3 (the reader group), which achieves read/write splitting. Whether a given statement connects through port 6401 or 6402, that is, whether it is treated as a write or a read, is decided manually by the developers.
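The client-side decision can be sketched in a few lines. This is a hypothetical illustration, not part of ProxySQL: the write-detection heuristic is my own, and only the two ports mirror the setup above.

```python
# Hypothetical client-side sketch of the "first realm": the application,
# not ProxySQL, classifies each statement and picks the listener port.
# Ports 6401 (writes) and 6402 (reads) follow the setup above; the
# classification heuristic below is illustrative only.

WRITE_PREFIXES = ("insert", "update", "delete", "replace",
                  "create", "alter", "drop", "truncate")

def pick_port(sql: str) -> int:
    """Return the ProxySQL listener port this statement should use."""
    s = sql.strip().rstrip(";").strip().lower()
    if s.startswith(WRITE_PREFIXES):
        return 6401                  # write port -> writer hostgroup
    if s.startswith("select") and s.endswith("for update"):
        return 6401                  # locking read also goes to the writer
    return 6402                      # everything else -> reader hostgroup

print(pick_port("INSERT INTO testdb.test01 VALUES (3, 'c')"))   # 6401
print(pick_port("SELECT * FROM testdb.test01;"))                # 6402
print(pick_port("SELECT * FROM testdb.test01 FOR UPDATE"))      # 6401
```

The application would then open its connection against the chosen port; ProxySQL does no SQL inspection at all in this scheme.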


(II) The second realm: regex-based read/write splitting

A routing middleware inspects each SQL statement, matches it against regular expressions, and dispatches it to a host according to the match result. The procedure is as follows:

STEP1: To avoid interfering with the test, delete the previously defined rules

DELETE FROM mysql_query_rules;

STEP2: Define the new read/write-splitting rules

INSERT INTO mysql_query_rules (rule_id,active,match_digest,destination_hostgroup,apply) VALUES(1,1,'^SELECT.*FOR UPDATE$',1,1);
INSERT INTO mysql_query_rules (rule_id,active,match_digest,destination_hostgroup,apply) VALUES(2,1,'^SELECT',3,1);

LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;

ProxySQL's routing rules are now:

  • SELECT ... FOR UPDATE statements are routed to hostgroup 1 (the writer group);
  • all other SELECT statements are routed to hostgroup 3 (the reader group);
  • everything else goes to the default hostgroup, i.e. hostgroup 1.
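This evaluation can be modeled in a few lines. A simplified sketch, not ProxySQL code: it only captures that rules are tried in rule_id order, that matching is case-insensitive by default (re_modifiers = 'CASELESS'), and that an unmatched query falls through to the user's default hostgroup.

```python
# Minimal model of how ProxySQL applies the two rules above. Hostgroup 1
# is the writer and hostgroup 3 the reader, as in this cluster's setup.
import re

RULES = [
    (1, r"^SELECT.*FOR UPDATE$", 1),  # (rule_id, match_digest, destination_hostgroup)
    (2, r"^SELECT", 3),
]
DEFAULT_HOSTGROUP = 1  # usera's default hostgroup (the writer)

def route(sql: str) -> int:
    """Return the hostgroup the first matching rule points to."""
    for _rule_id, pattern, hostgroup in RULES:
        if re.search(pattern, sql.strip(), re.IGNORECASE):
            return hostgroup
    return DEFAULT_HOSTGROUP

print(route("SELECT * from testdb.test01 FOR UPDATE"))   # 1
print(route("select * from testdb.test01"))              # 3
print(route("insert into testdb.test01 values(3,'c')"))  # 1
```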



Let's now test the regex-based approach. The whole process is as follows:

(1) Adjust the reader/writer hostgroups before testing

-- Per the hostgroup rules there is at most 1 writer node, and the remaining writers are placed in the backup writer group. Currently 192.168.10.13 is the writer, the other 2 nodes are backup writers, and there are no reader nodes
mysql> select * from mysql_group_replication_hostgroups;
+------------------+-------------------------+------------------+-------------------+--------+-------------+-----------------------+-------------------------+---------+
| writer_hostgroup | backup_writer_hostgroup | reader_hostgroup | offline_hostgroup | active | max_writers | writer_is_also_reader | max_transactions_behind | comment |
+------------------+-------------------------+------------------+-------------------+--------+-------------+-----------------------+-------------------------+---------+
| 1                | 2                       | 3                | 4                 | 1      | 1           | 0                     | 100                     | NULL    |
+------------------+-------------------------+------------------+-------------------+--------+-------------+-----------------------+-------------------------+---------+
1 row in set (0.00 sec)

mysql> select * from runtime_mysql_servers;
+--------------+---------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| hostgroup_id | hostname      | port | gtid_port | status | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
+--------------+---------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| 1            | 192.168.10.13 | 3306 | 0         | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 192.168.10.12 | 3306 | 0         | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 192.168.10.11 | 3306 | 0         | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
+--------------+---------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
3 rows in set (0.01 sec)


-- To split reads from writes we need reader nodes. Setting writer_is_also_reader = 2 makes the nodes in backup_writer_hostgroup act as both backup writers and readers
mysql> update  mysql_group_replication_hostgroups set writer_is_also_reader = 2 ;
Query OK, 1 row affected (0.00 sec)

mysql> select * from mysql_group_replication_hostgroups;
+------------------+-------------------------+------------------+-------------------+--------+-------------+-----------------------+-------------------------+---------+
| writer_hostgroup | backup_writer_hostgroup | reader_hostgroup | offline_hostgroup | active | max_writers | writer_is_also_reader | max_transactions_behind | comment |
+------------------+-------------------------+------------------+-------------------+--------+-------------+-----------------------+-------------------------+---------+
| 1                | 2                       | 3                | 4                 | 1      | 1           | 2                     | 100                     | NULL    |
+------------------+-------------------------+------------------+-------------------+--------+-------------+-----------------------+-------------------------+---------+
1 row in set (0.00 sec)

mysql> 
mysql> select * from runtime_mysql_group_replication_hostgroups;
+------------------+-------------------------+------------------+-------------------+--------+-------------+-----------------------+-------------------------+---------+
| writer_hostgroup | backup_writer_hostgroup | reader_hostgroup | offline_hostgroup | active | max_writers | writer_is_also_reader | max_transactions_behind | comment |
+------------------+-------------------------+------------------+-------------------+--------+-------------+-----------------------+-------------------------+---------+
| 1                | 2                       | 3                | 4                 | 1      | 1           | 0                     | 100                     | NULL    |
+------------------+-------------------------+------------------+-------------------+--------+-------------+-----------------------+-------------------------+---------+
1 row in set (0.00 sec)


-- Load the mysql servers configuration to runtime and persist it to disk
mysql> load mysql servers to runtime;
Query OK, 0 rows affected (0.01 sec)

mysql> save mysql servers to disk;
Query OK, 0 rows affected (0.03 sec)


mysql> select * from runtime_mysql_group_replication_hostgroups;
+------------------+-------------------------+------------------+-------------------+--------+-------------+-----------------------+-------------------------+---------+
| writer_hostgroup | backup_writer_hostgroup | reader_hostgroup | offline_hostgroup | active | max_writers | writer_is_also_reader | max_transactions_behind | comment |
+------------------+-------------------------+------------------+-------------------+--------+-------------+-----------------------+-------------------------+---------+
| 1                | 2                       | 3                | 4                 | 1      | 1           | 2                     | 100                     | NULL    |
+------------------+-------------------------+------------------+-------------------+--------+-------------+-----------------------+-------------------------+---------+
1 row in set (0.01 sec)


-- The final hostgroup layout of the mysql servers is as follows
mysql> select * from runtime_mysql_servers;
+--------------+---------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| hostgroup_id | hostname      | port | gtid_port | status | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
+--------------+---------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| 1            | 192.168.10.13 | 3306 | 0         | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 3            | 192.168.10.12 | 3306 | 0         | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 3            | 192.168.10.11 | 3306 | 0         | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 192.168.10.11 | 3306 | 0         | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 192.168.10.12 | 3306 | 0         | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
+--------------+---------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
5 rows in set (0.00 sec)


(2) Load the rules

-- To avoid interfering with the test, delete the previous rules first
DELETE FROM mysql_query_rules;

-- Load the new rules
INSERT INTO mysql_query_rules (rule_id,active,match_digest,destination_hostgroup,apply) VALUES(1,1,'^SELECT.*FOR UPDATE$',1,1);
INSERT INTO mysql_query_rules (rule_id,active,match_digest,destination_hostgroup,apply) VALUES(2,1,'^SELECT',3,1);

-- Activate and persist the rules
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;


(3) Verify that the rules take effect

Test SQL statements:

mysql -uusera -p123456 -h192.168.10.10 -P6033 

-- write test
insert into testdb.test01 values(3,'c');

-- read test
SELECT * from testdb.test01;

-- regex case-sensitivity test
select * from testdb.test01;

-- select for update test
SELECT * from testdb.test01 FOR UPDATE;
select * from testdb.test01 FOR UPDATE;

exit;

To see which host ProxySQL dispatched each statement to, query the statistics tables stats_mysql_query_digest and stats_mysql_query_digest_reset. The two tables have the same structure and content, but querying stats_mysql_query_digest_reset automatically resets the internal counters to zero, i.e. after a query against stats_mysql_query_digest_reset both tables start from empty. Here we query stats_mysql_query_digest_reset directly to check the test above:

mysql> select hostgroup,schemaname,username,digest_text,count_star from   stats_mysql_query_digest_reset;
+-----------+--------------------+----------+----------------------------------------+------------+
| hostgroup | schemaname         | username | digest_text                            | count_star |
+-----------+--------------------+----------+----------------------------------------+------------+
| 1         | information_schema | usera    | SELECT * from testdb.test01 FOR UPDATE | 1          |
| 3         | information_schema | usera    | select * from testdb.test01            | 1          |
| 3         | information_schema | usera    | SELECT * from testdb.test01            | 1          |
| 1         | information_schema | usera    | select * from testdb.test01 FOR UPDATE | 1          |
| 1         | information_schema | usera    | insert into testdb.test01 values(?,?)  | 1          |
| 1         | information_schema | usera    | select @@version_comment limit ?       | 1          |
+-----------+--------------------+----------+----------------------------------------+------------+
6 rows in set (0.00 sec)

As the output shows, the regex rules are case-insensitive, and each statement was dispatched to the host its matching rule points to.

Personally I find regex-based routing already quite smart, yet the ProxySQL authors advise against it: we cannot know in advance how expensive each class of statement is, so this approach can distribute the load unevenly.

Next, let's look at the approach ProxySQL recommends: read/write splitting based on regular expressions plus digests.


(III) The third realm: regex plus digest read/write splitting

The configuration procedure ProxySQL recommends for effective read/write splitting is:
(1) configure ProxySQL to send all traffic, reads and writes alike, to a single MySQL master node;
(2) check stats_mysql_query_digest for the most expensive SELECT statements;
(3) decide which of those expensive statements should be moved to the reader nodes;
(4) configure mysql_query_rules (create rules) to send only those expensive SELECT statements to the readers.

In short, the idea is very simple: send the read nodes only the SQL you deliberately choose to send, not every SELECT statement.

Let's walk through the whole process:

STEP1: Remove all rules so that every statement runs on the default hostgroup

mysql> delete from mysql_query_rules;
Query OK, 2 rows affected (0.00 sec)

mysql> LOAD MYSQL QUERY RULES TO RUNTIME;
Query OK, 0 rows affected (0.00 sec)

mysql> SAVE MYSQL QUERY RULES TO DISK;
Query OK, 0 rows affected (0.01 sec)


STEP2: Find the most expensive SQL

Assuming all reads and writes have been running on the same machine for long enough that the read/write mix is representative, we can mine stats_mysql_query_digest for the most expensive SQL from several angles.

(1) The 5 SELECT statements with the highest total execution time

mysql> SELECT digest,SUBSTR(digest_text,0,25),count_star,sum_time FROM stats_mysql_query_digest WHERE digest_text LIKE 'SELECT%' ORDER BY sum_time DESC LIMIT 5;
+--------------------+--------------------------+------------+----------+
| digest             | SUBSTR(digest_text,0,25) | count_star | sum_time |
+--------------------+--------------------------+------------+----------+
| 0xBF001A0C13781C1D | SELECT c FROM sbtest1 WH | 9594       | 9837782  |
| 0xC4771449056AB3AC | SELECT c FROM sbtest14 W | 9984       | 9756595  |
| 0xD84E4E04982951C1 | SELECT c FROM sbtest9 WH | 9504       | 9596185  |
| 0x9B090963F41AD781 | SELECT c FROM sbtest10 W | 9664       | 9530433  |
| 0x9AF59B998A3688ED | SELECT c FROM sbtest2 WH | 9744       | 9513180  |
+--------------------+--------------------------+------------+----------+
5 rows in set (0.00 sec)


(2) The 5 SELECT statements executed most often

mysql> SELECT digest,SUBSTR(digest_text,0,25),count_star,sum_time FROM stats_mysql_query_digest WHERE digest_text LIKE 'SELECT%' ORDER BY count_star DESC LIMIT 5;
+--------------------+--------------------------+------------+----------+
| digest             | SUBSTR(digest_text,0,25) | count_star | sum_time |
+--------------------+--------------------------+------------+----------+
| 0xC4771449056AB3AC | SELECT c FROM sbtest14 W | 9984       | 9756595  |
| 0x9AF59B998A3688ED | SELECT c FROM sbtest2 WH | 9744       | 9513180  |
| 0x9B090963F41AD781 | SELECT c FROM sbtest10 W | 9664       | 9530433  |
| 0x03744DC190BC72C7 | SELECT c FROM sbtest5 WH | 9604       | 9343514  |
| 0x1E7B7AC5611F30C2 | SELECT c FROM sbtest6 WH | 9594       | 9245838  |
+--------------------+--------------------------+------------+----------+


(3) The 5 SELECT statements with the highest average execution time

mysql> SELECT digest,SUBSTR(digest_text,0,25),count_star,sum_time, sum_time/count_star as avg_time FROM stats_mysql_query_digest WHERE digest_text LIKE 'SELECT%' ORDER BY avg_time DESC LIMIT 5;
+--------------------+--------------------------+------------+----------+----------+
| digest             | SUBSTR(digest_text,0,25) | count_star | sum_time | avg_time |
+--------------------+--------------------------+------------+----------+----------+
| 0x0DCAF47B4A363A7A | SELECT * from testdb.tes | 1          | 11400    | 11400    |
| 0x2050E81DB9C7038E | select * from testdb.tes | 1          | 10817    | 10817    |
| 0xF340A73F6EDA5B20 | SELECT c FROM sbtest11 W | 964        | 1726994  | 1791     |
| 0xC867A28C90150A81 | SELECT DISTINCT c FROM s | 929        | 1282699  | 1380     |
| 0x283AA9863F85EFC8 | SELECT DISTINCT c FROM s | 963        | 1318362  | 1369     |
+--------------------+--------------------------+------------+----------+----------+


(4) The 5 SELECT statements with the highest average execution time, restricted to those averaging more than 1s, together with each statement's share of the total execution time of all SQL

SELECT  digest,SUBSTR(digest_text,0,25),count_star,sum_time,sum_time/count_star as avg_time,round(sum_time/1000000*100/(SELECT sum(sum_time/1000000) FROM stats_mysql_query_digest ),3) as pct 
FROM    stats_mysql_query_digest 
WHERE   digest_text LIKE 'SELECT%' 
AND     sum_time/count_star > 1000000
ORDER BY avg_time DESC LIMIT 5;

Note: this query was tested against data generated by sysbench. One statement had such a large sum_time that SUM(sum_time) returned NULL, so as a workaround the query converts sum_time to seconds (sum_time/1000000) before aggregating.
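The arithmetic behind the pct column can be checked offline. A sketch on made-up sample rows (not real stats_mysql_query_digest data), using the same microseconds-to-seconds conversion before summing:

```python
# Reproduces the pct calculation from the query above on sample data.
# sum_time is in microseconds; converting to seconds before summing is
# the same workaround used above to avoid the oversized-sum problem.
rows = [
    # (digest, count_star, sum_time in microseconds) -- illustrative values
    ("0xBF001A0C13781C1D", 9594, 9_837_782),
    ("0x0DCAF47B4A363A7A", 1, 11_400),
    ("0xF340A73F6EDA5B20", 964, 1_726_994),
]

total_s = sum(t / 1_000_000 for _, _, t in rows)   # grand total in seconds

for digest, count_star, sum_time in rows:
    avg_time = sum_time / count_star               # microseconds per execution
    pct = round(sum_time / 1_000_000 * 100 / total_s, 3)
    print(digest, round(avg_time), pct)
```

The per-row percentages always sum to 100 across all digests, which is a quick sanity check on the query.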


STEP3: Route by combining digests and regular expressions

First observe the traffic distribution before any routing rules are in place. As the output shows, all traffic goes to hostgroup1:

mysql>  select hostgroup,schemaname,username,digest_text,count_star from   stats_mysql_query_digest_reset;
+-----------+--------------------+----------+---------------------------------------------------------------------+------------+
| hostgroup | schemaname         | username | digest_text                                                         | count_star |
+-----------+--------------------+----------+---------------------------------------------------------------------+------------+
| 1         | information_schema | usera    | SET PROFILING = ?                                                   | 1          |
| 1         | information_schema | usera    | SHOW DATABASES                                                      | 3          |
| 1         | information_schema | usera    | SHOW VARIABLES LIKE ?;                                              | 2          |
| 1         | information_schema | usera    | SET NAMES utf8mb4                                                   | 3          |
| 1         | tssysbench         | usera    | INSERT INTO sbtest15 (id, k, c, pad) VALUES (?, ?, ?, ?)            | 1285       |
| 1         | tssysbench         | usera    | INSERT INTO sbtest14 (id, k, c, pad) VALUES (?, ?, ?, ?)            | 1309       |
| 1         | tssysbench         | usera    | INSERT INTO sbtest13 (id, k, c, pad) VALUES (?, ?, ?, ?)            | 1303       |
| 1         | tssysbench         | usera    | INSERT INTO sbtest12 (id, k, c, pad) VALUES (?, ?, ?, ?)            | 1240       |
| 1         | tssysbench         | usera    | UPDATE sbtest3 SET k=k+? WHERE id=?                                 | 1280       |
| 1         | tssysbench         | usera    | UPDATE sbtest2 SET k=k+? WHERE id=?                                 | 1280       |
| 1         | tssysbench         | usera    | UPDATE sbtest1 SET k=k+? WHERE id=?                                 | 1219       |
| 1         | tssysbench         | usera    | SELECT DISTINCT c FROM sbtest15 WHERE id BETWEEN ? AND ? ORDER BY c | 1207       |
| 1         | tssysbench         | usera    | SELECT DISTINCT c FROM sbtest14 WHERE id BETWEEN ? AND ? ORDER BY c | 1262       |
| 1         | tssysbench         | usera    | SELECT DISTINCT c FROM sbtest11 WHERE id BETWEEN ? AND ? ORDER BY c | 1227       |


Insert the routing rules:

-- Rules keyed on digest, each matching one specific SQL statement
INSERT INTO mysql_query_rules (rule_id,active,digest,destination_hostgroup,apply) VALUES(1,1,'0x0DCAF47B4A363A7A',3,1);
INSERT INTO mysql_query_rules (rule_id,active,digest,destination_hostgroup,apply) VALUES(2,1,'0x63F9BD89D906209B',3,1);
INSERT INTO mysql_query_rules (rule_id,active,digest,destination_hostgroup,apply) VALUES(3,1,'0x10D8D9CC551E199B',3,1);
INSERT INTO mysql_query_rules (rule_id,active,digest,destination_hostgroup,apply) VALUES(4,1,'0xC867A28C90150A81',3,1);
INSERT INTO mysql_query_rules (rule_id,active,digest,destination_hostgroup,apply) VALUES(5,1,'0x283AA9863F85EFC8',3,1);
INSERT INTO mysql_query_rules (rule_id,active,digest,destination_hostgroup,apply) VALUES(6,1,'0x16BD798E66615299',3,1);

-- A regex rule matching every statement that starts with SELECT COUNT(*)
INSERT INTO mysql_query_rules (rule_id,active,match_digest,destination_hostgroup,apply) VALUES(7,1,'^SELECT COUNT\(\*\)',3,1);


-- Activate and persist the rules
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;


STEP4: Run sysbench again and re-check the traffic distribution. As the output shows, the statements matching the routing rules have moved to hostgroup3.

mysql> select hostgroup,schemaname,username,digest_text,count_star from   stats_mysql_query_digest_reset;
+-----------+------------+----------+---------------------------------------------------------------------+------------+
| hostgroup | schemaname | username | digest_text                                                         | count_star |
+-----------+------------+----------+---------------------------------------------------------------------+------------+
| 1         | tssysbench | usera    | UPDATE sbtest3 SET k=k+? WHERE id=?                                 | 863        |
| 1         | tssysbench | usera    | SELECT DISTINCT c FROM sbtest14 WHERE id BETWEEN ? AND ? ORDER BY c | 841        |
| 3         | tssysbench | usera    | SELECT DISTINCT c FROM sbtest13 WHERE id BETWEEN ? AND ? ORDER BY c | 765        |
| 1         | tssysbench | usera    | SELECT DISTINCT c FROM sbtest12 WHERE id BETWEEN ? AND ? ORDER BY c | 837        |
| 3         | tssysbench | usera    | SELECT DISTINCT c FROM sbtest11 WHERE id BETWEEN ? AND ? ORDER BY c | 813        |
| 1         | tssysbench | usera    | SELECT DISTINCT c FROM sbtest10 WHERE id BETWEEN ? AND ? ORDER BY c | 861        |
| 1         | tssysbench | usera    | SELECT DISTINCT c FROM sbtest9 WHERE id BETWEEN ? AND ? ORDER BY c  | 835        |
| 3         | tssysbench | usera    | SELECT DISTINCT c FROM sbtest8 WHERE id BETWEEN ? AND ? ORDER BY c  | 823        |
| 1         | tssysbench | usera    | SELECT DISTINCT c FROM sbtest6 WHERE id BETWEEN ? AND ? ORDER BY c  | 834        |
| 1         | tssysbench | usera    | UPDATE sbtest5 SET c=? WHERE id=?                                   | 870        |
| 3         | tssysbench | usera    | SELECT DISTINCT c FROM sbtest4 WHERE id BETWEEN ? AND ? ORDER BY c  | 802        |
| 1         | tssysbench | usera    | UPDATE sbtest1 SET c=? WHERE id=?                                   | 835        |
| 1         | tssysbench | usera    | SELECT DISTINCT c FROM sbtest3 WHERE id BETWEEN ? AND ? ORDER BY c  | 838        |
| 1         | tssysbench | usera    | SELECT DISTINCT c FROM sbtest2 WHERE id BETWEEN ? AND ? ORDER BY c  | 885

At this point, traffic is being distributed according to workload.





================================================================================================================

Appendix 1: the read/write-splitting routing rule table explained

The routing rules are stored in the mysql_query_rules table, whose definition is:

CREATE TABLE mysql_query_rules (
    rule_id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
    active INT CHECK (active IN (0,1)) NOT NULL DEFAULT 0,
    username VARCHAR,
    schemaname VARCHAR,
    flagIN INT CHECK (flagIN >= 0) NOT NULL DEFAULT 0,
    client_addr VARCHAR,
    proxy_addr VARCHAR,
    proxy_port INT,
    digest VARCHAR,
    match_digest VARCHAR,
    match_pattern VARCHAR,
    negate_match_pattern INT CHECK (negate_match_pattern IN (0,1)) NOT NULL DEFAULT 0,
    re_modifiers VARCHAR DEFAULT 'CASELESS',
    flagOUT INT CHECK (flagOUT >= 0),
    replace_pattern VARCHAR CHECK(CASE WHEN replace_pattern IS NULL THEN 1 WHEN replace_pattern IS NOT NULL AND match_pattern IS NOT NULL THEN 1 ELSE 0 END),
    destination_hostgroup INT DEFAULT NULL,
    cache_ttl INT CHECK(cache_ttl > 0),
    cache_empty_result INT CHECK (cache_empty_result IN (0,1)) DEFAULT NULL,
    reconnect INT CHECK (reconnect IN (0,1)) DEFAULT NULL,
    timeout INT UNSIGNED,
    retries INT CHECK (retries>=0 AND retries <=1000),
    delay INT UNSIGNED,
    next_query_flagIN INT UNSIGNED,
    mirror_flagOUT INT UNSIGNED,
    mirror_hostgroup INT UNSIGNED,
    error_msg VARCHAR,
    OK_msg VARCHAR,
    sticky_conn INT CHECK (sticky_conn IN (0,1)),
    multiplex INT CHECK (multiplex IN (0,1,2)),
    gtid_from_hostgroup INT UNSIGNED,
    log INT CHECK (log IN (0,1)),
    apply INT CHECK(apply IN (0,1)) NOT NULL DEFAULT 0,
    comment VARCHAR)

The important columns are:

  • rule_id: the rule's id and primary key (unique, non-null); rules are matched in ascending rule_id order;
  • active: whether the rule is enabled; 1 means enabled;
  • username: match traffic from a specific user;
  • client_addr: match traffic from a specific client;
  • proxy_addr: match incoming traffic on a specific local IP;
  • proxy_port: match incoming traffic on a specific local port (see the port-based read/write splitting scheme above);
  • digest: match queries with a specific digest; every distinct SQL text produces a unique digest hash (similar to Oracle's sql_id), and matching is done on that hash;
  • match_digest: match the query digest text against a regular expression;
  • match_pattern: match the query text against a regular expression;
  • destination_hostgroup: route matched queries to this hostgroup, unless there is an open transaction and the logged-in user has the transaction_persistent flag set to 1 (see the mysql_users table);
  • cache_ttl: how long the query result is kept in the cache (in milliseconds);
  • timeout: the maximum time, in milliseconds, that a matched or rewritten query may run; a query exceeding this threshold is killed automatically; if unset, the global variable mysql-default_query_timeout applies;
  • retries: how many times to re-execute a query after a failure is detected during execution;
  • apply: if set to 1, no further query rules are evaluated after this one matches.
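To tie these columns together, here is an illustrative model (not ProxySQL source code) of how a rule chain is evaluated: ascending rule_id order, an exact hash comparison for digest versus a caseless regex for match_digest, and apply=1 stopping the chain. The rule values mirror the STEP3 rules above.

```python
# Illustrative model of mysql_query_rules matching. Only the columns
# discussed above are modeled; ProxySQL's real evaluation has many more.
import re

RULES = [
    {"rule_id": 1, "digest": "0x0DCAF47B4A363A7A", "hostgroup": 3, "apply": 1},
    {"rule_id": 7, "match_digest": r"^SELECT COUNT\(\*\)", "hostgroup": 3, "apply": 1},
]
DEFAULT_HOSTGROUP = 1  # usera's default hostgroup (the writer)

def route(digest: str, digest_text: str) -> int:
    hostgroup = DEFAULT_HOSTGROUP
    for rule in sorted(RULES, key=lambda r: r["rule_id"]):
        hit = (rule.get("digest") == digest or
               ("match_digest" in rule and
                re.search(rule["match_digest"], digest_text, re.IGNORECASE)))
        if hit:
            hostgroup = rule["hostgroup"]
            if rule["apply"]:
                break          # apply=1: stop evaluating further rules
    return hostgroup

print(route("0x0DCAF47B4A363A7A", "SELECT * from testdb.test01"))  # 3
print(route("0xDEADBEEF", "select count(*) from sbtest1"))         # 3
print(route("0xDEADBEEF", "UPDATE sbtest1 SET k=k+1 WHERE id=1"))  # 1
```

With apply=0 a matched rule would instead update the candidate hostgroup and let later rules keep matching, which is how rule chains are built in practice.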


Appendix 2: the sysbench commands used in this experiment

-- prepare phase
sysbench /usr/share/sysbench/oltp_read_write.lua  --mysql-host=192.168.10.10  --mysql-port=6033 --mysql-user=usera  --mysql-password='123456' --mysql-db=tssysbench --db-driver=mysql  --tables=15  --table-size=50000  --report-interval=10 --threads=4  --time=120 prepare

-- run phase
sysbench /usr/share/sysbench/oltp_read_write.lua  --mysql-host=192.168.10.10  --mysql-port=6033 --mysql-user=usera  --mysql-password='123456' --mysql-db=tssysbench --db-driver=mysql  --tables=15  --table-size=500000  --report-interval=10 --threads=4 --time=120 run

-- cleanup phase
sysbench /usr/share/sysbench/oltp_read_write.lua  --mysql-host=192.168.10.10  --mysql-port=6033 --mysql-user=usera  --mysql-password='123456' --mysql-db=tssysbench --db-driver=mysql  --tables=15  --table-size=500000  --report-interval=10 --threads=4 --time=120 cleanup


[The End]

posted @ 2020-08-02 15:37 gegeman