MySQL 5.6 Replication Features in Detail

——————————————————— Two approaches to master-slave replication ———————————————————
Upgrade test:
Master-slave: master ———> slave
The master can both read and write, but the slave must not accept writes; otherwise the slave's data
falls out of sync with the master's. Always remember: write operations on the slave are forbidden.

One master, many slaves: a slave can itself act as the master of other slaves and relay data down to them.
One slave, many masters? A Taobao patch scheme (multi-source replication).
Use case: aggregating data from several machine rooms into a central machine room.

Master ——— blackhole ——— slaves:
The blackhole server serves neither reads nor writes; it only relays the data it receives from the master on to the other slaves.
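A blackhole relay tier is usually built with the BLACKHOLE storage engine: tables store nothing, but replicated events are still written to the relay's own binlog for the downstream slaves. A minimal sketch of the relay's my.cnf (server-id and values are illustrative):

```ini
# my.cnf on the blackhole relay (illustrative values)
[mysqld]
server-id               = 2
log-bin                 = mysql-bin
log-slave-updates       = 1          # re-log replicated events so downstream slaves can read them
default-storage-engine  = blackhole  # new tables store no data; only the binlog is written
```

The tables on the relay must actually use the BLACKHOLE engine (created with it, or altered to it) for this to save disk and I/O.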

MySQL replication is asynchronous by default.

Semi-synchronous replication (shipped as a plugin):
In a one-master-many-slaves setup, as soon as any one slave reports that it has received the transaction, the master considers the sync done and proceeds with the next operation.

On timeout, the master automatically degrades to asynchronous replication.

Fully synchronous replication:
The opposite of semi-synchronous: every slave must respond that the sync is OK before the master continues.
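The semi-synchronous plugin ships with MySQL 5.5+ and is enabled per role; a sketch of the statements (run where the plugin files exist in the server's plugin directory):

```sql
-- on the master
INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';
SET GLOBAL rpl_semi_sync_master_enabled = 1;
SET GLOBAL rpl_semi_sync_master_timeout = 10000;  -- ms; on timeout the master falls back to async

-- on each slave
INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';
SET GLOBAL rpl_semi_sync_slave_enabled = 1;
STOP SLAVE IO_THREAD; START SLAVE IO_THREAD;      -- reconnect so the setting takes effect
```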

Replication filtering: replicate only one database, or only the specified databases/tables.
master:
binlog-do-db
binlog-ignore-db

slave:
replicate-do-db
replicate-ignore-db

replicate-do-table
replicate-ignore-table

replicate-wild-do-table (supports wildcards)
replicate-wild-ignore-table
Generally we apply the filtering on the slave node with the replicate-* options; the master's data is too critical to filter at the binlog level (events skipped by binlog-do-db / binlog-ignore-db are never logged and can never be recovered).
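For example, to replicate only a hellodb database on the slave side, the options above might be combined like this in the slave's my.cnf (the database name is illustrative):

```ini
[mysqld]
replicate-do-db         = hellodb
# the wildcard form also matches cross-database statements more safely:
replicate-wild-do-table = hellodb.%
```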

Replication with multiple masters?
Dual-master: both nodes must enable both the binary log and the relay log.
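Besides enabling both logs, a dual-master pair usually staggers auto-increment values so the two masters never generate the same key. A hedged sketch of the two my.cnf fragments (server-ids are illustrative):

```ini
# master A
[mysqld]
server-id = 1
log-bin   = mysql-bin
relay-log = relay-bin
auto_increment_increment = 2   # step by 2
auto_increment_offset    = 1   # A generates 1, 3, 5, ...

# master B: same, except
# server-id = 2
# auto_increment_offset = 2    # B generates 2, 4, 6, ...
```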

Case 1: deploying master-slave replication with mysql-server 5.6 based on GTID
Download mysql-server 5.6
Test environment:
master: 172.18.119.102 client.com
slave:  172.18.119.112 server.com
软件版本:mysql-5.6.35-linux-glibc2.5-x86_64.tar.gz

Configure the master:

·Extract the package into /usr/local
tar xf mysql-5.6.35-linux-glibc2.5-x86_64.tar.gz -C /usr/local
·Create a mysql symlink to the extracted directory
cd /usr/local && ln -sv mysql-5.6.35-linux-glibc2.5-x86_64 mysql
·Add a mysql system user to run the service
useradd -r mysql
·Enter the mysql directory and change ownership to root.mysql
cd /usr/local/mysql
chown -R root.mysql ./*
·Create the data directory and make it writable by the mysql user
mkdir -pv /mydata/data
chown -R mysql.mysql /mydata/data
·Create a my.cnf configuration file (the 5.6 tarball does not install one by default, so we create it ourselves)
touch /etc/my.cnf
cat /etc/my.cnf
[client]
#password= your_password
port= 3306
socket= /tmp/mysql.sock
## The MySQL server
[mysqld]
port = 3306
socket = /tmp/mysql.sock
skip-external-locking
key_buffer_size = 256M
max_allowed_packet = 1M
table_open_cache = 256
sort_buffer_size = 1M
read_buffer_size = 1M
read_rnd_buffer_size = 4M
myisam_sort_buffer_size = 64M
thread_cache_size = 8
query_cache_size = 16M
### DATA DIRECTORY
thread_concurrency = 2
datadir = /mydata/data

###BIN-LOG
log-bin = mysql-bin
binlog_format = row

#SERVER ID CONFIG
server-id= 102

###GTID CONFIG
log-slave-updates = true
gtid-mode = on
enforce-gtid-consistency = true
master-info-repository = TABLE
relay-log-info-repository = TABLE
sync-master-info = 1
slave-parallel-workers = 4
binlog-checksum = CRC32
master-verify-checksum = 1
slave-sql-verify-checksum = 1
binlog-rows-query-log_events = 1
report-port = 3306
report-host = server

·Install the mysql init script for automatic startup
cp support-files/mysql.server /etc/init.d/mysqld
chmod +x /etc/init.d/mysqld
chkconfig --add mysqld
·Initialize the database
scripts/mysql_install_db --user=mysql --datadir=/mydata/data

·Start the mysql service
service mysqld start
****Note: if startup fails with errors like the following
· [ERROR] InnoDB: ./ibdata1 can't be opened in read-write mode
2016-12-28 22:48:27 42833 [ERROR] InnoDB: The system tablespace must be writable!
2016-12-28 22:48:27 42833 [ERROR] Plugin 'InnoDB' init function returned error.
2016-12-28 22:48:27 42833 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
2016-12-28 22:48:27 42833 [ERROR] Unknown/unsupported storage engine: InnoDB
2016-12-28 22:48:27 42833 [ERROR] Aborting
· MySQL server PID file could not be found! [FAILED]
Starting MySQL.The server quit without updating PID file (/mydata/data/server.pid) [FAILED]
Solution:
Make sure the mysql user owns /mydata/data, delete everything under /mydata/data, then re-run the scripts/mysql_install_db initialization above and start the service again.
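Concretely, the recovery steps might look like this (paths as configured above; this error is usually caused by /mydata/data not being writable by the mysql user, so fix ownership first):

```shell
chown -R mysql.mysql /mydata/data      # make the datadir writable by the mysql user
rm -rf /mydata/data/*                  # remove the half-initialized files
cd /usr/local/mysql
scripts/mysql_install_db --user=mysql --datadir=/mydata/data
service mysqld start
```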

·Check the master's status
/usr/local/mysql/bin/mysql
mysql> show master status;
+------------------+----------+--------------+------------------+-------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| mysql-bin.000003 | 151 | | | |
+------------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)

·Check the master's uuid
show global variables like '%uuid%';
mysql> show global variables like '%uuid%';
+---------------+--------------------------------------+
| Variable_name | Value |
+---------------+--------------------------------------+
| server_uuid | 5536810e-cd0d-11e6-8bf8-00155d771a01 |
+---------------+--------------------------------------+
1 row in set (0.00 sec)
·Grant a replication account for the slaves to use
grant replication slave,replication client on *.* to 'repluser'@'%' identified by 'replpass';
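Before moving on, you can confirm the account and its privileges (user and host as created above):

```sql
FLUSH PRIVILEGES;
SHOW GRANTS FOR 'repluser'@'%';
-- expect a line like:
-- GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'repluser'@'%'
```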


Configure the slave:
·Install the mysql-server package the same way as on the master
·The main work is the slave's my.cnf; it matches the master's except that server-id must be unique (e.g. 112)
touch /etc/my.cnf
cat /etc/my.cnf
·Restart the mysqld service
service mysqld start
·Log in to the mysql server
/usr/local/mysql/bin/mysql
·Check this server's UUID (it must differ from the master's)
show global variables like '%uuid%';
mysql> show global variables like '%uuid%';
+---------------+--------------------------------------+
| Variable_name | Value |
+---------------+--------------------------------------+
| server_uuid | fa046884-cd0f-11e6-8c09-00155d771a00 |
+---------------+--------------------------------------+
1 row in set (0.00 sec)

·Configure replication on the slave (master_auto_position=1 enables GTID-based positioning)
change master to master_host='server',master_user='repluser',master_password='replpass',master_auto_position=1;
mysql> change master to master_host='server',master_user='repluser',master_password='replpass',master_auto_position=1;
Query OK, 0 rows affected, 2 warnings (3.02 sec)

mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)
·Check the slave thread status
show slave status\G
mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State:
Master_Host: server
Master_User: repluser
Master_Port: 3306
Connect_Retry: 60
Master_Log_File:
Read_Master_Log_Pos: 4
Relay_Log_File: clinet-relay-bin.000001
Relay_Log_Pos: 4
Relay_Master_Log_File:
Slave_IO_Running: No // the IO and SQL threads are not running yet
Slave_SQL_Running: No
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:

·Start the IO and SQL worker threads
start slave;
mysql> start slave;
Query OK, 0 rows affected, 1 warning (0.60 sec)
Check the thread status again
mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: server
Master_User: repluser
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000003
Read_Master_Log_Pos: 818
Relay_Log_File: clinet-relay-bin.000002
Relay_Log_Pos: 1028
Relay_Master_Log_File: mysql-bin.000003
Slave_IO_Running: Yes // the IO and SQL threads are now running
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
·Check how many replication threads the slave node has started
show processlist;
mysql> show processlist;
+----+-------------+-----------+------+---------+------+-----------------------------------------------------------------------------+------------------+
| Id | User | Host | db | Command | Time | State | Info |
+----+-------------+-----------+------+---------+------+-----------------------------------------------------------------------------+------------------+
| 2 | root | localhost | NULL | Query | 0 | init | show processlist |
| 3 | system user | | NULL | Connect | 270 | Waiting for master to send event | NULL |
| 4 | system user | | NULL | Connect | 267 | Slave has read all relay log; waiting for the slave I/O thread to update it | NULL |
| 5 | system user | | NULL | Connect | 807 | Waiting for an event from Coordinator | NULL |
| 6 | system user | | NULL | Connect | 270 | Waiting for an event from Coordinator | NULL |
| 7 | system user | | NULL | Connect | 270 | Waiting for an event from Coordinator | NULL |
| 8 | system user | | NULL | Connect | 814 | Waiting for an event from Coordinator | NULL |
+----+-------------+-----------+------+---------+------+-----------------------------------------------------------------------------+------------------+
7 rows in set (0.00 sec)
As the list shows, four parallel replication worker threads were started (slave-parallel-workers = 4), in addition to the IO and SQL/coordinator threads.
If we create several databases on the master, transactions against different databases can be applied on the slave in parallel (5.6 parallelizes per database).

Test the deployment
Create a database on the master:
create database hellodb;
Check the master's processlist:
mysql> show processlist;
+----+----------+--------------+------+------------------+------+-----------------------------------------------------------------------+------------------+
| Id | User | Host | db | Command | Time | State | Info |
+----+----------+--------------+------+------------------+------+-----------------------------------------------------------------------+------------------+
| 3 | root | localhost | NULL | Query | 0 | init | show processlist |
| 4 | repluser | client:51505 | NULL | Binlog Dump GTID | 556 | Master has sent all binlog to slave; waiting for binlog to be updated | NULL |
+----+----------+--------------+------+------------------+------+-----------------------------------------------------------------------+------------------+
2 rows in set (0.00 sec)
**From the query result we can see that the slave node "client" has connected from port 51505, and the master has already sent it the relevant binlog/GTID events.
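To confirm data actually flows, create something on the master and read it back on the slave; a minimal check (table and values are illustrative):

```sql
-- on the master
CREATE TABLE hellodb.t1 (id INT PRIMARY KEY, name VARCHAR(20));
INSERT INTO hellodb.t1 VALUES (1, 'tom');

-- on the slave, a moment later
SELECT * FROM hellodb.t1;   -- should return the row inserted on the master
```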
————————— mysql-server 5.1 master-slave setup —————————


The MySQL binary log
binlog-format = statement / row / mixed
statement: the binlog records the executed statements, e.g. update...
row: the binlog records the changed rows themselves
mixed: a hybrid format; the server chooses per statement

Which is better?
update t set age=age+1 where id=3; // a longish statement that changes little data: row format suits it
update t set salary=salary+100; // a short statement that touches tens of thousands of rows: statement format suits it
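binlog_format can be switched per session, which makes the trade-off easy to observe; an illustrative session (the employees table is assumed, and SET SESSION binlog_format requires the SUPER privilege):

```sql
SET SESSION binlog_format = 'STATEMENT';
UPDATE employees SET salary = salary + 100;       -- logged as one short statement event
SET SESSION binlog_format = 'ROW';
UPDATE employees SET age = age + 1 WHERE id = 3;  -- logged as one precise row image
SHOW BINLOG EVENTS IN 'mysql-bin.000001';         -- inspect what was actually written
```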

Case 1: configure MySQL master-slave replication
master: 172.18.119.112
slave: 172.18.119.102
·Steps:
1. Edit the mysql configuration file on the master (112).
vi /etc/my.cnf and add the following parameters:
server-id=112
log-bin=mysql-bin
binlog-format=mixed

2. Edit the mysql configuration file on the slave (102).
vi /etc/my.cnf and add the following parameters:
server-id=102
log-bin=mysql-bin // the binlog may be left off on a slave, but enabling it lets you recover data
relay-log=mysql-relay
3. Start the mysql service on both master and slave
service mysqld start
4. On the master, grant the replication user repl
grant replication client,replication slave on *.* to repl@'172.18.119.%' identified by 'repl';
5. On the slave, use the repl user to connect to the master
change master to
master_host='172.18.119.112',
master_user='repl',
master_password='repl',
master_log_file='mysql-bin.000001',
master_log_pos=348;
Start the IO and SQL threads:
start slave;
6. Check that the master and slave are connected, and test
show master status; (on the master)
show slave status\G (on the slave)
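A quick way to verify both ends from the shell (socket and client path as configured above; a sketch, not a monitoring script):

```shell
# on the master: current binlog file and position
mysql -e 'SHOW MASTER STATUS\G'

# on the slave: both threads must say Yes and the lag should be near 0
mysql -e 'SHOW SLAVE STATUS\G' | egrep 'Slave_(IO|SQL)_Running|Seconds_Behind_Master'
```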

—————————————————— Read/write splitting ——————————————————
Redundancy, off-site disaster recovery
Separating the read and write roles:
master: read/write
slave: read
ipvs, haproxy (load balancing)
Read/write splitting tools:
mysql-proxy
lua scripts
amoeba: Java, runs on the JVM; supports
MySQL
MongoDB

amoeba: developed in Java, configured via xml files
mysql-proxy: programmed with lua scripts

Case 1: use amoeba to proxy requests to the backend mysql-server nodes
Environment:
Three servers, laid out as follows:
master 172.18.119.102 server
slave  172.18.119.112 client
amoeba 172.18.119.27 amoeba

·Download JDK 1.6 and the amoeba program
jdk: http://58.215.134.110/file3.data.weipan.cn/77723460/24425cdb69c11e86d6b58757b29e5ba4e4977660?ip=1482942217,180.102.117.160&ssig=lY%2FdW2Jtur&Expires=1482942817&KID=sae,l30zoo1wmz&fn=jdk-6u45-linux-x64.bin&skiprd=2&se_ip_debug=180.102.117.160&corp=2&from=1221134&wsiphost=local
amoeba: http://dl.download.csdn.net/down10/20130628/22b8f4556aef0b5c1f65849803ab3035.gz?response-content-disposition=attachment%3Bfilename%3D%22amoeba-mysql-binary-2.2.0.tar.gz%22&OSSAccessKeyId=9q6nvzoJGowBj4q1&Expires=1482944466&Signature=JRPbLhedQl8v%2BWgss7yy3sTrpdk%3D
·Install the JDK and configure its environment variables
chmod +x jdk-6u45-linux-x64.bin
./jdk-6u45-linux-x64.bin
Rename the directory to jdk1.6
mv jdk-6u45-linux-x64 jdk1.6
Move the folder into /usr/local
mv jdk1.6 /usr/local
vim /etc/profile.d/java.sh
export JAVA_HOME=/usr/local/jdk1.6
export PATH=$JAVA_HOME/bin:$PATH
Source the script
. /etc/profile.d/java.sh

Check that the installation succeeded
java -version
[root@centos7 amoeba-mysql-2.2.0]# java -version
java version "1.6.0_45"
Java(TM) SE Runtime Environment (build 1.6.0_45-b06)
Java HotSpot(TM) 64-Bit Server VM (build 20.45-b01, mixed mode)

·On the master, grant the root user remote login privileges
grant all on *.* to 'root'@'%' identified by 'mypass';
The slave will automatically replicate this grant record:
mysql> show grants for 'root'@'%';
+--------------------------------------------------------------------------------------------------------------+
| Grants for root@% |
+--------------------------------------------------------------------------------------------------------------+
| GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY PASSWORD '*6C8989366EAF75BB670AD8EA7A7FC1176A95CEF4' |
+--------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)
mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)

·Install the amoeba binary program
Create the extraction directory
mkdir -pv /usr/local/amoeba-mysql-2.2.0
Extract; the program can be used directly
tar xf amoeba-mysql-binary-2.2.0.tar.gz -C /usr/local/amoeba-mysql-2.2.0
ln -sv /usr/local/amoeba-mysql-2.2.0 /usr/local/amoeba
Adjust amoeba's JVM memory options in the launcher script
vim /usr/local/amoeba/bin/amoeba
DEFAULT_OPTS="-server -Xms256m -Xmx256m -Xss160k" // the default -Xss is 128k; raise it or amoeba may fail to start
Edit amoeba's amoeba.xml configuration file
vim conf/amoeba.xml
Modify the following fields
<property name="authenticator">
<bean class="com.meidusa.amoeba.mysql.server.MysqlClientAuthenticator">
<property name="user">root</property> // user for clients authenticating to amoeba
<property name="password">amoebapass</property> // password for that user
<property name="port">3306</property> // listening port; the default is 8066, we change it to 3306
<property name="ipAddress">0.0.0.0</property> // address to bind to
<property name="defaultPool">server1</property> // 'server1' must be defined in dbServer.xml

**The <queryRouter class="com.meidusa.amoeba.mysql.parser.MysqlQueryRouter"> section is what implements read/write splitting:
<property name="writePool">server1</property> // server group for write operations
<property name="readPool">server1</property> // server group for read operations
'server1' must be defined in dbServer.xml; we will configure read/write splitting in a moment.

Configure amoeba's dbServer.xml file
The abstractServer entry in dbServer.xml is similar to an upstream server block in nginx: a global default that the concrete dbServer entries inherit.
vim conf/dbServer.xml
Modify the following fields
<dbServer name="abstractServer" abstractive="true">
<property name="manager">${defaultManager}</property>
<property name="sendBufferSize">64</property>
<property name="receiveBufferSize">128</property>

<!-- mysql port -->
<property name="port">3306</property>

<!-- mysql schema -->
<property name="schema">test</property> // the default database to connect to

<!-- mysql user -->
<property name="user">root</property> // username used to connect to mysql

<!-- mysql password-->
<property name="password">mypass</property> // password for that mysql user

</factoryConfig>......

<dbServer name="server1" parent="abstractServer">
<factoryConfig>
<!-- mysql ip -->
<property name="ipAddress">172.18.119.102</property> // address of the master mysql server
</factoryConfig>
</dbServer>


·Set the amoeba environment variables and start the amoeba process.
vim /etc/profile.d/amoeba.sh
cat amoeba.sh
export AMOEBA_HOME=/usr/local/amoeba
export PATH=$AMOEBA_HOME/bin:$PATH
Source the script
. /etc/profile.d/amoeba.sh
Start amoeba; output like the following means amoeba started normally
amoeba start
[root@centos7 bin]# amoeba start
log4j:WARN log4j config load completed from file:/usr/local/amoeba/conf/log4j.xml
2016-12-29 01:24:13,094 INFO context.MysqlRuntimeContext - Amoeba for Mysql current versoin=5.1.45-mysql-amoeba-proxy-2.2.0
log4j:WARN ip access config load completed from file:/usr/local/amoeba/conf/access_list.conf
2016-12-29 01:24:18,738 INFO net.ServerableConnectionManager - Amoeba for Mysql listening on /0.0.0.0:3306.
2016-12-29 01:24:18,754 INFO net.ServerableConnectionManager - Amoeba Monitor Server listening on /127.0.0.1:43623.
·Test connecting to the mysql servers through amoeba
mysql -uroot -p -h 172.18.119.27
[root@centos7 ~]# mysql -uroot -p -h 172.18.119.27
Enter password:
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MySQL connection id is 257820787
Server version: 5.1.45-mysql-amoeba-proxy-2.2.0 MySQL Community Server (GPL)

Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [(none)]> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| hellodb |
| mysql |
| performance_schema |
| test |
+--------------------+
5 rows in set (0.00 sec)
The above shows that proxying requests to the backend mysql-server through amoeba is now working.

———————— Read/write splitting ————————
Case 2: use amoeba to implement master/slave read/write splitting
The environment is the same as in case 1.

Note: we only need to make a few changes to case 1's configuration to get the desired effect.

Modify dbServer.xml and add a server2 entry whose ip address is the slave's:
<dbServer name="server2" parent="abstractServer">
<factoryConfig>
<!-- mysql ip -->
<property name="ipAddress">172.18.119.112</property>
</factoryConfig>
</dbServer>

Modify the amoeba.xml configuration file
Enable the following options and point them at the servers defined in dbServer.xml:
<property name="sqlFunctionFile">${amoeba.home}/conf/functionMap.xml</property>
<property name="LRUMapSize">1500</property>
<property name="defaultPool">server1</property> // the default pool (the master)
<property name="writePool">server1</property> // the server group handling writes
<property name="readPool">server2</property> // the server group handling reads
<property name="needParse">true</property>

Start the amoeba service
amoeba start

Test whether reads and writes are routed to different nodes

tcpdump usage
Format: tcpdump [options] [protocol] [direction] [host/port expression]
Common options:
-i: interface to capture on (any for all interfaces)
-n: do not resolve hostnames; show them numerically
-nn: do not resolve hostnames or port numbers
-X: hex dump of the payload
-XX: hex dump including the Ethernet header
-s: snapshot length; 0 means capture the whole packet
-w: write the capture to a file
-r: read back a previously saved capture file
e.g. tcpdump -i eth0 -nn -XX tcp dst port 3306 and dst host 172.18.119.102
e.g. tcp src or dst port 21 AND src host 172.18.119.1
e.g. tcp port 21 AND host 172.18.119.1

·We use tcpdump to capture packets on each backend mysql server and check the results
master: tcpdump -i eth0 -nn -s0 -XX tcp dst port 3306 and dst host 172.18.119.102
slave:  tcpdump -i eth0 -nn -s0 -XX tcp dst port 3306 and dst host 172.18.119.112

Now drop a database named mydb:
The master's capture shows:
[root@server ~]# tcpdump -i eth0 -nn -A -s0 tcp dst port 3306 and dst host 172.18.119.102
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
02:06:25.996306 IP 172.18.119.27.46806 > 172.18.119.102.3306: Flags [P.], seq 3261684681:3261684704, ack 3798897900, win 229, options [nop,nop,TS val 7112336 ecr 24237796], length 23
E..K;.@.@..C..w...wf.....i[..n.......!.....
.l...q.......drop database mydb
02:06:26.068120 IP 172.18.119.112.51505 > 172.18.119.102.3306: Flags [.], ack 4008682986, win 863, options [nop,nop,TS val 24511959 ecr 24509821], length 0
E..4..@.@.....wp..wf.1...i........._.......
.v...u.}
02:06:26.068155 IP 172.18.119.27.46806 > 172.18.119.102.3306: Flags [.], ack 12, win 229, options [nop,nop,TS val 7112407 ecr 24509821], length 0
E..4;.@.@..Y..w...wf.....i[..n......O......
.l...u.}
The slave's capture shows:
[root@clinet ~]# tcpdump -i eth0 -nn -A -s0 tcp dst port 3306 and dst host 172.18.119.112
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
02:06:26.070007 IP 172.18.119.27.32912 > 172.18.119.112.3306: Flags [P.], seq 2350494065:2350494087, ack 2922410229, win 229, options [nop,nop,TS val 7112409 ecr 24240013], length 22
E..J..@.@."J..w...wp.......q.0p......d.....
.l...q.......SELECT DATABASE()
02:06:26.070623 IP 172.18.119.27.32912 > 172.18.119.112.3306: Flags [.], ack 69, win 229, options [nop,nop,TS val 7112410 ecr 24511962], length 0
E..4..@.@."_..w...wp.........0q9...........
.l...v..
From the captures we can see the drop was executed on the master, while the slave only performed a select — the normal logic of querying first, then deleting.

Now issue a query and look at the captures
select * from tb1;
MySQL [mydb]> select * from tb1;
+-------+
| name |
+-------+
| tom |
| jerry |
+-------+
2 rows in set (0.01 sec)
or:
MySQL [mydb]> select * from mydb.tb1;
+-------+
| name |
+-------+
| tom |
| jerry |
+-------+
2 rows in set (0.01 sec)

Now the slave's capture shows:
02:13:35.137924 IP 172.18.119.27.32918 > 172.18.119.112.3306: Flags [P.], seq 77:99, ack 105, win 229, options [nop,nop,TS val 7541474 ecr 24941027], length 22
E..J`.@.@.....w...wp....H......:....V......
.s...|.......select * from tb1
02:13:35.178747 IP 172.18.119.27.32918 > 172.18.119.112.3306: Flags [.], ack 190, win 229, options [nop,nop,TS val 7541516 ecr 24941030], length 0
E..4`.@.@.....w...wp....H...........-e.....
.s...|..
02:19:44.087548 IP 172.18.119.27.32918 > 172.18.119.112.3306: Flags [P.], seq 99:126, ack 190, win 229, options [nop,nop,TS val 7910422 ecr 24941030], length 27
E..O`.@.@.....w...wp....H............l.....
.x...|.......select * from mydb.tb1
02:19:44.089454 IP 172.18.119.27.32918 > 172.18.119.112.3306: Flags [.], ack 275, win 229, options [nop,nop,TS val 7910424 ecr 25309980], length 0
E..4`.@.@.....w...wp....H..................
.x....3.

We can see our query request was dispatched to the slave node.

Meanwhile the master's capture shows:
02:11:54.095323 IP 172.18.119.27.46809 > 172.18.119.102.3306: Flags [.], ack 94, win 229, options [nop,nop,TS val 7440430 ecr 24837805], length 0
E..4..@.@.T...w...wf...._'.T........gi.....
.q...z..
02:11:54.103059 IP 172.18.119.27.46810 > 172.18.119.102.3306: Flags [.], ack 94, win 229, options [nop,nop,TS val 7440441 ecr 24837817], length 0
E..4..@.@..c..w...wf.....f..A........r.....
.q.9.z..
02:11:54.113870 IP 172.18.119.27.46811 > 172.18.119.102.3306: Flags [.], ack 94, win 229, options [nop,nop,TS val 7440449 ecr 24837822], length 0
E..4..@.@.....w...wf....J....?+............
.q.A.z..
No query request appears on the master; the experiment succeeded.

The above gives us basic automatic read/write splitting for MySQL through the amoeba program.

A question to consider: suppose writes should go to server1, but read requests should be served by both the master and the slave. How do we configure amoeba?
Hint:
In dbServer.xml, inside a <dbServer name="readpool" virtual="true"> entry, add
<property name="poolNames">server2,server1</property>
or, to send more of the reads to the slave:
<property name="poolNames">server2,server2,server2,server1</property>
The complete final configuration:
<dbServer name="readpool" virtual="true">
<poolConfig class="com.meidusa.amoeba.server.MultipleServerPool">
<!-- Load balancing strategy: 1=ROUNDROBIN , 2=WEIGHTBASED , 3=HA-->
<property name="loadbalance">1</property>

<!-- Separated by commas,such as: server1,server2,server1 -->
<property name="poolNames">server1,server2</property>
</poolConfig>
</dbServer>

Then reference the readpool server group in amoeba's main configuration file:
<property name="sqlFunctionFile">${amoeba.home}/conf/functionMap.xml</property>
<property name="LRUMapSize">1500</property>
<property name="defaultPool">server1</property>


<property name="writePool">server1</property>
<property name="readPool">readpool</property> // reference it here

<property name="needParse">true</property>

posted on 2016-12-29 02:24 by 筷子eason