This post records the process of building a MongoDB sharded cluster backed by replica sets from scratch.

    We want to build a distributed cluster that looks like this: two shards, each shard being a replica set with two members (in a production deployment you should also add an arbiter that only votes); three config servers; and one mongos. The steps are as follows (prerequisites: MongoDB is already installed, and you are familiar with the general architecture of distributed systems):

1. Replica sets

Start two replica sets:

replica set A
mkdir -p ./replset_shard1/node1
mkdir -p ./replset_shard1/node2
numactl --interleave=all mongod --port 20001 --dbpath ./replset_shard1/node1 --replSet set_a --oplogSize 1024 --logpath ./replset_shard1/node1/rs20001.log --fork
numactl --interleave=all mongod --port 20002 --dbpath ./replset_shard1/node2 --replSet set_a --oplogSize 1024 --logpath ./replset_shard1/node2/rs20002.log --fork
Initialize: connect to one of the members and run:
rs.initiate({"_id" : "set_a", "members" : [{_id: 0, host: "xxxhost:20001"}, {_id: 1, host: "xxxhost: 20002"}]})

replica set B
mkdir -p ./replset_shard2/node1
mkdir -p ./replset_shard2/node2
numactl --interleave=all mongod --port 30001 --dbpath ./replset_shard2/node1 --replSet set_b --oplogSize 1024 --logpath ./replset_shard2/node1/rs30001.log --fork
numactl --interleave=all mongod --port 30002 --dbpath ./replset_shard2/node2 --replSet set_b --oplogSize 1024 --logpath ./replset_shard2/node2/rs30002.log --fork

Initialize:
rs.initiate({"_id" : "set_b", "members" : [{_id: 0, host: "xxxhost:30001"}, {_id: 1, host: "xxxhost:30002"}]})

Note 1: --replSet specifies the replica set name; all members of a replica set must use the same name. --oplogSize specifies the oplog size in MB; if not specified, it defaults to 5% of the free disk space on the volume holding the data files, with a minimum of 1GB and a maximum of 50GB.
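If you want to confirm the oplog size a member actually ended up with, db.printReplicationInfo() on that member prints the configured oplog size and the time window it currently covers, e.g.:

mongo --port 20001
db.printReplicationInfo()    // "configured oplog size" should read 1024MB here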

Note 2: this example is for testing only. In real deployments the members must tolerate single points of failure: spread them across different machines/data centers.

2. Config servers

mkdir -p ./data/configdb1;mkdir -p ./data/configdb2;mkdir -p ./data/configdb3;

Start the mongo config servers:

mongod --configsvr --fork --logpath ./data/configdb1/mongo17019.log --dbpath ./data/configdb1 --port 17019
mongod --configsvr --fork --logpath ./data/configdb2/mongo27019.log --dbpath ./data/configdb2 --port 27019
mongod --configsvr --fork --logpath ./data/configdb3/mongo37019.log --dbpath ./data/configdb3 --port 37019
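Optionally, before pointing mongos at them, you can confirm that each config server responds, for example:

mongo --port 17019 --eval "db.adminCommand({ping: 1})"
mongo --port 27019 --eval "db.adminCommand({ping: 1})"
mongo --port 37019 --eval "db.adminCommand({ping: 1})"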

3. mongos

mkdir -p ./mongosdb

Start mongos:

mongos --configdb xxxhost:17019,xxxhost:27019,xxxhost:37019 --logpath ./mongosdb/mongos.log --fork --port 8100

When starting mongos, do not use localhost or 127.0.0.1 in the config server addresses; otherwise adding shards that live on other machines fails with an error like:
"can’t use localhost as a shard since all shards need to communicate. either use all shards and configdbs in localhost or all in actual IPs host: xxxxx isLocalHost"

4. Add the replica sets to the sharded cluster

Connect to mongos:

test> use admin
switched to db admin
admin> db.runCommand({addShard: "set_a/xxxhost:20001"})
{ "shardAdded" : "set_a", "ok" : 1 }

admin> db.runCommand({addShard: "set_b/xxxhost:30001"})
{ "shardAdded" : "set_b", "ok" : 1 }

Switch to the config database and look at config.databases:
config> db.databases.find()
{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
{ "_id" : "cswuyg", "partitioned" : false, "primary" : "set_a" }

Look at the shards:
config> db.shards.find()
{ "_id" : "set_a", "host" : "set_a/xxxhost:20001,xxxhost:20002" }
{ "_id" : "set_b", "host" : "set_b/xxxhost:30001,xxxhost:30002" }

5. Enable sharding on the database and collection

Connect to mongos:

cswuyg> use admin
switched to db admin
admin> db.runCommand({"enablesharding": "cswuyg"})
{ "ok" : 1 }
admin> db.runCommand({"shardcollection": "cswuyg.a", "key": {"_id": 1}})
{ "collectionsharded" : "cswuyg.a", "ok" : 1 }

6. Insert test data

Connect to mongos, switch to the test database, and run the test JS code:

// insert 1,000,000 test documents into the sharded collection
for (var i = 0; i < 1000000; ++i) {
    db.a.save({"b": i})
}
After the collection has been balanced automatically (or after starting the balancer manually with sh.startBalancer()), the chunk distribution looks like this:
config> db.chunks.find()
{ "_id" : "cswuyg.a-_id_MinKey", "lastmod" : Timestamp(2, 0), "lastmodEpoch" : ObjectId("54f54f0a59b0d8e1cbf0784e"), "ns" : "cswuyg.a", "min" : { "_id" : { "$minKey" : 1 } }, "max" : { "_id" : ObjectId("54f477859a27767875b03801") }, "shard" : "set_b" }
{ "_id" : "cswuyg.a-_id_ObjectId('54f477859a27767875b03801')", "lastmod" : Timestamp(3, 0), "lastmodEpoch" : ObjectId("54f54f0a59b0d8e1cbf0784e"), "ns" : "cswuyg.a", "min" : { "_id" : ObjectId("54f477859a27767875b03801") }, "max" : { "_id" : ObjectId("54f5507a86d364ad1c3f125f") }, "shard" : "set_b" }
{ "_id" : "cswuyg.a-_id_ObjectId('54f5507a86d364ad1c3f125f')", "lastmod" : Timestamp(4, 1), "lastmodEpoch" : ObjectId("54f54f0a59b0d8e1cbf0784e"), "ns" : "cswuyg.a", "min" : { "_id" : ObjectId("54f5507a86d364ad1c3f125f") }, "max" : { "_id" : ObjectId("54f551fe86d364ad1c44a844") }, "shard" : "set_a" }
{ "_id" : "cswuyg.a-_id_ObjectId('54f551fe86d364ad1c44a844')", "lastmod" : Timestamp(3, 2), "lastmodEpoch" : ObjectId("54f54f0a59b0d8e1cbf0784e"), "ns" : "cswuyg.a", "min" : { "_id" : ObjectId("54f551fe86d364ad1c44a844") }, "max" : { "_id" : ObjectId("54f552f086d364ad1c4aee1f") }, "shard" : "set_a" }
{ "_id" : "cswuyg.a-_id_ObjectId('54f552f086d364ad1c4aee1f')", "lastmod" : Timestamp(4, 0), "lastmodEpoch" : ObjectId("54f54f0a59b0d8e1cbf0784e"), "ns" : "cswuyg.a", "min" : { "_id" : ObjectId("54f552f086d364ad1c4aee1f") }, "max" : { "_id" : { "$maxKey" : 1 } }, "shard" : "set_b" }

7. Add a new member to replica set set_a

Start the new member instance:
mkdir -p ./replset_shard1/node3
numactl --interleave=all mongod --port 20003 --dbpath ./replset_shard1/node3 --replSet set_a --oplogSize 1024 --logpath ./replset_shard1/node3/rs20003.log --fork
Add the new member to the replica set: connect to the primary and run:
test> rs.add("xxxhost:20003")
{ "ok" : 1 }
After joining, the new member needs time to perform an initial sync. With a large dataset the initial sync can take very long and noticeably affect the service. Moreover, if the initial sync takes so long that the oplog wraps around before it finishes, the initial sync has to start over. In that case you can add the member another way: copy the primary's data files to a new directory, start a mongod on that copy, and then add it to the replica set, so that no initial sync is needed.
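While the new member is syncing you can watch its progress from any member of the set; a member still doing its initial sync reports STARTUP2 and switches to SECONDARY once it has caught up, for example:

rs.status().members.forEach(function(m) { print(m.name + " " + m.stateStr) })
rs.printSlaveReplicationInfo()    // shows how far each secondary is behind the primary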


Appendix: removing replica set members
cfg = rs.config()
cfg.members.splice(0, 2)    // remove the 2 members starting at position 0
rs.reconfig(cfg, {force: true})
Reference: https://docs.mongodb.com/manual/tutorial/remove-replica-set-member/
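If you only need to remove a single member, the tutorial linked above also describes rs.remove(), which avoids editing the config by hand; for example, to take out the member added in step 7:

rs.remove("xxxhost:20003")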

8. Other notes

To convert a sharded cluster into a plain replica set cluster, you need to dump the data and then restore it.
To convert a sharded cluster whose shards are standalone mongod instances into one whose shards are replica sets, see:
http://docs.mongodb.org/manual/tutorial/convert-standalone-to-replica-set/

For a multi-machine cluster, do not use localhost or 127.0.0.1 in the replica set configuration; otherwise the deployment cannot span multiple machines.


Addendum:

Assign tags to replica set members:

var conf = rs.conf()
conf.members[0].tags = { "location": "nj" }
conf.members[1].tags = { "location": "bj"}
conf.members[2].tags = { "location": "hz"  }
conf.members[3].tags = { "location": "gz"  }
rs.reconfig(conf)
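Once tags are assigned they can be used, for example, to direct reads at members in a particular location via a tag-aware read preference (a sketch; the "location" values are the ones set above):

db.getMongo().setReadPref("nearest", [{"location": "bj"}])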

Original post: http://www.cnblogs.com/cswuyg/p/4356637.html
