01-MongoDB-3.2.9 Replica Set Deployment

1. Install MongoDB on the primary server and both secondary servers

cd /usr/local/src
curl -O https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-rhel62-3.2.9.tgz
tar -zxvf mongodb-linux-x86_64-rhel62-3.2.9.tgz
mv mongodb-linux-x86_64-rhel62-3.2.9 /usr/local/mongodb-3.2.9
ln -s /usr/local/mongodb-3.2.9/ /usr/local/mongodb
mkdir /usr/local/mongodb/data
mkdir /usr/local/mongodb/logs
touch /usr/local/mongodb/logs/mongodb.log

# configure the PATH environment variable
vim /etc/profile    # append the following line
export PATH=/usr/local/mongodb/bin:$PATH
source /etc/profile

2. Start mongod on each server

# --fork starts mongod as a daemon, --dbpath sets the data directory, --logpath the log file, --pidfilepath the PID file, --replSet joins a replica set named repset
[root@mongodb01 logs]# mongod --fork --dbpath=/usr/local/mongodb/data/ --logpath=/usr/local/mongodb/logs/mongodb.log --pidfilepath=/usr/local/mongodb/logs/mongodb.pid --replSet repset    
about to fork child process, waiting until server is ready for connections.
forked process: 30453
child process started successfully, parent exiting
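
The same startup options can also be collected in a config file instead of being passed as command-line flags. Below is a minimal sketch; the file path /usr/local/mongodb/mongod.conf is an assumption for illustration, not something created in the steps above:

# /usr/local/mongodb/mongod.conf  (hypothetical path)
systemLog:
  destination: file
  path: /usr/local/mongodb/logs/mongodb.log
  logAppend: true
storage:
  dbPath: /usr/local/mongodb/data
processManagement:
  fork: true
  pidFilePath: /usr/local/mongodb/logs/mongodb.pid
replication:
  replSetName: repset

With such a file each node can be started with mongod -f /usr/local/mongodb/mongod.conf instead of the long command line.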

3. Connect to one node with the mongo shell

[root@mongodb01 logs]# mongo
MongoDB shell version: 3.2.9
connecting to: test
Server has startup warnings: 
2016-08-26T17:18:06.388+0800 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2016-08-26T17:18:06.388+0800 I CONTROL  [initandlisten] 
2016-08-26T17:18:06.388+0800 I CONTROL  [initandlisten] 
2016-08-26T17:18:06.388+0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2016-08-26T17:18:06.388+0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2016-08-26T17:18:06.388+0800 I CONTROL  [initandlisten] 
2016-08-26T17:18:06.388+0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2016-08-26T17:18:06.388+0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2016-08-26T17:18:06.388+0800 I CONTROL  [initandlisten] 
> use admin            # switch to the admin database
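
The transparent hugepage warnings shown above can be avoided by disabling THP on each host before mongod starts. A quick sketch (run as root; to make it survive reboots, put the two lines in an init script such as /etc/rc.local):

echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag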

4. Define the replica set configuration variable. The _id value "repset" here must match the "--replSet repset" parameter used when starting mongod.

> config = { _id:"repset", members:[
... {_id:0,host:"172.16.1.60:27017"},
... {_id:1,host:"172.16.1.61:27017"},
... {_id:2,host:"172.16.1.62:27017"}]
...}
# output
{
        "_id" : "repset",
        "members" : [
                {
                        "_id" : 0,
                        "host" : "172.16.1.60:27017"
                },
                {
                        "_id" : 1,
                        "host" : "172.16.1.61:27017"
                },
                {
                        "_id" : 2,
                        "host" : "172.16.1.62:27017"
                }
        ]
} 
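
The member documents accept more fields than just _id and host. As an illustration only (not the configuration used in this deployment), the first node could be given a higher election priority and the third node could be turned into an arbiter:

> config = { _id:"repset", members:[
... {_id:0,host:"172.16.1.60:27017",priority:2},
... {_id:1,host:"172.16.1.61:27017"},
... {_id:2,host:"172.16.1.62:27017",arbiterOnly:true}]
...}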

5. Initiate the replica set

> rs.initiate(config);
{ "ok" : 1 }  
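
To double-check which configuration was actually applied, rs.conf() prints the current replica set configuration (the prompt may still show OTHER or SECONDARY for a moment until the election finishes):

repset:PRIMARY> rs.conf()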

6. Check the status of the replica set members

repset:OTHER> rs.status();      
{
        "set" : "repset",
        "date" : ISODate("2016-08-26T09:28:46.137Z"),
        "myState" : 1,
        "term" : NumberLong(1),
        "heartbeatIntervalMillis" : NumberLong(2000),
        "members" : [
                {
                        "_id" : 0,
                        "name" : "172.16.1.60:27017",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",          # primary node
                        "uptime" : 641,
                        "optime" : {
                                "ts" : Timestamp(1472203697, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2016-08-26T09:28:17Z"),
                        "infoMessage" : "could not find member to sync from",
                        "electionTime" : Timestamp(1472203696, 1),
                        "electionDate" : ISODate("2016-08-26T09:28:16Z"),
                        "configVersion" : 1,
                        "self" : true
                },
                {
                        "_id" : 1,
                        "name" : "172.16.1.61:27017",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",        # secondary node
                        "uptime" : 41,
                        "optime" : {
                                "ts" : Timestamp(1472203697, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2016-08-26T09:28:17Z"),
                        "lastHeartbeat" : ISODate("2016-08-26T09:28:44.330Z"),
                        "lastHeartbeatRecv" : ISODate("2016-08-26T09:28:45.207Z"),
                        "pingMs" : NumberLong(0),
                        "syncingTo" : "172.16.1.62:27017",
                        "configVersion" : 1
                },
                {
                        "_id" : 2,
                        "name" : "172.16.1.62:27017",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",        # secondary node
                        "uptime" : 41,
                        "optime" : {
                                "ts" : Timestamp(1472203697, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2016-08-26T09:28:17Z"),
                        "lastHeartbeat" : ISODate("2016-08-26T09:28:44.330Z"),
                        "lastHeartbeatRecv" : ISODate("2016-08-26T09:28:45.147Z"),
                        "pingMs" : NumberLong(0),
                        "syncingTo" : "172.16.1.60:27017",
                        "configVersion" : 1
                }
        ],
        "ok" : 1
}

The MongoDB replica set has now been deployed successfully.

Part 2

1. Test data replication: write on the primary node

# connect to the primary node
[root@mongodb01 ~]# mongo 127.0.0.1

# switch to (and thereby create) the pinhui database
repset:PRIMARY> use pinhui
switched to db pinhui

# insert a document into the pinhuidb collection
repset:PRIMARY> db.pinhuidb.insert({"hongjiu":"2000"})
WriteResult({ "nInserted" : 1 })
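
If you want the write to be acknowledged only after it has replicated to a majority of the members, a write concern can be passed to insert(). The document and the 5-second wtimeout below are just examples:

repset:PRIMARY> db.pinhuidb.insert({"hongjiu":"3000"}, {writeConcern: {w: "majority", wtimeout: 5000}})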

2. Query from a secondary node

# switch to the pinhui database
repset:SECONDARY> use pinhui
switched to db pinhui

# try to query the data; by default MongoDB reads and writes only on the primary, and secondaries reject reads until they are explicitly allowed
repset:SECONDARY> show tables;
2016-09-02T15:05:35.860+0800 E QUERY    [thread1] Error: listCollections failed: { "ok" : 0, "errmsg" : "not master and slaveOk=false", "code" : 13435 } :
_getErrorWithCode@src/mongo/shell/utils.js:25:13
DB.prototype._getCollectionInfosCommand@src/mongo/shell/db.js:773:1
DB.prototype.getCollectionInfos@src/mongo/shell/db.js:785:19
DB.prototype.getCollectionNames@src/mongo/shell/db.js:796:16
shellHelper.show@src/mongo/shell/utils.js:754:9
shellHelper@src/mongo/shell/utils.js:651:15
@(shellhelp2):1:1

3. Allow reads on the secondary node

# allow reads on this secondary
repset:SECONDARY> db.getMongo().setSlaveOk();

# switch to the pinhui database
repset:SECONDARY> use pinhui

# query the data
repset:SECONDARY> db.pinhuidb.find();
{ "_id" : ObjectId("57c922fc701092979d0a245d"), "hongjiu" : "2000" }
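
To see how far each secondary is lagging behind the primary, the shell also provides:

repset:SECONDARY> rs.printSlaveReplicationInfo()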

4. Two ways to enable reads on a secondary

There are two ways to enable queries on a secondary:
Method 1: db.getMongo().setSlaveOk();
Method 2: rs.slaveOk();
Both only affect the current shell session, so the next time you connect with mongo the query will fail again. To make the setting permanent, do the following:

# permanent method
vi ~/.mongorc.js    # add the following line
rs.slaveOk();
After that, every session opened with the mongo command will be able to query the secondary.



If you access a secondary through Java, the driver throws the following exception:
com.mongodb.MongoException: not talking to master and retries used up
There are several ways to solve this.
Option 1: call dbFactory.getDb().slaveOk(); in the Java code.
Option 2: set a read preference in the Java code:
dbFactory.getDb().setReadPreference(ReadPreference.secondaryPreferred()); // prefer reading from a secondary, fall back to the primary when no secondary is reachable
or
dbFactory.getDb().setReadPreference(ReadPreference.secondary()); // read only from secondaries; queries fail when no secondary is reachable

Option 3: when configuring mongo in Spring, add slave-ok="true" to read directly from a secondary:
<mongo:mongo id="mongo" host="${mongodb.host}" port="${mongodb.port}">
        <mongo:options slave-ok="true"/> 
</mongo:mongo>
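
The read-preference approach is also available directly in the mongo shell; instead of rs.slaveOk() you can set it on the connection (a minimal sketch):

repset:SECONDARY> db.getMongo().setReadPref("secondaryPreferred")
repset:SECONDARY> db.pinhuidb.find()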

5. Add a node to the replica set

rs.add("172.16.1.60:27017");  

6. Remove a node from the replica set

rs.remove("172.16.1.60:27017"); 
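
rs.add() and rs.remove() both have to be run on the primary. If the node you want to remove is the current primary, it is safer to make it step down first so a new primary is elected before removing it (the 60-second value below is only an example of how long it should stay ineligible for re-election):

repset:PRIMARY> rs.stepDown(60)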

7. List databases

repset:PRIMARY> show dbs
ExceptionLog  0.000GB
OperationLog  0.000GB
Passport      0.000GB
TrackerLog    0.000GB
local         0.001GB
pinhui001     0.000GB
test          0.000GB

8. Replica set member states

1. STARTUP: just joined the replica set, configuration not yet loaded
2. STARTUP2: configuration loaded, initial sync in progress
3. RECOVERING: recovering, not available for reads
4. ARBITER: arbiter, holds no data and only votes in elections
5. DOWN: the node is unreachable
6. UNKNOWN: the node's state cannot be determined by the other members; typically seen with a two-member architecture or a split-brain situation
7. REMOVED: removed from the replica set
8. ROLLBACK: rolling back data; when the rollback finishes the node moves to RECOVERING or SECONDARY
9. FATAL: an error occurred; grep the log for "replSet FATAL" to find the cause, then resync the node
10. PRIMARY: primary node
11. SECONDARY: secondary node
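
Since the full rs.status() output is long, a quick way to see only the name and state of each member from the mongo shell is:

repset:PRIMARY> rs.status().members.forEach(function(m) { print(m.name + " : " + m.stateStr) })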


References:

http://www.lanceyan.com/tech/mongodb/mongodb_repset1.html

http://www.cnblogs.com/xsi640/p/3765911.html

http://blog.csdn.net/sd0902/article/details/21537577

http://www.elain.org/2012/01/13/MongoDB%E5%AE%9E%E6%88%98%E7%B3%BB%E5%88%97%E4%B9%8B%E5%9B%9B%EF%BC%9Amongodb%E5%89%AF%E6%9C%AC%E9%9B%86%E9%83%A8%E7%BD%B2/
