
Setting up a replica set in a MongoDB sharded cluster

Replica sets

The differences from master-slave replication: 1. there is no fixed primary; 2. if the current primary goes down, the cluster elects one of the secondaries to take over as the new primary.

This gives the cluster automatic failover.

The environment for building the replica set is as follows.

Mongo cluster:

mongos 192.168.12.107:27021 

config 192.168.12.107:27018 

shard0000 192.168.12.104:27017 

shard0001 192.168.12.104:27019 

shard0002 192.168.12.104:27023 

The usr collection in the testdb database is sharded on the name key.

Now we want to build a replica at 192.168.12.104:27030 for shard0000 (192.168.12.104:27017), with an arbiter at 192.168.12.104:27033, and add the resulting replica set to the cluster as a shard.

mongos> db.printShardingStatus()

--- sharding status --- 

  sharding version: {

"_id" : 1,

"version" : 3,

"minCompatibleVersion" : 3,

"currentVersion" : 4,

"clusterId" : ObjectId("541a47a9124d847f09e99204")

}

  shards: 

{  "_id" : "shard0000",  "host" : "192.168.12.104:27017" } 

{  "_id" : "shard0001",  "host" : "192.168.12.104:27019" } 

{  "_id" : "shard0002",  "host" : "192.168.12.104:27023" } 

  databases: 

{  "_id" : "admin",  "partitioned" : false,  "primary" : "config" } 

{  "_id" : "testdb",  "partitioned" : true,  "primary" : "shard0000" } 

testdb.usr 

shard key: { "name" : 1 } 

chunks: 

shard0000 5 

shard0001 4 

shard0002 5 

{ "name" : { "$minKey" : 1 } } -->> { "name" : "ava0" } on : shard0000 Timestamp(6, 0)

{ "name" : "ava0" } -->> { "name" : "bobo0" } on : shard0001 Timestamp(3, 1)

{ "name" : "bobo0" } -->> { "name" : "mengtian57493" } on : shard0001 Timestamp(4, 0)

{ "name" : "mengtian57493" } -->> { "name" : "師傅10885" } on : shard0001 Timestamp(7, 0)

{ "name" : "師傅10885" } -->> { "name" : "師傅18160" } on : shard0000 Timestamp(8, 0)

{ "name" : "師傅18160" } -->> { "name" : "師傅23331" } on : shard0001 Timestamp(9, 0)

{ "name" : "師傅23331" } -->> { "name" : "師傅337444" } on : shard0000 Timestamp(10, 0)

{ "name" : "師傅337444" } -->> { "name" : "師傅47982" } on : shard0002 Timestamp(10, 1)

{ "name" : "師傅47982" } -->> { "name" : "師傅583953" } on : shard0002 Timestamp(8, 2)

{ "name" : "師傅583953" } -->> { "name" : "師傅688087" } on : shard0002 Timestamp(9, 2)

{ "name" : "師傅688087" } -->> { "name" : "師傅792944" } on : shard0002 Timestamp(9, 4)

{ "name" : "師傅792944" } -->> { "name" : "張榮達16836" } on : shard0002 Timestamp(9, 5)

{ "name" : "張榮達16836" } -->> { "name" : "張榮達9999" } on : shard0000 Timestamp(5, 1)

{ "name" : "張榮達9999" } -->> { "name" : { "$maxkey" : 1 } } on : shard0000 Timestamp(3, 3)
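The chunk table above is how mongos decides where a query on the shard key goes. A hypothetical sketch of that lookup, using the first few split points and shard names from the output (plain JS string comparison stands in for BSON key ordering here; this illustrates the idea, not mongos internals):

```javascript
// Split points copied from the printShardingStatus() output; min of null
// stands for $minKey, max of null for $maxKey.
const chunks = [
  { min: null,    max: "ava0",          shard: "shard0000" },
  { min: "ava0",  max: "bobo0",         shard: "shard0001" },
  { min: "bobo0", max: "mengtian57493", shard: "shard0001" },
  // ... the remaining chunks are elided ...
];

// Find the chunk whose half-open range [min, max) contains the key.
function routeToShard(name) {
  for (const c of chunks) {
    const aboveMin = c.min === null || name >= c.min;
    const belowMax = c.max === null || name < c.max;
    if (aboveMin && belowMax) return c.shard;
  }
  return null; // the key falls into one of the elided chunks
}
```

So a query on a single name touches only one shard, while a query without the shard key has to be broadcast to all of them.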

1. Create the data directories and start the two mongod processes that will form the replica set

[root@testvip1 ~]# ps -ef|grep mongo

root      3399 18112  0 15:11 pts/0    00:00:02 mongod --dbpath=/mongo_zc --port 27033 --replSet fubentina/192.168.12.104:27017     -- the arbiter mongod

root      4547     1  0 Sep24 ?        00:03:49 mongod --dbpath=/mongo3 --port 27023 --fork --logpath=/mongo3/log3/log3.txt --rest     -- --rest enables the monitoring interface; it is optional

root      4846     1  0 Sep24 ?        00:03:34 mongod --dbpath=/mongo2 --port 27019 --fork --logpath=/mongo2/log2/log2.txt

root      5259     1  0 15:16 ?        00:00:01 mongod --dbpath=/mongo1 --port 27017 --fork --logpath=/mongo1/log1/log1.txt --replSet fubentina/192.168.12.104:27030   -- 27017 restarted with the new --replSet option

root      5489  3464  0 15:21 pts/2    00:00:00 grep mongo

root     18446     1  0 Sep24 ?        00:09:04 mongod --dbpath=/mongo1_fuben --port 27030 --fork --logpath=/mongo1_fuben/logfuben.txt --replSet fubentina/192.168.12.104:27017  -- the replica member on 27030

2. Initiate the replica set configuration (connect to the admin database of either member)

[root@testvip1 mongo3]# mongo 192.168.12.104:27017/admin 

MongoDB shell version: 2.4.6

connecting to: 192.168.12.104:27017/admin

> db.runCommand({"replSetInitiate":{"_id":"fubentina","members":[{"_id":1,"host":"192.168.12.104:27017"},{"_id":2,"host":"192.168.12.104:27030"}]}})

{

"info" : "Config now saved locally.  Should come online in about a minute.",

"ok" : 1

}
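The document passed to replSetInitiate can be sketched as a plain object. Note that including the arbiter here with arbiterOnly:true would be an alternative to the separate rs.addArb() step used below; it is shown only as an assumption about an equivalent config, not what this walkthrough actually ran:

```javascript
// Replica set config document; hosts match the environment above.
const rsConfig = {
  _id: "fubentina",
  members: [
    { _id: 1, host: "192.168.12.104:27017" },
    { _id: 2, host: "192.168.12.104:27030" },
    { _id: 3, host: "192.168.12.104:27033", arbiterOnly: true }, // hypothetical alternative to rs.addArb()
  ],
};

// In the mongo shell this would be submitted as
//   db.runCommand({ replSetInitiate: rsConfig })
// or, equivalently, rs.initiate(rsConfig).

// Minimal client-side sanity check mirroring what the server enforces:
// a non-empty set name and unique member _id values.
function validateConfig(cfg) {
  const ids = cfg.members.map(m => m._id);
  return cfg._id.length > 0 &&
    cfg.members.length > 0 &&
    new Set(ids).size === ids.length;
}
```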

3. Start the arbiter server

Point it at any existing member of fubentina:

[root@testvip1 /]# mongod --dbpath=/mongo_zc --port 27033 --replSet fubentina/192.168.12.104:27017

4. The shell prompt on the shard then changes to:

fubentina:PRIMARY>

5. Add the arbiter node

fubentina:PRIMARY> rs.addArb("192.168.12.104:27033")

{ "ok" : 1 } 

6. Use rs.status() to check the state of each member in the set.

fubentina:PRIMARY> rs.status()

"set" : "fubentina",

"date" : ISODate("2014-09-24T09:22:38Z"),

"myState" : 1,

"members" : [

{

"_id" : 1,

"name" : "192.168.12.104:27017",

"health" : 1,

"state" : 1,

"stateStr" : "PRIMARY",      -- the primary

"uptime" : 413,

"optime" : Timestamp(1411550517, 1),

"optimeDate" : ISODate("2014-09-24T09:21:57Z"),

"self" : true

},

{

"_id" : 2,

"name" : "192.168.12.104:27030",

"state" : 2,

"stateStr" : "SECONDARY",    -- the secondary

"uptime" : 373,

"lastHeartbeat" : ISODate("2014-09-24T09:22:37Z"),

"lastHeartbeatRecv" : ISODate("2014-09-24T09:22:37Z"),

"pingMs" : 0,

"syncingTo" : "192.168.12.104:27017"

},

{

"_id" : 3,

"name" : "192.168.12.104:27033",

"state" : 7,

"stateStr" : "ARBITER",     -- the arbiter

"uptime" : 41,

"pingMs" : 0

}

],

If a member has gone down or was never started, its stateStr shows up as "(not reachable/healthy)".
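A quick health summary can be pulled out of an rs.status() document by checking the health field of each member. A sketch, with a sample document abridged from the output above (unreachable members report health 0 alongside the "(not reachable/healthy)" stateStr):

```javascript
// Abridged rs.status() document from the output above.
const status = {
  set: "fubentina",
  members: [
    { name: "192.168.12.104:27017", health: 1, state: 1, stateStr: "PRIMARY" },
    { name: "192.168.12.104:27030", health: 1, state: 2, stateStr: "SECONDARY" },
    { name: "192.168.12.104:27033", health: 1, state: 7, stateStr: "ARBITER" },
  ],
};

// List the members that are down (health !== 1).
function unhealthyMembers(st) {
  return st.members.filter(m => m.health !== 1).map(m => m.name);
}
```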

7. The replica set is built; the next step is to deal with the sharding side. We first remove shard0000 so that its data migrates to the other shards.

mongos> db.runCommand({"removeShard":"192.168.12.104:27017"})

"msg" : "draining ongoing",

"state" : "ongoing",

"remaining" : {

"chunks" : NumberLong(0),  -- the amount of data left to migrate; 0 means all chunks have been moved off

"dbs" : NumberLong(2)     -- note: the primaries of two databases still have to be moved

},

"note" : "you need to drop or movePrimary these databases",  -- note this hint

"dbsToMove" : [

"test",

"testdb"

]

mongos> use admin 

switched to db admin 

mongos> db.runCommand({movePrimary:"testdb",to:"shard0001"})  -- move the database primary to another shard

{ "primary" : "shard0001:192.168.12.104:27019", "ok" : 1 }

mongos> db.runCommand({movePrimary:"test",to:"shard0001"})

mongos> db.runCommand({removeShard:"192.168.12.104:27017"})   -- now the removal succeeds

"msg" : "removeshard completed successfully", 

"state" : "completed", 

"shard" : "shard0000", 

The documentation warns: do not run movePrimary until you have finished draining the shard.

In other words, be sure to wait until the shard's chunk migration has finished before running the movePrimary command!

Also, unlike removeShard, movePrimary is not asynchronous: it does not return until all unsharded data has been moved to another server, so it can take quite a while, depending mainly on how much unsharded data that server holds.

After movePrimary finishes, remember to run db.runCommand({removeShard:"shardX"}) once more, until you see the result { msg: "removeshard completed successfully", state: "completed", host: "mongodb0", ok : 1 }

Only at that point is the migration truly complete, and the mongod can safely be shut down.
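The removeShard / movePrimary sequence above boils down to a polling loop. A sketch, with runCommand injected as a plain function so the control flow can be followed without a live mongos (the command names match the real ones; the loop itself is an illustration):

```javascript
// Poll removeShard until draining finishes. Once the chunk count reaches
// zero, move the primaries of the databases listed in the "dbsToMove"
// hint, then issue removeShard again until it reports "completed".
function drainShard(runCommand, shardName, dbsToMove, toShard) {
  let res = runCommand({ removeShard: shardName });
  while (res.state === "ongoing") {
    if (res.remaining && res.remaining.chunks === 0) {
      for (const db of dbsToMove) {
        runCommand({ movePrimary: db, to: toShard }); // blocks until moved
      }
    }
    res = runCommand({ removeShard: shardName });
  }
  return res.state; // "completed" on success
}
```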

8. Drop the testdb database left on the original 192.168.12.104:27017

mongos> db.runCommand({addShard:"fubentina/192.168.12.104:27017,192.168.12.104:27030,192.168.12.104:27033"})

"ok" : 0,

"errmsg" : "can't add shard fubentina/192.168.12.104:27017,192.168.12.104:27030,192.168.12.104:27033 because a local database 'testdb' exists in another shard0000:192.168.12.104:27017"

} -- adding the replica set fails because a database named testdb still exists on 12.104:27017, so that copy has to be dropped first.

fubentina:PRIMARY> show dbs

local 1.078125GB

testdb 0.453125GB

fubentina:PRIMARY> use testdb

switched to db testdb

fubentina:PRIMARY> db.usr.drop();

true

fubentina:PRIMARY> db.dropDatabase();

{ "dropped" : "testdb", "ok" : 1 } 

9. Add the replica set to the cluster as a shard

mongos> db.runCommand({addShard:"192.168.12.104:27017"});   -- adding a single member of the replica set directly does not work; the whole set has to be added

"errmsg" : "host is part of set: fubentina use replica set url format <setname>/<server1>,<server2>,...."

Add the whole replica set:

mongos> db.runCommand({addShard:"fubentina/192.168.12.104:27017,192.168.12.104:27030,192.168.12.104:27033"}) -- the last host is the arbiter

{ "shardAdded" : "fubentina", "ok" : 1 }
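The seed string that addShard requires for a replica set follows the `<setname>/<host1>,<host2>,...` format the error message spells out. A trivial sketch of assembling it:

```javascript
// Build the replica-set seed string addShard expects.
function replicaSetSeed(setName, hosts) {
  return setName + "/" + hosts.join(",");
}

const seed = replicaSetSeed("fubentina", [
  "192.168.12.104:27017",
  "192.168.12.104:27030",
  "192.168.12.104:27033", // the arbiter
]);
// db.runCommand({ addShard: seed }) would then register the whole set.
```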

10. Check the sharding status again

{  "_id" : "fubentina",  "host" : "fubentina/192.168.12.104:27017,192.168.12.104:27030" }   -- the shard is now listed as a replica set

{  "_id" : "testdb",  "partitioned" : true,  "primary" : "shard0001" } 

fubentina 4 

shard0001 5 

{ "name" : { "$minKey" : 1 } } -->> { "name" : "ava0" } on : fubentina Timestamp(16, 0)   -- chunks are already being assigned to the replica-set shard

{ "name" : "ava0" } -->> { "name" : "bobo0" } on : fubentina Timestamp(18, 0)

{ "name" : "bobo0" } -->> { "name" : "mengtian57493" } on : shard0001 Timestamp(18, 1)

{ "name" : "師傅10885" } -->> { "name" : "師傅18160" } on : shard0001 Timestamp(12, 0)

{ "name" : "師傅23331" } -->> { "name" : "師傅337444" } on : fubentina Timestamp(17, 0)

{ "name" : "師傅337444" } -->> { "name" : "師傅47982" } on : fubentina Timestamp(19, 0)

{ "name" : "師傅47982" } -->> { "name" : "師傅583953" } on : shard0002 Timestamp(19, 1)

{ "name" : "張榮達16836" } -->> { "name" : "張榮達9999" } on : shard0001 Timestamp(14, 0)

{ "name" : "張榮達9999" } -->> { "name" : { "$maxKey" : 1 } } on : shard0002 Timestamp(15, 0)

{  "_id" : "test",  "partitioned" : false,  "primary" : "shard0001" } 

{  "_id" : "usr",  "partitioned" : false,  "primary" : "shard0001" } 

11. Verify

Connect to one member of the replica set:

[root@testvip1 ~]# mongo 192.168.12.104:27017/admin

fubentina:PRIMARY> db.usr.count()   -- the data can be queried

283898 

[root@testvip1 ~]# mongo 192.168.12.104:27030/admin 

connecting to: 192.168.12.104:27030/admin 

fubentina:SECONDARY> show dbs

fubentina:SECONDARY> use testdb

fubentina:SECONDARY> db.usr.count()

Thu Sep 25 15:01:33.879 count failed: { "note" : "from execCommand", "ok" : 0, "errmsg" : "not master" } at src/mongo/shell/query.js:180  -- 27030 is a secondary, so reads are rejected by default (running rs.slaveOk() first would allow querying it)

The replica set shard has finally been added successfully.

12. Failover test

Kill the 27017 process:

kill 4353

Thu Sep 25 15:11:49.785 DBClientCursor::init call() failed

Thu Sep 25 15:11:49.788 Error: error doing query: failed at src/mongo/shell/query.js:78

Thu Sep 25 15:11:49.789 trying reconnect to 192.168.12.104:27017

Thu Sep 25 15:11:49.789 reconnect 192.168.12.104:27017 failed couldn't connect to server 192.168.12.104:27017  -- 27017 can no longer be reached

[root@viptest2 ~]# mongo 192.168.12.107:27021/admin 

connecting to: 192.168.12.107:27021/admin 

mongos> use testdb 

mongos> db.usr.find({"name":"ava100"})        -- the replica set's data can still be queried normally through mongos

{ "_id" : ObjectId("541bd1573351616d81bb07e7"), "name" : "ava100", "address" : "yichangdong", "age" : 50 }

Connecting directly to 27030 shows it has been elected primary:

fubentina:PRIMARY> db.usr.count()

fubentina:PRIMARY> db.usr.find({"name":"ava100"})

13、将27017起來,去看下rs.status() 

mongod --dbpath=/mongo1 --port 27017 --fork --logpath=/mongo1/log1/log1.txt --replset fubentina/192.168.12.104:27030 

fubentina:secondary> rs.status() 

"date" : isodate("2014-09-25t07:58:41z"), 

"mystate" : 2, 

"syncingto" : "192.168.12.104:27030", 

"statestr" : "secondary",    --啟動後它變成了副本 

"uptime" : 2551, 

"optime" : timestamp(1411628295, 2057), 

"optimedate" : isodate("2014-09-25t06:58:15z"), 

"statestr" : "primary",    --它變成了主 

"lastheartbeat" : isodate("2014-09-25t07:58:41z"), 

"lastheartbeatrecv" : isodate("2014-09-25t07:58:39z"), 

"statestr" : "arbiter",    -它依然是仲裁 

"uptime" : 2549, 

"lastheartbeatrecv" : isodate("2014-09-25t07:58:40z"), 

Verification OK.