
MongoDB 3.4.3 Cluster Setup

I. Replica Set and Shard Allocation

Environment:

1. Three physical machines, with IPs X.X.X.75, X.X.X.76, and X.X.X.77

2. OS: CentOS 7.2

Cluster composition:

Three replica-set shards + one replica-set config server + three mongos entry points

Allocation plan:

1. Three data shards, named shard01, shard02, and shard03. Each shard is a replica set of three nodes: one primary, one secondary, and one arbiter, so each shard's data effectively has one primary copy and one backup.

The concrete allocation:

shard01: 75 PRIMARY, 76 SECONDARY, 77 ARBITER

shard02: 75 ARBITER, 76 PRIMARY, 77 SECONDARY

shard03: 75 SECONDARY, 76 ARBITER, 77 PRIMARY

2. The config servers form a replica set with one primary and two secondaries.

Specifically:

config:75 PRIMARY,76 SECONDARY,77 SECONDARY

3. mongos: one mongos instance on each of 75, 76, and 77 to spread the load.
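For quick reference, this is the per-host port layout implied by the replica-set member lists and connection commands later in this article (each machine runs all five processes):

```yaml
# Port layout on every host:
#   shard01 mongod : 27018
#   shard02 mongod : 27118
#   shard03 mongod : 27218
#   config  mongod : 27019
#   mongos         : 27017
```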

II. Configuration Files

shard01 configuration, shard01.yaml:

systemLog:
  destination: file
  path: "/usr/local/mongodb/instance/data_shard_01/log/shard01.log"
processManagement:
  fork: true
net:
  port: 27018
storage:
  dbPath: "/usr/local/mongodb/instance/data_shard_01/data/"
replication:
  replSetName: "shard01"
sharding:
  clusterRole: shardsvr
           

shard02 configuration, shard02.yaml:

systemLog:
  destination: file
  path: "/usr/local/mongodb/instance/data_shard_02/log/shard02.log"
processManagement:
  fork: true
net:
  port: 27118
storage:
  dbPath: "/usr/local/mongodb/instance/data_shard_02/data/"
replication:
  replSetName: "shard02"
sharding:
  clusterRole: shardsvr
           

shard03 configuration, shard03.yaml:

systemLog:
  destination: file
  path: "/usr/local/mongodb/instance/data_shard_03/log/shard03.log"
processManagement:
  fork: true
net:
  port: 27218
storage:
  dbPath: "/usr/local/mongodb/instance/data_shard_03/data/"
replication:
  replSetName: "shard03"
sharding:
  clusterRole: shardsvr
           

config server configuration, config.yaml:

systemLog:
  destination: file
  path: "/usr/local/mongodb/instance/config/log/config.log"
storage:
  dbPath: "/usr/local/mongodb/instance/config/data"
net:
  port: 27019
processManagement: 
  fork: true
sharding: 
  clusterRole: configsvr
replication:
  replSetName: cfgSet
           

mongos configuration, mongos.yaml:

systemLog:
  destination: file
  path: "/usr/local/mongodb/instance/mongos/log/mongos.log"
net:
  port: 27017
processManagement: 
  fork: true
sharding: 
  configDB: cfgSet/X.X.X.75:27019,X.X.X.76:27019,X.X.X.77:27019
           

III. Setup Steps

1. Download the RHEL 7 Linux 64-bit x64 build from https://www.mongodb.com/download-center#community

2. Extract it into /usr/local/mongodb/ and rename the directory to mongodb3.4.3

3. Configure the MongoDB environment variables:

> vim /etc/profile 
export PATH=$PATH:/usr/local/mongodb/mongodb3.4.3/bin
> source /etc/profile
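A quick sanity check that the PATH change took effect in the current shell (this only verifies the entry is present, not that the binaries run):

```shell
# Append the MongoDB bin directory to PATH and confirm the entry landed
export PATH=$PATH:/usr/local/mongodb/mongodb3.4.3/bin
echo "$PATH" | grep -q "mongodb3.4.3/bin" && echo "PATH ok"
```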
           

4. Create the directory structure:

> cd /usr/local/mongodb/instance
Create the following directories:
.
├── config
│   ├── config.yaml
│   ├── data
│   └── log
├── data_shard_01
│   ├── data
│   ├── log
│   └── shard01.yaml
├── data_shard_02
│   ├── data
│   ├── log
│   └── shard02.yaml
├── data_shard_03
│   ├── data
│   ├── log
│   └── shard03.yaml
└── mongos
    ├── log
    └── mongos.yaml
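The tree above can be created in one pass. A minimal sketch, run from /usr/local/mongodb (BASE is adjustable; the .yaml files are copied in separately):

```shell
# Create the data/log skeleton for all five processes under ./instance
BASE=./instance
mkdir -p "$BASE"/config/{data,log} \
         "$BASE"/data_shard_01/{data,log} \
         "$BASE"/data_shard_02/{data,log} \
         "$BASE"/data_shard_03/{data,log} \
         "$BASE"/mongos/log
```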
           

5. Set up the directory structure above on each of 75, 76, and 77, and start the shard server processes:

> mongod -f ./data_shard_01/shard01.yaml
> mongod -f ./data_shard_02/shard02.yaml
> mongod -f ./data_shard_03/shard03.yaml
           

Check the processes with ps on each of the three machines; every machine should have three mongod processes:

> ps aux | grep mongod
root ... mongod -f ./data_shard_01/shard01.yaml
root ... mongod -f ./data_shard_02/shard02.yaml
root ... mongod -f ./data_shard_03/shard03.yaml
           

6. Configure the replica sets for shard01, shard02, and shard03

The shard01 replica-set configuration:

{
 _id: "shard01",
 members: [
  { _id: 0, host: "X.X.X.75:27018", priority: 2 },
  { _id: 1, host: "X.X.X.76:27018", priority: 1 },
  { _id: 2, host: "X.X.X.77:27018", arbiterOnly: true }
 ]
}
           

The shard02 replica-set configuration:

{
 _id: "shard02",
 members: [
  { _id: 0, host: "X.X.X.75:27118", arbiterOnly: true },
  { _id: 1, host: "X.X.X.76:27118", priority: 2 },
  { _id: 2, host: "X.X.X.77:27118", priority: 1 }
 ]
}
           

The shard03 replica-set configuration:

{
 _id: "shard03",
 members: [
  { _id: 0, host: "X.X.X.75:27218", priority: 1 },
  { _id: 1, host: "X.X.X.76:27218", arbiterOnly: true },
  { _id: 2, host: "X.X.X.77:27218", priority: 2 }
 ]
}
           

Log in to 75:

> mongo --port 27018
> cfg = {
 _id: "shard01",
 members: [
  { _id: 0, host: "X.X.X.75:27018", priority: 2 },
  { _id: 1, host: "X.X.X.76:27018", priority: 1 },
  { _id: 2, host: "X.X.X.77:27018", arbiterOnly: true }
 ]
}
> rs.initiate(cfg)
> rs.status() 
Output like the following indicates the configuration succeeded:
{
    "set" : "shard01",
    "date" : ISODate("2017-05-02T09:42:22.484Z"),
    "myState" : 1,
    ...
    "members" : [
        {
            "_id" : 0,
            "name" : "X.X.X.75:27018",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "optimeDate" : ISODate("2017-05-02T09:42:15Z"),
            "electionDate" : ISODate("2017-05-02T03:46:44Z"),
            ...
            "self" : true
        },
        {
            "_id" : 1,
            "name" : "X.X.X.76:27018",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "optimeDate" : ISODate("2017-05-02T09:42:15Z"),
            "syncingTo" : "X.X.X.75:27018",
            ...
        },
        {
            "_id" : 2,
            "name" : "X.X.X.77:27018",
            "health" : 1,
            "state" : 7,
            "stateStr" : "ARBITER",
            ...
        }
    ],
    "ok" : 1
}
           

Then log in to 76 and 77 and configure the replica sets for shard02 and shard03 in the same way:

> mongo --port 27118   # then configure shard02
> mongo --port 27218   # then configure shard03
           

7. Configure the config servers

Start the config server process on each of the three machines:

> mongod -f ./config/config.yaml
           

At this point, each machine should have four mongod processes:

> ps aux | grep mongod
root ... mongod -f ./data_shard_01/shard01.yaml
root ... mongod -f ./data_shard_02/shard02.yaml
root ... mongod -f ./data_shard_03/shard03.yaml
root ... mongod -f ./config/config.yaml
root ... grep --color=auto mongod
           

8. Configure the config server replica set (one primary, two secondaries):

Log in:
> mongo --port 27019
> cfg = 
{
 _id: "cfgSet",
 configsvr: true,
 members: [
  { _id: 0, host: "X.X.X.75:27019", priority: 2 },
  { _id: 1, host: "X.X.X.76:27019", priority: 1 },
  { _id: 2, host: "X.X.X.77:27019", priority: 1 }
 ]
}
> rs.initiate(cfg)
> rs.status()   # detailed status output means the config replica set is complete
           

9. Configure mongos

Start the mongos process on each of the three machines:

> mongos -f ./mongos/mongos.yaml
           

At this point, each machine should have five mongo-family processes:

> ps aux | grep mongo
root ... mongod -f ./data_shard_01/shard01.yaml
root ... mongod -f ./data_shard_02/shard02.yaml
root ... mongod -f ./data_shard_03/shard03.yaml
root ... mongod -f ./config/config.yaml
root ... mongos -f ./mongos/mongos.yaml
root ... grep --color=auto mongo
           

Add the shard routing information:

> mongo --port 27017
> use admin
switched to db admin
> db.runCommand({addshard:"shard01/X.X.X.75:27018,X.X.X.76:27018,X.X.X.77:27018"})
> db.runCommand({addshard:"shard02/X.X.X.75:27118,X.X.X.76:27118,X.X.X.77:27118"})
> db.runCommand({addshard:"shard03/X.X.X.75:27218,X.X.X.76:27218,X.X.X.77:27218"})
> sh.status()   # output like the following means the cluster is configured:
--- Sharding Status ---
  sharding version: {
    "_id" : 1,
    "minCompatibleVersion" : 5,
    "currentVersion" : 6,
    "clusterId" : ObjectId("5908240e38e25ae4cf8f20e8")
}
  shards:
    {  "_id" : "shard01",  "host" : "shard01/X.X.X.75:27018,X.X.X.76:27018",  "state" : 1 }
    {  "_id" : "shard02",  "host" : "shard02/X.X.X.76:27118,X.X.X.77:27118",  "state" : 1 }
    {  "_id" : "shard03",  "host" : "shard03/X.X.X.75:27218,X.X.X.77:27218",  "state" : 1 }
  active mongoses:
    "3.4.3" : 3
  autosplit:
    Currently enabled: yes
  balancer:
    Currently enabled:  yes
    Currently running:  no
    Balancer lock taken at ... by ConfigServer:Balancer
    Failed balancer rounds in last 5 attempts:  0
    Migration Results for the last 24 hours:
         ... : Success
  databases:
           

At this point the MongoDB 3.4.3 cluster setup is complete; clients can connect directly to any of the mongos instances.
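Note that addshard only registers the shards; documents are not distributed until sharding is enabled for a database and a collection. A minimal sketch in the mongo shell, connected to any mongos, using a hypothetical testdb.users collection and a hashed _id shard key (the database, collection, and key here are illustrative, not part of the original setup):

```javascript
// Connect first with: mongo --port 27017
sh.enableSharding("testdb")                           // allow sharding on the database
sh.shardCollection("testdb.users", { _id: "hashed" }) // hashed _id spreads writes evenly
sh.status()                                           // "testdb" should now appear under databases:
```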
