
MongoDB: rebuilding a SECONDARY from a consistent backup.

This approach has a fairly narrow range of application.

It fits the following scenario: one primary, one secondary, and one arbiter; the database is large while the oplog is relatively small; the secondary needs to be rebuilt, and the primary cannot be taken offline.

It is intended for study and testing only; use it in production with great caution.

Cluster layout:

opsdba-vbj01-1:27018 arbiter

opsdba-vbj01-1:27019 primary

opsdba-vbj01-1:27016 secondary

Simulate a crash of opsdba-vbj01-1:27016 and rebuild the secondary from a consistent backup.
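Before committing to this approach, it is worth confirming on the primary that the oplog window really is too small to survive a normal resync. A minimal mongo-shell sketch (run against the primary on port 27019; this check is not part of the original walkthrough):

```
// mongo -uroot -proot123 --port=27019 admin
// The "log length start to end" line is the replay window the oplog covers;
// if it is shorter than the time a full resync of the large database would
// take, an ordinary initial sync from this primary cannot succeed.
db.printReplicationInfo()
```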

1. On the primary, create a custom role that grants the privileges needed to restore the oplog.

use admin

db.runCommand({ createRole: "restoreoplog",

privileges:

        [

        { resource: { anyResource: true }, actions: [ "anyAction" ] }

        ],

roles:

       []

});

db.grantRolesToUser( "root", ["restoreoplog"] );

2. Take a consistent backup on the primary.

[root@opsdba-vbj01-1 dump]# mongodump -uroot -proot123 --port=27019 --oplog --authenticationDatabase=admin -o all_backup

2016-10-11T15:45:58.019+0800    writing admin.system.users to

2016-10-11T15:45:58.019+0800    done dumping admin.system.users (1 document)

2016-10-11T15:45:58.019+0800    writing admin.system.roles to

2016-10-11T15:45:58.020+0800    done dumping admin.system.roles (1 document)

2016-10-11T15:45:58.020+0800    writing admin.system.version to

2016-10-11T15:45:58.020+0800    done dumping admin.system.version (1 document)

2016-10-11T15:45:58.021+0800    writing test.testdata to

2016-10-11T15:45:58.021+0800    writing test.tab to

2016-10-11T15:45:58.190+0800    done dumping test.tab (34056 documents)

2016-10-11T15:46:01.022+0800    [###########.............]  test.testdata  451185/909000  (49.6%)

2016-10-11T15:46:04.022+0800    [####################....]  test.testdata  774771/909000  (85.2%)

2016-10-11T15:46:05.024+0800    [########################]  test.testdata  913877/909000  (100.5%)

2016-10-11T15:46:05.024+0800    done dumping test.testdata (913877 documents)

2016-10-11T15:46:05.025+0800    writing captured oplog to

2016-10-11T15:46:05.781+0800            dumped 6470 oplog entries

3. Turn opsdba-vbj01-1:27016 from a secondary into an empty standalone node.

/data/mongodb/mongodb/bin/mongod -f /data/mongodb/mongodb1/conf/mongod.cnf  --shutdown

Clear out the data directory, then start it as a standalone:

Change port = 27015,

Comment out the replSet setting.

/data/mongodb/mongodb/bin/mongod -f /data/mongodb/mongodb1/conf/mongod.cnf
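The two config edits above can be scripted. A minimal sketch that practices them on a throwaway copy of the file (the key names are assumptions based on the startup log later in this post):

```shell
# Work on a throwaway copy so the real config file stays untouched.
cnf=$(mktemp)
cat > "$cnf" <<'EOF'
port = 27016
replSet = myrelset
EOF

# 1) run the standalone instance on a different port
# 2) comment out replSet so the node starts outside the replica set
sed -e 's/^port = 27016/port = 27015/' \
    -e 's/^replSet/#replSet/' "$cnf" > "$cnf.standalone"

cat "$cnf.standalone"
```

Reverting in step 7 is the same edit in the opposite direction.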

Create the administrator user:

[root@opsdba-vbj01-1 mongodb1]# mongo --port=27015 admin

db.createUser(

  {

    user: "root",

    pwd: "root123",

    roles:

    [

      {

        role: "root",

        db: "admin"

      }

    ]

  }

);

Create the custom role and grant the oplog-restore privilege, exactly as in step 1:

[root@opsdba-vbj01-1 mongodb1]# mongo -uroot -proot123 --port=27015 admin

4. Restore the backup

[root@opsdba-vbj01-1 dump]# mongorestore -uroot -proot123 --port=27015 --authenticationDatabase=admin --oplogReplay --dir=all_backup

2016-10-11T15:59:48.505+0800    building a list of dbs and collections to restore from all_backup dir

2016-10-11T15:59:48.507+0800    reading metadata for test.testdata from all_backup/test/testdata.metadata.json

2016-10-11T15:59:48.508+0800    reading metadata for test.tab from all_backup/test/tab.metadata.json

2016-10-11T15:59:48.556+0800    restoring test.tab from all_backup/test/tab.bson

2016-10-11T15:59:48.604+0800    restoring test.testdata from all_backup/test/testdata.bson

2016-10-11T15:59:49.147+0800    restoring indexes for collection test.tab from metadata

2016-10-11T15:59:49.147+0800    finished restoring test.tab (34056 documents)

2016-10-11T15:59:51.507+0800    [####....................]  test.testdata  18.5 MB/94.1 MB  (19.7%)

2016-10-11T15:59:54.507+0800    [##########..............]  test.testdata  39.8 MB/94.1 MB  (42.3%)

2016-10-11T15:59:57.507+0800    [###############.........]  test.testdata  59.7 MB/94.1 MB  (63.5%)

2016-10-11T16:00:00.507+0800    [####################....]  test.testdata  80.1 MB/94.1 MB  (85.1%)

2016-10-11T16:00:03.288+0800    [########################]  test.testdata  94.1 MB/94.1 MB  (100.0%)

2016-10-11T16:00:03.288+0800    restoring indexes for collection test.testdata from metadata

2016-10-11T16:00:03.289+0800    finished restoring test.testdata (913877 documents)

2016-10-11T16:00:03.289+0800    restoring users from all_backup/admin/system.users.bson

2016-10-11T16:00:03.416+0800    restoring roles from all_backup/admin/system.roles.bson

2016-10-11T16:00:03.466+0800    replaying oplog

2016-10-11T16:00:03.808+0800    done

5. Get the timestamp of the last oplog entry

[root@opsdba-vbj01-1 dump]# cd all_backup/

[root@opsdba-vbj01-1 all_backup]# bsondump oplog.bson >oplog.txt

[root@opsdba-vbj01-1 all_backup]# tail -1 oplog.txt

{"ts":{"$timestamp":{"t":1476171965,"i":805}},"t":{"$numberLong":"2"},"h":{"$numberLong":"6906152948185446623"},"v":2,"op":"i","ns":"test.testdata","o":{"_id":{"$oid":"57fc98bddfa99af76706f721"},"x":6470.0,"name":"maclean","name1":"maclean","name2":"maclean","name3":"maclean"}}
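The values needed in the next step (ts and h) can also be pulled out of that JSON line programmatically instead of by eye. A small Python sketch, using a trimmed copy of the last line above:

```python
import json

# Trimmed copy of the last line of oplog.txt (bsondump extended-JSON output).
line = ('{"ts":{"$timestamp":{"t":1476171965,"i":805}},'
        '"h":{"$numberLong":"6906152948185446623"}}')

entry = json.loads(line)
ts = entry["ts"]["$timestamp"]     # {"t": 1476171965, "i": 805}
h = entry["h"]["$numberLong"]      # "6906152948185446623"

# Render the values in the form the mongo shell expects in step 6.
print('Timestamp(%d, %d)' % (ts["t"], ts["i"]))   # Timestamp(1476171965, 805)
print('NumberLong("%s")' % h)                     # NumberLong("6906152948185446623")
```

In practice you would read the real last line with `f.readlines()[-1]` on oplog.txt rather than paste it in.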

6. Initialize the relevant collections in the local database

use local

db.runCommand( { create: "oplog.rs", capped: true, size: (1 * 1024 * 1024 * 1024) } );

# values taken from oplog.txt

db.oplog.rs.save({"ts" : Timestamp(1476171965, 805), "h" : NumberLong("6906152948185446623")});

db.replset.minvalid.save({"ts" : Timestamp(1476171965, 805), "t" : NumberLong(2)});

# values queried from the primary

db.replset.election.save({ "_id" : ObjectId("57fc5ea0cfa6486e03e975d0"), "term" : NumberLong(2), "candidateIndex" : NumberLong(2) });

db.system.replset.save({ "_id" : "myrelset", "version" : 5, "protocolVersion" : NumberLong(1), "members" : [ { "_id" : 1, "host" : "opsdba-vbj01-1:27018", "arbiterOnly" : false, "buildIndexes" : true, "hidden" : false, "priority" : 1, "tags" : {  }, "slaveDelay" : NumberLong(0), "votes" : 1 }, { "_id" : 2, "host" : "opsdba-vbj01-1:27019", "arbiterOnly" : false, "buildIndexes" : true, "hidden" : false, "priority" : 1, "tags" : {  }, "slaveDelay" : NumberLong(0), "votes" : 1 }, { "_id" : 3, "host" : "opsdba-vbj01-1:27016", "arbiterOnly" : false, "buildIndexes" : true, "hidden" : false, "priority" : 1, "tags" : {  }, "slaveDelay" : NumberLong(0), "votes" : 1 } ], "settings" : { "chainingAllowed" : true, "heartbeatIntervalMillis" : 2000, "heartbeatTimeoutSecs" : 10, "electionTimeoutMillis" : 10000, "getLastErrorModes" : {  }, "getLastErrorDefaults" : { "w" : 1, "wtimeout" : 0 }, "replicaSetId" : ObjectId("57bfdcdcd40cbe4bf173396a") } });
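Before restarting, it can help to sanity-check what was just seeded. A mongo-shell sketch against the standalone (not part of the original steps):

```
// The capped oplog should contain exactly the one entry seeded above,
// and replset.minvalid must not be ahead of that entry, otherwise the
// node cannot come up cleanly as a SECONDARY.
db.getSiblingDB("local").oplog.rs.find().sort({$natural: -1}).limit(1)
db.getSiblingDB("local").replset.minvalid.find()
db.getSiblingDB("local").system.replset.find()
```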

7. Restart

db.shutdownServer();

Change the settings back to their original values:

port=27016

Uncomment the replSet setting.

Start mongod.

8. Verification:

Compare document counts between the primary and the rebuilt node.

db.collection.count()
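For this test data set, a concrete comparison could look like the following mongo-shell sketch (collection names taken from the dump output above; the counts should match the primary once the replayed oplog entries are applied):

```
// Run on both the primary (27019) and the rebuilt node (27016),
// then compare the two sets of results.
use test
db.tab.count()
db.testdata.count()
```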

9. Startup log

2016-10-11T16:42:51.520+0800 I CONTROL  [initandlisten] MongoDB starting : pid=633 port=27016 dbpath=/data/mongodb/mongodb1/data 64-bit host=opsdba-vbj01-1

2016-10-11T16:42:51.520+0800 I CONTROL  [initandlisten] db version v3.2.8

2016-10-11T16:42:51.520+0800 I CONTROL  [initandlisten] git version: ed70e33130c977bda0024c125b56d159573dbaf0

2016-10-11T16:42:51.520+0800 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.0.1e-fips 11 Feb 2013

2016-10-11T16:42:51.521+0800 I CONTROL  [initandlisten] allocator: tcmalloc

2016-10-11T16:42:51.521+0800 I CONTROL  [initandlisten] modules: none

2016-10-11T16:42:51.521+0800 I CONTROL  [initandlisten] build environment:

2016-10-11T16:42:51.521+0800 I CONTROL  [initandlisten]     distmod: rhel62

2016-10-11T16:42:51.521+0800 I CONTROL  [initandlisten]     distarch: x86_64

2016-10-11T16:42:51.521+0800 I CONTROL  [initandlisten]     target_arch: x86_64

2016-10-11T16:42:51.521+0800 I CONTROL  [initandlisten] options: { config: "/data/mongodb/mongodb1/conf/mongod.cnf", net: { http: { enabled: false }, maxIncomingConnections: 3000, port: 27016, unixDomainSocket: { pathPrefix: "/data/mongodb/mongodb1/data" } }, operationProfiling: { mode: "slowOp", slowOpThresholdMs: 800 }, processManagement: { fork: true, pidFilePath: "/data/mongodb/mongodb1/data/mongod.pid" }, replication: { replSet: "myrelset" }, security: { clusterAuthMode: "keyFile", keyFile: "/data/mongodb/mongodb1/conf/myrelset.keyfile" }, storage: { dbPath: "/data/mongodb/mongodb1/data", directoryPerDB: true, engine: "wiredTiger", journal: { commitIntervalMs: 300, enabled: true }, mmapv1: { nsSize: 32 }, repairPath: "/data/mongodb/mongodb1/data", syncPeriodSecs: 60.0, wiredTiger: { engineConfig: { cacheSizeGB: 1 } } }, systemLog: { destination: "file", path: "/data/mongodb/mongodb1/log/mongod.log", quiet: true, timeStampFormat: "iso8601-local" } }

2016-10-11T16:42:51.521+0800 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=1G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0),

2016-10-11T16:42:51.944+0800 I STORAGE  [initandlisten] Starting WiredTigerRecordStoreThread local.oplog.rs

2016-10-11T16:42:51.944+0800 I STORAGE  [initandlisten] The size storer reports that the oplog contains 1 records totaling to 45 bytes

2016-10-11T16:42:51.944+0800 I STORAGE  [initandlisten] Scanning the oplog to determine where to place markers for truncation

2016-10-11T16:42:51.982+0800 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger

2016-10-11T16:42:51.982+0800 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.

2016-10-11T16:42:51.982+0800 I CONTROL  [initandlisten]

2016-10-11T16:42:51.982+0800 I CONTROL  [initandlisten] ** WARNING: soft rlimits too low. rlimits set to 1024 processes, 65535 files. Number of processes should be at least 32767.5 : 0.5 times number of files.

2016-10-11T16:42:52.030+0800 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/data/mongodb/mongodb1/data/diagnostic.data'

2016-10-11T16:42:52.030+0800 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker

2016-10-11T16:42:52.031+0800 I NETWORK  [initandlisten] waiting for connections on port 27016

2016-10-11T16:42:52.066+0800 I REPL     [ReplicationExecutor] New replica set config in use: { _id: "myrelset", version: 5, protocolVersion: 1, members: [ { _id: 1, host: "opsdba-vbj01-1:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "opsdba-vbj01-1:27019", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 3, host: "opsdba-vbj01-1:27016", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 }, replicaSetId: ObjectId('57bfdcdcd40cbe4bf173396a') } }

2016-10-11T16:42:52.067+0800 I REPL     [ReplicationExecutor] This node is opsdba-vbj01-1:27016 in the config

2016-10-11T16:42:52.067+0800 I REPL     [ReplicationExecutor] transition to STARTUP2

2016-10-11T16:42:52.067+0800 I REPL     [ReplicationExecutor] Starting replication applier threads

2016-10-11T16:42:52.067+0800 I REPL     [ReplicationExecutor] transition to RECOVERING

2016-10-11T16:42:52.072+0800 I REPL     [ReplicationExecutor] transition to SECONDARY

2016-10-11T16:42:52.102+0800 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to opsdba-vbj01-1:27018

2016-10-11T16:42:52.102+0800 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to opsdba-vbj01-1:27019

2016-10-11T16:42:52.102+0800 I REPL     [ReplicationExecutor] Member opsdba-vbj01-1:27018 is now in state SECONDARY

2016-10-11T16:42:52.103+0800 I REPL     [ReplicationExecutor] Member opsdba-vbj01-1:27019 is now in state PRIMARY

2016-10-11T16:42:58.069+0800 I REPL     [ReplicationExecutor] syncing from: opsdba-vbj01-1:27018

2016-10-11T16:42:58.086+0800 I REPL     [SyncSourceFeedback] setting syncSourceFeedback to opsdba-vbj01-1:27018

2016-10-11T16:42:58.104+0800 I ASIO     [NetworkInterfaceASIO-BGSync-0] Successfully connected to opsdba-vbj01-1:27018