1. System
IP              | HOSTNAME | PG VERSION | DIR        | OS
192.168.100.161 | node1    | 9.4        | /opt/pgsql | CentOS6.5_x64
192.168.100.162 | node2    | 9.4        | /opt/pgsql | CentOS6.5_x64
# cat /etc/issue
CentOS release 6.5 (Final)
Kernel \r on an \m
# uname -a
Linux barman 2.6.32-431.11.2.el6.x86_64 #1 SMP Tue Mar 25 19:59:55 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4 node1
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6 node1
192.168.100.161 node1
192.168.100.162 node2
2. Installation
2.1 Introduction
pg_rewind is a tool for synchronizing a PostgreSQL data directory with another PostgreSQL data directory that was forked from the first one. The result is equivalent to rsyncing the first data directory with the second one. The advantage of pg_rewind over rsync is that pg_rewind uses the WAL to determine changed data blocks, and does not require reading through all files in the cluster. That makes it a lot faster when the database is large and only a small portion of it differs between the clusters.
Note:
pg_rewind does not synchronize data the way rsync does. An rsync-based sync ordinarily has to scan every file: when the data volume is small the scan time is negligible or acceptable, but once it reaches hundreds of GB or even several TB the scan becomes very slow. To improve on this, pg_rewind does not scan all of the files to be synchronized; instead it reads only the timeline information in the WAL, locates and marks the blocks that need to be synchronized, and then copies just those blocks.
pg_rewind requires that the WAL it needs still be readable. If a required WAL segment is no longer in pg_xlog it should be fetched from the archive (that feature is not yet supported; for now the missing WAL must be copied from the archive into pg_xlog by hand).
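The manual copy described above can be sketched as a small helper. This is hypothetical: the function name `restore_missing_wal`, the directory arguments, and the segment names are illustrative, not part of pg_rewind.

```python
import shutil
from pathlib import Path

def restore_missing_wal(needed, pg_xlog, archive):
    """Copy WAL segments that pg_rewind will need but that are no longer
    in pg_xlog back from the archive directory. Hypothetical helper that
    mirrors the manual copy described above."""
    restored = []
    for seg in needed:
        # Only copy segments that are absent from pg_xlog and present in the archive.
        if not (pg_xlog / seg).exists() and (archive / seg).exists():
            shutil.copy2(archive / seg, pg_xlog / seg)
            restored.append(seg)
    return restored
```

In PostgreSQL 9.4 the segment files live under $PGDATA/pg_xlog; the helper leaves already-present segments untouched.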
2.2 Requirements
To use pg_rewind, one of the following two database features must be enabled (choose one):
(1) data checksums
Checksums PostgreSQL data pages so that corrupted blocks are detected and flagged. This is a cluster-wide setting. Enabling it adds performance overhead, since every data page requires extra computation, so weigh the costs and benefits before using it. The feature is enabled at initdb time (-k) and cannot be changed afterwards (although the parameter ignore_checksum_failure can force PostgreSQL to keep executing a transaction after corrupted data is detected, provided the block header is intact).
(2) wal_log_hints
A new parameter introduced in PostgreSQL 9.4.
{The first option is used in this test.}
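The two prerequisites can be checked programmatically. A minimal sketch, assuming the text output of `pg_controldata` and the value of `wal_log_hints` are available; the helper name `rewind_ready` is an assumption, not a pg_rewind API:

```python
def rewind_ready(controldata_output, wal_log_hints="off"):
    """Return True if the cluster meets pg_rewind's prerequisite:
    either data checksums are enabled (checksum version != 0 in
    pg_controldata output) or wal_log_hints is on."""
    checksums_on = False
    for line in controldata_output.splitlines():
        if line.startswith("Data page checksum version"):
            # Version 0 means checksums are disabled; any other value means enabled.
            checksums_on = line.split(":", 1)[1].strip() != "0"
    return checksums_on or wal_log_hints.lower() == "on"
```

Feed it the output of `pg_controldata $PGDATA` together with the current value of `show wal_log_hints`.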
2.3 Installing PostgreSQL
Download the PostgreSQL source from:
https://github.com/postgres/postgres
Use the latest version: 9.4devel
yum install flex bison readline-devel zlib-devel
The installation steps are omitted here.
Note: initialize the database on node1; install only the database software on node2. Pass the -k option when initializing.
2.4 Building and installing pg_rewind
# unzip pg_rewind-master.zip
# mv pg_rewind-master postgres-master/contrib/
# cd postgres-master/contrib/pg_rewind-master/
# make && make install
It can also be built and installed as the PostgreSQL administrative user, e.g.:
# su - postgres
$ make USE_PGXS=1 top_srcdir=postgres-master
$ make USE_PGXS=1 top_srcdir=postgres-master install
Verify that the installation is correct:
# su - postgres
$ pg_rewind --version
pg_rewind 0.1
Check whether checksums are enabled:
$ pg_controldata | grep checksum
Data page checksum version: 1
postgres=# show data_checksums ;
data_checksums
----------------
on
(1 row)
3. Configuring hot standby
3.1 Configure the primary database
$ vi postgresql.conf
listen_addresses = '*'
port = 5432
wal_level = hot_standby
max_wal_senders = 2
wal_keep_segments = 100
hot_standby = on
$ vi pg_hba.conf
host all all 192.168.100.0/24 trust
host replication all 192.168.100.0/24 trust
Restart the database:
[postgres@node1 ~]$ pg_ctl restart
3.2 Set up the standby database
[postgres@node2 pgsql]$ pg_basebackup -D /opt/pgsql/data -h node1 -P
21099/21099 kB (100%), 1/1 tablespace
NOTICE: WAL archiving is not enabled; you must ensure that all required WAL segments are copied through other means to complete the backup
Configure recovery.conf:
[postgres@node2 data]$ cat recovery.conf
standby_mode = 'on'
primary_conninfo = 'host=node1 user=postgres port=5432'
recovery_target_timeline = 'latest'
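The recovery.conf above is simple enough to generate when this has to be repeated per node. A hypothetical helper (the function name and defaults are illustrative) that renders the same three lines for a given primary:

```python
from pathlib import Path

def write_recovery_conf(data_dir, primary_host, user="postgres", port=5432):
    """Write the recovery.conf used in this setup into data_dir,
    pointing the standby at the given primary. Hypothetical helper."""
    conf = (
        "standby_mode = 'on'\n"
        f"primary_conninfo = 'host={primary_host} user={user} port={port}'\n"
        "recovery_target_timeline = 'latest'\n"
    )
    path = Path(data_dir) / "recovery.conf"
    path.write_text(conf)
    return path
```

For the node2 standby above this would be called as `write_recovery_conf("/opt/pgsql/data", "node1")`.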
Start the standby:
[postgres@node2 data]$ pg_ctl start
4. Failover simulation
4.1 Simulate a primary failure
[postgres@node1 ~]$ pg_ctl stop -m f
4.2 Promote the standby
[postgres@node2 ~]$ pg_ctl promote
4.3 Update some data
[postgres@node2 ~]$ createdb pgbench
[postgres@node2 ~]$ pgbench -i -s 10 pgbench
4.4 Restore node1 as a standby
Synchronize the data:
[postgres@node1 ~]$ pg_rewind -D /opt/pgsql/data/ --source-server='host=node2 user=postgres port=5432'
The servers diverged at WAL position 0/30000C8 on timeline 1.
No rewind required.
(node1 was shut down cleanly before node2 was promoted, so it wrote no WAL past the divergence point and no blocks need to be copied back.)
[postgres@node1 ~]$ vi /opt/pgsql/data/recovery.conf
standby_mode = 'on'
primary_conninfo = 'host=node2 user=postgres port=5432'
recovery_target_timeline = 'latest'
Start the database on node1:
[postgres@node1 ~]$ pg_ctl start
4.5 Restore node1 as the master
Stop node2:
[postgres@node2 ~]$ pg_ctl stop -m f
Promote node1:
[postgres@node1 ~]$ pg_ctl promote
Make some updates on node1:
[postgres@node1 ~]$ pgbench -s 10 -T 60 pgbench
恢複node2為standby:
[[email protected] ~]$ pg_rewind -D /opt/pgsql/data/--source-server='host=node1 user=postgres port=5432'
The serversdiverged at WAL position 0/12ACCC30 on timeline 2.
No rewindrequired.
[[email protected] ~]$ mv /opt/pgsql/data/recovery.done/opt/pgsql/data/recovery.conf
[[email protected] ~]$ vi /opt/pgsql/data/recovery.conf
(修改node2為node1)
啟動node2上的資料庫:
[[email protected] ~]$ pg_ctl start
server starting
5. How it works
The basic idea is to copy everything from the new cluster to the old cluster, except for the blocks that we know to be the same.
1. Scan the WAL log of the old cluster, starting from the last checkpoint before the point where the new cluster's timeline history forked off from the old cluster. For each WAL record, make a note of the data blocks that were touched. This yields a list of all the data blocks that were changed in the old cluster, after the new cluster forked off.
2. Copy all those changed blocks from the new cluster to the old cluster.
3. Copy all other files like clog, conf files etc. from the new cluster to the old cluster. Everything except the relation files.
4. Apply the WAL from the new cluster, starting from the checkpoint created at failover. (pg_rewind doesn't actually apply the WAL, it just creates a backup label file indicating that when PostgreSQL is started, it will start replay from that checkpoint and apply all the required WAL.)
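The four steps above can be sketched as a toy model in Python. Plain dicts stand in for relation blocks, non-relation files, and WAL records; none of the names here reflect pg_rewind's real internals.

```python
# Toy model of the rewind procedure described above. A cluster is a dict:
# "blocks" maps relation block ids to contents, "files" holds non-relation
# files (clog, config, ...). WAL records name the block they touched.
def rewind(old_cluster, new_cluster, old_wal_after_fork):
    # 1. Scan the old cluster's WAL past the fork point and note every
    #    data block it touched.
    changed = {rec["block"] for rec in old_wal_after_fork}
    # 2. Copy those blocks (and only those) from the new cluster.
    for blk in changed:
        if blk in new_cluster["blocks"]:
            old_cluster["blocks"][blk] = new_cluster["blocks"][blk]
        else:
            old_cluster["blocks"].pop(blk, None)  # block gone in new timeline
    # 3. Copy everything that is not a relation file wholesale.
    old_cluster["files"] = dict(new_cluster["files"])
    # 4. pg_rewind itself only writes a backup label; replay from the
    #    failover checkpoint happens when PostgreSQL is started.
    old_cluster["replay_from"] = "failover checkpoint"
    return old_cluster
```

Blocks untouched since the fork are never read or copied, which is exactly why the approach beats a full rsync scan on large clusters.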
6. References
https://github.com/vmware/pg_rewind
http://michael.otacoo.com/postgresql-2/postgres-module-highlight-pg_rewind-to-recycle-a-postgres-master-into-a-slave/
7. License
pg_rewind can be distributed under the BSD-style PostgreSQL license.