
Cassandra Cluster Management: Decommissioning a Healthy Node

Test prerequisites:

The test Cassandra cluster uses vnodes. How do you tell whether vnodes are in use? Check the num_tokens and initial_token settings in your cassandra.yaml configuration file.

By default (in 3.x) initial_token is empty and tokens are generated automatically. An empty initial_token means virtual nodes are in use, and they are enabled by default. With vnodes, the cluster rebalances data by itself after a node is removed, so no manual intervention is required.
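
A quick way to check (a minimal sketch, assuming the default package config path /etc/cassandra/cassandra.yaml; adjust for your install):

# num_tokens set (e.g. 256) with initial_token left empty or commented
# out means vnodes are in use.
grep -E 'num_tokens|initial_token' /etc/cassandra/cassandra.yaml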

Generating test data

Create a keyspace named kevin_test

Create a keyspace named kevin_test using the SimpleStrategy replication strategy, with 3 replicas across the cluster and durable writes (the commit log) enabled.

cassandra@cqlsh> create keyspace kevin_test with replication = {'class':'SimpleStrategy','replication_factor':3} and durable_writes = true;      
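
To confirm the replication settings took effect, the keyspace definition can be printed back (run from any node; cqlsh connects to localhost by default):

# The output should show 'SimpleStrategy' with 'replication_factor': '3'.
cqlsh -e "DESCRIBE KEYSPACE kevin_test;"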

Create a table

CREATE TABLE t_users (
  user_id text PRIMARY KEY,
  first_name text,
  last_name text,
  emails set<text>
);      

Batch-insert data

BEGIN BATCH
INSERT INTO t_users (user_id, first_name, last_name, emails) VALUES('0', 'kevin0', 'kang', {'[email protected]', '[email protected]'});
INSERT INTO t_users (user_id, first_name, last_name, emails) VALUES('1', 'kevin1', 'kang', {'[email protected]', '[email protected]'});
INSERT INTO t_users (user_id, first_name, last_name, emails) VALUES('2', 'kevin2', 'kang', {'[email protected]', '[email protected]'});
INSERT INTO t_users (user_id, first_name, last_name, emails) VALUES('3', 'kevin3', 'kang', {'[email protected]', '[email protected]'});
INSERT INTO t_users (user_id, first_name, last_name, emails) VALUES('4', 'kevin4', 'kang', {'[email protected]', '[email protected]'});
INSERT INTO t_users (user_id, first_name, last_name, emails) VALUES('5', 'kevin5', 'kang', {'[email protected]', '[email protected]'});
INSERT INTO t_users (user_id, first_name, last_name, emails) VALUES('6', 'kevin6', 'kang', {'[email protected]', '[email protected]'});
INSERT INTO t_users (user_id, first_name, last_name, emails) VALUES('7', 'kevin7', 'kang', {'[email protected]', '[email protected]'});
INSERT INTO t_users (user_id, first_name, last_name, emails) VALUES('8', 'kevin8', 'kang', {'[email protected]', '[email protected]'});
INSERT INTO t_users (user_id, first_name, last_name, emails) VALUES('9', 'kevin9', 'kang', {'[email protected]', '[email protected]'});
APPLY BATCH;      

Verify:

cassandra@cqlsh:kevin_test> SELECT * from t_users; 

 user_id | emails                          | first_name | last_name
---------+---------------------------------+------------+-----------
       6 | {'[email protected]', '[email protected]'} |     kevin6 |      kang
       7 | {'[email protected]', '[email protected]'} |     kevin7 |      kang
       9 | {'[email protected]', '[email protected]'} |     kevin9 |      kang
       4 | {'[email protected]', '[email protected]'} |     kevin4 |      kang
       3 | {'[email protected]', '[email protected]'} |     kevin3 |      kang
       5 | {'[email protected]', '[email protected]'} |     kevin5 |      kang
       0 | {'[email protected]', '[email protected]'} |     kevin0 |      kang
       8 | {'[email protected]', '[email protected]'} |     kevin8 |      kang
       2 | {'[email protected]', '[email protected]'} |     kevin2 |      kang
       1 | {'[email protected]', '[email protected]'} |     kevin1 |      kang      
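
With replication_factor 3, each partition is stored on three of the six nodes. Before taking a node offline, you can see which replicas hold a given key (illustrative; getendpoints is a standard nodetool command):

# List the three replica addresses for partition key '0'.
nodetool getendpoints kevin_test t_users 0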

Inspect the t_users table statistics:

[root@kubm-03 ~]# nodetool  cfstats kevin_test.t_users
Total number of tables: 41
----------------
Keyspace : kevin_test
        Read Count: 0
        Read Latency: NaN ms
        Write Count: 6
        Write Latency: 0.116 ms
        Pending Flushes: 0
                Table: t_users
                Number of partitions (estimate): 5
                Memtable cell count: 6
                Memtable data size: 828      

The table statistics above can be used later in the test to confirm whether any data was lost.
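
For example, the row count can be captured before the decommission and compared afterwards (a sketch; CONSISTENCY ALL makes every replica participate, so a missing replica surfaces as an error):

# Run before and after the node is removed; both counts should be 10 here.
cqlsh -e "CONSISTENCY ALL; SELECT count(*) FROM kevin_test.t_users;"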

Cluster node information

[root@kubm-03 ~]# nodetool  status
Datacenter: dc1
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address         Load       Tokens       Owns    Host ID                               Rack
UN  172.20.101.164  56.64 MiB  256          ?       dcbbad83-fe7c-4580-ade7-aa763b8d2c40  rack1
UN  172.20.101.165  55.44 MiB  256          ?       cefe8a3b-918f-463b-8c7d-faab0b9351f9  rack1
UN  172.20.101.166  73.96 MiB  256          ?       88e16e35-50dd-4ee3-aa1a-f10a8c61a3eb  rack1
UN  172.20.101.167  55.43 MiB  256          ?       8808aaf7-690c-4f0c-be9b-ce655c1464d4  rack1
UN  172.20.101.160  54.4 MiB   256          ?       57cc39fc-e47b-4c96-b9b0-b004f2b79242  rack1
UN  172.20.101.157  56.05 MiB  256          ?       091ff0dc-415b-48a7-b4ce-e70c84bbfafc  rack1      
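
The Owns column shows '?' because no keyspace was passed to nodetool status; effective ownership depends on a keyspace's replication settings. Passing the keyspace name fills it in:

# Show effective ownership percentages under kevin_test's replication.
nodetool status kevin_test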

Decommissioning a healthy cluster node

節點運作狀态正常,用于壓縮叢集節點數量,本次下線:172.20.101.165。

Execute on the machine to be removed (172.20.101.165):

nodetool decommission

(Run decommission on the live node itself. nodetool removenode <host-id> also removes a node, but it is run from another node and is intended for a node that is already down; for a healthy node, use decommission.)

You can watch the cluster state with nodetool status; once the node's data has been streamed to the remaining nodes, the decommissioned node disappears from the cluster list.
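
A possible end-to-end flow (a sketch; nodetool decommission blocks until streaming completes, and the leaving node shows state UL, Up/Leaving, in the meantime):

# On 172.20.101.165, the node being removed:
nodetool decommission

# From any other node, in a second terminal, watch the node leave:
watch -n 10 nodetool status

# Optional: inspect streaming progress on the leaving node.
nodetool netstats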

檢視服務狀态:

● cassandra.service - LSB: distributed storage system for structured data
   Loaded: loaded (/etc/rc.d/init.d/cassandra; bad; vendor preset: disabled)
   Active: active (running) since Tue 2019-07-09 11:29:25 CST; 2 days ago
Jul 09 11:29:25 kubm-03 cassandra[8495]: Starting Cassandra: OK
Jul 09 11:29:25 kubm-03 systemd[1]: Started LSB: distributed storage system for structured data.      

Test whether the node can rejoin the cluster automatically after the service is restarted:
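
A minimal way to run the restart test (assuming the service name cassandra from the systemd unit above; /var/log/cassandra/system.log is the package-install default log path):

systemctl restart cassandra
tail -f /var/log/cassandra/system.log    # watch for bootstrap or error messages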

INFO  [main] 2019-07-11 16:44:49,765 StorageService.java:639 - CQL supported versions: 3.4.4 (default: 3.4.4)
INFO  [main] 2019-07-11 16:44:49,765 StorageService.java:641 - Native protocol supported versions: 3/v3, 4/v4, 5/v5-beta (default: 4/v4)
INFO  [main] 2019-07-11 16:44:49,816 IndexSummaryManager.java:80 - Initializing index summary manager with a memory pool size of 198 MB and a resize interval of 60 minutes
This node was decommissioned and will not rejoin the ring unless cassandra.override_decommission=true has been set, or all existing data is removed and the node is bootstrapped again
Fatal configuration error; unable to start server.  See log for stacktrace.
ERROR [main] 2019-07-11 16:44:49,823 CassandraDaemon.java:749 - Fatal configuration error

# The node has already been decommissioned and refuses to rejoin the ring
org.apache.cassandra.exceptions.ConfigurationException: This node was decommissioned and will not rejoin the ring unless cassandra.override_decommission=true has been set, or all existing data is removed and the node is bootstrapped again
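
If the node genuinely needs to rejoin, the error message itself names the two options (a hedged sketch; file paths assume a package install):

# Option 1: force the decommissioned node back into the ring by setting
# the JVM flag from the error message, e.g. in conf/cassandra-env.sh:
JVM_OPTS="$JVM_OPTS -Dcassandra.override_decommission=true"

# Option 2: wipe all local state so the node bootstraps as a new member
# (data/commitlog/saved_caches locations come from cassandra.yaml):
systemctl stop cassandra
rm -rf /var/lib/cassandra/data/* /var/lib/cassandra/commitlog/* /var/lib/cassandra/saved_caches/*
systemctl start cassandra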