
Installing Tair, Taobao's storage system

1. Introduction

  tair is a distributed key/value storage engine developed at Taobao. It can be used in two modes, persistent and non-persistent. Non-persistent tair can be regarded as a distributed cache, while persistent tair stores data on disk. To cope with data loss caused by disk failures, tair lets you configure the number of replicas for the data; it automatically places the replicas of each item on different hosts, and when a host fails and can no longer serve requests, the remaining replicas continue to provide the service.

  2. Installation steps and notes on problems

  2.1 Installation steps

  tair is implemented on top of the tbsys and tbnet libraries, so these two dependencies must be installed before tair itself.

  2.1.1 Getting the source code

  First download the source code via svn; the svn client can be installed with sudo yum install subversion.

  svn checkout http://code.taobao.org/svn/tb-common-utils/trunk/ tb-common-utils # tbsys and tbnet source code

  svn checkout http://code.taobao.org/svn/tair/trunk/ tair # tair source code

  2.1.2 Installing dependencies

  Before compiling tair or tbnet/tbsys, a few libraries and tools required by the build have to be installed.

  Before installing them it is best to check whether they are already present; on an OS that manages packages with rpm you can run rpm -q <package-name> to see whether a given package or library is installed.
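
  A quick way to check all three dependencies used below in one go (a small sketch, assuming an RPM-based distribution and the package names from the yum commands that follow):

  rpm -q libtool boost-devel zlib-devel   # prints "package ... is not installed" for anything missing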

  a. Install libtool

  sudo yum install libtool # also installs automake and autoconf, which libtool depends on

  b. Install the boost-devel library

  sudo yum install boost-devel

  c. Install the zlib library

  sudo yum install zlib-devel

  2.1.3 Building and installing tbsys and tbnet

  tair depends on the tbsys and tbnet libraries, so these two have to be built and installed first.

  a. Set the TBLIB_ROOT environment variable

  After getting the source code, first point the TBLIB_ROOT environment variable at the directory you want to install into. This variable is used again later when building and installing tair.

  For example, to install under the current user's lib directory, run export TBLIB_ROOT="~/lib".
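
  Note that Bash does not expand ~ inside double quotes, so spelling the path out is safer; a minimal example:

  export TBLIB_ROOT="$HOME/lib"   # install prefix for tbsys/tbnet; also read later when building tair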

  b. Set the header file path

  tbnet and tbsys live in two separate directories, but their source files include each other's headers without absolute or relative paths, so it is enough to add both source directories to the C++ include path:

  CPLUS_INCLUDE_PATH=$CPLUS_INCLUDE_PATH:/home/tair/tair/tb-common-utils/tbsys/src:/home/tair/tair/tb-common-utils/tbnet/src

  export CPLUS_INCLUDE_PATH

  c. Build and install

  Enter the source directory and run build.sh to build and install.
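
  Putting the steps together, the build might look like this (a sketch assuming the checkout layout from 2.1.1 and that TBLIB_ROOT and CPLUS_INCLUDE_PATH have been exported as above):

  cd tb-common-utils
  sh build.sh      # builds tbsys and tbnet and installs them into $TBLIB_ROOT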

  2.1.4 Building and installing tair

  The checked-out source contains an error: at line 323 of tbsys/src/tblog.cpp, CLogger::CLogger& CLogger::getLogger() has to be changed to CLogger& CLogger::getLogger().
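
  One way to apply that one-line fix is with sed (a sketch run from the checkout directory; verify line 323 of the file before and after, since the exact text may differ between revisions):

  sed -i 's/CLogger::CLogger& CLogger::getLogger()/CLogger\& CLogger::getLogger()/' tb-common-utils/tbsys/src/tblog.cpp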

  Enter the tair source directory and build and install in the following order:

./bootstrap.sh

./configure  # note: you can pass --with-boost=xxxx to specify the boost directory, and --with-release=yes to build a release version

make

make install


  After a successful installation, a folder named tair_bin is created in the current user's home directory; this is tair's install directory.

  2.2 Notes on problems

  The installation did not go entirely smoothly; quite a few problems came up along the way, briefly recorded here for reference.

  2.2.1 g++ not installed

  checking for C++ compiler default output file name...

  configure: error: in `/home/config_server/tair/tb-common-utils/tbnet':

  configure: error: C++ compiler cannot create executables

  See `config.log' for more details.

  make: *** No targets specified and no makefile found. Stop.

  make: *** No rule to make target `install'. Stop.

  This means gcc is installed but g++ is not; since tair is written in C++, it has to be compiled with g++. Installing it solves the problem:

  sudo yum install gcc-c++

  2.2.2 Wrong header file path

  If the header path was not set as described above, errors like the following appear:

  In file included from channel.cpp:16:
  tbnet.h:39:19: error: tbsys.h: No such file or directory
  databuffer.h: In member function 'void tbnet::DataBuffer::expand(int)':
  databuffer.h:429: error: 'ERROR' was not declared in this scope
  databuffer.h:429: error: 'TBSYS_LOG' was not declared in this scope
  socket.h: At global scope:
  socket.h:191: error: 'tbsys' has not been declared
  socket.h:191: error: ISO C++ forbids declaration of 'CThreadMutex' with no type
  socket.h:191: error: expected ';' before '_dnsMutex'
  channelpool.h:85: error: 'tbsys' has not been declared
  channelpool.h:85: error: ISO C++ forbids declaration of 'CThreadMutex' with no type
  channelpool.h:85: error: expected ';' before '_mutex'
  channelpool.h:93: error: 'atomic_t' does not name a type
  channelpool.h:94: error: 'atomic_t' does not name a type
  connection.h:164: error: 'tbsys' has not been declared
  connection.h:164: error: ISO C++ forbids declaration of 'CThreadCond' with no type
  connection.h:164: error: expected ';' before '_outputCond'
  iocomponent.h:184: error: 'atomic_t' does not name a type
  iocomponent.h: In member function 'int tbnet::IOComponent::addRef()':
  iocomponent.h:108: error: '_refcount' was not declared in this scope
  iocomponent.h:108: error: 'atomic_add_return' was not declared in this scope
  iocomponent.h: In member function 'void tbnet::IOComponent::subRef()':
  iocomponent.h:115: error: '_refcount' was not declared in this scope
  iocomponent.h:115: error: 'atomic_dec' was not declared in this scope
  iocomponent.h: In member function 'int tbnet::IOComponent::getRef()':
  iocomponent.h:122: error: '_refcount' was not declared in this scope
  iocomponent.h:122: error: 'atomic_read' was not declared in this scope
  transport.h: At global scope:
  transport.h:23: error: 'tbsys' has not been declared
  transport.h:23: error: expected `{' before 'Runnable'
  transport.h:23: error: invalid function declaration
  packetqueuethread.h:28: error: 'tbsys' has not been declared
  packetqueuethread.h:28: error: expected `{' before 'CDefaultRunnable'
  packetqueuethread.h:28: error: invalid function declaration
  connectionmanager.h:93: error: 'tbsys' has not been declared
  connectionmanager.h:93: error: ISO C++ forbids declaration of 'CThreadMutex' with no type
  connectionmanager.h:93: error: expected ';' before '_mutex'
  make[1]: *** [channel.lo] Error 1
  make[1]: Leaving directory `/home/tair/tair/tb-common-utils/tbnet/src'
  make: *** [install-recursive] Error 1

  This happens even though tbsys and tbnet have already been installed in ~/lib. As in 2.1.3(b): tbnet and tbsys live in two separate directories but include each other's headers without absolute or relative paths, so both source directories have to be added to the C++ include path:

  CPLUS_INCLUDE_PATH=$CPLUS_INCLUDE_PATH:/home/tair/tair/tb-common-utils/tbsys/src:/home/tair/tair/tb-common-utils/tbnet/src

  export CPLUS_INCLUDE_PATH

  3. Deployment and configuration

  To run tair you need at least one config server and one data server. The recommended setup is two config servers (one master, one standby) and multiple data servers.

  tair has three configuration files, covering the config server, the data server and the group information. Sample files for all three are provided in the etc directory under the tair_bin install directory; copy them to create the configuration files we need:
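
  The samples sit in tair_bin/etc, so change into that directory first (assuming tair_bin was created under the home directory as described above):

cd ~/tair_bin/etc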

cp configserver.conf.default configserver.conf

cp dataserver.conf.default dataserver.conf

cp group.conf.default group.conf


  My deployment environment: one config server (10.10.7.144) and one data server (10.10.7.146).

  Before configuring, consult the configuration field reference on the official site. Below is my own configuration, with brief notes.

  3.1 Configuring the config server

#

# tair 2.3 --- configserver config

#

[public]

config_server=10.10.7.144:51980

config_server=10.10.7.144:51980

[configserver]

port=51980

log_file=/home/dataserver1/tair_bin/logs/config.log

pid_file=/home/dataserver1/tair_bin/logs/config.pid

log_level=warn

group_file=/home/dataserver1/tair_bin/etc/group.conf

data_dir=/home/dataserver1/tair_bin/data/data

dev_name=venet0:0


  Notes:

  (1) First configure the config server address and port. The port can be left at the default; change the address to your own. Normally there are two config servers, a master and a standby, but since this is only a test I use a single one.

  (2) For log_file/pid_file and similar settings, absolute paths are best. The defaults are relative paths, and incorrect ones at that (they do not go back up to the parent directory), so they need to be changed. The data and log files matter: the data files are indispensable, and the log files give you the detailed cause when a deployment goes wrong.

  (3) dev_name is important: it must be set to the name of your current network interface (the default is eth0). I changed it to match my own network setup; use ifconfig to check the interface name, as shown below.
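
  A quick way to find the interface name for dev_name (ip addr is the modern replacement for ifconfig):

  ifconfig            # or: ip addr show -- pick the interface carrying the server's IP address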

  3.2 Configuring the data server

#

#  tair 2.3 --- tairserver config 

#

[public]

config_server=10.10.7.144:51980

config_server=10.10.7.144:51980

[tairserver]

#

#storage_engine:

#

# mdb 

# kdb

# ldb

#

storage_engine=ldb

local_mode=0

#

#mdb_type:

# mdb

# mdb_shm

#

mdb_type=mdb_shm

#

# if you just run 1 tairserver on a computer, you may ignore this option.

# if you want to run more than 1 tairserver on a computer, each tairserver must have their own "mdb_shm_path"

#

#

mdb_shm_path=/mdb_shm_path01

#tairserver listen port

port=51910

heartbeat_port=55910

process_thread_num=16

#

#mdb size in MB

#

slab_mem_size=1024

log_file=/home/dataserver1/tair_bin/logs/server.log

pid_file=/home/dataserver1/tair_bin/logs/server.pid

log_level=warn

dev_name=venet0:0

ulog_dir=/home/dataserver1/tair_bin/data/ulog

ulog_file_number=3

ulog_file_size=64

check_expired_hour_range=2-4

check_slab_hour_range=5-7

dup_sync=1

do_rsync=0

# much resemble json format

# one local cluster config and one or multi remote cluster config.

# {local:[master_cs_addr,slave_cs_addr,group_name,timeout_ms,queue_limit],remote:[...],remote:[...]}

rsync_conf={local:[10.0.0.1:5198,10.0.0.2:5198,group_local,2000,1000],remote:[10.0.1.1:5198,10.0.1.2:5198,group_remote,2000,3000]}

# if same data can be updated in local and remote cluster, then we need care modify time to

# reserve latest update when do rsync to each other.

rsync_mtime_care=0

# rsync data directory(retry_log/fail_log..)

rsync_data_dir=/home/dataserver1/tair_bin/data/remote

# max log file size to record failed rsync data, rotate to a new file when over the limit

rsync_fail_log_size=30000000

# whether do retry when rsync failed at first time

rsync_do_retry=0

# when doing retry,  size limit of retry log's memory use

rsync_retry_log_mem_size=100000000

[fdb]

# in MB

index_mmap_size=30

cache_size=256

bucket_size=10223

free_block_pool_size=8

data_dir=/home/dataserver1/tair_bin/data/fdb

fdb_name=tair_fdb

[kdb]

# in byte

map_size=10485760      # the size of the internal memory-mapped region

bucket_size=1048583    # the number of buckets of the hash table

record_align=128       # the power of the alignment of record size

data_dir=/home/dataserver1/tair_bin/data/kdb      # the directory of kdb's data

[ldb]

#### ldb manager config

## data dir prefix, db path will be data/ldbxx, "xx" means db instance index.

## so if ldb_db_instance_count = 2, then leveldb will init in

## /data/ldb1/ldb/, /data/ldb2/ldb/. We can mount each disk to

## data/ldb1, data/ldb2, so we can init each instance on each disk.

data_dir=/home/dataserver1/tair_bin/data/ldb

## leveldb instance count, buckets will be well-distributed to instances

ldb_db_instance_count=1

## whether load backup version when startup.

## backup version may be created to maintain some db data of specifid version.

ldb_load_backup_version=0

## whether support version strategy.

## if yes, put will do get operation to update existed items's meta info(version .etc),

## get unexist item is expensive for leveldb. set 0 to disable if nobody even care version stuff.

ldb_db_version_care=1

## time range to compact for gc, 1-1 means do no compaction at all

ldb_compact_gc_range = 3-6

## backgroud task check compact interval (s)

ldb_check_compact_interval = 120

## use cache count, 0 means NOT use cache,`ldb_use_cache_count should NOT be larger

## than `ldb_db_instance_count, and better to be a factor of `ldb_db_instance_count.

## each cache mdb's config depends on mdb's config item(mdb_type, slab_mem_size, etc)

ldb_use_cache_count=1

## cache stat can't report configserver, record stat locally, stat file size.

## file will be rotate when file size is over this.

ldb_cache_stat_file_size=20971520

## migrate item batch size one time (1M)

ldb_migrate_batch_size = 3145728

## migrate item batch count.

## real batch migrate items depends on the smaller size/count

ldb_migrate_batch_count = 5000

## comparator_type bitcmp by default

# ldb_comparator_type=numeric

## numeric comparator: special compare method for user_key sorting in order to reducing compact

## parameters for numeric compare. format: [meta][prefix][delimiter][number][suffix] 

## skip meta size in compare

# ldb_userkey_skip_meta_size=2

## delimiter between prefix and number 

# ldb_userkey_num_delimiter=:

####

## use blommfilter

ldb_use_bloomfilter=1

## use mmap to speed up random acess file(sstable),may cost much memory

ldb_use_mmap_random_access=0

## how many highest levels to limit compaction

ldb_limit_compact_level_count=0

## limit compaction ratio: allow doing one compaction every ldb_limit_compact_interval

## 0 means limit all compaction

ldb_limit_compact_count_interval=0

## limit compaction time interval

## 0 means limit all compaction

ldb_limit_compact_time_interval=0

## limit compaction time range, start == end means doing limit the whole day.

ldb_limit_compact_time_range=6-1

## limit delete obsolete files when finishing one compaction

ldb_limit_delete_obsolete_file_interval=5

## whether trigger compaction by seek

ldb_do_seek_compaction=0

## whether split mmt when compaction with user-define logic(bucket range, eg) 

ldb_do_split_mmt_compaction=0

#### following config effects on FastDump ####

## when ldb_db_instance_count > 1, bucket will be sharded to instance base on config strategy.

## current supported:

##  hash : just do integer hash to bucket number then module to instance, instance's balance may be

##         not perfect in small buckets set. same bucket will be sharded to same instance

##         all the time, so data will be reused even if buckets owned by server changed(maybe cluster has changed),

##  map  : handle to get better balance among all instances. same bucket may be sharded to different instance based

##         on different buckets set(data will be migrated among instances).

ldb_bucket_index_to_instance_strategy=map

## bucket index can be updated. this is useful if the cluster wouldn't change once started

## even server down/up accidently.

ldb_bucket_index_can_update=1

## strategy map will save bucket index statistics into file, this is the file's directory

ldb_bucket_index_file_dir=/home/dataserver1/tair_bin/data/bindex

## memory usage for memtable sharded by bucket when batch-put(especially for FastDump)

ldb_max_mem_usage_for_memtable=3221225472

####

#### leveldb config (Warning: you should know what you're doing.)

## one leveldb instance max open files(actually table_cache_ capacity, consider as working set, see `ldb_table_cache_size)

ldb_max_open_files=655

## whether return fail when occure fail when init/load db, and

## if true, read data when compactiong will verify checksum

ldb_paranoid_check=0

## memtable size

ldb_write_buffer_size=67108864

## sstable size

ldb_target_file_size=8388608

## max file size in each level. level-n (n > 0): (n - 1) * 10 * ldb_base_level_size

ldb_base_level_size=134217728

## sstable's block size

# ldb_block_size=4096

## sstable cache size (override `ldb_max_open_files)

ldb_table_cache_size=1073741824

##block cache size

ldb_block_cache_size=16777216

## arena used by memtable, arena block size

#ldb_arenablock_size=4096

## key is prefix-compressed period in block,

## this is period length(how many keys will be prefix-compressed period)

# ldb_block_restart_interval=16

## specifid compression method (snappy only now)

# ldb_compression=1

## compact when sstables count in level-0 is over this trigger

ldb_l0_compaction_trigger=1

## write will slow down when sstables count in level-0 is over this trigger

## or sstables' filesize in level-0 is over trigger * ldb_write_buffer_size if ldb_l0_limit_write_with_count=0

ldb_l0_slowdown_write_trigger=32

## write will stop(wait until trigger down)

ldb_l0_stop_write_trigger=64

## when write memtable, max level to below maybe

ldb_max_memcompact_level=3

## read verify checksum

ldb_read_verify_checksums=0

## write sync log. (one write will sync log once, expensive)

ldb_write_sync=0

## bits per key when use bloom filter

#ldb_bloomfilter_bits_per_key=10

## filter data base logarithm. filterbasesize=1<<ldb_filter_base_logarithm

#ldb_filter_base_logarithm=12            


  This configuration file is long; the items I modified were highlighted in red in the original post, and everything else keeps the defaults. In particular:

  (1) The config_server entries must be exactly the same as those configured earlier.

  (2) port and heartbeat_port are the data server's service port and heartbeat port; make sure the system lets you use them. The defaults are usually fine; I changed them because my Linux environment only allows ports above 30000, so adjust them to your own situation. A quick way to check that the ports are free is shown after these notes.

  (3) The data files, log files and so on are important; as before, absolute paths are best.
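
  A simple check that nothing is already listening on the chosen ports before starting the data server (port numbers taken from the config above):

  netstat -tln | grep -E '51910|55910'    # or: ss -tln | grep -E '51910|55910'; no output means the ports are free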

  3.3 Configuring the group information

#group name

[group_1]

# data move is 1 means when some data serve down, the migrating will be start. 

# default value is 0

_data_move=0

#_min_data_server_count: when data servers left in a group less than this value, config server will stop serve for this group

#default value is copy count.

_min_data_server_count=1

#_plugIns_list=libStaticPlugIn.so

_build_strategy=1 #1 normal 2 rack 

_build_diff_ratio=0.6 #how much difference is allowd between different rack 

# diff_ratio =  |data_sever_count_in_rack1 - data_server_count_in_rack2| / max (data_sever_count_in_rack1, data_server_count_in_rack2)

# diff_ration must less than _build_diff_ratio

_pos_mask=65535  # 65535 is 0xffff  this will be used to gernerate rack info. 64 bit serverId & _pos_mask is the rack info, 

_copy_count=1    

_bucket_number=1023

# accept ds strategy. 1 means accept ds automatically

_accept_strategy=1

# data center A

_server_list=10.10.7.146:51910

#_server_list=192.168.1.2:5191

#_server_list=192.168.1.3:5191

#_server_list=192.168.1.4:5191

# data center B

#_server_list=192.168.2.1:5191

#_server_list=192.168.2.2:5191

#_server_list=192.168.2.3:5191

#_server_list=192.168.2.4:5191

#quota info

_areaCapacity_list=0,1124000;


  In this file I only configured the data server list; since I have just one data server, a single entry is enough.

  3.4 Starting the cluster

  Once installation and configuration are complete, the cluster can be started. Start the data server(s) first and the config server(s) afterwards. If you are adding a data server to an existing cluster, start the dataserver process first and then modify group.conf; if you modify group.conf before starting the process, you need to run touch group.conf. The scripts directory contains a helper script, tair.sh: tair.sh start_ds starts the data servers and tair.sh start_cs starts the config servers (see the sketch below). The script is fairly simple and expects the configuration files in fixed locations with fixed names. You can also start the cluster by running tair_server (the data server) and tair_cfg_svr (the config server) from the install directory.
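
  A minimal sketch using the helper script (the scripts directory and the fixed config-file layout it expects are as described above; adjust the path if your install places tair.sh elsewhere):

cd ~/tair_bin
scripts/tair.sh start_ds    # start the data server(s) first
scripts/tair.sh start_cs    # then the config server(s)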

  After entering the tair_bin directory, start the servers in order:

sudo sbin/tair_server -f etc/dataserver.conf     # run on the data server host

sudo sbin/tair_cfg_svr -f etc/configserver.conf   # run on the config server host


  After running the start commands, use ps aux | grep tair on both machines to check that the processes are up. Getting them to start is only the first step; verify that the cluster really works with the following test:

sudo sbin/tairclient -c 10.10.7.144:51980 -g group_1

TAIR> put k1 v1       

put: success

TAIR> put k2 v2

put: success

TAIR> get k2

KEY: k2, LEN: 2


  Here 10.10.7.144:51980 is the config server's IP:PORT, and group_1 is the group name configured in group.conf.

  Errors encountered during deployment

  If startup fails or the put/get test misbehaves, check logs/config.log on the config server side and logs/server.log on the data server side; they contain the detailed error messages.

  3.4.1 Too many open files

  [2014-07-09 10:37:24.863119] ERROR start (stat_manager.cpp:30) [139767832377088] open file [/home/dataserver1/tair_bin/data/ldb1/ldb/tair_db_001013.stat] failed: Too many open files

  [2014-07-09 10:37:24.863132] ERROR start (stat_manager.cpp:30) [139767832377088] open file [/home/dataserver1/tair_bin/data/ldb1/ldb/tair_db_001014.stat] failed: Too many open files

  [2014-07-09 10:37:24.863145] ERROR start (stat_manager.cpp:30) [139767832377088] open file [/home/dataserver1/tair_bin/data/ldb1/ldb/tair_db_001015.stat] failed: Too many open files

  [2014-07-09 10:37:24.863154] ERROR start (stat_manager.cpp:30) [139767832377088] open file [/home/dataserver1/tair_bin/data/ldb1/ldb/tair_db_001016.stat] failed: Too many open files

  [2014-07-09 10:37:24.863162] ERROR start (stat_manager.cpp:30) [139767832377088] open file [/home/dataserver1/tair_bin/data/ldb1/ldb/tair_db_001017.stat] failed: Too many open files

  I chose ldb as the storage engine, and ldb's ldb_max_open_files setting defaults to 65535, i.e. it may open up to 65535 files, but my system does not allow that many. You can check the per-process open-file limit with ulimit -n; it is typically 1024, far below 65535 (a quick check is shown below). There are two ways out: either lower ldb_max_open_files below 1024, or raise the system's open-file limit (the reference material provides the method). Since this is only a test deployment, I simply lowered ldb_max_open_files.
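
  The relevant checks, as a sketch (raising the limit permanently usually means editing /etc/security/limits.conf or acting as root, which is one common approach rather than the only one):

  ulimit -n            # current per-process open-file limit, typically 1024
  ulimit -n 65535      # raise it for this shell; fails for ordinary users if the hard limit is lower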

  3.4.2 data server problems

  A misconfigured dataserver produces all kinds of errors. Here are some I ran into:

  Problem 1:

  TAIR> put abc a

  put: unknow

  TAIR> put a 11

  put: unknow

  TAIR> put abc 33

  put: unknow

  TAIR> get a

  get failed: data not exists.

  Problem 2:

  ERROR wakeup_wait_object (../../src/common/wait_object.hpp:302) [140627106383616] [3] packet is null

  These are cases where the dataserver did start, but put/get fails and the dataserver goes down immediately afterwards. Check the log for the specific error message and fix the offending configuration.

  There is also this kind of error message:

[2014-07-09 09:08:11.646430] ERROR rebuild (group_info.cpp:879) [139740048353024] can not get enough data servers. need 1 lef 0


  This means the config server could not find any data server at startup; in other words, the data server must be started successfully before the config server.

  3.4.3 Port problems

  start tair_cfg_srv listen port 5199 error

  有時候使用預設的端口号也不一定行,需要根據系統限制進行設定,比如我的系統環境隻能運作普通使用者使用30000以上的端口号,是以這裡我就不能使用預設端口号了,改下即可。

  4. Testing with the Java client

  Tair is a distributed key/value storage system, so data is usually spread over multiple data nodes. The client has to determine which node stores a given piece of data before it can perform the operation.

  The Tair client obtains this information by talking to the configserver, which maintains a table mapping hash values to the nodes that store the corresponding data. At startup the client first contacts the configserver to fetch this table.

  Once it has the table, the client can serve requests: it hashes the requested key, looks up the data node responsible for that hash in the table, and then completes the operation by communicating with that node directly.

  Tair currently provides Java and C++ clients. The Java client is already implemented (the corresponding jar package can be downloaded), so we can simply use its API; a C++ client implementation was not yet available (you would have to write it yourself). The test below uses the simple Java client.

  4.1 Required jar packages

  Besides the packaged tair jars, the Java test program needs the jars that tair depends on, specifically the following (the version numbers need not match exactly):

commons-logging-1.1.3.jar

slf4j-api-1.7.7.jar

slf4j-log4j12-1.7.7.jar

log4j-1.2.17.jar

mina-core-1.1.7.jar

tair-client-2.3.1.jar


  4.2 Java client program

  First read the Java client API description in the Tair user guide; the example below is straightforward.

package tair.client;

import java.util.ArrayList;

import java.util.List;

import com.taobao.tair.DataEntry;

import com.taobao.tair.Result;

import com.taobao.tair.ResultCode;

import com.taobao.tair.impl.DefaultTairManager;

public class TairClientTest {

  public static void main(String[] args) {

    // build the config server list

    List<String> confServers = new ArrayList<String>();

    confServers.add("10.10.7.144:51980"); 

  //        confServers.add("10.10.7.144:51980"); // optional (standby config server)

    // create the client instance

    DefaultTairManager tairManager = new DefaultTairManager();

    tairManager.setConfigServerList(confServers);

    // set the group name

    tairManager.setGroupName("group_1");

    // initialize the client

    tairManager.init();

    // put 10 items

    for (int i = 0; i < 10; i++) {

      // parameters: namespace, key, value, version, expire time

      ResultCode result = tairManager.put(0, "k" + i, "v" + i, 0, 10);

      System.out.println("put k" + i + ":" + result.isSuccess());

      if (!result.isSuccess())

        break;

    }

    // get one

    // parameters: namespace, key

    Result<DataEntry> result = tairManager.get(0, "k3");

    System.out.println("get:" + result.isSuccess());

    if (result.isSuccess()) {

      DataEntry entry = result.getValue();

      if (entry != null) {

        // the data exists

        System.out.println("value is " + entry.getValue().toString());

      } else {

        // the data does not exist

        System.out.println("this key doesn't exist.");

      }

    } else {

      // error handling

      System.out.println(result.getRc().getMessage());

    }

  }

}


  Output:

log4j:WARN No appenders could be found for logger (com.taobao.tair.impl.ConfigServer).

log4j:WARN Please initialize the log4j system properly.

log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.

put k0:true

put k1:true

put k2:true

put k3:true

put k4:true

put k5:true

put k6:true

put k7:true

put k8:true

put k9:true

get:true

value is v3


  Note: if the test is not run on the config server or data server itself, make sure the test machine can communicate with both of them (i.e. they can ping each other); otherwise you may get an error like the following:

  Exception in thread "main" java.lang.RuntimeException: init config failed

  at com.taobao.tair.impl.DefaultTairManager.init(DefaultTairManager.java:80)

  at tair.client.TairClientTest.main(TairClientTest.java:27)

  I have packaged the sample program, the required jars and a Makefile (I tested on Linux rather than running it from Eclipse); the bundle is available for download.
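
  For reference, compiling and running the test from the shell might look like this (a sketch assuming the jars listed in 4.1 are in ./lib and the source file sits at tair/client/TairClientTest.java to match its package declaration):

javac -cp "lib/*" tair/client/TairClientTest.java

java -cp "lib/*:." tair.client.TairClientTest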
