
Kafka (Part 2): Building a Kafka 2.11-1.1.0 Cluster on CentOS 7.5, with Simple Tests

1. Download

Download link:

http://kafka.apache.org/downloads.html    I downloaded kafka_2.11-1.1.0.tgz, the build for Scala 2.11.


2. Installing Kafka

Cluster plan (every node runs Kafka, ZooKeeper, the JDK, and Scala):

IP              Node name
192.168.100.21  node21
192.168.100.22  node22
192.168.100.23  node23

For installing the ZooKeeper cluster, see: Building a ZooKeeper 3.4.12 Cluster on CentOS 7.5 with Command-Line Operations

2.1 Upload and Extract

[admin@node21 software]$ tar zxvf kafka_2.11-1.1.0.tgz -C /opt/module/
      

2.2 Create a Log Directory

[admin@node21 software]$ cd /opt/module/kafka_2.11-1.1.0
[admin@node21 kafka_2.11-1.1.0]$ mkdir logs      

2.3 Edit the Configuration File

Go into Kafka's configuration directory:

[admin@node21 kafka_2.11-1.1.0]$ cd config/
[admin@node21 config]$ vi server.properties      

Only server.properties needs attention here. You will notice the config directory also contains ZooKeeper files, so Kafka can be started against its built-in ZooKeeper, but a standalone ZooKeeper cluster is recommended.

server.properties (broker.id and listeners differ on every node):

#Whether topics may be deleted; the default is false (topics cannot be deleted manually)
delete.topic.enable=true
#Unique ID of this machine within the cluster, same idea as ZooKeeper's myid
broker.id=0
#Address and port this Kafka service listens on; the default port is 9092
listeners=PLAINTEXT://192.168.100.21:9092
#Number of threads the broker uses for network processing
num.network.threads=3
#Number of threads the broker uses for I/O processing
num.io.threads=8
#Send buffer size; data is not sent immediately but buffered until this size is reached, which improves performance
socket.send.buffer.bytes=102400
#Receive buffer size; data is buffered until this size is reached before being written out to disk
socket.receive.buffer.bytes=102400
#Maximum size of a request sent to or received from Kafka; must not exceed the JVM heap size
socket.request.max.bytes=104857600
#Path where the message logs are stored
log.dirs=/opt/module/kafka_2.11-1.1.0/logs
#Default number of partitions; a topic gets one partition by default
num.partitions=1
#Number of threads per data directory used for log recovery
num.recovery.threads.per.data.dir=1
#Default maximum retention time for messages: 168 hours, i.e. 7 days
log.retention.hours=168
#Kafka appends messages to segment files; once a file exceeds this size, a new one is started
log.segment.bytes=1073741824
#Check the log retention settings above every 300000 ms
log.retention.check.interval.ms=300000
#Whether to enable log cleaning; usually left disabled, though enabling it can improve performance
log.cleaner.enable=false
#ZooKeeper connection string
zookeeper.connect=node21:2181,node22:2181,node23:2181
#ZooKeeper connection timeout
zookeeper.connection.timeout.ms=6000

2.4 Distribute the Package to the Other Nodes

[admin@node21 module]$ scp -r kafka_2.11-1.1.0 admin@node22:/opt/module/
[admin@node21 module]$ scp -r kafka_2.11-1.1.0 admin@node23:/opt/module/      

On node22 and node23, edit config/server.properties and change the broker.id and listeners values accordingly, as sketched below.
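For example (a sketch: the broker.id values just need to be unique non-negative integers, and the IPs should match your own hosts), node22 would use:

broker.id=1
listeners=PLAINTEXT://192.168.100.22:9092

and node23:

broker.id=2
listeners=PLAINTEXT://192.168.100.23:9092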

2.5 Add Environment Variables

[admin@node21 module]$ vi /etc/profile
#KAFKA_HOME
export KAFKA_HOME=/opt/module/kafka_2.11-1.1.0
export PATH=$PATH:$KAFKA_HOME/bin      

Save, then make it take effect immediately:

[admin@node21 module]$  source /etc/profile      
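To confirm the variable is set (the expected output simply echoes the export above):

[admin@node21 module]$ echo $KAFKA_HOME
/opt/module/kafka_2.11-1.1.0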

3. Starting the Kafka Cluster

3.1 Start the ZooKeeper Cluster First

Run this on every ZooKeeper node:

[admin@node21 ~]$ zkServer.sh start
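Optionally check each node's role afterwards; in a healthy three-node ensemble, one node reports leader and the other two report follower:

[admin@node21 ~]$ zkServer.sh status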
      

3.2 Start the Kafka Brokers in the Background

Run this on every Kafka node:

[admin@node21 kafka_2.11-1.1.0]$ bin/kafka-server-start.sh config/server.properties &      
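Alternatively, kafka-server-start.sh accepts a -daemon switch that detaches the broker without the trailing &; either way, jps should then show a Kafka process on every node:

[admin@node21 kafka_2.11-1.1.0]$ bin/kafka-server-start.sh -daemon config/server.properties
[admin@node21 kafka_2.11-1.1.0]$ jps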

4. Kafka Command-Line Operations

Broker list: node21:9092,node22:9092,node23:9092

ZooKeeper connection string: node21:2181,node22:2181,node23:2181

4.1 Create a New Topic

[admin@node22 kafka_2.11-1.1.0]$ bin/kafka-topics.sh
Create, delete, describe, or change a topic.
Option                                   Description                            
------                                   -----------                            
--alter                                  Alter the number of partitions,        
                                           replica assignment, and/or           
                                           configuration for the topic.         
--config <String: name=value>            A topic configuration override for the 
                                           topic being created or altered.The   
                                           following is a list of valid         
                                           configurations:                      
                                             cleanup.policy                        
                                             compression.type                      
                                             delete.retention.ms                   
                                             file.delete.delay.ms                  
                                             flush.messages                        
                                             flush.ms                              
                                             follower.replication.throttled.       
                                           replicas                             
                                             index.interval.bytes                  
                                             leader.replication.throttled.replicas 
                                             max.message.bytes                     
                                             message.format.version                
                                             message.timestamp.difference.max.ms   
                                             message.timestamp.type                
                                             min.cleanable.dirty.ratio             
                                             min.compaction.lag.ms                 
                                             min.insync.replicas                   
                                             preallocate                           
                                             retention.bytes                       
                                             retention.ms                          
                                             segment.bytes                         
                                             segment.index.bytes                   
                                             segment.jitter.ms                     
                                             segment.ms                            
                                             unclean.leader.election.enable        
                                         See the Kafka documentation for full   
                                           details on the topic configs.        
--create                                 Create a new topic.                    
--delete                                 Delete a topic                         
--delete-config <String: name>           A topic configuration override to be   
                                           removed for an existing topic (see   
                                           the list of configurations under the 
                                           --config option).                    
--describe                               List details for the given topics.     
--disable-rack-aware                     Disable rack aware replica assignment  
--force                                  Suppress console prompts               
--help                                   Print usage information.               
--if-exists                              if set when altering or deleting       
                                           topics, the action will only execute 
                                           if the topic exists                  
--if-not-exists                          if set when creating topics, the       
                                           action will only execute if the      
                                           topic does not already exist         
--list                                   List all available topics.             
--partitions <Integer: # of partitions>  The number of partitions for the topic 
                                           being created or altered (WARNING:   
                                           If partitions are increased for a    
                                           topic that has a key, the partition  
                                           logic or ordering of the messages    
                                           will be affected                     
--replica-assignment <String:            A list of manual partition-to-broker   
  broker_id_for_part1_replica1 :           assignments for the topic being      
  broker_id_for_part1_replica2 ,           created or altered.                  
  broker_id_for_part2_replica1 :                                                
  broker_id_for_part2_replica2 , ...>                                           
--replication-factor <Integer:           The replication factor for each        
  replication factor>                      partition in the topic being created.
--topic <String: topic>                  The topic to be create, alter or       
                                           describe. Can also accept a regular  
                                           expression except for --create option
--topics-with-overrides                  if set when describing topics, only    
                                           show topics that have overridden     
                                           configs                              
--unavailable-partitions                 if set when describing topics, only    
                                           show partitions whose leader is not  
                                           available                            
--under-replicated-partitions            if set when describing topics, only    
                                           show under replicated partitions     
--zookeeper <String: hosts>              REQUIRED: The connection string for    
                                           the zookeeper connection in the form 
                                           host:port. Multiple hosts can be     
                                           given to allow fail-over.       

Create a new topic on node21:

[admin@node21 kafka_2.11-1.1.0]$ bin/kafka-topics.sh --create --zookeeper node21:2181,node22:2181,node23:2181 --replication-factor 3 --partitions 3 --topic TestTopic      

Option descriptions:

--topic: the topic name

--replication-factor: the number of replicas

--partitions: the number of partitions

4.2 View a Topic's Replica Information

[admin@node21 kafka_2.11-1.1.0]$ bin/kafka-topics.sh --describe --zookeeper node21:2181,node22:2181,node23:2181 --topic TestTopic      
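The output should look roughly like this (illustrative only; the actual leader and ISR assignments differ from cluster to cluster):

Topic:TestTopic PartitionCount:3        ReplicationFactor:3     Configs:
        Topic: TestTopic        Partition: 0    Leader: 0       Replicas: 0,1,2 Isr: 0,1,2
        Topic: TestTopic        Partition: 1    Leader: 1       Replicas: 1,2,0 Isr: 1,2,0
        Topic: TestTopic        Partition: 2    Leader: 2       Replicas: 2,0,1 Isr: 2,0,1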

4.3 List the Topics Already Created

[admin@node21 kafka_2.11-1.1.0]$ kafka-topics.sh --list --zookeeper node21:2181,node22:2181,node23:2181


4.4 Test Producing Messages

[admin@node22 kafka_2.11-1.1.0]$ bin/kafka-console-producer.sh 
Read data from standard input and publish it to Kafka.
Option                                   Description                            
------                                   -----------                            
--batch-size <Integer: size>             Number of messages to send in a single 
                                           batch if they are not being sent     
                                           synchronously. (default: 200)        
--broker-list <String: broker-list>      REQUIRED: The broker list string in    
                                           the form HOST1:PORT1,HOST2:PORT2.    
--compression-codec [String:             The compression codec: either 'none',  
  compression-codec]                       'gzip', 'snappy', or 'lz4'.If        
                                           specified without value, then it     
                                           defaults to 'gzip'                   
--key-serializer <String:                The class name of the message encoder  
  encoder_class>                           implementation to use for            
                                           serializing keys. (default: kafka.   
                                           serializer.DefaultEncoder)           
--line-reader <String: reader_class>     The class name of the class to use for 
                                           reading lines from standard in. By   
                                           default each line is read as a       
                                           separate message. (default: kafka.   
                                           tools.                               
                                           ConsoleProducer$LineMessageReader)   
--max-block-ms <Long: max block on       The max time that the producer will    
  send>                                    block for during a send request      
                                           (default: 60000)                     
--max-memory-bytes <Long: total memory   The total memory used by the producer  
  in bytes>                                to buffer records waiting to be sent 
                                           to the server. (default: 33554432)   
--max-partition-memory-bytes <Long:      The buffer size allocated for a        
  memory in bytes per partition>           partition. When records are received 
                                           which are smaller than this size the 
                                           producer will attempt to             
                                           optimistically group them together   
                                           until this size is reached.          
                                           (default: 16384)                     
--message-send-max-retries <Integer>     Brokers can fail receiving the message 
                                           for multiple reasons, and being      
                                           unavailable transiently is just one  
                                           of them. This property specifies the 
                                           number of retires before the         
                                           producer give up and drop this       
                                           message. (default: 3)                
--metadata-expiry-ms <Long: metadata     The period of time in milliseconds     
  expiration interval>                     after which we force a refresh of    
                                           metadata even if we haven't seen any 
                                           leadership changes. (default: 300000)
--old-producer                           Use the old producer implementation.   
--producer-property <String:             A mechanism to pass user-defined       
  producer_prop>                           properties in the form key=value to  
                                           the producer.                        
--producer.config <String: config file>  Producer config properties file. Note  
                                           that [producer-property] takes       
                                           precedence over this config.         
--property <String: prop>                A mechanism to pass user-defined       
                                           properties in the form key=value to  
                                           the message reader. This allows      
                                           custom configuration for a user-     
                                           defined message reader.              
--queue-enqueuetimeout-ms <Integer:      Timeout for event enqueue (default:    
  queue enqueuetimeout ms>                 2147483647)                          
--queue-size <Integer: queue_size>       If set and the producer is running in  
                                           asynchronous mode, this gives the    
                                           maximum amount of  messages will     
                                           queue awaiting sufficient batch      
                                           size. (default: 10000)               
--request-required-acks <String:         The required acks of the producer      
  request required acks>                   requests (default: 1)                
--request-timeout-ms <Integer: request   The ack timeout of the producer        
  timeout ms>                              requests. Value must be non-negative 
                                           and non-zero (default: 1500)         
--retry-backoff-ms <Integer>             Before each retry, the producer        
                                           refreshes the metadata of relevant   
                                           topics. Since leader election takes  
                                           a bit of time, this property         
                                           specifies the amount of time that    
                                           the producer waits before refreshing 
                                           the metadata. (default: 100)         
--socket-buffer-size <Integer: size>     The size of the tcp RECV size.         
                                           (default: 102400)                    
--sync                                   If set message send requests to the    
                                           brokers are synchronously, one at a  
                                           time as they arrive.                 
--timeout <Integer: timeout_ms>          If set and the producer is running in  
                                           asynchronous mode, this gives the    
                                           maximum amount of time a message     
                                           will queue awaiting sufficient batch 
                                           size. The value is given in ms.      
                                           (default: 1000)                      
--topic <String: topic>                  REQUIRED: The topic id to produce      
                                           messages to.                         
--value-serializer <String:              The class name of the message encoder  
  encoder_class>                           implementation to use for            
                                           serializing values. (default: kafka. 
                                           serializer.DefaultEncoder)        

Produce messages on node21:

[admin@node21 kafka_2.11-1.1.0]$ bin/kafka-console-producer.sh --broker-list node21:9092,node22:9092,node23:9092 --topic TestTopic
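Each line typed at the > prompt is sent as a separate message, e.g. (sample input):

>hello kafka
>this is a test message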


4.5 Test Consuming Messages

[admin@node22 kafka_2.11-1.1.0]$ bin/kafka-console-consumer.sh
The console consumer is a tool that reads data from Kafka and outputs it to standard output.
Option                                   Description                            
------                                   -----------                            
--blacklist <String: blacklist>          Blacklist of topics to exclude from    
                                           consumption.                         
--bootstrap-server <String: server to    REQUIRED (unless old consumer is       
  connect to>                              used): The server to connect to.     
--consumer-property <String:             A mechanism to pass user-defined       
  consumer_prop>                           properties in the form key=value to  
                                           the consumer.                        
--consumer.config <String: config file>  Consumer config properties file. Note  
                                           that [consumer-property] takes       
                                           precedence over this config.         
--csv-reporter-enabled                   If set, the CSV metrics reporter will  
                                           be enabled                           
--delete-consumer-offsets                If specified, the consumer path in     
                                           zookeeper is deleted when starting up
--enable-systest-events                  Log lifecycle events of the consumer   
                                           in addition to logging consumed      
                                           messages. (This is specific for      
                                           system tests.)                       
--formatter <String: class>              The name of a class to use for         
                                           formatting kafka messages for        
                                           display. (default: kafka.tools.      
                                           DefaultMessageFormatter)             
--from-beginning                         If the consumer does not already have  
                                           an established offset to consume     
                                           from, start with the earliest        
                                           message present in the log rather    
                                           than the latest message.             
--group <String: consumer group id>      The consumer group id of the consumer. 
--isolation-level <String>               Set to read_committed in order to      
                                           filter out transactional messages    
                                           which are not committed. Set to      
                                           read_uncommittedto read all          
                                           messages. (default: read_uncommitted)
--key-deserializer <String:                                                     
  deserializer for key>                                                         
--max-messages <Integer: num_messages>   The maximum number of messages to      
                                           consume before exiting. If not set,  
                                           consumption is continual.            
--metrics-dir <String: metrics           If csv-reporter-enable is set, and     
  directory>                               this parameter isset, the csv        
                                           metrics will be output here          
--new-consumer                           Use the new consumer implementation.   
                                           This is the default, so this option  
                                           is deprecated and will be removed in 
                                           a future release.                    
--offset <String: consume offset>        The offset id to consume from (a non-  
                                           negative number), or 'earliest'      
                                           which means from beginning, or       
                                           'latest' which means from end        
                                           (default: latest)                    
--partition <Integer: partition>         The partition to consume from.         
                                           Consumption starts from the end of   
                                           the partition unless '--offset' is   
                                           specified.                           
--property <String: prop>                The properties to initialize the       
                                           message formatter.                   
--skip-message-on-error                  If there is an error when processing a 
                                           message, skip it instead of halt.    
--timeout-ms <Integer: timeout_ms>       If specified, exit if no message is    
                                           available for consumption for the    
                                           specified interval.                  
--topic <String: topic>                  The topic id to consume on.            
--value-deserializer <String:                                                   
  deserializer for values>                                                      
--whitelist <String: whitelist>          Whitelist of topics to include for     
                                           consumption.                         
--zookeeper <String: urls>               REQUIRED (only when using old          
                                           consumer): The connection string for 
                                           the zookeeper connection in the form 
                                           host:port. Multiple URLS can be      
                                           given to allow fail-over.        

Consume messages on node22 (old consumer command):

[admin@node22 kafka_2.11-1.1.0]$ kafka-console-consumer.sh --zookeeper node21:2181,node22:2181,node23:2181  --from-beginning --topic TestTopic      

--from-beginning: reads out all the existing data in the TestTopic topic. Whether to add this option depends on your use case.


New consumer command:

[admin@node22 kafka_2.11-1.1.0]$ kafka-console-consumer.sh --bootstrap-server node21:9092,node22:9092,node23:9092  --from-beginning --topic TestTopic      

4.6 Delete a Topic

[admin@node22 kafka_2.11-1.1.0]$ bin/kafka-topics.sh --zookeeper node21:2181,node22:2181,node23:2181  --delete --topic TestTopic      

This requires delete.topic.enable=true in server.properties; otherwise the topic is only marked for deletion, and the brokers must be restarted for the deletion to complete.
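One way to confirm: list the topics again. With delete.topic.enable=true the topic disappears; without it, the listing would instead show the topic suffixed with "- marked for deletion" (illustrative):

[admin@node22 kafka_2.11-1.1.0]$ bin/kafka-topics.sh --list --zookeeper node21:2181,node22:2181,node23:2181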

4.7 Stop the Kafka Service

[admin@node21 kafka_2.11-1.1.0]$ bin/kafka-server-stop.sh

4.8 Write a Kafka Startup Script

[admin@node21 kafka_2.11-1.1.0]$ cd bin
[admin@node21 bin]$ vi start-kafka.sh
#!/bin/bash
nohup /opt/module/kafka_2.11-1.1.0/bin/kafka-server-start.sh  /opt/module/kafka_2.11-1.1.0/config/server.properties >/opt/module/kafka_2.11-1.1.0/logs/kafka.log 2>&1 &      

Make the script executable: chmod +x start-kafka.sh
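Going one step further (a sketch that assumes passwordless SSH from node21 to the other nodes and the same install path everywhere), the whole cluster can then be started from one node:

#!/bin/bash
# start-kafka-cluster.sh: start the broker on every node over SSH
for host in node21 node22 node23; do
  ssh admin@$host "/opt/module/kafka_2.11-1.1.0/bin/start-kafka.sh"
done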

5. Kafka Configuration in Detail

5.1 Broker Configuration

Property | Default | Description
broker.id | | Required. The broker's unique identifier.
log.dirs | /tmp/kafka-logs | Directory where Kafka data is stored. Multiple comma-separated directories may be given; a newly created partition goes into the directory currently holding the fewest partitions.
port | 9092 | Port on which the broker accepts client connections.
zookeeper.connect | null | ZooKeeper connection string in the form hostname1:port1,hostname2:port2,hostname3:port3. One or more hosts may be listed; listing them all improves reliability. This setting also accepts a ZooKeeper chroot path for storing all of this cluster's data; to keep the cluster separate from other applications, specifying such a path is advisable, e.g. hostname1:port1,hostname2:port2,hostname3:port3/chroot/path. Note that consumers must use the same value.
message.max.bytes | 1000000 | Largest message size the server accepts. Keep it consistent with the consumer's maximum.message.size, or producers may write messages too large for consumers to fetch.
num.io.threads | 8 | Number of I/O threads serving read/write requests; should be at least the number of disks on the server.
queued.max.requests | 500 | Number of requests that may queue for the I/O threads; beyond this the network threads stop accepting new requests.
socket.send.buffer.bytes | 100 * 1024 | The SO_SNDBUFF buffer the server prefers for socket connections.
socket.receive.buffer.bytes | | The SO_RCVBUFF buffer the server prefers for socket connections.
socket.request.max.bytes | 100 * 1024 * 1024 | Largest request the server allows, as a guard against out-of-memory errors; should be smaller than the Java heap size.
num.partitions | 1 | Default partition count for topics created without an explicit one; raising it (e.g. to 5) is suggested.
log.segment.bytes | 1024 * 1024 * 1024 | Segment file size; a new segment is rolled once this is exceeded. Can be overridden per topic.
log.roll.{ms,hours} | 24 * 7 hours | Time after which a new segment file is rolled. Can be overridden per topic.
log.retention.{ms,minutes,hours} | 7 days | Retention period for segment logs; logs kept longer than this are deleted. Can be overridden per topic. With large data volumes, reducing it is suggested.
log.retention.bytes | -1 | Maximum size of each partition; data beyond it is deleted. Note this limits each partition, not the whole topic. Can be overridden per topic.
log.retention.check.interval.ms | 5 minutes | How often the deletion policy is checked.
auto.create.topics.enable | true | Whether topics are created automatically. Setting it to false is suggested, so topic management stays strict and producers cannot write to misspelled topics.
default.replication.factor | | Default replication factor; raising it to 2 is suggested.
replica.lag.time.max.ms | 10000 | If the leader receives no fetch request from a follower within this window, it removes the follower from the ISR (in-sync replicas).
replica.lag.max.messages | 4000 | If a replica falls this many messages behind the leader, the leader removes it from the ISR.
replica.socket.timeout.ms | 30 * 1000 | Timeout for requests a replica sends to the leader.
replica.socket.receive.buffer.bytes | 64 * 1024 | The socket receive buffer for network requests to the leader for replicating data.
replica.fetch.max.bytes | 1024 * 1024 | The number of bytes of messages to attempt to fetch for each partition in the fetch requests the replicas send to the leader.
replica.fetch.wait.max.ms | | The maximum amount of time to wait for data to arrive on the leader in the fetch requests sent by the replicas to the leader.
num.replica.fetchers | | Number of threads used to replicate messages from leaders. Increasing this value can increase the degree of I/O parallelism in the follower broker.
fetch.purgatory.purge.interval.requests | 1000 | The purge interval (in number of requests) of the fetch request purgatory.
zookeeper.session.timeout.ms | 6000 | ZooKeeper session timeout. If the broker sends no heartbeat within this time, ZooKeeper considers it dead. Too low and nodes are marked dead too easily; too high and real failures are noticed too late.
zookeeper.connection.timeout.ms | | Timeout for a client's connection to ZooKeeper.
zookeeper.sync.time.ms | 2000 | How far a ZK follower may lag behind the ZK leader.
controlled.shutdown.enable | | Whether controlled broker shutdown is allowed. If enabled, the broker migrates all of its leaders to other brokers before shutting itself down; enabling it is suggested, as it improves cluster stability.
auto.leader.rebalance.enable | | If this is enabled the controller will automatically try to balance leadership for partitions among the brokers by periodically returning leadership to the "preferred" replica for each partition if it is available.
leader.imbalance.per.broker.percentage | 10 | The percentage of leader imbalance allowed per broker. The controller will rebalance leadership if this ratio goes above the configured value per broker.
leader.imbalance.check.interval.seconds | 300 | The frequency with which to check for leader imbalance.
offset.metadata.max.bytes | 4096 | The maximum amount of metadata to allow clients to save with their offsets.
connections.max.idle.ms | 600000 | Idle connections timeout: the server socket processor threads close the connections that idle more than this.
num.recovery.threads.per.data.dir | | The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
unclean.leader.election.enable | | Indicates whether to enable replicas not in the ISR set to be elected as leader as a last resort, even though doing so may result in data loss.
delete.topic.enable | false | Whether topic deletion is enabled; setting it to true is suggested.
offsets.topic.num.partitions | 50 | The number of partitions for the offset commit topic. Since changing this after deployment is currently unsupported, we recommend using a higher setting for production (e.g., 100-200).
offsets.topic.retention.minutes | 1440 | Offsets that are older than this age will be marked for deletion. The actual purge will occur when the log cleaner compacts the offsets topic.
offsets.retention.check.interval.ms | | The frequency at which the offset manager checks for stale offsets.
offsets.topic.replication.factor | 3 | The replication factor for the offset commit topic. A higher setting (e.g., three or four) is recommended in order to ensure higher availability. If the offsets topic is created while fewer brokers than the replication factor are available, it will be created with fewer replicas.
offsets.topic.segment.bytes | 104857600 | Segment size for the offsets topic. Since it uses a compacted topic, this should be kept relatively low in order to facilitate faster log compaction and loads.
offsets.load.buffer.size | 5242880 | An offset load occurs when a broker becomes the offset manager for a set of consumer groups (i.e., when it becomes a leader for an offsets topic partition). This setting corresponds to the batch size (in bytes) to use when reading from the offsets segments when loading offsets into the offset manager's cache.
offsets.commit.required.acks | | The number of acknowledgements that are required before the offset commit can be accepted. This is similar to the producer's acknowledgement setting. In general, the default should not be overridden.
offsets.commit.timeout.ms | 5000 | The offset commit will be delayed until this timeout or the required number of replicas have received the offset commit. This is similar to the producer request timeout.

5.2 Producer Configuration

Property | Default | Description
metadata.broker.list | | List of brokers the producer queries for metadata at startup; may be a subset of the cluster's brokers. Note it is only used to fetch topic metadata; the producer then picks suitable brokers from that metadata and opens socket connections to them. Format: host1:port1,host2:port2.
request.required.acks | | See the introduction in Section 3.2.
request.timeout.ms | | How long the broker waits for an ack; if exceeded, an error is returned to the client.
producer.type | sync | Synchronous vs. asynchronous mode (sync or async). Async mode lets the producer push data in batches, which greatly improves broker throughput, so async is recommended.
serializer.class | kafka.serializer.DefaultEncoder | Serializer class; serializes to byte[] by default.
key.serializer.class | | Serializer class for keys; defaults to the same as above.
partitioner.class | kafka.producer.DefaultPartitioner | Partitioner class; hashes the key by default.
compression.codec | none | Compression codec for producer messages: "none", "gzip" or "snappy". On compression, see Section 4.1.
compressed.topics | | Topics for which compression is enabled. If a codec is chosen above, compression applies only to the topics listed here; if this is empty, it applies to all topics.
message.send.max.retries | | Number of retries when a producer send fails. Network problems may cause endless retrying.
retry.backoff.ms | 100 | Before each retry, the producer refreshes the metadata of relevant topics to see if a new leader has been elected. Since leader election takes a bit of time, this property specifies the amount of time that the producer waits before refreshing the metadata.
topic.metadata.refresh.interval.ms | 600 * 1000 | The producer generally refreshes the topic metadata from brokers when there is a failure (partition missing, leader not available, ...). It will also poll regularly (default: every 10 min, i.e. 600000 ms). If you set this to a negative value, metadata will only get refreshed on failure. If you set this to zero, the metadata will get refreshed after each message sent (not recommended). Important note: the refresh happens only AFTER the message is sent, so if the producer never sends a message the metadata is never refreshed.
queue.buffering.max.ms | | In async mode, how long the producer buffers messages. Set to 1000, for example, it accumulates one second of data and sends it in one go, which greatly increases broker throughput at the cost of timeliness.
queue.buffering.max.messages | | In async mode, the maximum number of messages buffered; beyond this the producer either blocks or drops messages.
queue.enqueue.timeout.ms | | How long the producer blocks once the limit above is reached. At 0 the producer never blocks and drops messages when the buffer is full; at -1 it blocks and never drops messages.
batch.num.messages | 200 | In async mode, the number of messages per batch; the producer sends only once this many messages are buffered.
send.buffer.bytes | | Socket write buffer size.
client.id | "" | The client id is a user-specified string sent in each request to help trace calls. It should logically identify the application making the request.

5.3 Consumer Configuration

Property | Default | Description
group.id | | The consumer's group ID; consumers with the same group.id belong to the same group.
zookeeper.connect | | The consumer's ZooKeeper connection string; must match the brokers' configuration.
consumer.id | | Generated automatically if not set.
socket.timeout.ms | | Socket timeout for network requests. The effective timeout is max.fetch.wait + socket.timeout.ms.
socket.receive.buffer.bytes | | The socket receive buffer for network requests.
fetch.message.max.bytes | | Largest message size per topic-partition fetch. The consumer buffers this much per partition in memory, so this parameter bounds the consumer's memory usage. It should be at least as large as the server's maximum message size, so the producer can never write a message the consumer cannot fetch.
num.consumer.fetchers | | The number of fetcher threads used to fetch data.
auto.commit.enable | | If true, the consumer periodically saves its current offset to ZooKeeper; after a failure and restart, consumption resumes from that offset.
auto.commit.interval.ms | 60 * 1000 | How often the consumer commits offsets to ZooKeeper.
queued.max.message.chunks | 2 | Number of message chunks buffered for consumption; each chunk can hold up to fetch.message.max.bytes of data.
fetch.min.bytes | | The minimum amount of data the server should return for a fetch request. If insufficient data is available the request will wait for that much data to accumulate before answering the request.
fetch.wait.max.ms | | The maximum amount of time the server will block before answering the fetch request if there isn't sufficient data to immediately satisfy fetch.min.bytes.
rebalance.backoff.ms | | Backoff time between retries during rebalance.
refresh.leader.backoff.ms | | Backoff time to wait before trying to determine the leader of a partition that has just lost its leader.
auto.offset.reset | largest | What to do when there is no initial offset in ZooKeeper or if an offset is out of range: smallest automatically resets the offset to the smallest offset; largest automatically resets it to the largest offset; anything else throws an exception to the consumer.
consumer.timeout.ms | | If no message is consumed within this time, the consumer throws an exception.
exclude.internal.topics | | Whether messages from internal topics (such as offsets) should be exposed to the consumer.
zookeeper.session.timeout.ms | | ZooKeeper session timeout. If the consumer fails to heartbeat to ZooKeeper for this period of time it is considered dead and a rebalance will occur.
zookeeper.connection.timeout.ms | | The max time that the client waits while establishing a connection to ZooKeeper.
zookeeper.sync.time.ms | | How far a ZK follower can be behind a ZK leader.