Preface
Continuing from the previous post, this note records how to set up Kafka, in broad strokes. It mainly works through Kafka's quickstart guide, and I plan to mix Chinese and English (even though this is exactly what most advice on learning English well argues against) in order to simplify the writing and focus on recording the key points.
Main text
Step 1: Download the code
Download the 2.1.0 release and un-tar it.
> tar -xzf kafka_2.11-2.1.0.tgz
> cd kafka_2.11-2.1.0
Step 2: Start the server
Kafka uses ZooKeeper so you need to first start a ZooKeeper server if you don't already have one. You can use the convenience script packaged with kafka to get a quick-and-dirty single-node ZooKeeper instance.
> bin/zookeeper-server-start.sh config/zookeeper.properties
[2013-04-22 15:01:37,495] INFO Reading configuration from: config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
...
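For reference, the convenience ZooKeeper configuration is tiny; config/zookeeper.properties as shipped contains roughly the following (quoted from memory, so check your copy):
dataDir=/tmp/zookeeper   # where ZooKeeper stores its snapshot data
clientPort=2181          # port that Kafka (and other clients) connect to
maxClientCnxns=0         # 0 disables the per-client connection limit, fine for local testing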
ZooKeeper is needed here; the command above starts a single-node ZooKeeper instance. This is also a good opportunity to study the start-up script and finally sort out a few things I never fully understood.
if [ $# -lt 1 ];
then
echo "USAGE: $0 [-daemon] zookeeper.properties"
exit 1
fi
# $# is the number of positional arguments passed to the script; see https://unix.stackexchange.com/questions/122343/what-does-mean-in-shell
base_dir=$(dirname $0)
# dirname $0 strips the file name from the path used to invoke the script, giving the script's directory as base_dir
if [ "x$KAFKA_LOG4J_OPTS" = "x" ]; then
export KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/../config/log4j.properties"
fi
# Set KAFKA_LOG4J_OPTS only if it is empty. Prefixing both sides with "x" is a common shell trick for testing whether a variable is empty or unset.
if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
export KAFKA_HEAP_OPTS="-Xmx512M -Xms512M"
fi
# Set default JVM heap options if KAFKA_HEAP_OPTS has not been provided.
EXTRA_ARGS=${EXTRA_ARGS-'-name zookeeper -loggc'}
# ${EXTRA_ARGS-'-name zookeeper -loggc'} is parameter expansion with a default: if EXTRA_ARGS is unset, the value '-name zookeeper -loggc' is used instead.
COMMAND=$1
case $COMMAND in
  -daemon)
    EXTRA_ARGS="-daemon "$EXTRA_ARGS
    shift
    ;;
  *)
    ;;
esac
# If the first argument is "-daemon", fold it into EXTRA_ARGS (to be passed to the command below) and shift it off the argument list, so that "$@" later expands to only the remaining arguments.
exec $base_dir/kafka-run-class.sh $EXTRA_ARGS org.apache.zookeeper.server.quorum.QuorumPeerMain "$@"
# Hand over to kafka-run-class.sh via exec, passing the arguments assembled above; exec replaces the current shell process instead of spawning a child.
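To make these idioms concrete, here is a minimal standalone sketch (my own illustration, not part of Kafka) that exercises $#, the "x"-prefix emptiness test, ${var-default} expansion, shift, and the case dispatch in one place:
#!/bin/bash
# demo.sh - toy version of the argument handling in zookeeper-server-start.sh
if [ $# -lt 1 ]; then                      # $# = number of positional arguments
  echo "USAGE: $0 [-daemon] some.properties"
  exit 1
fi
if [ "x$MY_HEAP_OPTS" = "x" ]; then        # "x" prefix: safe test for an empty/unset variable
  export MY_HEAP_OPTS="-Xmx512M -Xms512M"
fi
EXTRA_ARGS=${EXTRA_ARGS-'-name demo -loggc'}   # default applies only if EXTRA_ARGS is unset
COMMAND=$1
case $COMMAND in
  -daemon)
    EXTRA_ARGS="-daemon "$EXTRA_ARGS
    shift                                  # drop "-daemon"; "$@" now holds only the rest
    ;;
  *)
    ;;
esac
echo "heap opts : $MY_HEAP_OPTS"
echo "extra args: $EXTRA_ARGS"
echo "remaining : $@"
# the real script ends with: exec kafka-run-class.sh ... "$@"
# which replaces the current shell process with the new command
Running it as ./demo.sh -daemon zookeeper.properties should show that -daemon ended up in EXTRA_ARGS while only zookeeper.properties remains in "$@".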
Now start the Kafka server:
> bin/kafka-server-start.sh config/server.properties
[2013-04-22 15:01:47,028] INFO Verifying properties (kafka.utils.VerifiableProperties)
[2013-04-22 15:01:47,051] INFO Property socket.send.buffer.bytes is overridden to 1048576 (kafka.utils.VerifiableProperties)
...
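The broker reads its settings from config/server.properties. The handful of properties this quickstart relies on look roughly like this in the shipped file (quoted from memory, so verify against your copy):
# config/server.properties (relevant defaults, roughly)
broker.id=0                        # unique, permanent id of this broker
#listeners=PLAINTEXT://:9092       # left commented out, so the broker listens on 9092
log.dirs=/tmp/kafka-logs           # where log segments are stored
zookeeper.connect=localhost:2181   # must point at the ZooKeeper started above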
Step 3: Create a topic
Let's create a topic named "test" with a single partition and only one replica:
> bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
We can now see that topic if we run the list topic command:
> bin/kafka-topics.sh --list --zookeeper localhost:2181
test
Alternatively, instead of manually creating topics you can also configure your brokers to auto-create topics when a non-existent topic is published to.
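Auto-creation is governed by broker configuration. As far as I know, the relevant properties in config/server.properties are these (the first defaults to true):
auto.create.topics.enable=true   # create a topic on first use instead of failing
num.partitions=1                 # partition count used for auto-created topics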
Step 4: Send some messages
Kafka comes with a command line client that will take input from a file or from standard input and send it out as messages to the Kafka cluster. By default, each line will be sent as a separate message.
Run the producer and then type a few messages into the console to send to the server.
> bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
This is a message
This is another message
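Since the producer also accepts input from a file, the same tool can replay an existing file line by line; something like the following should work (messages.txt is just a hypothetical file here):
> cat messages.txt | bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test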
Step 5: Start a consumer
Kafka also has a command line consumer that will dump out messages to standard output.
> bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning
This is a message
This is another message
If you have each of the above commands running in a different terminal then you should now be able to type messages into the producer terminal and see them appear in the consumer terminal.
All of the command line tools have additional options; running the command with no arguments will display usage information documenting them in more detail.
The above is a simple start-up flow with barely any configuration, and the wording is simple too. Next comes the multi-broker cluster setup. A Kafka server is called a broker.
Step 6: Setting up a multi-broker cluster
So far we have been running against a single broker, but that's no fun. For Kafka, a single broker is just a cluster of size one, so nothing much changes other than starting a few more broker instances. But just to get a feel for it, let's expand our cluster to three nodes (still all on our local machine).
First we make a config file for each of the brokers (on Windows use the copy command instead):
> cp config/server.properties config/server-1.properties
> cp config/server.properties config/server-2.properties
Now edit these new files and set the following properties:
config/server-1.properties:
    broker.id=1
    listeners=PLAINTEXT://:9093
    log.dirs=/tmp/kafka-logs-1

config/server-2.properties:
    broker.id=2
    listeners=PLAINTEXT://:9094
    log.dirs=/tmp/kafka-logs-2
The broker.id property is the unique and permanent name of each node in the cluster. We have to override the port and log directory only because we are running these all on the same machine and we want to keep the brokers from all trying to register on the same port or overwrite each other's data.
In short, the broker.id property must be a unique, permanent name within the cluster that identifies the node.
We already have Zookeeper and our single node started, so we just need to start the two new nodes:
> bin/kafka-server-start.sh config/server-1.properties &
...
> bin/kafka-server-start.sh config/server-2.properties &
...
Now create a new topic with a replication factor of three:
> bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 1 --topic my-replicated-topic
Okay but now that we have a cluster how can we know which broker is doing what? To see that run the "describe topics" command:
> bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-replicated-topic
Topic:my-replicated-topic   PartitionCount:1    ReplicationFactor:3 Configs:
    Topic: my-replicated-topic  Partition: 0    Leader: 1   Replicas: 1,2,0 Isr: 1,2,0
Here is an explanation of the output. The first line gives a summary of all the partitions; each additional line gives information about one partition. Since we have only one partition for this topic, there is only one line.
In other words: the first line summarizes all partitions, and each subsequent line describes one partition; since this topic has a single partition, there is only one such line.
- "leader" is the node responsible for all reads and writes for the given partition. Each node will be the leader for a randomly selected portion of the partitions.
- That is, the "leader" handles all reads and writes for the given partition, and each node becomes leader for a randomly chosen share of the partitions.
- "replicas" is the list of nodes that the log for this partition regardless of whether they are the leader or even if they are currently alive.
- That is, "replicas" lists the nodes holding a copy of this partition's log (the backup nodes), whether or not they are the leader and whether or not they are currently alive.
- "isr" is the set of "in-sync" replicas. This is the subset of the replicas list that is currently alive and caught-up to the leader.
- That is, "isr" is the set of "in-sync" replicas: the subset of the replicas that are currently alive and caught up with the leader.
Note that in my example node 1 is the leader for the only partition of the topic.
We can run the same command on the original topic we created to see where it is:
> bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test
Topic:test  PartitionCount:1    ReplicationFactor:1 Configs:
    Topic: test Partition: 0    Leader: 0   Replicas: 0 Isr: 0
So there is no surprise there—the original topic has no replicas and is on server 0, the only server in our cluster when we created it.
Let's publish a few messages to our new topic:
> bin/kafka-console-producer.sh --broker-list localhost:9092 --topic my-replicated-topic
...
my test message 1
my test message 2
^C
Now let's consume these messages:
> bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --from-beginning --topic my-replicated-topic
...
my test message 1
my test message 2
^C
Now let's test out fault-tolerance. Broker 1 was acting as the leader so let's kill it:
> ps aux | grep server-1.properties
7564 ttys002    0:15.91 /System/Library/Frameworks/JavaVM.framework/Versions/1.8/Home/bin/java...
> kill -9 7564
On Windows use:
> wmic process where "caption = 'java.exe' and commandline like '%server-1.properties%'" get processid
ProcessId
6016
> taskkill /pid 6016 /f
Leadership has switched to one of the slaves and node 1 is no longer in the in-sync replica set:
> bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-replicated-topic
Topic:my-replicated-topic   PartitionCount:1    ReplicationFactor:3 Configs:
    Topic: my-replicated-topic  Partition: 0    Leader: 2   Replicas: 1,2,0 Isr: 2,0
But the messages are still available for consumption even though the leader that took the writes originally is down:
> bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --from-beginning --topic my-replicated-topic
...
my test message 1
my test message 2
^C
Step 7: Use Kafka Connect to import/export data
Writing data from the console and writing it back to the console is a convenient place to start, but you'll probably want to use data from other sources or export data from Kafka to other systems. For many systems, instead of writing custom integration code you can use Kafka Connect to import or export data.
Kafka Connect is a tool included with Kafka that imports and exports data to Kafka. It is an extensible tool that runs connectors, which implement the custom logic for interacting with an external system. In this quickstart we'll see how to run Kafka Connect with simple connectors that import data from a file to a Kafka topic and export data from a Kafka topic to a file.
First, we'll start by creating some seed data to test with:
> echo -e "foo\nbar" > test.txt
Or on Windows:
> echo foo> test.txt
> echo bar>> test.txt
Next, we'll start two connectors running in standalone mode, which means they run in a single, local, dedicated process. We provide three configuration files as parameters. The first is always the configuration for the Kafka Connect process, containing common configuration such as the Kafka brokers to connect to and the serialization format for data. The remaining configuration files each specify a connector to create. These files include a unique connector name, the connector class to instantiate, and any other configuration required by the connector.
> bin/connect-standalone.sh config/connect-standalone.properties config/connect-file-source.properties config/connect-file-sink.properties
These sample configuration files, included with Kafka, use the default local cluster configuration you started earlier and create two connectors: the first is a source connector that reads lines from an input file and produces each to a Kafka topic and the second is a sink connector that reads messages from a Kafka topic and produces each as a line in an output file.
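For reference, the two connector files used above are tiny. As shipped they contain roughly the following (again quoted from memory, so open them yourself to confirm):
# config/connect-file-source.properties (roughly)
name=local-file-source
connector.class=FileStreamSource
tasks.max=1
file=test.txt            # file to read lines from
topic=connect-test       # topic to produce to

# config/connect-file-sink.properties (roughly)
name=local-file-sink
connector.class=FileStreamSink
tasks.max=1
file=test.sink.txt       # file to append consumed lines to
topics=connect-test      # topic(s) to consume from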
During startup you'll see a number of log messages, including some indicating that the connectors are being instantiated. Once the Kafka Connect process has started, the source connector should start reading lines from test.txt and producing them to the topic connect-test, and the sink connector should start reading messages from the topic connect-test and writing them to the file test.sink.txt. We can verify the data has been delivered through the entire pipeline by examining the contents of the output file:
> more test.sink.txt
foo
bar
Note that the data is being stored in the Kafka topic connect-test, so we can also run a console consumer to see the data in the topic (or use custom consumer code to process it):
> bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic connect-test --from-beginning
{"schema":{"type":"string","optional":false},"payload":"foo"}
{"schema":{"type":"string","optional":false},"payload":"bar"}
...
The connectors continue to process data, so we can add data to the file and see it move through the pipeline:
> echo Another line>> test.txt
You should see the line appear in the console consumer output and in the sink file.
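A convenient way to watch that end of the pipeline while appending lines is to follow the sink file from another terminal:
> tail -f test.sink.txt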
This step covers Kafka Connect, which lets you import data into Kafka from other sources and export data from Kafka to other systems.
Step 8: Use Kafka Streams to process data
Kafka Streams is a client library for building mission-critical real-time applications and microservices, where the input and/or output data is stored in Kafka clusters. Kafka Streams combines the simplicity of writing and deploying standard Java and Scala applications on the client side with the benefits of Kafka's server-side cluster technology to make these applications highly scalable, elastic, fault-tolerant, distributed, and much more. This quickstart example will demonstrate how to run a streaming application coded in this library.
This is a very brief look at Kafka's stream-processing capability.
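The distribution ships a WordCount demo for Kafka Streams. If I remember the class and topic names correctly, a quick way to try it against the broker above is roughly this (check the Streams quickstart for the exact names in your version):
> bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic streams-plaintext-input
> bin/kafka-run-class.sh org.apache.kafka.streams.examples.wordcount.WordCountDemo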
Summary
This post mainly showed how to get Kafka up and running quickly, which is fairly straightforward, and along the way summarized a few shell idioms I had always used with only a fuzzy understanding. This time I wanted to pin them down properly instead of leaving them half-baked.
References
https://kafka.apache.org/quickstart
https://unix.stackexchange.com/questions/254494/how-does-bash-differentiate-between-brace-expansion-and-command-grouping - on the use of curly braces in the shell
https://unix.stackexchange.com/questions/174566/what-is-the-purpose-of-using-shift-in-shell-scripts - on the purpose of shift in shell scripts
https://unix.stackexchange.com/questions/122343/what-does-mean-in-shell - on what $# means in the shell