Create a directory to hold the YAML file for deploying the Kafka cluster with Docker Compose:
mkdir -p /root/composefile/kafka/
Write the YAML file:
vim /root/composefile/kafka/kafka.yaml
Its contents are as follows:
version: '3'
networks:
  kafka-networks:
    driver: bridge
services:
  kafka1:
    image: wurstmeister/kafka
    container_name: kafka1
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 192.168.1.9
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.1.9:9092
      KAFKA_ZOOKEEPER_CONNECT: "192.168.1.9:9001,192.168.1.9:9002,192.168.1.9:9003"
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_BROKER_ID: 1
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    networks:
      - kafka-networks
  kafka2:
    image: wurstmeister/kafka
    container_name: kafka2
    ports:
      - "9093:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 192.168.1.9
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.1.9:9093
      KAFKA_ZOOKEEPER_CONNECT: "192.168.1.9:9001,192.168.1.9:9002,192.168.1.9:9003"
      KAFKA_ADVERTISED_PORT: 9093
      KAFKA_BROKER_ID: 2
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    networks:
      - kafka-networks
  kafka3:
    image: wurstmeister/kafka
    container_name: kafka3
    ports:
      - "9094:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 192.168.1.9
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.1.9:9094
      KAFKA_ZOOKEEPER_CONNECT: "192.168.1.9:9001,192.168.1.9:9002,192.168.1.9:9003"
      KAFKA_ADVERTISED_PORT: 9094
      KAFKA_BROKER_ID: 3
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    networks:
      - kafka-networks
192.168.1.9 is the host machine's IP address, and 192.168.1.9:9001,192.168.1.9:9002,192.168.1.9:9003 is the ZooKeeper cluster deployed earlier:
- ZooKeeper: Deploying a ZooKeeper Cluster with Docker Compose

KAFKA_BROKER_ID is used to distinguish the nodes in the cluster.
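Since all three brokers run on one host, each advertises a distinct port (9092/9093/9094), and a client can bootstrap from any or all of them. A minimal sketch of the corresponding client-side property (the hypothetical BootstrapConfig class is for illustration; the address list simply mirrors the compose file above):

```java
import java.util.Properties;

public class BootstrapConfig {
    // Client-side bootstrap list matching the three advertised listeners above.
    static Properties clientProps() {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers",
                "192.168.1.9:9092,192.168.1.9:9093,192.168.1.9:9094");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(clientProps().getProperty("bootstrap.servers"));
    }
}
```

Listing every broker here is only for fault tolerance during the first connection; after bootstrapping, the client discovers all brokers from cluster metadata.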
Deploy the Kafka cluster:
docker compose -f /root/composefile/kafka/kafka.yaml up -d
[+] Running 8/8
⠿ kafka3 Pulled 68.6s
⠿ 540db60ca938 Pull complete 6.2s
⠿ f0698009749d Pull complete 25.2s
⠿ d67ee08425e3 Pull complete 25.3s
⠿ 1a56bfced4ac Pull complete 62.7s
⠿ dccb9e5a402a Pull complete 62.8s
⠿ kafka1 Pulled 68.7s
⠿ kafka2 Pulled 68.6s
[+] Running 3/3
⠿ Container kafka3 Started 1.9s
⠿ Container kafka1 Started 1.9s
⠿ Container kafka2 Started 2.0s
Check the container status:
docker compose ls
Both the Kafka cluster and the ZooKeeper cluster are running:
NAME STATUS
kafka running(3)
zookeeper running(3)
The Kafka cluster's information has been registered with the ZooKeeper cluster:
[zk: 192.168.1.9:9001(CONNECTED) 0] ls /
[admin, brokers, cluster, config, consumers, controller, controller_epoch, feature, isr_change_notification, latest_producer_id_block, log_dir_event_notification, zookeeper]
[zk: 192.168.1.9:9001(CONNECTED) 1] ls -R /brokers
/brokers
/brokers/ids
/brokers/seqid
/brokers/topics
/brokers/ids/1
/brokers/ids/2
/brokers/ids/3
[zk: 192.168.1.9:9001(CONNECTED) 2] get /brokers/ids/1
{"features":{},"listener_security_protocol_map":{"PLAINTEXT":"PLAINTEXT"},"endpoints":["PLAINTEXT://192.168.1.9:9092"],"jmx_port":-1,"port":9092,"host":"192.168.1.9","version":5,"timestamp":"1644751714958"}
[zk: 192.168.1.9:9001(CONNECTED) 3] get /brokers/ids/2
{"features":{},"listener_security_protocol_map":{"PLAINTEXT":"PLAINTEXT"},"endpoints":["PLAINTEXT://192.168.1.9:9093"],"jmx_port":-1,"port":9093,"host":"192.168.1.9","version":5,"timestamp":"1644751714877"}
[zk: 192.168.1.9:9001(CONNECTED) 4] get /brokers/ids/3
{"features":{},"listener_security_protocol_map":{"PLAINTEXT":"PLAINTEXT"},"endpoints":["PLAINTEXT://192.168.1.9:9094"],"jmx_port":-1,"port":9094,"host":"192.168.1.9","version":5,"timestamp":"1644751714887"}
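The znode payloads above are flat JSON describing each broker's advertised endpoint. Purely as an illustration of what is stored there, a naive stdlib-only sketch that slices the host and port out of such a payload (a hypothetical helper, not a general JSON parser; Kafka clients read this metadata for you):

```java
public class BrokerZnode {
    // Naive extraction of a top-level value from the flat znode JSON
    // Kafka writes under /brokers/ids; a sketch, not a general JSON parser.
    static String field(String json, String key) {
        int i = json.indexOf("\"" + key + "\":") + key.length() + 3;
        int end = i;
        while (json.charAt(end) != ',' && json.charAt(end) != '}') end++;
        return json.substring(i, end).replace("\"", "");
    }

    public static void main(String[] args) {
        // The payload of /brokers/ids/1 shown above
        String znode = "{\"features\":{},\"listener_security_protocol_map\":{\"PLAINTEXT\":\"PLAINTEXT\"},"
                + "\"endpoints\":[\"PLAINTEXT://192.168.1.9:9092\"],\"jmx_port\":-1,\"port\":9092,"
                + "\"host\":\"192.168.1.9\",\"version\":5,\"timestamp\":\"1644751714958\"}";
        System.out.println(field(znode, "host") + ":" + field(znode, "port")); // prints 192.168.1.9:9092
    }
}
```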
Now test it with some code. The project's pom.xml:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.kaven</groupId>
    <artifactId>kafka</artifactId>
    <version>1.0-SNAPSHOT</version>

    <properties>
        <maven.compiler.source>8</maven.compiler.source>
        <maven.compiler.target>8</maven.compiler.target>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka-clients</artifactId>
            <version>3.0.0</version>
        </dependency>
    </dependencies>
</project>
The test code:
package com.kaven.kafka.admin;

import org.apache.kafka.clients.admin.*;
import org.apache.kafka.common.KafkaFuture;

import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutionException;

public class Admin {
    // Create an AdminClient instance from the Kafka service address and request timeout
    private static final AdminClient adminClient = Admin.getAdminClient("192.168.1.9:9092", "40000");

    public static void main(String[] args) throws InterruptedException, ExecutionException {
        Admin admin = new Admin();
        // Create a topic named new-topic with 2 partitions and a replication factor of 1
        admin.createTopic("new-topic", 2, (short) 1);
    }

    public static AdminClient getAdminClient(String address, String requestTimeoutMS) {
        Properties properties = new Properties();
        properties.setProperty(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, address);
        properties.setProperty(AdminClientConfig.REQUEST_TIMEOUT_MS_CONFIG, requestTimeoutMS);
        return AdminClient.create(properties);
    }

    public void createTopic(String name, int numPartitions, short replicationFactor) throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(1);
        CreateTopicsResult topics = adminClient.createTopics(
                Collections.singleton(new NewTopic(name, numPartitions, replicationFactor))
        );
        Map<String, KafkaFuture<Void>> values = topics.values();
        values.forEach((topicName, future) -> {
            future.whenComplete((a, throwable) -> {
                if (throwable != null) {
                    System.out.println(throwable.getMessage());
                }
                System.out.println(topicName);
                latch.countDown();
            });
        });
        latch.await();
    }
}
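The CountDownLatch plus whenComplete pattern inside createTopic can be sketched in isolation with stdlib types, using CompletableFuture to stand in for KafkaFuture (a hypothetical LatchPattern class; no live cluster is needed):

```java
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.CountDownLatch;

public class LatchPattern {
    // Wait for every per-topic future to complete, collecting the finished names;
    // mirrors the CountDownLatch + whenComplete pattern used in createTopic above.
    static List<String> awaitAll(Map<String, CompletableFuture<Void>> values) {
        List<String> done = new CopyOnWriteArrayList<>();
        CountDownLatch latch = new CountDownLatch(values.size());
        values.forEach((name, future) -> future.whenComplete((v, throwable) -> {
            if (throwable != null) {
                System.out.println(throwable.getMessage());
            }
            done.add(name);
            latch.countDown();
        }));
        try {
            latch.await(); // block until every callback has fired
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return done;
    }

    public static void main(String[] args) {
        Map<String, CompletableFuture<Void>> values =
                Collections.singletonMap("new-topic", CompletableFuture.completedFuture(null));
        System.out.println(awaitAll(values)); // prints [new-topic]
    }
}
```

Sizing the latch to values.size() (rather than a fixed 1) lets the same loop wait for a batch of topics created in one call.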
Output:
new-topic
[zk: 192.168.1.9:9001(CONNECTED) 9] ls -R /brokers/topics
/brokers/topics
/brokers/topics/new-topic
/brokers/topics/new-topic/partitions
/brokers/topics/new-topic/partitions/0
/brokers/topics/new-topic/partitions/1
/brokers/topics/new-topic/partitions/0/state
/brokers/topics/new-topic/partitions/1/state