Recently, after finishing a Storm project that computes PV/UV/IP statistics in real time, I used Flume on the front-end servers to collect logs and gradually brought all of the servers online. Once most of them had been added, Kafka started reporting errors similar to the following:
After digging through various resources, I found the cause: Kafka was running with its default configuration, and the messages being sent exceeded the maximum allowed size in bytes. Changing the following parameters fixes it:
<code>replica.fetch.max.bytes</code> - the maximum message size that broker replicas can fetch from one another within the cluster, which ensures messages are replicated correctly. If this is too small, a large message will never be replicated, and the consumer will never see it, because the message is never committed (fully replicated).
<code>message.max.bytes</code> - the largest message size the broker will accept from a producer.
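For reference, here is a minimal sketch of the change in each broker's <code>server.properties</code>; the 10 MB value is an assumption, so size it to the largest message you actually expect:

<pre><code># server.properties on every broker (a broker restart is typically
# required for these to take effect)
# 10485760 bytes = 10 MB; an assumed value, tune to your largest message
message.max.bytes=10485760
replica.fetch.max.bytes=10485760
</code></pre>

Note that <code>replica.fetch.max.bytes</code> should be at least as large as <code>message.max.bytes</code>; otherwise the brokers will accept messages that the followers can never fetch, and those messages will never be committed.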
After making these changes, everything returned to normal.