
ActiveMQ Cluster Deployment Plan


I. The ZooKeeper + LevelDB approach

This approach requires ActiveMQ 5.9 or later; our version is 5.14.4, so it meets the requirement. ZooKeeper controls which broker in a group is master and which are slaves, and each broker keeps its own LevelDB store. With three replicas, a send or consume only counts as successful once two of the stores have completed the update; when one broker goes down, the broker with the most recent updates is promoted to master.

1. Configure the ZooKeeper cluster first

  1. Install and unpack ZooKeeper
    tar -xvf zookeeper-3.4.9.tar.gz
               
  2. Modify the configuration
    Rename the sample configuration file:
    mv zoo_sample.cfg  zoo.cfg
               

Edit zoo.cfg (vim zoo.cfg); the contents are shown below (the entries flagged "Note" are the key differences):

# 1x tickTime is the heartbeat interval between clients and the ZooKeeper server;
# 2x tickTime is the client session timeout.
# The default tickTime is 2000 ms. A lower tickTime detects timeouts faster, but also
# causes more network traffic (heartbeats) and higher CPU usage (session tracking).
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take: the time, as a multiple of tickTime, allowed for
# followers to connect and sync to the leader. If it is exceeded, the connection fails.
initLimit=10
# The number of ticks that can pass between
# leader and follower for information synchronization; if this interval is exceeded,
# the follower is considered disconnected from the leader.
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.

# Note!!!
# No default; must be configured. Directory where snapshot files are stored.
# If dataLogDir is not configured, transaction logs are also stored in this directory.
dataDir=/home/raptor/runtime/mqtestdata/activemq-1/zkdir/data
dataLogDir=/home/raptor/runtime/mqtestdata/activemq-1/zkdir/log
# TCP port the ZooKeeper server process listens on; by default 2181. In real deployments,
# if the ZooKeeper instances run on different IPs, this port can be the same on each instance.
clientPort=2181

# the maximum number of client connections.
# Limits the number of clients that can connect to the ZooKeeper server
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1

# Note!!
# three servers of this cluster. The following is a pseudo-cluster setup (with three entries
# configured, all three ZooKeeper instances must be started); in real deployments the IPs differ.
server.1=192.168.199.23:2888:3888
server.2=192.168.199.23:2888:3889
server.3=192.168.199.23:2888:3890
           

3. Create the zkdir directory at the location configured above

mkdir zkdir
           

4. Create the log and data directories under zkdir

mkdir log
mkdir data
           

5. Under zkdir/data, create a myid file and write the server number of this instance (matching its server.N entry), e.g. 1. It only identifies this ZooKeeper instance's data and has nothing to do with ActiveMQ.

echo 1 > myid;
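Since this guide runs three ZooKeeper instances on one machine, each instance needs its own data directory and myid. A minimal shell sketch, assuming the activemq-1/2/3 directory layout used above (the paths are illustrative); when the instances share one host, each also needs its own zoo.cfg with a distinct clientPort (e.g. 2181/2182/2183) and its own dataDir/dataLogDir:

# one data/log directory and myid per instance (illustrative paths)
for i in 1 2 3; do
  mkdir -p /home/raptor/runtime/mqtestdata/activemq-$i/zkdir/data
  mkdir -p /home/raptor/runtime/mqtestdata/activemq-$i/zkdir/log
  echo $i > /home/raptor/runtime/mqtestdata/activemq-$i/zkdir/data/myid
done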
           

6. Start ZooKeeper

./zkServer.sh start
           
# check that ZooKeeper started correctly
tail -f zookeeper.out
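Once all three instances are running, you can also verify that the ensemble has formed (one leader, two followers), for example:

./zkServer.sh status
# Mode: leader      (or "Mode: follower" on the other two instances)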
           

Configuration notes

1. zoo.cfg must not contain stray whitespace at the beginning or end of the file

2. myid must not contain leading or trailing newlines or spaces

3. The ZooKeeper process name is QuorumPeerMain

4. Be careful when writing zoo.cfg: do not define the same variable twice

2. Configure the ActiveMQ cluster

1. The LevelDB persistenceAdapter configuration in conf/activemq.xml (comment out all of the previous JDBC persistence configuration):

<persistenceAdapter>
    <replicatedLevelDB
      directory="${activemq.data}/leveldb"
      replicas="3"
      bind="tcp://0.0.0.0:0"
      zkAddress="ip1:2181,ip2:2181,ip3:2181"
      hostname="192.168.199.23"
      sync="local_disk"
      zkPath="/activemq/leveldb-stores"
      />
</persistenceAdapter>
           

Notes:

  • replicas: the number of nodes in the cluster; with three brokers configured here it is 3. (With replicas=3 the quorum size is (3/2)+1 = 2. The master stores and updates a message and then waits for (2-1) = 1 slave to finish storing and updating it before reporting success. As for why 2-1: anyone familiar with ZooKeeper knows that one node exists as an observer. When a new master is elected, at least a quorum of nodes must be online so that the node with the latest state can be found and become the new master. It is therefore recommended to run at least 3 replica nodes so that the service is not interrupted when a single node fails.)

  • bind: once this node becomes master, it binds the given IP and port. (The bind attribute specifies the address this node uses for message replication with the slave nodes after becoming master. Note that this address/port must not also be configured in the transportConnectors section, otherwise the broker fails at startup.) tcp://0.0.0.0:0 means a random port.

  • zkAddress: the IPs and ports of the three ZooKeeper servers, separated by commas.

  • zkPath: the ZooKeeper path under which the brokers register; the default is "/default". It is the node that holds the master/slave group, and every ActiveMQ instance joining the same master/slave group must use exactly the same zkPath.

  • hostname: an address through which the three ZooKeeper servers (and the other brokers) can reach this node. Because everything here runs on one machine, 192.168.199.23 is used; if you deploy on three machines in an internal network, use each machine's own internal IP. In short, use the current machine's IP.

2. Attributes on the broker element in activemq.xml

<broker xmlns="http://activemq.apache.org/schema/core"
        brokerName="activemq-1"
        useJmx="true"
        advisorySupport="true"
        dataDirectory="${activemq.data}">
           

Note:

brokerName: the brokers in the same master/slave group must all use exactly the same name!

useJmx: true, enables JMX.

advisorySupport: used to be false; it must be true when using the static network connector clustering approach.

3. Storage space limits in activemq.xml

<systemUsage>
    <systemUsage sendFailIfNoSpace="true">
        <memoryUsage>
            <memoryUsage percentOfJvmHeap="70" />
        </memoryUsage>
        <storeUsage>
            <storeUsage limit="100 gb"/>
        </storeUsage>
        <tempUsage>
            <tempUsage limit="50 gb"/>
        </tempUsage>
    </systemUsage>
</systemUsage>

           

Note:

The storeUsage element below sets the upper limit on the persistent store:
<storeUsage>
   <storeUsage limit="100 gb"/>
</storeUsage>
           

4. transportConnectors in activemq.xml

<transportConnectors>
       <transportConnector name="openwire" uri="nio://0.0.0.0:61636?maximumConnections=1000&amp;wireFormat.maxFrameSize=524288000"/>
   </transportConnectors>

           

Notes:

In the uri, 61636 is the broker's messaging (OpenWire) port, used by clients to connect to MQ. maximumConnections is the maximum number of connections; wireFormat.maxFrameSize is the maximum size of a single frame.

5. Other ActiveMQ settings

The port in jetty.xml, which is mainly the ActiveMQ web console login port.

The JMX port in the env file must be changed accordingly for each instance and must not be duplicated (a sketch of both changes follows below).
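A rough sketch of what these per-instance changes look like; the port values are illustrative and the exact file contents vary slightly between ActiveMQ releases:

<!-- conf/jetty.xml: web console port, must be unique per broker on the same host -->
<bean id="jettyPort" class="org.apache.activemq.web.WebConsolePort" init-method="start">
    <property name="host" value="0.0.0.0"/>
    <property name="port" value="8161"/>  <!-- e.g. 8161 / 8162 / 8163 -->
</bean>

The JMX port itself is set through the ACTIVEMQ_SUNJMX_START variable in bin/env (for example -Dcom.sun.management.jmxremote.port=11099; again, one distinct port per broker).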

Configure two more standby brokers the same way; at this point one master/slave/slave group is ready.

6 靜态連結的配置

The master/slave/slave group provides high availability; static network connectors let the brokers communicate with one another.

In activemq.xml, configure networkConnectors; this element must be placed before transportConnectors.

<networkConnectors>
    <networkConnector name="network1to4" uri="static:(nio://192.168.199.23:61639)" duplex="false" conduitSubscriptions="false" prefetchSize="1">
    </networkConnector>

    <networkConnector name="network1to5" uri="static:(nio://192.168.199.23:61640)" duplex="false" conduitSubscriptions="false" prefetchSize="1">
    </networkConnector>

    <networkConnector name="network1to6" uri="static:(nio://192.168.199.23:61641)" duplex="false" conduitSubscriptions="false" prefetchSize="1">
    </networkConnector>
</networkConnectors>

           

The above is broker1's configuration after setting up two master/slave/slave groups, i.e. six brokers in total; all six brokers need an equivalent configuration modeled on this one (a sketch for a broker of the second group follows below).
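For example, a broker in the second group points back at the first group's transport ports. A sketch of broker4's networkConnectors, assuming the first group's OpenWire connectors listen on 61636, 61637 and 61638 (only 61636 appears above; the other two ports are illustrative):

<networkConnectors>
    <networkConnector name="network4to1" uri="static:(nio://192.168.199.23:61636)" duplex="false" conduitSubscriptions="false" prefetchSize="1"/>
    <networkConnector name="network4to2" uri="static:(nio://192.168.199.23:61637)" duplex="false" conduitSubscriptions="false" prefetchSize="1"/>
    <networkConnector name="network4to3" uri="static:(nio://192.168.199.23:61638)" duplex="false" conduitSubscriptions="false" prefetchSize="1"/>
</networkConnectors>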

Notes:

duplex: whether the connection is duplex (used in both directions). The default is false; we use false to avoid confusion when many brokers are configured.

conduitSubscriptions: default true; whether all consumers reached through a networked broker are treated as a single consumer. We choose false because we use message selectors: true interferes with selectors and prevents consumption from being spread evenly.

prefetchSize: the prefetch value, default 1000. We set it to 1 because our existing requirements already use 1; 1000 adds little here and would cause a single consumer to take everything (while the others stay idle) whenever fewer than 1000 messages are queued.

Below is the official description of the network connector properties; those left at their default values are not configured explicitly:

| property | default | description |
| --- | --- | --- |
| name | bridge | name of the network - for more than one network connector between the same two brokers, use different names |
| dynamicOnly | false | if true, only activate a networked durable subscription when a corresponding durable subscription reactivates; by default they are activated on startup |
| decreaseNetworkConsumerPriority | false | if true, starting at priority -5, decrease the priority for dispatching to a network Queue consumer the further away it is (in network hops) from the producer. When false, all network consumers use the same default priority (0) as local consumers |
| networkTTL | 1 | the number of brokers in the network that messages and subscriptions can pass through (sets both message and consumer TTL) |
| messageTTL | 1 | (version 5.9) the number of brokers in the network that messages can pass through |
| consumerTTL | 1 | (version 5.9) the number of brokers in the network that subscriptions can pass through (keep to 1 in a mesh) |
| conduitSubscriptions | true | multiple consumers subscribing to the same destination are treated as one consumer by the network |
| excludedDestinations | empty | destinations matching this list won't be forwarded across the network (this only applies to dynamicallyIncludedDestinations) |
| dynamicallyIncludedDestinations | empty | destinations that match this list will be forwarded across the network; n.b. an empty list means all destinations not in the excluded list will be forwarded |
| useVirtualDestSubs | false | if true, the network connection will listen to advisory messages for virtual destination consumers |
| staticallyIncludedDestinations | empty | destinations that match will always be passed across the network, even if no consumers have ever registered an interest |
| duplex | false | if true, a network connection will be used to both produce AND consume messages. This is useful for hub-and-spoke scenarios when the hub is behind a firewall, etc. |
| prefetchSize | 1000 | sets the prefetch size on the network connector's consumer. It must be > 0 because network consumers do not poll for messages |
| suppressDuplicateQueueSubscriptions | false | (from 5.3) if true, duplicate subscriptions in the network that arise from network intermediaries will be suppressed. For example, given brokers A, B and C, networked via multicast discovery: a consumer on A will give rise to a networked consumer on B and C. In addition, C will network to B (based on the network consumer from A) and B will network to C. When true, the network bridges between C and B (being duplicates of their existing network subscriptions to A) will be suppressed. Reducing the routing choices in this way provides determinism when producers or consumers migrate across the network, as the potential for dead routes (stuck messages) is eliminated. networkTTL needs to match or exceed the broker count to require this intervention |
| bridgeTempDestinations | true | whether to broadcast advisory messages for temp destinations created in the network of brokers. Temp destinations are typically created for request-reply messages. Broadcasting the information about temp destinations is turned on by default so that consumers of a request-reply message can be connected to another broker in the network and still send back the reply on the temporary destination specified in the JMSReplyTo header. In an application scenario where most/all messages use the request-reply pattern, this will generate additional traffic on the broker network, as every message typically sets a unique JMSReplyTo address (which causes a new temp destination to be created and broadcast via an advisory message in the network of brokers). When disabling this feature such network traffic can be reduced, but then producers and consumers of a request-reply message need to be connected to the same broker. Remote consumers (i.e. connected via another broker in your network) won't be able to send the reply message but instead raise a "temp destination does not exist" exception |
| alwaysSyncSend | false | (version 5.6) when true, non-persistent messages are sent to the remote broker using request/reply in place of a oneway. This setting treats both persistent and non-persistent messages the same |
| staticBridge | false | (version 5.6) if set to true, the broker will not dynamically respond to new consumers; it will only use staticallyIncludedDestinations to create demand subscriptions |
| userName | | the username to authenticate against the remote broker |
| password | | the password for the username to authenticate against the remote broker |

7. Message replay (flow-back) configuration

​ 靜态連結後必須配置回流,不然會導緻發送到broker1的消息通過network後到了broker4,這時候如果broker4上的所有consumer挂掉後,消息将不會回到broker1。

Relevant configuration:

<policyEntry queue=">" usePrefetchExtension="false"  enableAudit="true">
		
	<networkBridgeFilterFactory>
		<conditionalNetworkBridgeFilterFactory replayWhenNoConsumers ="true"/>
	</networkBridgeFilterFactory>
		
	<deadLetterStrategy>
		<individualDeadLetterStrategy  processExpired="false" queuePrefix="DLQ." useQueueForQueueMessages="true"/>
	</deadLetterStrategy>
  
</policyEntry>
           

Notes:

The key setting for replay is allowing messages to flow back to their original broker: set replayWhenNoConsumers="true" on the conditionalNetworkBridgeFilterFactory element. If your ActiveMQ version is below 5.9, you also need to disable cursor duplicate detection with enableAudit="false" (our version is 5.14.4, so this is not needed).

8. On the client side, the failover protocol is still required and all brokers must be listed in the broker URL; in our setup this part lives in the configuration center rather than in ActiveMQ itself (a client sketch follows below).
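A minimal client-side sketch using the standard ActiveMQ JMS client and the failover transport; the broker addresses (ports 61637/61638 are assumed from the port layout above) and the queue name are illustrative and would normally come from the configuration center:

import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

import org.apache.activemq.ActiveMQConnectionFactory;

public class FailoverProducer {
    public static void main(String[] args) throws Exception {
        // List every broker in the cluster; the failover transport reconnects to whichever
        // broker of each group is currently master.
        String brokerUrl = "failover:(tcp://192.168.199.23:61636,tcp://192.168.199.23:61637,"
                + "tcp://192.168.199.23:61638,tcp://192.168.199.23:61639,"
                + "tcp://192.168.199.23:61640,tcp://192.168.199.23:61641)?randomize=false";

        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(brokerUrl);
        Connection connection = factory.createConnection();
        connection.start();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("TEST.QUEUE"); // illustrative destination name
            MessageProducer producer = session.createProducer(queue);
            TextMessage message = session.createTextMessage("hello cluster");
            producer.send(message);
        } finally {
            connection.close();
        }
    }
}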

Pay attention to the order of the elements in activemq.xml (a skeletal example is shown after this list):

  1. Networks: must be established before the message store
  2. Message store: must be configured before the transports
  3. Transports: must come last in the broker configuration
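A skeletal view of that ordering inside the broker element (child contents are elided; this only illustrates the relative positions used in this guide):

<broker xmlns="http://activemq.apache.org/schema/core" brokerName="activemq-1" dataDirectory="${activemq.data}">
    <!-- 1. networks first -->
    <networkConnectors>
        ...
    </networkConnectors>
    <!-- 2. then the message store -->
    <persistenceAdapter>
        ...
    </persistenceAdapter>
    <!-- other elements such as systemUsage go here -->
    <!-- 3. transports last -->
    <transportConnectors>
        ...
    </transportConnectors>
</broker>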

II. Configuring an ActiveMQ cluster with KahaDB and a shared file lock

This also achieves the master/slave + broker cluster combination. The difference is that persistence uses KahaDB: the master and slaves share one set of store files, and failover happens by competing for the file lock. KahaDB has been the default persistence plugin since ActiveMQ 5.4; its persistence mechanism is based on log files, an index and a cache. The LevelDB persistence engine was introduced after ActiveMQ 5.6 and persists in a way similar to KahaDB; ActiveMQ 5.9 added LevelDB + ZooKeeper based data replication, the preferred replication scheme for master/slave setups.

In terms of configuration, the KahaDB approach is largely the same as the LevelDB + ZooKeeper approach; the main differences are the persistence-related configuration and that ZooKeeper is not needed.

If the KahaDB files need to be shared between different servers, this can be done by mounting an NFS share (a sketch follows below).
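A minimal sketch of that setup, assuming an NFS export at nfs-server:/export/activemq (server name and paths are illustrative). Every broker in the group mounts the same directory and points its KahaDB store at it; whichever broker acquires the file lock becomes master:

# on every broker machine (illustrative NFS server and mount point)
mount -t nfs nfs-server:/export/activemq /mnt/activemq-store

<!-- activemq.xml on every broker in the group -->
<persistenceAdapter>
    <kahaDB directory="/mnt/activemq-store/kahadb"/>
</persistenceAdapter>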

Reposted from: https://blog.csdn.net/renhuan28/article/details/79769758