
Setting Up an ActiveMQ Cluster

https://www.cnblogs.com/arjenlee/p/9303229.html

https://www.cnblogs.com/carriezhangyan/p/11492513.html

https://www.cnblogs.com/wangmingshun/p/7745808.html

 

APP cluster:

 10.47.217.115 ZooKeeper 3.5.1
 10.47.217.113 ActiveMQ 5.11.2
 10.47.217.112 ActiveMQ 5.11.2
 10.47.217.111 ActiveMQ 5.11.2

Device cluster:
 10.47.204.189 ActiveMQ 5.11.2
 10.47.204.187 ActiveMQ 5.11.2
 10.47.204.186 ActiveMQ 5.11.2
 10.47.204.180 ZooKeeper 3.5.1

 

Problem 1: switching to a non-root user reports -bash: fork: retry: No child processes

https://www.cnblogs.com/zhaojingyu/p/10929712.html

Logging in over ssh as a non-root user fails with this error:
-bash: fork: retry: No child processes

Fix:
Edit /etc/security/limits.d/20-nproc.conf (on some systems the file is 90-nproc.conf instead):
* soft nproc 4096
root soft nproc unlimited
Raise the 4096 value to something larger, or set it to unlimited.
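As a sketch of the fix (assuming the 20-nproc.conf variant; on a real host run this as root against the actual file), the edit can be scripted and checked. Here it is applied to a scratch copy:

```shell
# Work on a scratch copy; on a real system edit
# /etc/security/limits.d/20-nproc.conf (or 90-nproc.conf) as root.
conf=$(mktemp)
printf '%s\n' \
    '*          soft    nproc     4096' \
    'root       soft    nproc     unlimited' > "$conf"

# Raise the soft nproc limit for ordinary users from 4096 to 65535
sed -i 's/4096/65535/' "$conf"

# Show the result; the change takes effect on the next login session
cat "$conf"

# Current per-user process limit of this shell, for comparison
ulimit -u
```

After editing the real file, log out, log back in, and re-check `ulimit -u` to confirm the new limit is in effect.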

 

 

https://www.openlogic.com/blog/activemq-community-deprecates-leveldb-what-you-need-know

Why ActiveMQ no longer recommends LevelDB
I have been studying MQ recently. Although I already use it, I had never dug into it deeply, and while reading the official documentation I discovered that the ActiveMQ project no longer recommends LevelDB. LevelDB was introduced in ActiveMQ 5.8.0; it is a file-based persistent store that offers faster persistence than KahaDB. So why does the ActiveMQ project no longer support or recommend LevelDB? After a lot of searching I finally found an English blog post that gives the reason. Its main points follow:

ACTIVEMQ COMMUNITY DEPRECATES LEVELDB - WHAT YOU NEED TO KNOW
Surprisingly, the ActiveMQ community no longer endorses LevelDB as the persistence store for its brokers. Christopher Shannon made the following statement on November 15, 2016:

The main reason is that KahaDB continues to be the main focus where bugs are fixed and not much attention is paid to LevelDB. There seems to be several issues with corruption (especially with replication) so I don’t think it should be a recommended store unless the stability is sorted out. Unfortunately nearly every JIRA reported against LevelDB goes ignored.

Roughly: KahaDB remains the main focus for bug fixes, while LevelDB receives little attention. It appears to have several corruption issues (especially with replication), so it should not be a recommended store unless its stability is sorted out.

In response, the ActiveMQ community moved quickly and agreed to deprecate LevelDB rather than invest in further development of it.

A quick history
The LevelDB persistence store originated in Google's BigTable work and is actively used in many high-profile applications, including Google Chrome and Chromium. It was added in ActiveMQ 5.8 to address problems with the default persistence store, KahaDB, most of which related to KahaDB's B-Tree indexes and inefficient cleanup. LevelDB's key-caching index performs data cleanup more reliably during checkpoints. In 5.9, ZooKeeper-powered replication of the persistence store was added, offering faster failover and a high-availability model with no single point of failure.

ActiveMQ users embraced the enhancement enthusiastically, and many converted their persistence configuration to LevelDB. Adoption of the LevelDB/ZooKeeper high-availability (HA) model was hampered by its extra infrastructure requirements (implementing it takes at least six machines), but plenty of enterprises did take advantage of the improved HA model.

Why the change?
Despite its advantages, LevelDB is a project maintained outside the ActiveMQ community and therefore depends on a third party for updates. While LevelDB itself is a solid solution, ActiveMQ has to maintain its own client library wrapping LevelDB, and the core committers behind that work are no longer actively developing the ActiveMQ 5.x branch. Without someone to improve the client adapter, the community cannot adequately address bugs and optimization needs. Rather than letting those issues linger, the community decided to deprecate LevelDB and focus on improving KahaDB and on ActiveMQ 6.0 (Artemis).

What should you do now?
When a feature is deprecated by an OSS community, the likelihood of further improvements to it drops sharply. ActiveMQ was written against an old version of LevelDB, and that integration is now unlikely to be updated. We recommend migrating off LevelDB (including LevelDB/ZooKeeper) as soon as possible. You have several options:

Go back to KahaDB. If you are moving off LevelDB/ZooKeeper and need faster failover in the HA model, remember that you are by no means limited to a single passive instance. You can have several passive instances compete for the lock, which statistically reduces the time it takes for a passive broker to become active. Also keep in mind that KahaDB has received a great deal of work and improvement since the 5.11 release, so problems you ran into in the past may no longer affect you.
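For example, a client pointed at all three APP-cluster brokers (IPs from the host table above; port 61616 per the openwire connector in activemq.xml) only needs a failover: transport URI, and whichever broker currently holds the KahaDB lock serves the connection. A small sketch that assembles such a URI:

```shell
# APP-cluster broker addresses (from the host table above)
brokers="10.47.217.113 10.47.217.112 10.47.217.111"

# Assemble a client-side failover: transport URI; randomize=false makes
# clients try the brokers in the listed order instead of picking randomly.
uri="failover:($(for h in $brokers; do printf 'tcp://%s:61616,' "$h"; done | sed 's/,$//'))?randomize=false"
echo "$uri"
```

Clients using this URI reconnect automatically when the active broker dies and a passive one grabs the lock.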

Add redundancy to the shared storage in a KahaDB HA setup. While shared storage technically does not eliminate the single point of failure, using redundant shared storage in your infrastructure significantly reduces the likelihood of data loss.

Make sure KahaDB is used correctly. Do not run shared storage over CIFS/SMB, and do not keep it on any kind of NTFS-based filesystem. You will get the best throughput with the iSCSI protocol and a multi-user filesystem such as GFS2. If you have trouble maintaining accurate lock state, be sure to look at the JDBC Pluggable Storage Locker, which can provide a more reliable locking mechanism.
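A quick way to check what a store directory is actually sitting on (a sketch; it inspects /tmp as a stand-in for the real ${activemq.data}/kahadb or shared-store mount point):

```shell
# Print the filesystem type backing a directory; for a KahaDB shared store
# you want to see something like gfs2, not cifs/smb or an NTFS-based type.
dir=/tmp   # replace with your store directory, e.g. ${activemq.data}/kahadb
fstype=$(stat -f -c %T "$dir")
echo "filesystem type of $dir: $fstype"
```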

Don't get fancy. Many people try to replicate persistence with replicated filesystems such as GlusterFS, CephFS, or DRBD while still using an active/passive locking mechanism. These solutions look good on paper, but under load they perform poorly, frequently losing lock state and ending up with no active broker, or worse, multiple active brokers that can corrupt the persistence store.

Remember that JDBC persistence still exists; it can be clustered using familiar RDBMS replication concepts, and its performance can be greatly improved with a connection pool such as c3p0.

Keep a close eye on Artemis, and be ready to switch once it is production-ready.

Conclusion
Open-source solutions evolve quickly. We will miss LevelDB, but the community has spoken, and we must be ready to move our critical infrastructure in the same direction. You can still achieve reliable failover with KahaDB, and the community has been working to fix many of the problems KahaDB users ran into in the past. Artemis has its own clustering solution, and once it is ready you will be able to achieve the same level of high availability through it as with LevelDB/ZooKeeper.

 

/opt/apache-activemq-5.9.0/conf/activemq.xml

<!--
    Licensed to the Apache Software Foundation (ASF) under one or more
    contributor license agreements.  See the NOTICE file distributed with
    this work for additional information regarding copyright ownership.
    The ASF licenses this file to You under the Apache License, Version 2.0
    (the "License"); you may not use this file except in compliance with
    the License.  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing, software
    distributed under the License is distributed on an "AS IS" BASIS,
    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    See the License for the specific language governing permissions and
    limitations under the License.
-->
<!-- START SNIPPET: example -->
<beans
  xmlns="http://www.springframework.org/schema/beans"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
  http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core.xsd">

    <!-- Allows us to use system properties as variables in this configuration file -->
    <bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
        <property name="locations">
            <value>file:${activemq.conf}/credentials.properties</value>
        </property>
    </bean>

   <!-- Allows accessing the server log -->
    <bean id="logQuery" class="io.fabric8.insight.log.log4j.Log4jLogQuery"
          lazy-init="false" scope="singleton"
          init-method="start" destroy-method="stop">
    </bean>

    <!--
        The <broker> element is used to configure the ActiveMQ broker.
    -->
    <!-- tangxje 20200212 -->
    <broker xmlns="http://activemq.apache.org/schema/core" brokerName="LINDOWS.DEV.TESTRUNNER.MQ" dataDirectory="${activemq.data}" useJmx="true" schedulePeriodForDestinationPurge="10000">

        <destinationPolicy>
            <policyMap>
              <policyEntries>
                <policyEntry topic=">" >
                    <!-- The constantPendingMessageLimitStrategy is used to prevent
                         slow topic consumers to block producers and affect other consumers
                         by limiting the number of messages that are retained
                         For more information, see:

                         http://activemq.apache.org/slow-consumer-handling.html

                    -->
                  <pendingMessageLimitStrategy>
                    <constantPendingMessageLimitStrategy limit="1000"/>
                  </pendingMessageLimitStrategy>
                </policyEntry>
              </policyEntries>
            </policyMap>
        </destinationPolicy>


        <!--
            The managementContext is used to configure how ActiveMQ is exposed in
            JMX. By default, ActiveMQ uses the MBean server that is started by
            the JVM. For more information, see:

            http://activemq.apache.org/jmx.html
        -->
    <!-- tangxje 20200212 -->
    <!-- 
        <managementContext>
            <managementContext connectorPort="1095" createConnector="false"/>
        </managementContext>
    -->

        <managementContext connectorPort="1095" createConnector="true" 
                jmxDomainName="org.apache.activemq">
                <property xmlns="http://www.springframework.org/schema/beans" name="environment">
                  <map xmlns="http://www.springframework.org/schema/beans">
                        <entry xmlns="http://www.springframework.org/schema/beans"
                               key="jmx.remote.x.password.file"
                               value="${activemq.conf}/jmx.password"/>
                        <entry xmlns="http://www.springframework.org/schema/beans"
                               key="jmx.remote.x.access.file"
                               value="${activemq.conf}/jmx.access"/>
                    </map>
                </property>
        </managementContext>


        <!--
            Configure message persistence for the broker. The default persistence
            mechanism is the KahaDB store (identified by the kahaDB tag).
            For more information, see:

            http://activemq.apache.org/persistence.html
        -->
    <!-- tangxje 20200212 -->
        <persistenceAdapter>
            <!-- kahaDB directory="${activemq.data}/kahadb"/ -->
            <replicatedLevelDB
                directory="${activemq.data}/leveldb"
                replicas="3"
                bind="tcp://0.0.0.0:0"
                zkAddress="zookeeper-host-1:2181,zookeeper-host-2:2181,zookeeper-host-3:2181"
                hostname="local-ip-address"
                sync="local_disk"
                zkPath="/lindows/activemq/leveldb-stores"/>
        </persistenceAdapter>

        <!--
            The systemUsage controls the maximum amount of space the broker will
            use before disabling caching and/or slowing down producers.
            For more information, see:

            http://activemq.apache.org/producer-flow-control.html
        -->
        <systemUsage>
            <systemUsage>
                <memoryUsage>
                    <memoryUsage percentOfJvmHeap="70" />
                </memoryUsage>
                <storeUsage>
                    <storeUsage limit="100 gb"/>
                </storeUsage>
                <tempUsage>
                    <tempUsage limit="50 gb"/>
                </tempUsage>
            </systemUsage>
        </systemUsage>

        <!--
            The transport connectors expose ActiveMQ over a given protocol to
            clients and other brokers. For more information, see:

            http://activemq.apache.org/configuring-transports.html
        -->
        <!-- tangxje 20200112 -->
        <transportConnectors>
            <!-- DOS protection, limit concurrent connections to 1000 and frame size to 100MB -->
            <transportConnector name="openwire" uri="tcp://0.0.0.0:61616?maximumConnections=4000&amp;wireFormat.maxFrameSize=104857600"/>
            <transportConnector name="amqp" uri="amqp://0.0.0.0:5672?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
            <transportConnector name="stomp" uri="stomp://0.0.0.0:61613?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
            <transportConnector name="mqtt" uri="mqtt://0.0.0.0:1883?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
            <transportConnector name="ws" uri="ws://0.0.0.0:61614?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
        </transportConnectors>

        <!-- destroy the spring context on shutdown to stop jetty -->
        <shutdownHooks>
            <bean xmlns="http://www.springframework.org/schema/beans" class="org.apache.activemq.hooks.SpringContextHook" />
        </shutdownHooks>

        <!-- add by lindows 20160413 start, tangxje -->
        <destinationPolicy>
            <policyMap>
                <policyEntries>
                    <!-- 20180428 lindows add dispatchPolicy -->
                    <policyEntry queue=">"
                                 consumersBeforeDispatchStarts="2"
                                 timeBeforeDispatchStarts="2000"
                                 gcInactiveDestinations="true"
                                 inactiveTimoutBeforeGC="30000"
                                 prioritizedMessages="true"
                                 maxPageSize="2000"/>
                </policyEntries>
            </policyMap>
        </destinationPolicy>
        <!-- add by lindows 20160413 end, tangxje -->

    </broker>

    <!--
        Enable web consoles, REST and Ajax APIs and demos
        The web consoles requires by default login, you can disable this in the jetty.xml file
        Take a look at ${ACTIVEMQ_HOME}/conf/jetty.xml for more details
    -->
    <import resource="jetty.xml"/>

</beans>
<!-- END SNIPPET: example -->
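Before restarting a broker after hand-editing activemq.xml, it is worth confirming the file is still well-formed XML, since a stray attribute or unclosed tag will prevent startup. A minimal sketch that validates a scratch snippet; on a broker host, point it at /opt/apache-activemq-5.9.0/conf/activemq.xml instead:

```shell
# Write a scratch config fragment to validate; on a real broker host set
# conf=/opt/apache-activemq-5.9.0/conf/activemq.xml instead.
conf=$(mktemp)
cat > "$conf" <<'EOF'
<persistenceAdapter>
  <replicatedLevelDB directory="${activemq.data}/leveldb" replicas="3"/>
</persistenceAdapter>
EOF

# Parse with Python's stdlib XML parser; a parse error here means the
# broker would also fail to load this file.
if python3 -c 'import sys, xml.dom.minidom; xml.dom.minidom.parse(sys.argv[1])' "$conf"; then
    echo "well-formed"
else
    echo "NOT well-formed"
fi
```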

 

end

 

 

posted @ 2019-12-15 15:13  siemens800