1. Understanding ansible-playbook through a zk/kafka cluster -- installation and configuration
Install ansible (just a simple install, no further explanation):
yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
yum install -y ansible

Layout:
mkdir -pv /etc/ansible/{packages,playbooks/{stage,user,common}}
packages directory: holds the packages downloaded from official sites for batch installs (e.g. zookeeper, kafka, tomcat, jdk)
playbooks/stage directory: holds yml files that combine multiple roles; here, the zk/kafka cluster install [roles: jdk, zookeeper, kafka]

----------------------------------------------------------------------------------------------------------------------------------
Playbook overview:
A playbook is built from:
·tasks: operations performed by invoking modules
·variables: variables
·templates: templates
·handlers: actions triggered by a task (via notify)
·roles: roles
Case study: zk/kafka cluster deployment
Prepare 3 test machines (192.168.109.138-140) and set up SSH key authentication (passwordless login) from the ansible host (137).
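A minimal sketch of the key setup, with `/tmp/packages` standing in for this article's `/etc/ansible/packages` (paths and user name are illustrative):

```shell
# generate the keypair whose public half the user role will later install
mkdir -p /tmp/packages/user
ssh-keygen -t rsa -b 2048 -N '' -q -f /tmp/packages/user/xiaoming
# push the key to each target once (password prompt), after which ansible is passwordless:
# for ip in 192.168.109.138 192.168.109.139 192.168.109.140; do
#     ssh-copy-id -i /tmp/packages/user/xiaoming.pub root@$ip
# done
```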
Breakdown:
1. Add the apprun user
2. Install the JDK
3. Install zookeeper
4. Install kafka
I. USER [apprun]
mkdir -pv /etc/ansible/roles/user/{tasks,handlers,templates,vars}
touch /etc/ansible/roles/user/readme.md    # a readme file -- a good habit to build

#Description
The user role creates a user, installs the corresponding SSH key, and optionally adds the user to sudo.
#Args
* package_path: directory holding the ssh pubkeys; /etc/ansible/packages in this article
* state: present to create the user, absent to remove it
* username: the user name
* ssh_pubkey: pubkey file name; the full path is {{ package_path }}/user/{{ ssh_pubkey }}
* is_sudo: yes or no
#Usage
```
roles:
    - { role: user, username: xiaoming, ssh_pubkey: xiaoming.pub, is_sudo: "yes", tags: user }
```
---
# The role for user (roles/user/tasks/main.yml)
- name: useradd {{ username }}
  user: name={{ username }} state={{ state }} remove=yes
  tags: user
- name: install {{ username }} key
  authorized_key: user={{ username }} key="{{ item }}"
  with_file:
    - "{{ package_path }}/user/{{ ssh_pubkey }}"
  when: state == "present"
  tags: user
- name: visudo
  template: src=sudoers dest=/etc/sudoers.d/{{ username }} mode=0440
  when: state == "present" and is_sudo is defined and is_sudo == "yes"
  tags: user
- name: del {{ username }}
  file: dest=/etc/sudoers.d/{{ username }} state=absent
  when: state == "absent"
  tags: user
{{ username }} ALL=(ALL) NOPASSWD:ALL
---
# default variables (roles/user/vars/main.yml)
package_path: /etc/ansible/packages
# install or uninstall (choices: present, latest, absent)
state: present
Ops public key location: /etc/ansible/packages/user/xiaoming.pub
II. JDK
mkdir -pv /etc/ansible/roles/jdk/{tasks,handlers,templates,vars}
touch /etc/ansible/roles/jdk/readme.md    # a readme file -- a good habit to build

#Description
The jdk role.
#Args
* package_path: /etc/ansible/packages in this article
* state: present or absent
* jdk_file: name of the JDK directory to sync, located at {{ package_path }}/jdk/{{ jdk_file }}
#Usage
```
roles:
    - { role: jdk, jdk_file: jdk1.7.0_15, tags: jdk }
```
---
# The playbook for jdk deploy (roles/jdk/tasks/main.yml)
- name: Deploy jdk
  synchronize: src={{ package_path }}/jdk/{{ jdk_file }} dest={{ install_path }}
  when: state != "absent"
  tags: jdk
- name: ln -s {{ install_path }}/{{ jdk_file }} {{ install_path }}/jdk
  file: src={{ install_path }}/{{ jdk_file }} dest={{ install_path }}/jdk state=link
  when: state != "absent"
  tags: jdk
- name: Directory property change
  file: dest=/apprun owner=apprun group=apprun recurse=yes
  tags: jdk
- name: rm jdk symbolic link
  file: dest={{ install_path }}/jdk state=absent
  when: state == "absent"
  tags: jdk
- name: rm jdk
  file: dest={{ install_path }}/{{ jdk_file }} state=absent
  when: state == "absent"
  tags: jdk
---
# default variables (roles/jdk/vars/main.yml)
# jdk install path on the remote server
install_path: /usr/local
# jdk version
jdk_file: jdk1.8.0_73
# package path
package_path: /etc/ansible/packages
# install or uninstall
state: present
Download the JDK packages from the official site into /etc/ansible/packages/jdk; here I downloaded two versions (jdk1.7.0_15 and jdk1.8.0_73).

Preparing a test batch-install of the JDK:
1. vim /etc/ansible/hosts -- define the hosts of the [jdk-zookeeper-kafka] group, then test connectivity: ansible jdk-zookeeper-kafka -m ping
[jdk-zookeeper-kafka]
kafka01 ansible_ssh_host=192.168.109.138
kafka02 ansible_ssh_host=192.168.109.139
kafka03 ansible_ssh_host=192.168.109.140
2. Write a playbook yml file for the jdk (stage-jdk-zookeeper-kafka.yml)
---
- hosts: jdk-zookeeper-kafka
  roles:
    - { role: user, username: apprun, ssh_pubkey: xiaoming.pub, is_sudo: "yes", state: present, tags: user }
    - { role: jdk, install_path: /apprun, jdk_file: jdk1.8.0_73, state: present, tags: jdk }
To uninstall, change state: present to state: absent.
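The same toggle also works from the command line without editing the file: --extra-vars take precedence over role parameters, and --tags scopes the run to one role (a sketch using this article's playbook name):

```shell
# uninstall only the jdk role; -e overrides state for every role, so limit the run with --tags
ansible-playbook stage-jdk-zookeeper-kafka.yml -e state=absent --tags jdk
```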
III. zookeeper
mkdir -pv /etc/ansible/roles/zookeeper/{tasks,handlers,templates,vars}
touch /etc/ansible/roles/zookeeper/readme.md    # a readme file -- a good habit to build
I'll install zookeeper first and explain afterwards.
1. The role configuration

#Description
The zookeeper role.
#Args
* package_path: /etc/ansible/packages in this article
* install_path: zookeeper deploy directory, {{ install_path }}/zookeeper
* version: zookeeper version
* state: present or absent
* clusters: zookeeper cluster configuration
* dataDir: zookeeper data directory
* dataLogDir: zookeeper log directory
#Usage
```
roles:
    - { role: zookeeper, install_path: /usr/local, dataDir: /opt/zookeeper, dataLogDir: /opt/zookeeper_log, version: 3.4.8, XMN: 1024, XMS: 2048, XMX: 2048, clusters: [{host: kafka01, id: 1, ip: 192.168.109.38, port: 2181}, {host: kafka02, id: 2, ip: 192.168.109.39, port: 2181}, {host: kafka03, id: 3, ip: 192.168.109.40, port: 2181}], state: present, tags: zookeeper }
```
---
- name: restart zookeeper
service: name=zookeeper state=restarted
---
# The playbook for zookeeper (roles/zookeeper/tasks/main.yml)
- name: rsync zookeeper-{{ version }}
  synchronize: src={{ package_path }}/zookeeper/zookeeper-{{ version }} dest={{ install_path }}
  when: state != "absent"
  tags: zookeeper
- name: ln -s {{ install_path }}/zookeeper-{{ version }} {{ install_path }}/zookeeper
  file: src={{ install_path }}/zookeeper-{{ version }} dest={{ install_path }}/zookeeper state=link
  when: state != "absent"
  tags: zookeeper
- name: create zookeeper {{ dataDir }} {{ dataLogDir }} dir
  file: dest={{ item }} state=directory
  with_items:
    - "{{ dataDir }}"
    - "{{ dataLogDir }}"
  when: state != "absent"
  tags: zookeeper
- name: conf the zookeeper configure file
  template: src=zoo.cfg dest="{{ install_path }}/zookeeper/conf/" mode=0644
  notify: restart zookeeper
  when: state != "absent"
  tags: zookeeper
- name: install systemd unit
  template: src={{ item.src }} dest={{ item.dest }} mode=0644
  with_items:
    - { src: zookeeper.service, dest: /usr/lib/systemd/system/ }
  when: state != "absent"
  tags: zookeeper
- name: Manage services
  systemd: daemon_reload=yes
  tags: zookeeper
- name: install init script
  template: src={{ item.src }} dest={{ item.dest }} mode=0755
  with_items:
    - { src: zookeeper, dest: /etc/init.d/ }
  when: state != "absent"
  tags: zookeeper
- name: configure hosts
  lineinfile: dest=/etc/hosts line='{{ item.ip }} {{ item.host }}' backup=yes state={{ state }}
  with_items: "{{ clusters }}"
  tags: zookeeper
- name: configure myid
  template: src=myid dest={{ dataDir }}/myid
  when: state != "absent"
  tags: zookeeper
- name: Directory property change
  file: dest=/apprun owner=apprun group=apprun recurse=yes
  tags: zookeeper
- name: ensure zookeeper is running
  service: name=zookeeper enabled=yes state=started
  when: state != "absent"
  tags: zookeeper
- name: stop zookeeper
  service: name=zookeeper enabled=no state=stopped
  when: state == "absent"
  tags: zookeeper
- name: rm zookeeper
  file: dest={{ item }} state=absent
  with_items:
    - "{{ install_path }}/zookeeper"
    - "{{ install_path }}/zookeeper-{{ version }}"
    - "/etc/init.d/zookeeper"
    - "{{ dataDir }}"
    - "{{ dataLogDir }}"
    - "/usr/lib/systemd/system/zookeeper.service"
  when: state == "absent"
  tags: zookeeper
{% for item in clusters %}
{% if item.ip == ansible_ssh_host %}
{{ item.id }}
{% endif %}
{% endfor %}
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial synchronization phase can take
initLimit=10
# The number of ticks that can pass between a request and an acknowledgement
syncLimit=5
# the directory where the snapshot is stored
dataDir={{ dataDir }}
# the location of the log files
dataLogDir={{ dataLogDir }}
# the port at which the clients will connect
{% for item in clusters %}
{% if item.ip == ansible_ssh_host %}
clientPort={{ item.port }}
{% endif %}
{% endfor %}
# quorum member config
{% for item in clusters %}
server.{{ item.id }}={{ item.host }}:2888:3888
{% endfor %}
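It helps to see what the myid and zoo.cfg templates above render to. For host kafka01 (ansible_ssh_host 192.168.109.138, cluster id 1) and the dataDir/dataLogDir values from the stage playbook in this article, the result would be roughly the following (a hand-written sketch, not captured output; /tmp stands in for the real target paths):

```shell
# myid rendered for kafka01 -- just its id from the clusters list
echo '1' > /tmp/myid
# zoo.cfg rendered for kafka01
cat > /tmp/zoo.cfg <<'EOF'
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/apprun/data/zookeeper
dataLogDir=/apprun/data/zookeeper_log
clientPort=2181
server.1=kafka01:2888:3888
server.2=kafka02:2888:3888
server.3=kafka03:2888:3888
EOF
grep -c '^server\.' /tmp/zoo.cfg   # one server.N line per quorum member
```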
#!/usr/bin/env bash
# chkconfig: 2345 20 90
# description: zookeeper
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# If this scripted is run out of /usr/bin or some other system bin directory
# it should be linked to and not copied. Things like java jar files are found
# relative to the canonical path of this script.
#
# See the following page for extensive details on setting
# up the JVM to accept JMX remote management:
# http://java.sun.com/javase/6/docs/technotes/guides/management/agent.html
# by default we allow local JMX connections

export JAVA_HOME="{{ install_path }}/jdk"
export ZOO_LOG_DIR="{{ dataLogDir }}"

if [ "x$JMXLOCALONLY" = "x" ]
then
    JMXLOCALONLY=false
fi

if [ "x$JMXDISABLE" = "x" ]
then
    echo "JMX enabled by default" >&2
    # for some reason these two options are necessary on jdk6 on Ubuntu
    # accord to the docs they are not necessary, but otw jconsole cannot
    # do a local attach
    ZOOMAIN="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.local.only=$JMXLOCALONLY org.apache.zookeeper.server.quorum.QuorumPeerMain"
else
    echo "JMX disabled by user request" >&2
    ZOOMAIN="org.apache.zookeeper.server.quorum.QuorumPeerMain"
fi

# use POSTIX interface, symlink is followed automatically
ZOOBIN="${BASH_SOURCE-$0}"
ZOOBIN="$(dirname "${ZOOBIN}")"
ZOOBINDIR="{{ install_path }}/zookeeper/bin"

if [ -e "$ZOOBIN/../libexec/zkEnv.sh" ]; then
    . "$ZOOBINDIR/../libexec/zkEnv.sh"
else
    . "$ZOOBINDIR/zkEnv.sh"
fi

if [ "x$SERVER_JVMFLAGS" != "x" ]
then
    JVMFLAGS="$SERVER_JVMFLAGS $JVMFLAGS"
fi

if [ "x$2" != "x" ]
then
    ZOOCFG="$ZOOCFGDIR/$2"
fi

# if we give a more complicated path to the config, don't screw around in $ZOOCFGDIR
if [ "x$(dirname "$ZOOCFG")" != "x$ZOOCFGDIR" ]
then
    ZOOCFG="$2"
fi

if $cygwin
then
    ZOOCFG=`cygpath -wp "$ZOOCFG"`
    # cygwin has a "kill" in the shell itself, gets confused
    KILL=/bin/kill
else
    KILL=kill
fi

echo "Using config: $ZOOCFG" >&2

if [ -z "$ZOOPIDFILE" ]; then
    ZOO_DATADIR="$(grep "^[[:space:]]*dataDir" "$ZOOCFG" | sed -e 's/.*=//')"
    if [ ! -d "$ZOO_DATADIR" ]; then
        mkdir -p "$ZOO_DATADIR"
    fi
    ZOOPIDFILE="$ZOO_DATADIR/zookeeper_server.pid"
else
    # ensure it exists, otw stop will fail
    mkdir -p "$(dirname "$ZOOPIDFILE")"
fi

if [ ! -w "$ZOO_LOG_DIR" ] ; then
    mkdir -p "$ZOO_LOG_DIR"
fi

_ZOO_DAEMON_OUT="$ZOO_LOG_DIR/zookeeper.out"

case $1 in
start)
    echo -n "Starting zookeeper ... "
    if [ -f "$ZOOPIDFILE" ]; then
      if kill -0 `cat "$ZOOPIDFILE"` > /dev/null 2>&1; then
         echo $command already running as process `cat "$ZOOPIDFILE"`.
         exit 0
      fi
    fi
    nohup "$JAVA" "-Dzookeeper.log.dir=${ZOO_LOG_DIR}" "-Dzookeeper.root.logger=${ZOO_LOG4J_PROP}" \
    -cp "$CLASSPATH" $JVMFLAGS $ZOOMAIN "$ZOOCFG" > "$_ZOO_DAEMON_OUT" 2>&1 < /dev/null &
    if [ $? -eq 0 ]
    then
      if /bin/echo -n $! > "$ZOOPIDFILE"
      then
        sleep 1
        echo STARTED
      else
        echo FAILED TO WRITE PID
        exit 1
      fi
    else
      echo SERVER DID NOT START
      exit 1
    fi
    ;;
start-foreground)
    ZOO_CMD=(exec "$JAVA")
    if [ "${ZOO_NOEXEC}" != "" ]; then
      ZOO_CMD=("$JAVA")
    fi
    "${ZOO_CMD[@]}" "-Dzookeeper.log.dir=${ZOO_LOG_DIR}" "-Dzookeeper.root.logger=${ZOO_LOG4J_PROP}" \
    -cp "$CLASSPATH" $JVMFLAGS $ZOOMAIN "$ZOOCFG"
    ;;
print-cmd)
    echo "\"$JAVA\" -Dzookeeper.log.dir=\"${ZOO_LOG_DIR}\" -Dzookeeper.root.logger=\"${ZOO_LOG4J_PROP}\" -cp \"$CLASSPATH\" $JVMFLAGS $ZOOMAIN \"$ZOOCFG\" > \"$_ZOO_DAEMON_OUT\" 2>&1 < /dev/null"
    ;;
stop)
    echo -n "Stopping zookeeper ... "
    if [ ! -f "$ZOOPIDFILE" ]
    then
      echo "no zookeeper to stop (could not find file $ZOOPIDFILE)"
    else
      $KILL -9 $(cat "$ZOOPIDFILE")
      rm "$ZOOPIDFILE"
      echo STOPPED
    fi
    exit 0
    ;;
upgrade)
    shift
    echo "upgrading the servers to 3.*"
    "$JAVA" "-Dzookeeper.log.dir=${ZOO_LOG_DIR}" "-Dzookeeper.root.logger=${ZOO_LOG4J_PROP}" \
    -cp "$CLASSPATH" $JVMFLAGS org.apache.zookeeper.server.upgrade.UpgradeMain ${@}
    echo "Upgrading ... "
    ;;
restart)
    shift
    "$0" stop ${@}
    sleep 3
    "$0" start ${@}
    ;;
status)
    # -q is necessary on some versions of linux where nc returns too quickly, and no stat result is output
    clientPortAddress=`grep "^[[:space:]]*clientPortAddress[^[:alpha:]]" "$ZOOCFG" | sed -e 's/.*=//'`
    if ! [ $clientPortAddress ]
    then
        clientPortAddress="localhost"
    fi
    clientPort=`grep "^[[:space:]]*clientPort[^[:alpha:]]" "$ZOOCFG" | sed -e 's/.*=//'`
    STAT=`"$JAVA" "-Dzookeeper.log.dir=${ZOO_LOG_DIR}" "-Dzookeeper.root.logger=${ZOO_LOG4J_PROP}" \
    -cp "$CLASSPATH" $JVMFLAGS org.apache.zookeeper.client.FourLetterWordMain \
    $clientPortAddress $clientPort srvr 2> /dev/null \
    | grep Mode`
    if [ "x$STAT" = "x" ]
    then
        echo "Error contacting service. It is probably not running."
        exit 1
    else
        echo $STAT
        exit 0
    fi
    ;;
*)
    echo "Usage: $0 {start|start-foreground|stop|restart|status|upgrade|print-cmd}" >&2
esac
[Unit]
Description=zookeeper
After=network.target

[Service]
User=apprun
Environment="JAVA_HOME={{ install_path }}/jdk" "ZOO_LOG_DIR={{ dataLogDir }}" "SERVER_JVMFLAGS=-Xmn{{ XMN }}M -Xms{{ XMS }}M -Xmx{{ XMX }}M"
ExecStart={{ install_path }}/zookeeper/bin/zkServer.sh start-foreground
Restart=on-failure
LimitNOFILE=1048576
LimitNPROC=1048576

[Install]
WantedBy=multi-user.target
---
# default variables (roles/zookeeper/vars/main.yml)
# the directory where the snapshot is stored
dataDir: /var/lib/zookeeper/data
dataLogDir: /var/lib/zookeeper/logs
# the zookeeper version to install
version: 3.4.6
# SERVER_JVMFLAGS=-Xmn{{XMN}}M -Xms{{XMS}}M -Xmx{{XMX}}M
XMN: 1024
XMS: 2048
XMX: 2048
2. Download the package from the official site into /etc/ansible/packages/zookeeper

3. Define the group
[jdk-zookeeper-kafka]
kafka01 ansible_ssh_host=192.168.109.138
kafka02 ansible_ssh_host=192.168.109.139
kafka03 ansible_ssh_host=192.168.109.140
4. Write the zookeeper playbook file (zk needs a JDK environment)
---
- hosts: jdk-zookeeper-kafka
  roles:
    - { role: user, username: apprun, ssh_pubkey: xiaoming.pub, is_sudo: "yes", state: present, tags: user }
    - { role: jdk, install_path: /apprun, jdk_file: jdk1.8.0_73, state: present, tags: jdk }
    - { role: zookeeper, install_path: /apprun, dataDir: /apprun/data/zookeeper, dataLogDir: /apprun/data/zookeeper_log, version: 3.4.8, XMN: 213, XMS: 452, XMX: 452, clusters: [{host: kafka01, id: 1, ip: 192.168.109.138, port: 2181}, {host: kafka02, id: 2, ip: 192.168.109.139, port: 2181}, {host: kafka03, id: 3, ip: 192.168.109.140, port: 2181}], state: present, tags: zookeeper }
5. Install the JDK and the zk cluster: ansible-playbook stage-jdk-zookeeper.yml
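After the run you can sanity-check the ensemble from the ansible host. A sketch, assuming the group, paths and ports used above (and an `nc` binary on the control node):

```shell
# confirm the JDK landed and the service is up on every node
ansible jdk-zookeeper-kafka -m shell -a '/apprun/jdk/bin/java -version'
ansible jdk-zookeeper-kafka -m shell -a 'systemctl is-active zookeeper'
# four-letter-word check: one node should report Mode: leader, the others Mode: follower
echo srvr | nc 192.168.109.138 2181
```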
IV. kafka
mkdir -pv /etc/ansible/roles/kafka/{tasks,handlers,templates,vars}
touch /etc/ansible/roles/kafka/readme.md    # a readme file -- a good habit to build
I'll install kafka first and explain afterwards.
1. The role configuration

#Description
The kafka role.
#Args
* package_path: /etc/ansible/packages in this article
* install_path: kafka deploy directory, {{ install_path }}/kafka
* version: kafka version
* state: present or absent
* clusters: kafka cluster configuration
* zk_clusters: zookeeper cluster addresses
* log_path: kafka log directory
#Usage
```
roles:
    - { role: kafka, install_path: /usr/local, version: 2.11-0.10.1.1, log_path: /data/kafka_log, XMN: 213, XMS: 452, XMX: 452, clusters: [{id: 5, ip: 192.168.109.38, port: 9092}, {id: 6, ip: 192.168.109.39, port: 9092}, {id: 7, ip: 192.168.109.40, port: 9092}], zk_clusters: "192.168.109.38:2181,192.168.109.39:2181,192.168.109.40:2181", state: present, tags: kafka }
```
---
# The playbook for kafka (roles/kafka/tasks/main.yml)
- name: rsync files to dest host
  synchronize: src={{ package_path }}/kafka/kafka_{{ version }} dest={{ install_path }}
  when: state != "absent"
  tags: kafka
- name: ln -s {{ install_path }}/kafka_{{ version }} {{ install_path }}/kafka
  file: src={{ install_path }}/kafka_{{ version }} dest={{ install_path }}/kafka state=link
  when: state != "absent"
  tags: kafka
- name: create {{ log_path }}/kafka-logs
  file: dest={{ log_path }}/kafka-logs state=directory recurse=yes
  when: state != "absent"
  tags: kafka
- name: conf the kafka configure file
  template: src=server.properties dest="{{ install_path }}/kafka/config/" mode=0644
  notify: restart kafka
  when: state != "absent"
  tags: kafka
- name: install systemd unit
  template: src={{ item.src }} dest={{ item.dest }} mode=0644
  with_items:
    - { src: kafka.service, dest: "/usr/lib/systemd/system/" }
  when: state != "absent"
  tags: kafka
- name: Manage services
  systemd: daemon_reload=yes
  tags: kafka
- name: install init script
  template: src={{ item.src }} dest={{ item.dest }} mode=0755
  with_items:
    - { src: kafka, dest: "/etc/init.d/" }
  when: state != "absent"
  tags: kafka
- name: Directory property change
  file: dest=/apprun owner=apprun group=apprun recurse=yes
  tags: kafka
- name: ensure kafka is running
  service: name=kafka state=started enabled=yes
  when: state != "absent"
  tags: kafka
- name: stop kafka
  service: name=kafka state=stopped enabled=no
  when: state == "absent"
  tags: kafka
- name: rm {{ install_path }}/kafka /etc/init.d/kafka
  file: dest={{ item }} state=absent
  with_items:
    - "{{ install_path }}/kafka"
    - "/etc/init.d/kafka"
    - "{{ log_path }}/kafka-logs"
    - "/usr/lib/systemd/system/kafka.service"
    - "{{ install_path }}/kafka_{{ version }}"
  when: state == "absent"
  tags: kafka
#!/bin/sh
#
# chkconfig: 345 99 01
# description: Kafka
#
# File : Kafka
#
# Description: Starts and stops the Kafka server
#

source /etc/rc.d/init.d/functions

export JAVA_HOME={{ install_path }}/jdk
KAFKA_HOME={{ install_path }}/kafka
KAFKA_USER=root

# See how we were called.
start(){
    echo -n "Starting Kafka:"
    /sbin/runuser $KAFKA_USER -c "nohup $KAFKA_HOME/bin/kafka-server-start.sh $KAFKA_HOME/config/server.properties > {{ log_path }}/kafka-logs/server.out 2> {{ log_path }}/kafka-logs/server.err &"
    echo " done."
}

stop(){
    echo -n "Stopping Kafka: "
    /sbin/runuser $KAFKA_USER -c "ps -ef | grep kafka.Kafka | grep -v grep | awk '{print \$2}' | xargs kill"
    echo " done."
}

case "$1" in
  start)
    start
    ;;
  stop)
    stop
    ;;
  hardstop)
    echo -n "Stopping (hard) Kafka: "
    /sbin/runuser $KAFKA_USER -c "ps -ef | grep kafka.Kafka | grep -v grep | awk '{print \$2}' | xargs kill -9"
    echo " done."
    exit 0
    ;;
  status)
    c_pid=`ps -ef | grep kafka.Kafka | grep -v grep | awk '{print $2}'`
    if [ "$c_pid" = "" ] ; then
      echo "Stopped"
      exit 3
    else
      echo "Running $c_pid"
      exit 0
    fi
    ;;
  restart)
    stop
    start
    ;;
  *)
    echo "Usage: kafka {start|stop|hardstop|status|restart}"
    exit 1
    ;;
esac
[Unit]
Description=kafka
After=network.target

[Service]
User=apprun
Environment="JAVA_HOME={{ install_path }}/jdk" "KAFKA_HEAP_OPTS=-Xmn{{ XMN }}M -Xms{{ XMS }}M -Xmx{{ XMX }}M"
ExecStart={{ install_path }}/kafka/bin/kafka-server-start.sh {{ install_path }}/kafka/config/server.properties
Restart=on-failure
LimitNOFILE=1048576
LimitNPROC=1048576

[Install]
WantedBy=multi-user.target
############################# Server Basics #############################
{% for item in clusters %}
{% if item.ip == ansible_ssh_host %}
# The id of the broker. This must be set to a unique integer for each broker.
broker.id={{ item.id }}

############################# Socket Server Settings #############################
# The port the socket server listens on
port={{ item.port }}

# Hostname the broker will bind to. If not set, the server will bind to all interfaces
host.name={{ item.ip }}
{% endif %}
{% endfor %}

# Hostname the broker will advertise to producers and consumers. If not set, it uses the
# value of host.name.
#advertised.host.name=<hostname routable by clients>

# The port to publish to ZooKeeper for clients to use. If this is not set,
# it will publish the same port that the broker binds to.
#advertised.port=<port accessible by clients>

# The number of threads handling network requests
num.network.threads=3

# The number of threads doing disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600

############################# Log Basics #############################
# A comma separated list of directories under which to store log files
log.dirs={{ log_path }}/kafka-logs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Log Flush Policy #############################
# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#   1. Durability: Unflushed data may be lost if you are not using replication.
#   2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#   3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################
# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log as long as the remaining
# segments don't drop below log.retention.bytes.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

# By default the log cleaner is disabled and the log retention policy will default to just delete segments after their retention expires.
# If log.cleaner.enable=true is set the cleaner will be enabled and individual logs can then be marked for log compaction.
log.cleaner.enable=false

############################# Zookeeper #############################
# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect={{ zk_clusters }}

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000
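As with zoo.cfg, it is worth checking what the per-broker part of this template renders to. For kafka01 with the role parameters used later in this article (broker id 5, log_path /apprun/data/kafka_log), the generated lines would be roughly the following (a hand-written sketch, not captured output; /tmp stands in for the target path):

```shell
cat > /tmp/server.properties <<'EOF'
broker.id=5
port=9092
host.name=192.168.109.138
log.dirs=/apprun/data/kafka_log/kafka-logs
zookeeper.connect=192.168.109.138:2181,192.168.109.139:2181,192.168.109.140:2181
EOF
# every broker in the cluster must get a distinct broker.id
grep '^broker.id=' /tmp/server.properties
```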
---
# default variables (roles/kafka/vars/main.yml)
# kafka clusters
kafka_clusters:
  slave-01:
    id: 39
    ip: 192.168.109.39
    port: 9092
  slave-02:
    id: 40
    ip: 192.168.109.40
    port: 9092
# zookeeper connection
zookeeper_clusters_connections: "192.168.109.38:2181,192.168.109.39:2181,192.168.109.40:2181"
# kafka version
version: 2.11-0.10.1.1
# KAFKA_HEAP_OPTS=-Xmn{{XMN}}M -Xms{{XMS}}M -Xmx{{XMX}}M
XMN: 1024
XMS: 2048
XMX: 2048
2. Download the package from the official site into /etc/ansible/packages/kafka

3. Define the group
[jdk-zookeeper-kafka]
kafka01 ansible_ssh_host=192.168.109.138
kafka02 ansible_ssh_host=192.168.109.139
kafka03 ansible_ssh_host=192.168.109.140
4. Write the playbook file for the jdk-zk-kafka cluster
---
- hosts: jdk-zookeeper-kafka
  roles:
    - { role: user, username: apprun, ssh_pubkey: xiaoming.pub, is_sudo: "yes", state: present, tags: user }
    - { role: jdk, install_path: /apprun, jdk_file: jdk1.8.0_73, state: present, tags: jdk }
    - { role: zookeeper, install_path: /apprun, dataDir: /apprun/data/zookeeper, dataLogDir: /apprun/data/zookeeper_log, version: 3.4.8, XMN: 213, XMS: 452, XMX: 452, clusters: [{host: kafka01, id: 1, ip: 192.168.109.138, port: 2181}, {host: kafka02, id: 2, ip: 192.168.109.139, port: 2181}, {host: kafka03, id: 3, ip: 192.168.109.140, port: 2181}], state: present, tags: zookeeper }
    - { role: kafka, install_path: /apprun, version: 2.11-0.10.1.1, log_path: /apprun/data/kafka_log, XMN: 213, XMS: 452, XMX: 452, clusters: [{id: 5, ip: 192.168.109.138, port: 9092}, {id: 6, ip: 192.168.109.139, port: 9092}, {id: 7, ip: 192.168.109.140, port: 9092}], zk_clusters: "192.168.109.138:2181,192.168.109.139:2181,192.168.109.140:2181", state: present, tags: kafka }
5. Install the JDK plus the zk and kafka clusters: ansible-playbook stage-jdk-zookeeper-kafka.yml
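A quick smoke test of the finished cluster, run on any broker (a sketch; paths, ports and the topic name are assumptions based on this article, and the --zookeeper flag matches the 0.10.x tooling):

```shell
# create a replicated test topic and confirm it is visible
/apprun/kafka/bin/kafka-topics.sh --create --zookeeper 192.168.109.138:2181 \
    --replication-factor 3 --partitions 3 --topic smoke-test
/apprun/kafka/bin/kafka-topics.sh --list --zookeeper 192.168.109.138:2181
```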
6. To uninstall, copy stage-jdk-zookeeper-kafka.yml to del-stage-jdk-zookeeper-kafka.yml, change each state: to state: absent, and adjust the role order; note: it is best to leave the user role's state unchanged.
---
- hosts: jdk-zookeeper-kafka
  roles:
    - { role: user, username: apprun, ssh_pubkey: xiaoming.pub, is_sudo: "yes", state: present, tags: user }
    - { role: jdk, install_path: /apprun, jdk_file: jdk1.8.0_73, state: absent, tags: jdk }
    - { role: zookeeper, install_path: /apprun, dataDir: /apprun/data/zookeeper, dataLogDir: /apprun/data/zookeeper_log, version: 3.4.8, XMN: 213, XMS: 452, XMX: 452, clusters: [{host: kafka01, id: 1, ip: 192.168.109.138, port: 2181}, {host: kafka02, id: 2, ip: 192.168.109.139, port: 2181}, {host: kafka03, id: 3, ip: 192.168.109.140, port: 2181}], state: absent, tags: zookeeper }
    - { role: kafka, install_path: /apprun, version: 2.11-0.10.1.1, log_path: /apprun/data/kafka_log, XMN: 213, XMS: 452, XMX: 452, clusters: [{id: 5, ip: 192.168.109.138, port: 9092}, {id: 6, ip: 192.168.109.139, port: 9092}, {id: 7, ip: 192.168.109.140, port: 9092}], zk_clusters: "192.168.109.138:2181,192.168.109.139:2181,192.168.109.140:2181", state: absent, tags: kafka }
7. Encrypt any sensitive configuration files, e.g.: ansible-vault encrypt hosts (you will be prompted for a vault password, such as wbc@123).
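A sketch of the vault round trip (file names from this article; ansible-vault prompts interactively for the password):

```shell
ansible-vault encrypt /etc/ansible/hosts                          # prompts for a new vault password
ansible-playbook stage-jdk-zookeeper-kafka.yml --ask-vault-pass   # supply the password at run time
ansible-vault decrypt /etc/ansible/hosts                          # restore plaintext when needed
```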
The variable values here are tuned for my test machines; adjust them for a production environment, paying particular attention to the small heap sizes (XMN: 213, XMS: 452, XMX: 452).