saltstack
Getting to know SaltStack
SaltStack is a fairly new infrastructure management platform that takes only a few minutes to get up and running (communication between minion and master goes over ZeroMQ).
It can scale to managing tens of thousands of servers and pushes data out in a matter of seconds. SaltStack is written in Python and provides a REST API for custom development and integration with other platforms; the project also publishes a web management UI called halite.
Useful SaltStack links:
Official site: http://www.saltstack.com
Official docs: https://docs.saltstack.com/en/latest/contents.html
Github: https://github.com/saltstack
China SaltStack user group: http://www.saltstack.cn
http://docs.saltstack.cn/contents.html
SaltStack run modes:
1. local
2. Master/Minion
3. Salt SSH (no agent installed on the targets)
The three main SaltStack functions:
1. Remote execution
2. Configuration management
3. Cloud management
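As a quick illustration of the "local" run mode listed above, the sketch below assumes only that salt-minion is installed on the host; salt-call --local executes modules on the box itself without contacting any master:
# local mode: run execution modules locally, no master required
[root@node84 ~]# salt-call --local test.ping
local:
    True
[root@node84 ~]# salt-call --local cmd.run 'uptime'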
SaltStack installation and configuration
EPEL repository
for rhel/centos5: rpm -ivh http://mirrors.ustc.edu.cn/fedora/epel/5/x86_64/epel-release-5-4.noarch.rpm
for rhel/centos6: rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
Test servers for this walkthrough
192.168.0.83 node83 salt-master
192.168.0.84 node84 salt-minion
192.168.0.85 node85 salt-minion
192.168.0.86 node86 salt-minion
salt-master
#yum install -y salt-master
#sed -i '/^#master.*salt$/c master: 192.168.0.83' /etc/salt/minion    # only needed if this host also runs a salt-minion pointed at itself
#/etc/init.d/salt-master start
#chkconfig salt-master on
salt-minion
#yum install -y salt-minion
#sed -i '/^#master.*salt$/c master: 192.168.0.83' /etc/salt/minion    # point the salt agent at the master
#/etc/init.d/salt-minion start
#chkconfig salt-minion on
Connection between Master and Minion
After the SaltStack master starts, it listens on two ports by default: 4505 and 4506.
4505 (publish_port) is SaltStack's message-publishing port; 4506 (ret_port) is the port minions use to talk back to the master.
If you inspect port 4505 with lsof, you will see every minion holding a connection in the ESTABLISHED state on it (only once its key has been accepted).
Master/Minion key authentication
(1) On first start, the minion automatically generates minion.pem (private key) and minion.pub (public key) under /etc/salt/pki/minion/ (the path set in the minion config), then sends minion.pub to the master.
(2) After the master receives the minion's public key, you accept it with the salt-key command; the accepted key is then stored on the master under /etc/salt/pki/master/minions, named after the minion id,
and from that point the master can send commands to the minion. (The master's public key is sent to the minion as well.)
# Note: the id option sets the name the minion shows up as on the master and takes highest priority; otherwise the default is the hostname. I suspect the hosts file is only used with a specific option, although lsof -i did resolve names via hosts; by default the mapping is to the IP.
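Before accepting a key, you can optionally compare fingerprints to confirm the pending key really belongs to the minion you expect; a minimal sketch using the standard salt-key and salt-call calls:
[root@node83 ~]# salt-key -F                      # on the master: print fingerprints of all known keys
[root@node84 ~]# salt-call --local key.finger     # on the minion: print this minion's own key fingerprint
If the two fingerprints match, accept the key as shown below.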
[root@node83 ~]# salt-key -A
The following keys are going to be accepted:
Unaccepted Keys:
node84
node85
node86
Proceed? [n/Y] Y
Key for minion node84 accepted.
Key for minion node85 accepted.
Key for minion node86 accepted.
[root@node83 ~]# ls /etc/salt/pki/master/minions
node84 node85 node86
[root@node83 ~]# lsof -i:4505
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
salt-mast 7894 root 12u IPv4 46153 0t0 TCP *:4505 (LISTEN)
salt-mast 7894 root 14u IPv4 47773 0t0 TCP node83:4505->192.168.0.85:57959 (ESTABLISHED)
salt-mast 7894 root 15u IPv4 47888 0t0 TCP node83:4505->192.168.0.86:48412 (ESTABLISHED)
salt-mast 7894 root 16u IPv4 47889 0t0 TCP node83:4505->192.168.0.84:49118 (ESTABLISHED)    <== long-lived connections
The salt-key command
salt-key is used to manage Salt authentication keys; run with no arguments it shows the key status of every host.
-a Accept the specified public key
-A Accept all pending keys
-r Reject the specified public key
-R Reject all pending keys
-d Delete the specified key
-D Delete all keys
-L List all keys (accepted, unaccepted and rejected)
==> About -d: after a deleted minion restarts, its key shows up again under Unaccepted Keys; to keep it from being swept up by a blanket -A, you can reject it instead, and when you need the minion again delete the corresponding key from /etc/salt/pki/master/minions_rejected/.
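A short sketch of that reject/restore cycle, using node86 only as the example minion:
[root@node83 ~]# salt-key -d node86 -y                              # delete node86's accepted key
[root@node83 ~]# salt-key -r node86 -y                              # after the minion restarts, reject its new pending key
[root@node83 ~]# salt-key -L                                        # node86 now shows up under Rejected Keys
[root@node83 ~]# rm /etc/salt/pki/master/minions_rejected/node86    # remove the rejected key when the minion is needed again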
The salt-run command
DESCRIPTION
salt-run is the frontend command for executing Salt Runners. Salt runners are simple modules used to execute
convenience functions on the master
-d Display documentation for runners, pass a module or a runner to see documentation on only that module/runner. (module help)
# salt-run manage.status    show the up/down status of all minions
# salt-run manage.up
# salt-run manage.down
# salt-run manage.versions    show minion versions
#salt '*' test.ping    check connectivity to the minions (add -v for more detail, including the jid)
#salt '*' saltutil.running    list the jobs currently running
#salt '*' saltutil.kill_job [jid]    kill a job
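Past jobs can also be pulled back out of the master's job cache with runners, which is handy alongside the returner setup later on (<jid> below is whatever jid -v printed):
# salt-run jobs.list_jobs          list recent jobs held in the job cache
# salt-run jobs.lookup_jid <jid>   show the full return data of a finished job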
salt-master global configuration
[root@node83 ~]# vi /etc/salt/master
# Example:
# file_roots:
#   base:
#     - /srv/salt/
#   dev:
#     - /srv/salt/dev/services
#     - /srv/salt/dev/states
#   prod:
#     - /srv/salt/prod/services
#     - /srv/salt/prod/states
Change it as follows (adapt the paths to your environment):
file_roots:
  base:
    - /etc/salt/states
  prod:
    - /etc/salt/states/prod
# mkdir -p /etc/salt/states/prod && /etc/init.d/salt-master restart
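With more than one environment defined in file_roots, states can be pulled from a specific environment at run time; a small sketch, where sometest.sls is a hypothetical state file placed under /etc/salt/states/prod:
# salt 'node84' state.sls sometest saltenv=prod    # apply sometest.sls from the prod file root
# salt 'node84' cp.list_states saltenv=prod        # list the SLS files visible in the prod environment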
Hands-on: installing packages with Salt
[root@node83 ~]# vi /etc/salt/states/top.sls
base:
  'node84':
    - init.pkg    ==> apply the pkg state file in the init directory under the base root
[root@node83 ~]# mkdir /etc/salt/states/init
[root@node83 ~]# vi /etc/salt/states/init/pkg.sls
pkg.init:             ==> an ID of your own choosing
  pkg.installed:      ==> module.function
    - names:
      - lrzsz
      - nmap
      - unix2dos
Run it:
[root@node83 ~]# salt 'node84' state.sls init.pkg    ==> use the state.sls function to apply the init.pkg SLS (the one referenced in top.sls) and report the result
Hands-on: managing a config file, /etc/security/limits.conf
[root@node83 ~]# vi /etc/salt/states/init/limit.sls
limit-conf-config:
  file.managed:
    - name: /etc/security/limits.conf
    - source: salt://init/files/limits.conf
    - user: root
    - group: root
    - mode: 644
[root@node83 ~]# mkdir /etc/salt/states/init/files
[root@node83 ~]# cp /etc/security/limits.conf /etc/salt/states/init/files    then edit the file contents as needed
[root@node83 ~]# vi /etc/salt/states/top.sls
base:
  'node85':
    - init.pkg
    - init.limit
Run it:
[root@node83 ~]# salt 'node85' state.highstate    # salt '*' state.highstate test=True does a dry run and changes nothing
==> the highstate function of the state module applies the targets and files defined in the global top file
If you run salt '*' state.highstate against several minions but top.sls only defines one of them, the others fail with "Comment: No Top file or external nodes data matches found.", i.e. they error out because nothing in the top file matches them.
#salt 'node85' state.sls init.limit    apply init.limit to node85 only; unlike highstate, state.sls applies the named SLS directly and does not depend on node85 being matched in top.sls
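To see what the top file actually assigns to a minion before running highstate (useful when chasing the "No Top file ... matches found" error above), something like the following works; state.show_top and test=True are standard:
# salt 'node85' state.show_top                    # show which SLS files top.sls assigns to node85, per environment
# salt 'node85' state.sls init.limit test=True    # dry-run a single SLS and show what would change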
Salt remote execution
The three parts of a remote execution command
1. Target  2. Module  3. Return
The salt command:
usage: salt [options] '<target>' <function> [arguments]
-d Return the documentation for the specified module or for all modules if none are specified.
#salt 'node84' test.ping -v     -v gives more detail and shows the jid
#salt '*' saltutil.running      list the jobs currently running
#salt '*' saltutil.kill_job <jid>    pass the jid shown by the command above
Modules
[root@node83 ~]# salt node84 cmd.run "df -h"
node84:
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 5.7G 1.5G 3.9G 28% /
tmpfs 491M 12K 491M 1% /dev/shm
/dev/sda1 190M 36M 145M 20% /boot
[root@node83 ~]# salt '*' service.restart sshd
node86:
True
node85:
True
node84:
    True
Targets   (salt --help | sed -n '/Target Options/,$p' | head -40)
http://docs.saltstack.cn/topics/tutorials/modules.html#target    targeting
http://docs.saltstack.cn/ref/modules/all/index.html    modules
The target part lets you specify which minions should run the command. The default rule is to glob-match the minion id. For example:
salt '*' test.ping
salt 'node*' test.ping
salt 'node8?' test.ping
salt 'node8[4-6]' test.ping
salt -S '192.168.0.0/24' test.ping    -S match by subnet
salt -S '192.168.0.0' test.ping
salt -C 'G@os:Ubuntu and webser* or E@database.*' test.ping    -C compound matching
salt -L 'node84,node85' test.ping    -L match against a list of minion ids
salt -G 'os:Ubuntu' test.ping    -G match on grains
salt -E 'virtmach[0-9]' test.ping    -E match with a regular expression (PCRE)
salt -E 'node(84|85)' test.ping    (use parentheses for alternation; 'node[84|85]' would be a character class, not "node84 or node85")
Regex targeting inside a state top file:
base:
  'node84|node85':
    - match: pcre    # for grains matching use "- match: grain"; the other matchers follow the same pattern
    - init.pkg
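Frequently used target sets can also be named once in the master config as nodegroups and then addressed with -N; a small sketch, where the group names web and all8x are made up for illustration:
[root@node83 ~]# vi /etc/salt/master
nodegroups:
  web: 'L@node84,node85'
  all8x: 'E@node8[4-6]'
[root@node83 ~]# /etc/init.d/salt-master restart
[root@node83 ~]# salt -N web test.ping    # -N targets the nodegroup by name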
Returners (the minion writes results to the backend directly)
[root@node83 ~]# yum install mysql-server && /etc/init.d/mysqld start
[root@node83 ~]# yum install MySQL-python &&/etc/init.d/salt-master restart
[root@node83 ~]# vi /etc/salt/master
#return: mysql
mysql.host: 'localhost'    using an IP here caused problems when master_job_cache: mysql is enabled
mysql.user: 'salt'
mysql.pass: 'salt'
mysql.db: 'salt'
mysql.port: 3306
master_job_cache: mysql    # with this line the minions need no extra configuration, and you do not have to pass --return mysql when running commands
Grant privileges:
mysql>grant all on salt.* to 'salt'@'192.168.0.%' identified by 'salt';
mysql>grant all on salt.* to 'salt'@'localhost' identified by 'salt';

CREATE DATABASE `salt` DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
USE `salt`;

-- Table structure for table `jids`
DROP TABLE IF EXISTS `jids`;
CREATE TABLE `jids` (
  `jid` varchar(255) NOT NULL,
  `load` mediumtext NOT NULL,
  UNIQUE KEY `jid` (`jid`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
CREATE INDEX jid ON jids(jid) USING BTREE;

-- Table structure for table `salt_returns`
DROP TABLE IF EXISTS `salt_returns`;
CREATE TABLE `salt_returns` (
  `fun` varchar(50) NOT NULL,
  `jid` varchar(255) NOT NULL,
  `return` mediumtext NOT NULL,
  `id` varchar(255) NOT NULL,
  `success` varchar(10) NOT NULL,
  `full_ret` mediumtext NOT NULL,
  `alter_time` TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
  KEY `id` (`id`),
  KEY `jid` (`jid`),
  KEY `fun` (`fun`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

-- Table structure for table `salt_events`
DROP TABLE IF EXISTS `salt_events`;
CREATE TABLE `salt_events` (
  `id` BIGINT NOT NULL AUTO_INCREMENT,
  `tag` varchar(255) NOT NULL,
  `data` mediumtext NOT NULL,
  `alter_time` TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
  `master_id` varchar(255) NOT NULL,
  PRIMARY KEY (`id`),
  KEY `tag` (`tag`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
If you do not use master_job_cache, every minion needs the following configuration instead:
[root@test84 ~]# yum install MySQL-python
[root@test84 ~]# vi /etc/salt/minion
#return: mysql
mysql.host: '192.168.0.83'
mysql.user: 'salt'
mysql.pass: 'salt'
mysql.db: 'salt'
mysql.port: 3306
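With the returner configured on the minions (and no master_job_cache on the master), results only land in MySQL when a command is explicitly told to use the returner, roughly like this; the grant to 'salt'@'192.168.0.%' above is what lets the minions connect:
[root@node83 ~]# salt '*' test.ping --return mysql    # each minion writes its own result into the salt database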
Test (with master_job_cache enabled):
[root@node83 ~]# salt 'node84' cmd.run "df -h"
node84:
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 5.7G 1.5G 3.9G 28% /
tmpfs 491M 12K 491M 1% /dev/shm
/dev/sda1 190M 36M 145M 20% /boot
mysql> select * from jids\G
*************************** 1. row ***************************
jid: 20170408082551740905
load: {"tgt_type": "glob", "jid": "20170408082551740905", "cmd": "publish", "tgt": "node84", "kwargs": {"delimiter": ":", "show_timeout": true, "show_jid": false}, "ret": "", "user": "root", "arg": ["df -h"], "fun": "cmd.run"}
1 row in set (0.00 sec)
mysql> select * from salt_returns\G
*************************** 1. row ***************************
fun: cmd.run
jid: 20170408082551740905
return: "Filesystem Size Used Avail Use% Mounted on\n/dev/sda2 5.7G 1.5G 3.9G 28% /\ntmpfs 491M 12K 491M 1% /dev/shm\n/dev/sda1 190M 36M 145M 20% /boot"
id: node84
success: 1
full_ret: {"fun_args": ["df -h"], "jid": "20170408082551740905", "return": "Filesystem Size Used Avail Use% Mounted on\n/dev/sda2 5.7G 1.5G 3.9G 28% /\ntmpfs 491M 12K 491M 1% /dev/shm\n/dev/sda1 190M 36M 145M 20% /boot", "retcode": 0, "success": true, "cmd": "_return", "_stamp": "2017-04-08T00:25:51.824766", "fun": "cmd.run", "id": "node84"}
alter_time: 2017-04-08 08:25:51
1 row in set (0.00 sec)
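Once results accumulate, plain SQL against the schema above is enough for ad-hoc reporting; an illustrative query (adjust the columns and filter as needed):
mysql> SELECT id, fun, success, alter_time FROM salt_returns WHERE fun = 'cmd.run' ORDER BY alter_time DESC LIMIT 5;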
SaltStack data systems: Grains & Pillar
Differences
1. Grains are static and rarely change; pillar data is dynamic.
2. Grains are stored locally on the minion, while pillar data is stored on the master.
3. A minion may manipulate its own grains (add or delete them), but it can only read its own pillar data and cannot modify it.
A rough rule of thumb follows: if the attribute you want to define changes often, use pillar; if it is fixed and rarely changes, use grains.
Both pillar and grains are usually treated as one big dictionary named pillar or grains; the grains dictionary holds many key/value pairs, and each key maps to a value, e.g. {{ grains['os'] }} evaluates to CentOS.
Name | Stored on | Data type | How it is collected/refreshed | Typical use |
Grains | Minion | Static data | Collected when the minion starts; can also be refreshed from the master with saltutil.sync_grains | Basic host data, e.g. for targeting minions; also usable for asset management |
Pillar | Master | Dynamic data | Defined on the master and assigned to the matching minions; refreshed from the master with saltutil.refresh_pillar | Data assigned by the master and visible only to the targeted minions; suited to sensitive data |
Grains
Grains are also known as static data: information about the minion collected when it starts (OS version, kernel version, CPU, memory, disks, device model, serial number, ...).
What grains are used for
(1) Asset management and information queries
(2) Targeting / matching minions
(3) Acting as a CMDB for configuration management
[root@node83 ~]# salt -S '192.168.0.84' grains.items    fetch all grains data (grains.ls lists just the keys)
[root@node83 ~]# salt -S '192.168.0.84' grains.get os
node84:
CentOS
[root@node83 ~]# salt -S '192.168.0.84' grains.item os
node84:
----------
os:
CentOS
[root@test83 ~]# salt -G 'os:CentOS' test.ping    # match hosts on grains data
test84:
    True
[root@test83 ~]# salt -G 'host:test84' test.ping
test84:
    True
[root@node83 ~]# salt -G 'fqdn:node84' grains.item osarch
node84:
----------
osarch:
x86_64
Custom grains
[root@test84 ~]# vi /etc/salt/minion
grains:
  roles: nginx
  env: prod
[root@test83 ~]# salt -G 'roles:nginx' test.ping    matching test
192.168.0.84:
    True
Or keep grains in their own file:
[root@test84 ~]# vi /etc/salt/grains
cloud: blue
[root@test83 ~]# salt -G 'cloud:blue' test.ping
192.168.0.84:
    True
==> #salt '*' saltutil.sync_grains    no minion restart is needed after changing grains (refreshes grains, run from the master)
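Grains can also be pushed from the master instead of editing files on the minion; a sketch using grains.setval/grains.delval with a made-up grain called cluster:
[root@test83 ~]# salt '192.168.0.84' grains.setval cluster web1    # writes cluster: web1 into /etc/salt/grains on the minion
[root@test83 ~]# salt -G 'cluster:web1' test.ping
[root@test83 ~]# salt '192.168.0.84' grains.delval cluster         # unset it again (add destructive=True to drop the key entirely)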
Using grains in top.sls
[root@test83 ~]# vi /etc/salt/states/top.sls
base:
  '192.168.0.84':
    - init.pkg
    - init.limit
  'roles:nginx':
    - match: grain    # match on a grain
    - init.pkg
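Beyond targeting, grains are often interpolated straight into states, because SLS files are rendered with Jinja before they are parsed; a hypothetical init/motd.sls as an example (the managed file and wording are made up):
/etc/motd:
  file.managed:
    - contents: 'Welcome to {{ grains["fqdn"] }}, running {{ grains["os"] }} {{ grains["osrelease"] }}'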
Pillars
Storing static data in the Pillar: http://docs.saltstack.cn/topics/pillar/index.html
Pillar was introduced in Salt 0.9.8. It is a very important component: it lets you define whatever data you need for specific minions, and that data can then be used by Salt's other components.
Once rendered, Pillar is a nested dict: the top-level keys are minion IDs, each value is that minion's pillar data, and each of those values is again made up of key/value pairs.
This shows one of Pillar's defining traits: pillar data is tied to a particular minion, so every minion only ever sees its own data,
which is why Pillar is suitable for passing sensitive data (by design, Pillar uses a separate encrypted session precisely to keep such data safe).
[root@node83 ~]# salt '*' pillar.items    ===> shows the pillar values that have been set; there are none by default
Enable the pillar configuration:
[root@node83 ~]# vi /etc/salt/master
#pillar_roots:
#  base:
#    - /srv/pillar
====> change to (adjust as needed):
pillar_roots:
  base:
    - /etc/salt/pillar
[root@node83 ~]# mkdir /etc/salt/pillar && cd /etc/salt/pillar
# vi top.sls
base:
  '*':
    - init.rsyslog
[root@node83 pillar]# mkdir /etc/salt/pillar/init
[root@node83 pillar]# cd /etc/salt/pillar/init
[root@node83 pillar]# vi rsyslog.sls
{% if grains['osfinger'] == 'CentOS-6' %}
syslog: rsyslog
{% elif grains['osfinger'] == 'CentOS-5' %}
syslog: syslog
{% endif %}
# service salt-master restart
[root@node83 init]# salt '*' saltutil.refresh_pillar
node86:
True
node84:
True
node85:
    True
[root@test83 init]# salt '*' pillar.item syslog
test85:
    ----------
    syslog:
        rsyslog
test84:
    ----------
    syslog:
        rsyslog
test93:
    ----------
    syslog:
        rsyslog
# salt -I 'syslog:rsyslog' pillar.items    -I targets on pillar data; same result as above
# salt '*' pillar.items    same result as above
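When consuming this data in states or templates, pillar.get is the usual way to read keys and supply a fallback; a small sketch (the default value 'rsyslog' is only an illustration):
[root@node83 ~]# salt '*' pillar.get syslog
# and inside an SLS or Jinja template:
#   {{ salt['pillar.get']('syslog', 'rsyslog') }}    # fall back to 'rsyslog' if the key is not set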
SaltStack configuration management
Hands-on: installing zabbix-agent with Salt
[root@test83 states]# vi /etc/salt/states/top.sls
base:
  'node86':
#    - init.pkg
#    - init.limit
    - init.zabbix_agent
[root@node83 ~]# vi /etc/salt/states/init/zabbix_agent.sls
zabbix_agent:
  cmd.run:
    - name: rpm -ivh http://repo.zabbix.com/zabbix/2.4/rhel/6/x86_64/zabbix-release-2.4-1.el6.noarch.rpm
  pkg.installed:
    - name: zabbix-agent
  file.managed:
    - name: /etc/zabbix/zabbix_agentd.conf
    - source: salt://init/files/zabbix_agentd.conf
    - user: root
    - group: root
    - mode: 644
============================================================
Within a single ID there can be only one file.managed state; if you need to manage several files, rewrite them like this:
/etc/zabbix/zabbix_agentd.conf:
  file.managed:
    - source: salt://init/files/zabbix_agentd.conf
===========================================================
  service.running:
    - name: zabbix-agent       # the init script name, /etc/init.d/zabbix-agent
    - enable: True             # enable the service (start at boot)
    - reload: True             # reload when the config file changes; without this line the default is restart
    - watch:
      - file: zabbix_agent     # the ID name; that ID can hold only one file.managed, i.e. /etc/zabbix/zabbix_agentd.conf
[root@node83 ~]# mkdir -p /etc/salt/states/init/files
[root@node83 ~]# cp /etc/zabbix/zabbix_agentd.conf /etc/salt/states/init/files/
Run it:
[root@node83 ~]# salt 'node86' state.highstate
or: [root@node83 ~]# salt 'node86' state.sls init.zabbix_agent
=============
Switching to a pillar-driven configuration
=============
Turn salt://init/files/zabbix_agentd.conf into a template:
Server={{Zabbix_Server}}
[root@node83 ~]# vi /etc/salt/states/init/zabbix_agent.sls    edit the file.managed part as follows:
  file.managed:
    - name: /etc/zabbix/zabbix_agentd.conf
    - source: salt://init/files/zabbix_agentd.conf
    - user: root
    - group: root
    - mode: 644
    - template: jinja    # render with the jinja template engine
    - defaults:
        Zabbix_Server: {{ pillar['zabbix-agent']['Zabbix_Server'] }}    # take the value of Zabbix_Server under the zabbix-agent key in the pillar data
# Variant 1: ==> writing Zabbix_Server: 192.168.0.88 directly here means no pillar file needs to be defined at all
# Variant 2: Zabbix_Server: {{ pillar['Zabbix_Server'] }}; if you define it this way,
the pillar zabbix_agent.sls only needs the single line Zabbix_Server: 192.168.0.88
Pillar configuration
[root@test83 pillar]# vi /etc/salt/pillar/init/zabbix_agent.sls
zabbix-agent:
  Zabbix_Server: 192.168.0.88
#  Zabbix_Server1: 192.168.0.99    # more than one value can be defined
[root@test83 pillar]# vi /etc/salt/pillar/top.sls
base:
  'node86':
#    - init.rsyslog
    - init.zabbix_agent
Run it:
[root@node83 ~]# salt 'node86' state.highstate
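A quick way to confirm the pillar value reached the minion and ended up in the rendered config; the grep pattern assumes the template line Server={{Zabbix_Server}} shown earlier:
[root@node83 ~]# salt 'node86' saltutil.refresh_pillar
[root@node83 ~]# salt 'node86' pillar.get zabbix-agent:Zabbix_Server
[root@node83 ~]# salt 'node86' cmd.run "grep ^Server= /etc/zabbix/zabbix_agentd.conf"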
Hands-on: installing PHP with Salt
[root@node83 ~]# vi /etc/salt/states/init/pkg.sls
php-pkg:
  pkg.installed:
    - names:
      - gcc
      - gcc-c++
      - glibc
      - make
      - autoconf
      - libjpeg-turbo
      - libjpeg-turbo-devel
      - libpng
      - libpng-devel
      - freetype
      - freetype-devel
      - libxml2
      - libxml2-devel
      - zlib
      - zlib-devel
      - libcurl
      - libcurl-devel
      - openssl
      - openssl-devel
      - re2c
      - libmcrypt-devel
      - libxslt
      - libxslt-devel
[root@node83 ~]# mkdir -p /etc/salt/states/php/files
[root@node83 ~]# cp /home/tools/php-5.6.22.tar.gz /etc/salt/states/php/files/
[root@node83 ~]# vi /etc/salt/states/php/php_fastcgi.sls
include:
  - init.pkg
php-install:
  file.managed:
    - name: /home/tools/php-5.6.22.tar.gz
    - source: salt://php/files/php-5.6.22.tar.gz
    - user: root
    - group: root
    - mode: 644
  cmd.run:
    - name: cd /home/tools && tar xf php-5.6.22.tar.gz && cd php-5.6.22 && ./configure --prefix=/usr/local/php5.6.22 --with-mysql=mysqlnd --with-iconv-dir=/usr/local/libiconv --with-freetype-dir --with-jpeg-dir --with-png-dir --with-zlib --with-libxml-dir=/usr --enable-xml --disable-rpath --enable-bcmath --enable-shmop --enable-sysvsem --enable-inline-optimization --with-curl --enable-mbregex --enable-fpm --enable-mbstring --with-mcrypt --with-gd --enable-gd-native-ttf --with-openssl --with-mhash --enable-pcntl --enable-sockets --with-xmlrpc --enable-soap --enable-short-tags --enable-static --with-xsl --with-fpm-user=nginx --with-fpm-group=nginx --enable-ftp && make && make install
    - unless: test -d /usr/local/php5.6.22    # skip the build if the install prefix already exists
    - require:
      - file: php-install    # file dependency on the downloaded tarball
      - pkg: php-pkg         # package dependency on the list above
[root@node83 ~]# vi /etc/salt/states/top.sls
base:
  'node86':
    - init.pkg
    - php.php_fastcgi
Run it: salt '*' state.highstate
Because different applications may need different php.ini settings, it is better to keep those in a separate project directory under states:
[root@test83 init]# vi /etc/salt/states/web/lnmpconf.sls
include:
  - php.php_fastcgi
/usr/local/php/lib/php.ini:
  file.managed:
    - source: salt://php/files/php.ini
/usr/local/php/etc/php-fpm.conf:
  file.managed:
    - source: salt://php/files/php-fpm.conf
salt-ssh
If installing a minion is inconvenient (you can even do without a master entirely), or a minion goes down one day, you can fall back to salt-ssh.
[root@node83 ~]# yum install salt-ssh -y
[root@node83 ~]# vi /etc/salt/roster
# Sample salt-ssh config file
#web1:
#  host: 192.168.42.1   # The IP addr or DNS hostname
#  user: fred           # Remote executions will be executed as user fred
#  passwd: foobarbaz    # The password to use for login, if omitted, keys are used
#  sudo: True           # Whether to sudo to root, not enabled by default
#web2:
#  host: 192.168.42.2
test84:
  host: 192.168.0.84
  user: root
  passwd: 123456
master_job_cache has to be disabled for this.
[root@node83 ~]# salt-ssh '*' test.ping
node84:
    True
node85:
    True
node86:
    True
[root@node83 ~]# salt-ssh '*' cmd.run 'df -h'
[root@node83 ~]# salt-ssh '*' cmd.run 'ifconfig'
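A couple of extra salt-ssh options that are handy in practice (a sketch; -i and --roster-file are standard salt-ssh flags):
[root@node83 ~]# salt-ssh -i 'test84' test.ping                                        # -i disables strict host key checking, so no approval prompt on first contact
[root@node83 ~]# salt-ssh --roster-file=/etc/salt/roster 'test84' state.sls init.pkg   # salt-ssh can apply states as well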