pxe:
the NIC must support network booting:
dhcp, filename, next-server
tftp-server
pxelinux.0
vmlinuz, initrd.img
menu.c32,
pxelinux.cfg/default
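A minimal pxelinux.cfg/default can be sketched as below; the kernel/initrd file names must match what is actually placed under the TFTP root, and the kickstart URL is a hypothetical example:

```
default menu.c32
prompt 0
timeout 30
menu title PXE Install Menu

label centos7
  menu label Install CentOS 7 (kickstart)
  kernel vmlinuz
  append initrd=initrd.img ks=http://172.16.100.67/ks/centos7.cfg
```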
system-config-kickstart
ksvalidator
ansible:
os provision:
physical machines: pxe, cobbler
virtual machines: image file templates
configuration:
package management, user management, config files, service management, cron jobs, etc.
puppet, saltstack, chef, cfengine
task exec
command and control
func, fabric
application release: canary (gray-release) model
agent
agentless:
ssh service:
ansible:
ansible <host-pattern> [-f forks] [-m module_name] [-a args]
args:
key=value
Note: the command module does not take key=value arguments; pass the command to run directly.
Commonly used modules:
command
-a 'COMMAND'
user
-a 'name= state={present|absent} system= uid='
group
-a 'name= gid= state= system='
cron
-a 'name= minute= hour= day= month= weekday= job= user= state='
copy
-a 'dest= src= mode= owner= group='
file
-a 'path= mode= owner= group= state={directory|link|present|absent} src='
ping
no arguments
yum
-a 'name= state={present|latest|absent}'
service
-a 'name= state={started|stopped|restarted} enabled='
shell
-a 'COMMAND'
script
-a '/path/to/script'
setup
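The modules above can be exercised directly as ad-hoc commands; a sketch, assuming an inventory group named websrvs and illustrative package/user names (these require a working Ansible control node and are not run here):

```
ansible websrvs -m command -a 'uptime'
ansible websrvs -m user -a 'name=demo state=present system=no'
ansible websrvs -m cron -a 'name="sync time" minute="*/10" job="/sbin/ntpdate ntp1.aliyun.com &> /dev/null"'
ansible websrvs -m copy -a 'src=/etc/fstab dest=/tmp/fstab.tmp mode=600'
ansible websrvs -m yum -a 'name=nginx state=latest'
ansible websrvs -m service -a 'name=nginx state=started enabled=yes'
```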
Core elements of a playbook:
tasks
variables
templates
handlers
roles
Variables:
facts
--extra-vars "name=value name=value"
variables defined in roles
Variables in the inventory:
host variables:
hostname name=value name=value
group variables:
[groupname:vars]
name=value
name=value
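Host and group variables in the INI inventory can be sketched like this (hostnames and variable values are illustrative):

```ini
[websrvs]
node2.smoke.com http_port=80
node3.smoke.com http_port=8080

[websrvs:vars]
ntp_server=ntp1.aliyun.com
nginx_version=1.16.1
```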
Advanced inventory usage:
ansible playbooks
Structure
Inventory
Modules
Ad Hoc Commands
Playbooks
Tasks
Variables
Templates
Handlers
Roles
Playbooks
Contain one or more plays
Written in YAML
declarative config
not code
Executed in the order it is written (aka Imperative)
---
- name: deploy web server
  user: foouser
  sudo: True
  hosts: all
  tasks:
    - name: install apache
      apt: pkg=apache2-mpm-prefork state=latest
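Since handlers are listed among the core elements, a slightly fuller sketch with notify/handlers may help; the package name and config source path below are assumptions for illustration:

```yaml
---
- name: deploy web server
  hosts: websrvs
  remote_user: root
  tasks:
    - name: install nginx
      yum: name=nginx state=latest
    - name: push config
      copy: src=files/nginx.conf dest=/etc/nginx/nginx.conf   # hypothetical source path
      notify: restart nginx
  handlers:
    - name: restart nginx
      service: name=nginx state=restarted
```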
Lab environment:
node1:
hostname: node1.smoke.com
OS: CentOS 7.5
kernel: 3.10.0-862.el7.x86_64
NIC 1: vmnet8 172.16.100.67/24
node2:
hostname: node2.smoke.com
OS: CentOS 7.5
kernel: 3.10.0-862.el7.x86_64
NIC 1: vmnet8 172.16.100.68/24
node3:
hostname: node3.smoke.com
OS: CentOS 7.5
kernel: 3.10.0-862.el7.x86_64
NIC 1: vmnet8 172.16.100.69/24
node4:
hostname: node4.smoke.com
OS: CentOS 6.10
kernel: 2.6.32-754.el6.x86_64
NIC 1: vmnet8 172.16.100.6/24
node1:
[root@node1 ~]# hostname
node1.smoke.com
[root@node1 ~]# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:e1:8a:5b brd ff:ff:ff:ff:ff:ff
inet 172.16.100.67/24 brd 172.16.100.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet6 fe80::ec26:6bfb:12af:4133/64 scope link noprefixroute
valid_lft forever preferred_lft forever
[root@node1 ~]# ip route show
default via 172.16.100.2 dev ens33 proto static metric 100
172.16.100.0/24 dev ens33 proto kernel scope link src 172.16.100.67 metric 100
[root@node1 ~]# vim /etc/hosts
172.16.100.67 node1.smoke.com node1
172.16.100.68 node2.smoke.com node2
172.16.100.69 node3.smoke.com node3
172.16.100.6 node4.smoke.com node4
[root@node1 ~]# ntpdate ntp1.aliyun.com
[root@node1 ~]# crontab -l
*/5 * * * * /usr/sbin/ntpdate ntp1.aliyun.com &> /dev/null
[root@node1 ~]# setenforce 0
[root@node1 ~]# vim /etc/selinux/config
SELINUX=permissive
node2:
[root@node2 ~]# hostname
node2.smoke.com
[root@node2 ~]# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:93:0e:b9 brd ff:ff:ff:ff:ff:ff
inet 172.16.100.68/24 brd 172.16.100.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet6 fe80::4cee:1a2b:a54f:674b/64 scope link noprefixroute
valid_lft forever preferred_lft forever
[root@node2 ~]# ip route show
default via 172.16.100.2 dev ens33 proto static metric 100
172.16.100.0/24 dev ens33 proto kernel scope link src 172.16.100.68 metric 100
[root@node2 ~]# vim /etc/hosts
172.16.100.67 node1.smoke.com node1
172.16.100.68 node2.smoke.com node2
172.16.100.69 node3.smoke.com node3
172.16.100.6 node4.smoke.com node4
[root@node2 ~]# ntpdate ntp1.aliyun.com
[root@node2 ~]# crontab -l
*/5 * * * * /usr/sbin/ntpdate ntp1.aliyun.com &> /dev/null
[root@node2 ~]# setenforce 0
[root@node2 ~]# vim /etc/selinux/config
SELINUX=permissive
node3:
[root@node3 ~]# hostname
node3.smoke.com
[root@node3 ~]# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:18:c0:fe brd ff:ff:ff:ff:ff:ff
inet 172.16.100.69/16 brd 172.16.255.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet6 fe80::fa2:ce2c:70ab:eb7c/64 scope link noprefixroute
valid_lft forever preferred_lft forever
[root@node3 ~]# ip route show
default via 172.16.100.2 dev ens33 proto static metric 100
172.16.0.0/16 dev ens33 proto kernel scope link src 172.16.100.69 metric 100
[root@node3 ~]# vim /etc/hosts
172.16.100.67 node1.smoke.com node1
172.16.100.68 node2.smoke.com node2
172.16.100.69 node3.smoke.com node3
172.16.100.6 node4.smoke.com node4
[root@node3 ~]# ntpdate ntp1.aliyun.com
[root@node3 ~]# crontab -l
*/5 * * * * /usr/sbin/ntpdate ntp1.aliyun.com &> /dev/null
[root@node3 ~]# setenforce 0
[root@node3 ~]# vim /etc/selinux/config
SELINUX=permissive
node4:
[root@node4 ~]# hostname
node4.smoke.com
[root@node4 ~]# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:e7:1c:38 brd ff:ff:ff:ff:ff:ff
inet 172.16.100.6/24 brd 172.16.100.255 scope global eth0
inet6 fe80::20c:29ff:fee7:1c38/64 scope link
valid_lft forever preferred_lft forever
[root@node4 ~]# ip route show
172.16.100.0/24 dev eth0 proto kernel scope link src 172.16.100.6
169.254.0.0/16 dev eth0 scope link metric 1002
default via 172.16.100.2 dev eth0
[root@node4 ~]# vim /etc/hosts
172.16.100.67 node1.smoke.com node1
172.16.100.68 node2.smoke.com node2
172.16.100.69 node3.smoke.com node3
172.16.100.6 node4.smoke.com node4
[root@node4 ~]# ntpdate ntp1.aliyun.com
[root@node4 ~]# crontab -l
*/5 * * * * /usr/sbin/ntpdate ntp1.aliyun.com &> /dev/null
[root@node4 ~]# setenforce 0
[root@node4 ~]# vim /etc/selinux/config
SELINUX=permissive
node1:
[root@node1 ~]# mkdir -pv /etc/yum.repos.d/OldMirrorFile
[root@node1 ~]# mv /etc/yum.repos.d/CentOS-* /etc/yum.repos.d/OldMirrorFile/
[root@node1 ~]# curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
[root@node1 ~]# curl -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
node2:
[root@node2 ~]# mkdir -pv /etc/yum.repos.d/OldMirrorFile
[root@node2 ~]# mv /etc/yum.repos.d/CentOS-* /etc/yum.repos.d/OldMirrorFile/
[root@node2 ~]# curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
[root@node2 ~]# curl -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
node3:
[root@node3 ~]# mkdir -pv /etc/yum.repos.d/OldMirrorFile
[root@node3 ~]# mv /etc/yum.repos.d/CentOS-* /etc/yum.repos.d/OldMirrorFile/
[root@node3 ~]# curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
[root@node3 ~]# curl -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
node4:
[root@node4 ~]# mkdir -pv /etc/yum.repos.d/OldMirrorFile
[root@node4 ~]# mv /etc/yum.repos.d/CentOS-* /etc/yum.repos.d/OldMirrorFile/
[root@node4 ~]# curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-6.repo
[root@node4 ~]# curl -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-6.repo
node1:
[root@node1 ~]# yum -y install ansible
[root@node1 ~]# vim /etc/ansible/hosts
172.16.100.67
[websrvs]
172.16.100.68
172.16.100.69
[dbsrvs]
172.16.100.6
[root@node1 ~]# ssh-keygen -t rsa -P ''
[root@node1 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@172.16.100.67
[root@node1 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@172.16.100.68
[root@node1 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@172.16.100.69
[root@node1 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@172.16.100.6
[root@node1 ansible]# for i in 67 68 69 6; do ssh 172.16.100.$i 'date'; done
Mon Sep 28 22:56:47 CST 2020
Mon Sep 28 22:56:47 CST 2020
Mon Sep 28 22:56:48 CST 2020
Mon Sep 28 22:56:48 CST 2020
[root@node1 ansible]# ansible-doc -l
[root@node1 ansible]# ansible-doc -s user
[root@node1 ansible]# man ansible
[root@node1 ansible]# ansible-doc -s command
[root@node1 ansible]# ansible all -a 'ping'    # this runs the ping *binary* with no arguments, so it fails; compare with -m ping
172.16.100.68 | FAILED | rc=2 >>
Usage: ping [-aAbBdDfhLnOqrRUvV64] [-c count] [-i interval] [-I interface]
[-m mark] [-M pmtudisc_option] [-l preload] [-p pattern] [-Q tos]
[-s packetsize] [-S sndbuf] [-t ttl] [-T timestamp_option]
[-w deadline] [-W timeout] [hop1 ...] destination
Usage: ping -6 [-aAbBdDfhLnOqrRUvV] [-c count] [-i interval] [-I interface]
[-l preload] [-m mark] [-M pmtudisc_option]
[-N nodeinfo_option] [-p pattern] [-Q tclass] [-s packetsize]
[-S sndbuf] [-t ttl] [-T timestamp_option] [-w deadline]
[-W timeout] destinationnon-zero return code
172.16.100.69 | FAILED | rc=2 >>
Usage: ping [-aAbBdDfhLnOqrRUvV64] [-c count] [-i interval] [-I interface]
[-m mark] [-M pmtudisc_option] [-l preload] [-p pattern] [-Q tos]
[-s packetsize] [-S sndbuf] [-t ttl] [-T timestamp_option]
[-w deadline] [-W timeout] [hop1 ...] destination
Usage: ping -6 [-aAbBdDfhLnOqrRUvV] [-c count] [-i interval] [-I interface]
[-l preload] [-m mark] [-M pmtudisc_option]
[-N nodeinfo_option] [-p pattern] [-Q tclass] [-s packetsize]
[-S sndbuf] [-t ttl] [-T timestamp_option] [-w deadline]
[-W timeout] destinationnon-zero return code
172.16.100.6 | FAILED | rc=2 >>
Usage: ping [-LRUbdfnqrvVaA] [-c count] [-i interval] [-w deadline]
[-p pattern] [-s packetsize] [-t ttl] [-I interface or address]
[-M mtu discovery hint] [-S sndbuf]
[ -T timestamp option ] [ -Q tos ] [hop1 ...] destinationnon-zero return code
[root@node1 ansible]# ansible all -a 'ip addr show'
172.16.100.6 | CHANGED | rc=0 >>
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:e7:1c:38 brd ff:ff:ff:ff:ff:ff
inet 172.16.100.6/24 brd 172.16.100.255 scope global eth0
inet6 fe80::20c:29ff:fee7:1c38/64 scope link
valid_lft forever preferred_lft forever
172.16.100.69 | CHANGED | rc=0 >>
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:18:c0:fe brd ff:ff:ff:ff:ff:ff
inet 172.16.100.69/16 brd 172.16.255.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet6 fe80::fa2:ce2c:70ab:eb7c/64 scope link noprefixroute
valid_lft forever preferred_lft forever
172.16.100.68 | CHANGED | rc=0 >>
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:93:0e:b9 brd ff:ff:ff:ff:ff:ff
inet 172.16.100.68/24 brd 172.16.100.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet6 fe80::4cee:1a2b:a54f:674b/64 scope link noprefixroute
valid_lft forever preferred_lft forever
[root@node1 ansible]# cd
[root@node1 ~]# ansible websrvs -a 'wget -O /tmp/elasticsearch-1.7.2.noarch.rpm http://192.168.254.10/web_working_directory/elasticsearch-1.7.2.noarch.rpm'
node2:
[root@node2 ~]# ll /tmp/
total 26668
-rw-r--r--. 1 root root 27304727 Oct  3 22:15 elasticsearch-1.7.2.noarch.rpm
drwx------. 3 root root 17 Oct  3 21:11 systemd-private-4872bb579fdf467cbe1e5dd79f58f4e6-chronyd.service-xhl6ql
node3:
[root@node3 ~]# ll /tmp/
total 0
-rw-r--r--. 1 root root 0 Oct  3 23:05 elasticsearch-1.7.2.noarch.rpm
drwx------. 3 root root 17 Oct  3 21:11 systemd-private-fd95ff5a2f644b7aa4f880f0918d8193-chronyd.service-nKcFmR
node1:
[root@node1 ~]# ansible-doc -s user    # manage user accounts
[root@node1 ~]# ansible websrvs -m user -a "name=hacluster state=present"    # create the hacluster user; state=present ensures it exists
172.16.100.68 | CHANGED => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": true,
"comment": "",
"create_home": true,
"group": 1000,
"home": "/home/hacluster",
"name": "hacluster",
"shell": "/bin/bash",
"state": "present",
"system": false,
"uid": 1000
}
172.16.100.69 | CHANGED => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": true,
"comment": "",
"create_home": true,
"group": 1000,
"home": "/home/hacluster",
"name": "hacluster",
"shell": "/bin/bash",
"state": "present",
"system": false,
"uid": 1000
}
node2:
[root@node2 ~]# id hacluster
uid=1000(hacluster) gid=1000(hacluster) groups=1000(hacluster)
node1:
[root@node1 ~]# ansible websrvs -m user -a "name=hacluster state=present"
172.16.100.68 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"append": false,
"changed": false,
"comment": "",
"group": 1000,
"home": "/home/hacluster",
"move_home": false,
"name": "hacluster",
"shell": "/bin/bash",
"state": "present",
"uid": 1000
}
172.16.100.69 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"append": false,
"changed": false,
"comment": "",
"group": 1000,
"home": "/home/hacluster",
"move_home": false,
"name": "hacluster",
"shell": "/bin/bash",
"state": "present",
"uid": 1000
}
[root@node1 ~]# ansible websrvs -m user -a "name=hacluster state=absent"    # state=absent ensures the user does not exist (removes it)
172.16.100.68 | CHANGED => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": true,
"force": false,
"name": "hacluster",
"remove": false,
"state": "absent"
}
172.16.100.69 | CHANGED => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": true,
"force": false,
"name": "hacluster",
"remove": false,
"state": "absent"
}
node2:
[root@node2 ~]# id hacluster
id: hacluster: no such user
node1:
[root@node1 ~]# ansible websrvs -m user -a "name=hacluster state=present system=yes"    # system=yes creates a system account
172.16.100.69 | CHANGED => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": true,
"comment": "",
"create_home": true,
"group": 995,
"home": "/home/hacluster",
"name": "hacluster",
"shell": "/bin/bash",
"state": "present",
"stderr": "useradd: warning: the home directory already exists.\nNot copying any file from skel directory into it.\n",
"stderr_lines": [
"useradd: warning: the home directory already exists.",
"Not copying any file from skel directory into it."
],
"system": true,
"uid": 997
}
172.16.100.68 | CHANGED => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": true,
"comment": "",
"create_home": true,
"group": 995,
"home": "/home/hacluster",
"name": "hacluster",
"shell": "/bin/bash",
"state": "present",
"stderr": "useradd: warning: the home directory already exists.\nNot copying any file from skel directory into it.\n",
"stderr_lines": [
"useradd: warning: the home directory already exists.",
"Not copying any file from skel directory into it."
],
"system": true,
"uid": 997
}
[root@node1 ~]# ansible-doc -s group
[root@node1 ~]# ansible-doc -s cron
[root@node1 ~]# ansible all -m cron -a 'name="sync time from ntpserver" minute="*/10" job="/sbin/ntpdate ntp2.aliyun.com &> /dev/null"'    # create a crontab entry
172.16.100.6 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"envs": [],
"jobs": [
"sync time from ntpserver"
]
}
172.16.100.68 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"envs": [],
"jobs": [
"sync time from ntpserver"
]
}
172.16.100.69 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"envs": [],
"jobs": [
"sync time from ntpserver"
]
}
node2:
[root@node2 ~]# crontab -l
*/5 * * * * /usr/sbin/ntpdate ntp1.aliyun.com &> /dev/null
#Ansible: sync time from ntpserver
*/10 * * * * /sbin/ntpdate ntp2.aliyun.com &> /dev/null
node1:
[root@node1 ~]# ansible all -m cron -a 'name="sync time from ntpserver" state=absent'    # remove the crontab entry
172.16.100.69 | CHANGED => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": true,
"envs": [],
"jobs": []
}
172.16.100.6 | FAILED! => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"msg": "Aborting, target uses selinux but python bindings (libselinux-python) aren't installed!"
}
172.16.100.68 | CHANGED => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": true,
"envs": [],
"jobs": []
}
node2:
[root@node2 ~]# crontab -l
*/5 * * * * /usr/sbin/ntpdate ntp1.aliyun.com &> /dev/null
node1:
[root@node1 ~]# ansible-doc -s copy
[root@node1 ~]# ansible websrvs -m copy -a 'src=/etc/fstab dest=/tmp/fstab.tmp mode=600'
172.16.100.68 | CHANGED => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": true,
"checksum": "4c6e816d61b471c0551a8d25b4d80094c3ffeb3e",
"dest": "/tmp/fstab.tmp",
"gid": 0,
"group": "root",
"md5sum": "3380e0b2267902bdb5a776ab1c2c88f2",
"mode": "0600",
"owner": "root",
"secontext": "unconfined_u:object_r:admin_home_t:s0",
"size": 541,
"src": "/root/.ansible/tmp/ansible-tmp-1601822030.64-5378-91367418844651/source",
"state": "file",
"uid": 0
}
172.16.100.69 | CHANGED => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": true,
"checksum": "4c6e816d61b471c0551a8d25b4d80094c3ffeb3e",
"dest": "/tmp/fstab.tmp",
"gid": 0,
"group": "root",
"md5sum": "3380e0b2267902bdb5a776ab1c2c88f2",
"mode": "0600",
"owner": "root",
"secontext": "unconfined_u:object_r:admin_home_t:s0",
"size": 541,
"src": "/root/.ansible/tmp/ansible-tmp-1601822030.65-5380-6080344480195/source",
"state": "file",
"uid": 0
}
node2:
[root@node2 ~]# ll /tmp/
total 26672
-rw-r--r--. 1 root root 27304727 Oct  3 22:15 elasticsearch-1.7.2.noarch.rpm
-rw-------. 1 root root 541 Oct  4 22:33 fstab.tmp
drwx------. 3 root root 17 Oct  3 21:11 systemd-private-4872bb579fdf467cbe1e5dd79f58f4e6-chronyd.service-xhl6ql
node1:
[root@node1 ~]# ansible-doc -s file    # manage file attributes
[root@node1 ~]# ansible all -m file -a 'path=/tmp/testdir state=directory'
172.16.100.6 | CHANGED => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": true,
"gid": 0,
"group": "root",
"mode": "0755",
"owner": "root",
"path": "/tmp/testdir",
"size": 4096,
"state": "directory",
"uid": 0
}
172.16.100.69 | CHANGED => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": true,
"gid": 0,
"group": "root",
"mode": "0755",
"owner": "root",
"path": "/tmp/testdir",
"secontext": "unconfined_u:object_r:user_tmp_t:s0",
"size": 6,
"state": "directory",
"uid": 0
}
172.16.100.68 | CHANGED => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": true,
"gid": 0,
"group": "root",
"mode": "0755",
"owner": "root",
"path": "/tmp/testdir",
"secontext": "unconfined_u:object_r:user_tmp_t:s0",
"size": 6,
"state": "directory",
"uid": 0
}
node2:
[root@node2 ~]# ll /tmp/
total 26672
-rw-r--r--. 1 root root 27304727 Oct  3 22:15 elasticsearch-1.7.2.noarch.rpm
-rw-------. 1 root root 541 Oct  4 22:33 fstab.tmp
drwx------. 3 root root 17 Oct  3 21:11 systemd-private-4872bb579fdf467cbe1e5dd79f58f4e6-chronyd.service-xhl6ql
drwxr-xr-x. 2 root root 6 Oct  4 22:38 testdir
node1:
[root@node1 ~]# ansible all -m file -a 'path=/tmp/fstab.symlink state=link src=/tmp/fstab.tmp'
172.16.100.69 | CHANGED => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": true,
"dest": "/tmp/fstab.symlink",
"gid": 0,
"group": "root",
"mode": "0777",
"owner": "root",
"secontext": "unconfined_u:object_r:user_tmp_t:s0",
"size": 14,
"src": "/tmp/fstab.tmp",
"state": "link",
"uid": 0
}
172.16.100.6 | FAILED! => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"msg": "src file does not exist, use \"force=yes\" if you really want to create the link: /tmp/fstab.tmp",
"path": "/tmp/fstab.symlink",
"src": "/tmp/fstab.tmp"
}
172.16.100.68 | CHANGED => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": true,
"dest": "/tmp/fstab.symlink",
"gid": 0,
"group": "root",
"mode": "0777",
"owner": "root",
"secontext": "unconfined_u:object_r:user_tmp_t:s0",
"size": 14,
"src": "/tmp/fstab.tmp",
"state": "link",
"uid": 0
}
node2:
[root@node2 ~]# ll /tmp/
total 26672
-rw-r--r--. 1 root root 27304727 Oct  3 22:15 elasticsearch-1.7.2.noarch.rpm
lrwxrwxrwx. 1 root root 14 Oct  4 22:40 fstab.symlink -> /tmp/fstab.tmp
-rw-------. 1 root root 541 Oct  4 22:33 fstab.tmp
drwx------. 3 root root 17 Oct  3 21:11 systemd-private-4872bb579fdf467cbe1e5dd79f58f4e6-chronyd.service-xhl6ql
drwxr-xr-x. 2 root root 6 Oct  4 22:38 testdir
node4:
[root@node4 ~]# ll /tmp/
total 4
drwxr-xr-x. 2 root root 4096 Oct  4 22:38 testdir
-rw-------. 1 root root 0 Sep 27 07:34 yum.log
node1:
[root@node1 ~]# ansible all -m file -a 'path=/tmp/fstab.symlink state=link src=/tmp/fstab.tmp force=yes'    # force=yes creates the link even if src does not exist
172.16.100.69 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"dest": "/tmp/fstab.symlink",
"gid": 0,
"group": "root",
"mode": "0777",
"owner": "root",
"secontext": "unconfined_u:object_r:user_tmp_t:s0",
"size": 14,
"src": "/tmp/fstab.tmp",
"state": "link",
"uid": 0
}
172.16.100.68 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"dest": "/tmp/fstab.symlink",
"gid": 0,
"group": "root",
"mode": "0777",
"owner": "root",
"secontext": "unconfined_u:object_r:user_tmp_t:s0",
"size": 14,
"src": "/tmp/fstab.tmp",
"state": "link",
"uid": 0
}
[WARNING]: Cannot set fs attributes on a non-existent symlink target. follow should be set to False to avoid this.
172.16.100.6 | CHANGED => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": true,
"dest": "/tmp/fstab.symlink",
"src": "/tmp/fstab.tmp"
}
node2:
[root@node2 ~]# ll /tmp/
total 26672
-rw-r--r--. 1 root root 27304727 Oct  3 22:15 elasticsearch-1.7.2.noarch.rpm
lrwxrwxrwx. 1 root root 14 Oct  4 22:40 fstab.symlink -> /tmp/fstab.tmp
-rw-------. 1 root root 541 Oct  4 22:33 fstab.tmp
drwx------. 3 root root 17 Oct  3 21:11 systemd-private-4872bb579fdf467cbe1e5dd79f58f4e6-chronyd.service-xhl6ql
drwxr-xr-x. 2 root root 6 Oct  4 22:38 testdir
node4:
[root@node4 ~]# ll /tmp/
total 4
lrwxrwxrwx. 1 root root 14 Oct  4 22:46 fstab.symlink -> /tmp/fstab.tmp
drwxr-xr-x. 2 root root 4096 Oct  4 22:38 testdir
-rw-------. 1 root root 0 Sep 27 07:34 yum.log
node1:
[root@node1 ~]# ansible all -m file -a 'path=/tmp/fstab.symlink state=absent force=yes'    # delete the link (state=absent)
172.16.100.6 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"path": "/tmp/fstab.symlink",
"state": "absent"
}
172.16.100.69 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"path": "/tmp/fstab.symlink",
"state": "absent"
}
172.16.100.68 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"path": "/tmp/fstab.symlink",
"state": "absent"
}
node4:
[root@node4 ~]# ll /tmp/
total 4
drwxr-xr-x. 2 root root 4096 Oct  4 22:38 testdir
-rw-------. 1 root root 0 Sep 27 07:34 yum.log
node1:
[root@node1 ~]# ansible-doc -s ping
[root@node1 ~]# ansible all -m ping
172.16.100.69 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"ping": "pong"
}
172.16.100.68 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"ping": "pong"
}
172.16.100.6 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"ping": "pong"
}
[root@node1 ~]# ansible-doc -s yum
[root@node1 ~]# ansible websrvs -m yum -a 'name=nginx state=latest'
node2:
[root@node2 ~]# rpm -q nginx
nginx-1.16.1-2.el7.x86_64
node1:
[root@node1 ~]# ansible-doc -l
[root@node1 ~]# ansible-doc -s service
[root@node1 ~]# vim /etc/ansible/hosts
[websrvs]
172.16.100.68
172.16.100.69
172.16.100.6
[dbsrvs]
172.16.100.68
172.16.100.6
[root@node1 ~]# ansible websrvs -m yum -a 'name=nginx state=latest'
[root@node1 ~]# ansible websrvs -m service -a 'name=nginx state=started enabled=yes'
node4:
[root@node4 ~]# ps aux | grep nginx
root      5213  0.0  0.1 108936  2168 ?      Ss  22:13  0:00 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf
nginx     5214  0.0  0.1 109360  2920 ?      S   22:13  0:00 nginx: worker process
nginx     5215  0.0  0.1 109360  2896 ?      S   22:13  0:00 nginx: worker process
root      5226  0.0  0.0 103336   908 pts/0  S+  22:14  0:00 grep nginx
node3:
[root@node3 ~]# ps aux | grep nginx
root      9656  0.0  0.1 120888  2096 ?      Ss  22:13  0:00 nginx: master process /usr/sbin/nginx
nginx     9657  0.0  0.1 121272  3132 ?      S   22:13  0:00 nginx: worker process
nginx     9658  0.0  0.1 121272  3132 ?      S   22:13  0:00 nginx: worker process
root      9681  0.0  0.0 112720   972 pts/0  S+  22:24  0:00 grep --color=auto nginx
node4:
[root@node4 ~]# chkconfig --list nginx
nginx 0:off 1:off 2:on 3:on 4:on 5:on 6:off
node3:
[root@node3 ~]# systemctl list-units
[root@node3 ~]# systemctl status nginx.service
● nginx.service - The nginx HTTP and reverse proxy server
Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2020-10-05 22:13:40 CST; 18min ago
Process: 9654 ExecStart=/usr/sbin/nginx (code=exited, status=0/SUCCESS)
Process: 9651 ExecStartPre=/usr/sbin/nginx -t (code=exited, status=0/SUCCESS)
Process: 9649 ExecStartPre=/usr/bin/rm -f /run/nginx.pid (code=exited, status=0/SUCCESS)
Main PID: 9656 (nginx)
CGroup: /system.slice/nginx.service
├─9656 nginx: master process /usr/sbin/nginx
├─9657 nginx: worker process
└─9658 nginx: worker process
Oct 05 22:13:39 node3.smoke.com systemd[1]: Starting The nginx HTTP and reverse proxy server...
Oct 05 22:13:39 node3.smoke.com nginx[9651]: nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
Oct 05 22:13:39 node3.smoke.com nginx[9651]: nginx: configuration file /etc/nginx/nginx.conf test is successful
Oct 05 22:13:40 node3.smoke.com systemd[1]: Started The nginx HTTP and reverse proxy server.
node1:
[root@node1 ~]# ansible websrvs -m service -a 'name=nginx state=stopped enabled=no'
node3:
[root@node3 ~]# systemctl status nginx.service
● nginx.service - The nginx HTTP and reverse proxy server
   Loaded: loaded (/usr/lib/systemd/system/nginx.service; disabled; vendor preset: disabled)
   Active: inactive (dead)
Oct 05 22:00:40 node3.smoke.com systemd[1]: Unit nginx.service cannot be reloaded because it is inactive.
Oct 05 22:13:39 node3.smoke.com systemd[1]: Starting The nginx HTTP and reverse proxy server...
Oct 05 22:13:39 node3.smoke.com nginx[9651]: nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
Oct 05 22:13:39 node3.smoke.com nginx[9651]: nginx: configuration file /etc/nginx/nginx.conf test is successful
Oct 05 22:13:40 node3.smoke.com systemd[1]: Started The nginx HTTP and reverse proxy server.
Oct 05 22:52:54 node3.smoke.com systemd[1]: Stopping The nginx HTTP and reverse proxy server...
Oct 05 22:52:54 node3.smoke.com systemd[1]: Stopped The nginx HTTP and reverse proxy server.
node4:
[root@node4 ~]# service nginx status
nginx is stopped
[root@node4 ~]# chkconfig --list nginx
nginx 0:off 1:off 2:off 3:off 4:off 5:off 6:off
node1:
[root@node1 ~]# ansible websrvs -m user -a 'name=centos state=present'
172.16.100.68 | CHANGED => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": true,
"comment": "",
"create_home": true,
"group": 1000,
"home": "/home/centos",
"name": "centos",
"shell": "/bin/bash",
"state": "present",
"system": false,
"uid": 1000
}
172.16.100.69 | CHANGED => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": true,
"comment": "",
"create_home": true,
"group": 1000,
"home": "/home/centos",
"name": "centos",
"shell": "/bin/bash",
"state": "present",
"system": false,
"uid": 1000
}
172.16.100.6 | CHANGED => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": true,
"comment": "",
"create_home": true,
"group": 500,
"home": "/home/centos",
"name": "centos",
"shell": "/bin/bash",
"state": "present",
"system": false,
"uid": 500
}
[root@node1 ~]# ansible websrvs -m command -a 'echo centos | passwd --stdin centos'    # the command module does not interpret shell pipes
172.16.100.6 | CHANGED | rc=0 >>
centos | passwd --stdin centos
172.16.100.68 | CHANGED | rc=0 >>
centos | passwd --stdin centos
172.16.100.69 | CHANGED | rc=0 >>
centos | passwd --stdin centos
node3:
[root@node3 ~]# tail /etc/shadow    # no password was set: the command module cannot pipe one command's output into another, so passwd never received any input
systemd-network:!!:18425::::::
dbus:!!:18425::::::
polkitd:!!:18425::::::
sshd:!!:18425::::::
postfix:!!:18425::::::
chrony:!!:18425::::::
ntp:!!:18531::::::
hacluster:!!:18539::::::
nginx:!!:18540::::::
centos:!!:18540:0:99999:7:::
node1:
[root@node1 ~]# ansible-doc -s shell
[root@node1 ~]# ansible websrvs -m shell -a 'echo centos | passwd --stdin centos'
172.16.100.69 | CHANGED | rc=0 >>
Changing password for user centos.
passwd: all authentication tokens updated successfully.
172.16.100.68 | CHANGED | rc=0 >>
Changing password for user centos.
passwd: all authentication tokens updated successfully.
172.16.100.6 | CHANGED | rc=0 >>
Changing password for user centos.
passwd: all authentication tokens updated successfully.
node1:
[root@node1 ~]# ansible-doc -s script
[root@node1 ~]# vim /tmp/test.sh
#!/bin/bash
#
echo "$(hostname) ansible is good." > /tmp/ansible.txt
[root@node1 ~]# ansible websrvs -m script -a '/tmp/test.sh'
172.16.100.6 | CHANGED => {
"changed": true,
"rc": 0,
"stderr": "Shared connection to 172.16.100.6 closed.\r\n",
"stderr_lines": [
"Shared connection to 172.16.100.6 closed."
],
"stdout": "",
"stdout_lines": []
}
172.16.100.68 | CHANGED => {
"changed": true,
"rc": 0,
"stderr": "Shared connection to 172.16.100.68 closed.\r\n",
"stderr_lines": [
"Shared connection to 172.16.100.68 closed."
],
"stdout": "",
"stdout_lines": []
}
172.16.100.69 | CHANGED => {
"changed": true,
"rc": 0,
"stderr": "Shared connection to 172.16.100.69 closed.\r\n",
"stderr_lines": [
"Shared connection to 172.16.100.69 closed."
],
"stdout": "",
"stdout_lines": []
}
node3:
[root@node3 ~]# ll /tmp/
total 8
-rw-r--r--. 1 root root 33 Oct  5 23:10 ansible.txt
-rw-r--r--. 1 root root 0 Oct  3 23:05 elasticsearch-1.7.2.noarch.rpm
-rw-------. 1 root root 541 Oct  4 22:33 fstab.tmp
drwx------. 3 root root 17 Oct  3 21:11 systemd-private-fd95ff5a2f644b7aa4f880f0918d8193-chronyd.service-nKcFmR
drwxr-xr-x. 2 root root 6 Oct  4 22:38 testdir
[root@node3 ~]# cat /tmp/ansible.txt
node3.smoke.com ansible is good.
node2:
[root@node2 ~]# cat /tmp/ansible.txt
node2.smoke.com ansible is good.
node1:
[root@node1 ~]# ansible-doc -s setup
[root@node1 ~]# ansible websrvs -m setup    # gather host facts
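The setup output can be narrowed with the filter argument; the glob patterns below are illustrative (these run against the websrvs group and are not executed here):

```
ansible websrvs -m setup -a 'filter=ansible_ens*'
ansible websrvs -m setup -a 'filter=ansible_distribution*'
```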
[root@node1 ~]# vim test.yml
---
- hosts: websrvs      # play header added to make the draft a valid playbook; paths below are illustrative
  remote_user: root
  tasks:
    - name: install a pkg
      yum: name=nginx state=latest
    - name: copy conf file
      copy: src=/root/nginx.conf dest=/etc/nginx/nginx.conf   # example paths, adjust to your setup
    - name: start nginx service
      service: name=nginx state=started
[root@node1 ~]# vim /etc/ansible/hosts
[websrvs]
172.16.100.68
172.16.100.69
172.16.100.6
[dbsrvs]
172.16.100.68
172.16.100.6