openstack-train two-node deployment
A Detailed Guide to Deploying OpenStack Train
阿星0364
Published 2021-09-30 08:37:10
Note: this article was written by following the official OpenStack documentation, which took real effort. If you prefer, you can work directly from the official guide instead.
Part 1: Prepare the environment
01-Base environment
#This walkthrough uses two nodes, each with two NICs (both in NAT mode); adjust to your own setup
#(the NIC names below assume the usual ens33/ens34 naming, which the configs later in this guide also use)
controller
ens33 192.168.200.10
ens34 no IP assigned
compute
ens33 192.168.200.20
ens34 no IP assigned
Hostname    IP address      OS version
controller  192.168.200.10  CentOS 7.5.1804
compute     192.168.200.20  CentOS 7.5.1804
02-hosts resolution (controller and compute)
After configuring the NICs:
[root@controller ~]# cat /etc/hosts
192.168.200.10 controller
192.168.200.20 compute
scp /etc/hosts 192.168.200.20:/etc/hosts
03-Disable the firewall and SELinux (controller and compute)
##########controller#############
[root@controller ~]# systemctl stop firewalld
[root@controller ~]# systemctl disable firewalld
[root@controller ~]# vim /etc/sysconfig/selinux
SELINUX=disabled
[root@controller ~]# setenforce 0
----------------------------------------------------------------
##########compute###################
[root@compute ~]# systemctl stop firewalld
[root@compute ~]# systemctl disable firewalld
[root@compute ~]# vim /etc/sysconfig/selinux #takes effect after a reboot
SELINUX=disabled
[root@compute ~]# setenforce 0 #takes effect immediately (Permissive)
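Since `SELINUX=disabled` only applies after a reboot, the runtime and persisted states can disagree. A small sketch for checking the persisted mode; the helper name is ours, and the file format is the stock /etc/sysconfig/selinux:

```shell
# Print the SELinux mode that will apply after the next reboot.
# The config path is passed explicitly so the helper is easy to test.
selinux_config_mode() {
    # match only the SELINUX= line (not SELINUXTYPE=) and take the value
    grep -E '^SELINUX=' "$1" | tail -n 1 | cut -d= -f2
}
# usage: selinux_config_mode /etc/sysconfig/selinux   # should print disabled after the edit above
```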
04-Prepare the yum repos (controller and compute)
rm -rf /etc/yum.repos.d/*
------------------with internet access-------------------------
curl http://mirrors.163.com/.help/CentOS7-Base-163.repo >> /etc/yum.repos.d/centos.repo
yum repolist #verify
####Edit the yum config so downloaded rpm packages stay in the cache; needed on a first run
####so they can be reused later as described in "Part 13: Extensions"
vim /etc/yum.conf
keepcache=1
-------------------offline------------------------------------
Upload the openstack-train.tar.gz archive to /opt/ and unpack it
vi /etc/yum.repos.d/openstack.repo
[openstack]
name=openstack
baseurl=file:///opt/openstack-train/
gpgcheck=0
enabled=1
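Both nodes need an identical repo file in the offline case, so it can help to generate the stanza from one place. A minimal sketch, assuming the tarball was unpacked under the directory passed in (the function name is ours):

```shell
# Emit an /etc/yum.repos.d stanza pointing at a local directory repo.
local_repo_stanza() {
    # $1: absolute path of the unpacked repo directory
    printf '[openstack]\nname=openstack\nbaseurl=file://%s\ngpgcheck=0\nenabled=1\n' "$1"
}
# usage: local_repo_stanza /opt/openstack-train > /etc/yum.repos.d/openstack.repo
```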
Part 2: Install the chrony service
01-Controller node
yum install -y chrony
vim /etc/chrony.conf #add the following
server controller iburst #use the controller as the NTP server
allow 192.168.200.0/24 #allow the 192.168.200.0/24 subnet to sync time
systemctl restart chronyd
systemctl enable chronyd
#verify
chronyc sources
02-Compute node
yum install -y chrony
vim /etc/chrony.conf #add the following
server controller iburst
systemctl restart chronyd
systemctl enable chronyd
#verify
chronyc sources
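`chronyc sources` marks the currently selected server with `^*`, so a quick scripted check is to count those lines. A sketch (helper name ours; it reads the command's output on stdin):

```shell
# Count time sources that chrony has actually selected (lines starting with ^*).
chrony_synced_count() {
    grep -c '^\^\*'
}
# usage: chronyc sources | chrony_synced_count   # expect 1 on a synced node
```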
Part 3: Install the OpenStack client
#############controller and compute nodes###################
01-Install the OpenStack repository
yum install centos-release-openstack-train -y
#update all packages
yum upgrade -y
#install commonly needed tools
yum install -y lsof net-tools vim wget
02-Install the OpenStack client
yum install python-openstackclient openstack-selinux openstack-utils -y
centos-release-openstack-train   pins the repositories so OpenStack installs and updates at the Train release
python-openstackclient           the OpenStack Python client
    (most OpenStack APIs are written in Python, and Python is also needed for database access)
openstack-selinux                SELinux policies protecting the OpenStack core services
openstack-utils                  miscellaneous OpenStack utilities
Part 4: Deploy the database service
################controller#############
01-Install the mariadb service
yum install mariadb mariadb-server python2-PyMySQL -y
02-Create and edit the config file
vi /etc/my.cnf.d/openstack.cnf
[mysqld]
bind-address = 192.168.200.10 #bind address; use your own controller IP
default-storage-engine = innodb #make InnoDB the default engine for advanced database features
innodb_file_per_table = on #give each InnoDB table its own tablespace file
max_connections = 4096 #maximum number of connections
collation-server = utf8_general_ci #default collation
character-set-server = utf8 #default character set
03-Start and enable at boot
systemctl enable mariadb && systemctl start mariadb
04-Run the hardening script
mysql_secure_installation #run the script
Enter current password for root (enter for none): #press Enter
Set root password? [Y/n] y #set a password
New password: #set the password to 000000
Re-enter new password: #enter the password again
Remove anonymous users? [Y/n] y #remove anonymous users
Disallow root login remotely? [Y/n] n #answer n to keep remote root login available
Remove test database and access to it? [Y/n] y #remove the test database and its access
Reload privilege tables now? [Y/n] y #reload the privilege tables
Part 5: Deploy the message queue
#################controller#############
01-Install the rabbitmq service
yum install rabbitmq-server -y
02-Enable at boot and start
systemctl enable rabbitmq-server && systemctl start rabbitmq-server #listens on port 5672
03-Create the openstack account and password
rabbitmqctl add_user openstack 000000 #password is 000000
04-Grant permissions
rabbitmqctl set_permissions -p / openstack '.*' '.*' '.*' #configure, read and write permissions
05-Set the role
rabbitmqctl set_user_tags openstack administrator #grant the administrator role
06-Install the web plugin (optional)
rabbitmq-plugins enable rabbitmq_management #enable the web management interface
07-Verify; the web UI is then reachable at:
netstat -nltp | grep 5672 #5672 is the default port; 25672 is the CLI tool port
http://192.168.200.10:15672 #log in as openstack/000000 (the default guest account only works from localhost)
Part 6: Deploy the memcached service
####################controller##################
01-Install the service
yum install memcached python-memcached -y
02-Edit the config file
vim /etc/sysconfig/memcached
OPTIONS="-l 127.0.0.1,::1,controller"
03-Enable at boot and start
systemctl enable memcached && systemctl start memcached
Part 7: Deploy the etcd service
####################controller#############
01-Install the service
yum install etcd -y
02-Edit the config file
vim /etc/etcd/etcd.conf
#[Member]
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.200.10:2380" #replace localhost with your own IP
ETCD_LISTEN_CLIENT_URLS="http://192.168.200.10:2379"
ETCD_NAME="controller"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.200.10:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.200.10:2379"
ETCD_INITIAL_CLUSTER="controller=http://192.168.200.10:2380" #replace default with the ETCD_NAME above
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
ETCD_INITIAL_CLUSTER_STATE="new"
#in vim, :%s/localhost/192.168.200.10/g does the substitution in one go
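The vim substitution can also be done non-interactively. A sketch (function name ours) that keeps a .bak copy in case of a typo:

```shell
# Replace every localhost in an etcd config with the controller IP,
# saving the original as <file>.bak (GNU sed's -i suffix form).
patch_etcd_conf() {
    # $1: path to etcd.conf, $2: controller IP
    sed -i.bak "s/localhost/$2/g" "$1"
}
# usage: patch_etcd_conf /etc/etcd/etcd.conf 192.168.200.10
```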
03-Enable at boot and start
systemctl enable etcd && systemctl start etcd
Part 8: Deploy the keystone service
#######################controller#############
01-Create the database
MariaDB [(none)]> CREATE DATABASE keystone;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '000000';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '000000';
MariaDB [(none)]> flush privileges;
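The same CREATE/GRANT/GRANT/FLUSH sequence recurs below for glance, nova and neutron, with only the database name, user and password changing. A sketch that prints the SQL so it can be piped into mysql (the function is ours, not part of any OpenStack tooling):

```shell
# Print the SQL that creates a service database and grants its user
# access both locally and from any host.
service_db_sql() {
    # $1: database, $2: user, $3: password
    printf 'CREATE DATABASE IF NOT EXISTS %s;\n' "$1"
    printf "GRANT ALL PRIVILEGES ON %s.* TO '%s'@'localhost' IDENTIFIED BY '%s';\n" "$1" "$2" "$3"
    printf "GRANT ALL PRIVILEGES ON %s.* TO '%s'@'%%' IDENTIFIED BY '%s';\n" "$1" "$2" "$3"
    printf 'FLUSH PRIVILEGES;\n'
}
# usage: service_db_sql keystone keystone 000000 | mysql -uroot -p000000
```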
02-Install the keystone service
yum install openstack-keystone httpd mod_wsgi -y
03-Edit the config file
\cp /etc/keystone/keystone.conf /etc/keystone/keystone.conf.bak #back up
grep -Ev '^(#|$)' /etc/keystone/keystone.conf.bak > /etc/keystone/keystone.conf #strip comments and blank lines
vim /etc/keystone/keystone.conf
[database] #database connection
connection = mysql+pymysql://keystone:000000@controller/keystone
[token]
provider = fernet
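The back-up-then-strip step above is repeated verbatim for every service config in this guide, so a tiny helper avoids typos (the name strip_conf is ours):

```shell
# Back up a config file as <file>.bak, then rewrite it without
# comment lines and blank lines.
strip_conf() {
    cp -f "$1" "$1.bak" &&
    grep -Ev '^(#|$)' "$1.bak" > "$1"
}
# usage: strip_conf /etc/keystone/keystone.conf
```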
04-Sync and populate the database
su -s /bin/sh -c "keystone-manage db_sync" keystone
#verify the sync:
mysql -uroot -p000000 -e "use keystone;show tables;"
05-Initialize the key repositories
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
06-Bootstrap the identity service
#admin_pass is set to 000000
keystone-manage bootstrap --bootstrap-password 000000 --bootstrap-admin-url http://controller:5000/v3/ \
--bootstrap-internal-url http://controller:5000/v3/ \
--bootstrap-public-url http://controller:5000/v3/ \
--bootstrap-region-id RegionOne
07-Configure the apache service
vim /etc/httpd/conf/httpd.conf
ServerName controller
08-Create the link file
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
09-Enable at boot and start the service
systemctl enable httpd && systemctl start httpd
10-Set the admin environment variables
export OS_USERNAME=admin
export OS_PASSWORD=000000
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
11-Create domains, projects, users and roles
#The keystone bootstrap above already created the default domain and the admin
#project, user and role, so none of those need to be created again.
#Creating an extra domain is optional; shown here only for reference:
openstack domain create --description "An Example Domain" demo
#Create the service project, used by nova, glance and the other components
openstack project create --domain default --description "Service Project" service
#Create the demo project
openstack project create --domain default --description "Demo Project" demo
#Create the demo user
openstack user create --domain default --password-prompt demo #set the password to 000000
#Create the user role
openstack role create user
#Add the demo user to the demo project with the user role
openstack role add --project demo --user demo user
12-Verify
1) Unset the bootstrap environment variables
unset OS_AUTH_URL OS_PASSWORD
2) Create the admin user's environment script
vim admin-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=000000
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
3) Create the demo user's environment script
vim demo-openrc
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=000000
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
4) Verify
#verify admin
source admin-openrc
openstack token issue #request a token
#verify demo
source demo-openrc
openstack token issue
Part 9: Deploy the glance service
01-Create the database
mysql -u root -p000000
MariaDB [(none)]> CREATE DATABASE glance;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '000000';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '000000';
MariaDB [(none)]> flush privileges;
02-Create the keystone credentials for the glance service
#load the environment variables
source admin-openrc
#create the glance user
openstack user create --domain default --password 000000 glance
#add the glance user to the service project with the admin role
openstack role add --project service --user glance admin
03-Create the glance service and API endpoints
#create the glance service
openstack service create --name glance --description "OpenStack Image" image
#public endpoint
openstack endpoint create --region RegionOne image public http://controller:9292
#internal endpoint
openstack endpoint create --region RegionOne image internal http://controller:9292
#admin endpoint
openstack endpoint create --region RegionOne image admin http://controller:9292
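Every service in this guide registers the same three endpoint kinds, with only the type and URL changing. A sketch that prints the commands for review before running them (drop the echo to execute; the function name is ours):

```shell
# Print the three `openstack endpoint create` commands for a service.
endpoint_cmds() {
    # $1: service type (e.g. image), $2: endpoint URL
    for iface in public internal admin; do
        echo "openstack endpoint create --region RegionOne $1 $iface $2"
    done
}
# usage: endpoint_cmds image http://controller:9292
```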
04-Install and configure the glance service
#install the glance service
yum install openstack-glance -y
#back up the config and strip leading # comments and blank lines
\cp /etc/glance/glance-api.conf /etc/glance/glance-api.conf.bak
grep -Ev '^(#|$)' /etc/glance/glance-api.conf.bak >/etc/glance/glance-api.conf
#edit the config file
vim /etc/glance/glance-api.conf
[database]
connection = mysql+pymysql://glance:000000@controller/glance
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = 000000
[paste_deploy]
flavor = keystone
[glance_store]
stores = file,http #supported storage backends
default_store = file #default storage backend
filesystem_store_datadir = /var/lib/glance/images/
#back up the config and strip leading # comments and blank lines
\cp /etc/glance/glance-registry.conf /etc/glance/glance-registry.conf.bak
grep -Ev '^(#|$)' /etc/glance/glance-registry.conf.bak >/etc/glance/glance-registry.conf
#edit the config file
vim /etc/glance/glance-registry.conf
[database]
connection = mysql+pymysql://glance:000000@controller/glance
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = 000000
[paste_deploy]
flavor = keystone
05-Sync the database and start the services
#sync the database
su -s /bin/sh -c "glance-manage db_sync" glance
#start the glance services and enable them at boot
systemctl enable openstack-glance-api.service openstack-glance-registry.service
systemctl start openstack-glance-api.service openstack-glance-registry.service
06-Verify
#download the test image (or upload a local copy)
wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
#upload it to glance
openstack image create "cirros" --file cirros-0.4.0-x86_64-disk.img --disk-format qcow2 --container-format bare --public
#list the images
openstack image list
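As a scriptable follow-up check, the `openstack image list` table can be parsed to confirm the image reached the active state. A sketch; both the helper name and the assumed table layout are ours:

```shell
# Succeed if the named image appears with status active in the
# `openstack image list` table fed on stdin.
image_is_active() {
    # $1: image name
    grep -w "$1" | grep -qw active
}
# usage: openstack image list | image_is_active cirros && echo "image ready"
```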
Part 10: Deploy the nova service
###########################controller#########################
01-Create the nova and placement databases
mysql -uroot -p000000
CREATE DATABASE nova_api;
CREATE DATABASE nova;
CREATE DATABASE nova_cell0;
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY '000000';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY '000000';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '000000';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '000000';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY '000000';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY '000000';
CREATE DATABASE placement;
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY '000000';
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY '000000';
flush privileges;
02-Create the keystone credentials for the nova service
#load the environment variables
source admin-openrc
#create the nova user
openstack user create --domain default --password 000000 nova
#add the nova user to the service project with the admin role
openstack role add --project service --user nova admin
03-Create the nova service and API endpoints
#create the nova service
openstack service create --name nova --description "OpenStack Compute" compute
#public endpoint
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
#internal endpoint
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
#admin endpoint
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
04-Create the keystone credentials for the placement service
#create the placement user
openstack user create --domain default --password 000000 placement
#add the placement user to the service project with the admin role
openstack role add --project service --user placement admin
05-Create the placement service and API endpoints
#create the placement service
openstack service create --name placement --description "Placement API" placement
#public endpoint
openstack endpoint create --region RegionOne placement public http://controller:8778
#internal endpoint
openstack endpoint create --region RegionOne placement internal http://controller:8778
#admin endpoint
openstack endpoint create --region RegionOne placement admin http://controller:8778
06-Install and configure the nova and placement services
#install the nova and placement services
yum install openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler openstack-placement-api -y
#back up the config and strip # comments and blank lines
\cp /etc/nova/nova.conf /etc/nova/nova.conf.bak
grep -Ev '^(#|$)' /etc/nova/nova.conf.bak >/etc/nova/nova.conf
#configure the nova service
vim /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:000000@controller
my_ip = 192.168.200.10
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[database]
connection = mysql+pymysql://nova:000000@controller/nova
[api]
auth_strategy = keystone
[api_database]
connection = mysql+pymysql://nova:000000@controller/nova_api
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = 000000
[vnc]
enabled = true
server_listen = 192.168.200.10
server_proxyclient_address = 192.168.200.10
[glance]
api_servers = http://controller:9292
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[placement]
region_name = RegionOne
project_domain_name = default
project_name = service
auth_type = password
user_domain_name = default
auth_url = http://controller:5000/v3
username = placement
password = 000000
#back up the config and strip # comments and blank lines
\cp /etc/placement/placement.conf /etc/placement/placement.conf.bak
grep -Ev '^(#|$)' /etc/placement/placement.conf.bak > /etc/placement/placement.conf
#edit the config file
vi /etc/placement/placement.conf
[api]
auth_strategy = keystone
[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = placement
password = 000000
[placement_database]
connection = mysql+pymysql://placement:000000@controller/placement
#a packaging bug blocks httpd's access to the placement API; add the following to the config
vim /etc/httpd/conf.d/00-placement-api.conf
<Directory /usr/bin>
<IfVersion >= 2.4>
Require all granted
</IfVersion>
<IfVersion < 2.4>
Order allow,deny
Allow from all
</IfVersion>
</Directory>
#restart the httpd service
systemctl restart httpd
07-Sync the databases and start the services
#sync and initialize the nova-api database
su -s /bin/sh -c "nova-manage api_db sync" nova
#register and populate the cell0 database
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
#create cell1
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
#initialize the nova database
su -s /bin/sh -c "nova-manage db sync" nova
#sync the placement database
su -s /bin/sh -c "placement-manage db sync" placement
#/usr/lib/python2.7/site-packages/pymysql/cursors.py:170: Warning: (1280, u"Name 'alembic_version_pkc' ignored for PRIMARY key.")
#result = self._query(query)
#the warning above is expected and harmless
#verify that the cells registered successfully
nova-manage cell_v2 list_cells
#start the nova services and enable them at boot
systemctl enable openstack-nova-api.service openstack-nova-console.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl start openstack-nova-api.service openstack-nova-console.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
08-Install and configure the nova compute service (compute node)
#install the nova compute service
yum install openstack-nova-compute -y
#back up the config and strip # comments and blank lines
\cp /etc/nova/nova.conf /etc/nova/nova.conf.bak
grep -Ev '^(#|$)' /etc/nova/nova.conf.bak > /etc/nova/nova.conf
#edit the config file
vim /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:000000@controller
my_ip = 192.168.200.20
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api]
auth_strategy = keystone
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = 000000
[vnc]
enabled = True
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html
[glance]
api_servers = http://controller:9292
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[placement]
os_region_name = RegionOne
project_domain_name = default
project_name = service
auth_type = password
user_domain_name = default
auth_url = http://controller:5000/v3 #the port must match the keystone endpoint
username = placement
password = 000000
09-Check for virtualization support
#determine whether the compute node supports hardware acceleration for virtual machines
virt_num=$(egrep -c '(vmx|svm)' /proc/cpuinfo)
if [ "$virt_num" = "0" ]; then
    #no hardware acceleration available: switch the virt type to qemu
    crudini --set /etc/nova/nova.conf libvirt virt_type qemu
fi
10-Start the services and register the compute node
#start the services and enable them at boot
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl restart libvirtd.service openstack-nova-compute.service
#connect to the controller node
ssh controller
source admin-openrc
#list the compute services
openstack compute service list --service nova-compute
#discover the new compute host
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
#close the connection
exit
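To check the registration from a script rather than by eye, the service table can be parsed the same way as the image list. A sketch counting nova-compute rows that are both enabled and up (helper name and table layout are assumptions):

```shell
# Count nova-compute services reported enabled and up in the
# `openstack compute service list` table fed on stdin.
compute_up_count() {
    grep nova-compute | grep enabled | grep -c up
}
# usage: openstack compute service list | compute_up_count   # expect 1 in this two-node setup
```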
11-Verify
#check the service status on the controller node
openstack compute service list
#list the identity service catalog and endpoints
openstack catalog list
#list the image information
openstack image list
#confirm that the cells and the placement API are working
nova-status upgrade check #if this reports an error, recheck the placement and httpd configuration above
Part 11: Deploy the neutron service
################################controller#######################
01-Create the database
mysql -uroot -p000000
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '000000';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '000000';
flush privileges;
02-Create the keystone credentials for the neutron service
#load the environment variables
source admin-openrc
#create the neutron user
openstack user create --domain default --password 000000 neutron
#add the neutron user to the service project with the admin role
openstack role add --project service --user neutron admin
03-Create the neutron service and API endpoints
#create the neutron service
openstack service create --name neutron --description "OpenStack Networking" network
#public endpoint
openstack endpoint create --region RegionOne network public http://controller:9696
#internal endpoint
openstack endpoint create --region RegionOne network internal http://controller:9696
#admin endpoint
openstack endpoint create --region RegionOne network admin http://controller:9696
04-Install and configure the neutron service
#install the neutron service, using the self-service (layer-3) networking option
yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables -y
#configure the second NIC
vim /etc/sysconfig/network-scripts/ifcfg-ens34
#keep only these four lines
DEVICE=ens34
TYPE=Ethernet
ONBOOT="yes"
BOOTPROTO="none"
systemctl restart network
#back up the config and strip # comments and blank lines
\cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak
grep -Ev '^(#|$)' /etc/neutron/neutron.conf.bak >/etc/neutron/neutron.conf
#edit the config file
vim /etc/neutron/neutron.conf
[database]
connection = mysql+pymysql://neutron:000000@controller/neutron
[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:000000@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 000000
[nova]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = 000000
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
#back up the config and strip # comments and blank lines
\cp /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugins/ml2/ml2_conf.ini.bak
grep -Ev '^(#|$)' /etc/neutron/plugins/ml2/ml2_conf.ini.bak > /etc/neutron/plugins/ml2/ml2_conf.ini
#edit the config file to set up the layer-2 (ML2) plugin
vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider
[ml2_type_vxlan]
vni_ranges = 1:1000
[securitygroup]
enable_ipset = true
#back up the config and strip # comments and blank lines
\cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak
grep -Ev '^(#|$)' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak > /etc/neutron/plugins/ml2/linuxbridge_agent.ini
#edit the config file to set up the linux bridge agent
vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:ens34 #name of the second NIC
[vxlan]
enable_vxlan = true
local_ip = 192.168.200.10
l2_population = true
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
#back up the config and strip # comments and blank lines
\cp /etc/neutron/l3_agent.ini /etc/neutron/l3_agent.ini.bak
grep -Ev '^(#|$)' /etc/neutron/l3_agent.ini.bak >/etc/neutron/l3_agent.ini
#edit the config file to set up the layer-3 agent
vim /etc/neutron/l3_agent.ini
[DEFAULT]
interface_driver = linuxbridge
\cp /etc/neutron/dhcp_agent.ini /etc/neutron/dhcp_agent.ini.bak
grep -Ev '^(#|$)' /etc/neutron/dhcp_agent.ini.bak > /etc/neutron/dhcp_agent.ini
#edit the config file to set up the DHCP agent
vim /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
\cp /etc/neutron/metadata_agent.ini /etc/neutron/metadata_agent.ini.bak
grep -Ev '^(#|$)' /etc/neutron/metadata_agent.ini.bak > /etc/neutron/metadata_agent.ini
#edit the config file to set up the metadata agent
vim /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = 000000
#edit the nova config so nova uses the networking service
vim /etc/nova/nova.conf
[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 000000
service_metadata_proxy = true
metadata_proxy_shared_secret = 000000
#create the symlink to the ML2 plugin config
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
#restart the nova service
systemctl restart openstack-nova-api
05-Sync the database and start the services
#sync the database
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
#start the neutron services and enable them at boot
systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
systemctl restart neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
06-Install and configure the neutron networking service (compute node)
#install the neutron service
yum install openstack-neutron-linuxbridge ebtables ipset -y
#configure the second NIC
vi /etc/sysconfig/network-scripts/ifcfg-ens34
#keep only these four lines
DEVICE=ens34
TYPE=Ethernet
ONBOOT="yes"
BOOTPROTO="none"
systemctl restart network
#back up the config and strip # comments and blank lines
\cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak
grep -Ev '^(#|$)' /etc/neutron/neutron.conf.bak >/etc/neutron/neutron.conf
#edit the config file
vi /etc/neutron/neutron.conf
[DEFAULT]
transport_url = rabbit://openstack:000000@controller
auth_strategy = keystone
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 000000
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
#back up the config and strip # comments and blank lines
\cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak
grep -Ev '^(#|$)' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak > /etc/neutron/plugins/ml2/linuxbridge_agent.ini
#edit the config file
vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:ens34 #name of the second NIC
[vxlan]
enable_vxlan = true
local_ip = 192.168.200.20
l2_population = true
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
#edit the nova config so nova uses the networking service
vi /etc/nova/nova.conf
[neutron]
url = http://controller:9696
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 000000
07-Start the services
#restart the nova service
systemctl restart openstack-nova-compute.service
#start the agent and enable it at boot
systemctl start neutron-linuxbridge-agent.service
systemctl enable neutron-linuxbridge-agent.service
08-Verify
#check that the network agents started successfully
openstack network agent list
Part 12: Deploy the horizon service
01-Install and configure the service
#install the dashboard service
yum install openstack-dashboard -y
#edit the config file
vim /etc/openstack-dashboard/local_settings
……
OPENSTACK_HOST = "controller" ##run the dashboard against the controller node
……
ALLOWED_HOSTS = ['*'] ##allow access from all hosts
……
SESSION_ENGINE = 'django.contrib.sessions.backends.cache' ##use memcached for session storage
……
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': 'controller:11211',
},
}
……
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST ##use the v3 identity API
……
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True ##enable multi-domain support
……
OPENSTACK_API_VERSIONS = { ##configure the API versions
"data-processing": 1.1,
"identity": 3,
"image": 2,
"volume": 2,
"compute": 2,
}
……
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = 'Default' ##make Default the default domain
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user" ##make user the default role
TIME_ZONE = "Asia/Shanghai" ##set the time zone
#edit the config file
vim /etc/httpd/conf.d/openstack-dashboard.conf
WSGIApplicationGroup %{GLOBAL} #add on line 4
02-Restart the services
#restart the httpd and memcached services
systemctl restart httpd.service memcached.service
03-Verify
#open the URL in a browser
http://192.168.200.10/dashboard
=============================================================================
#if the dashboard returns "The requested URL /auth/login/ was not found on this server":
vi /usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.py
WEBROOT = '/dashboard'
==============================================================================
Part 13: Extensions
01-Package the cached rpm files for offline installation
###############controller node################
yum -y install createrepo
mkdir -p /mnt/openstack/openstack_Train
cd /var/cache/yum/x86_64/7/
find ./* -name "*.rpm" -exec cp {} /mnt/openstack/openstack_Train/ \;
###############compute node################
mkdir -p /mnt/openstack/openstack_Train_compute
cd /var/cache/yum/x86_64/7/
find ./* -name "*.rpm" -exec cp {} /mnt/openstack/openstack_Train_compute/ \;
scp /mnt/openstack/openstack_Train_compute/* 192.168.200.10:/mnt/openstack/openstack_Train_compute
#on the controller node, gather all the rpm packages in one place
cd /mnt/openstack/
mv -f openstack_Train_compute/*.rpm openstack_Train/
#build the yum repository metadata
cd /mnt/openstack/openstack_Train
createrepo ./
ls repodata/
#create the tarball
tar -zcvf openstack-train.tar.gz openstack_Train/
#keep the tarball locally for later reuse:
1. Upload the tarball and unpack it to /mnt/
2. Point a yum .repo file at the local directory
3. Install without needing internet access
————————————————
Copyright notice: this is an original article by the author, released under the CC 4.0 BY-SA license. Please include the original source link and this notice when reposting.
Original link: https://blog.csdn.net/m0_47006406/article/details/120559779
