OpenStack Study Notes

Study notes for the OpenStack cloud platform.
Reference: https://www.unixhot.com (community blogger: Zhao Banzhang)
Set up an NTP time server first.

I. Linux OpenStack Learning
A cloud platform management project covering compute, network, and storage.
Preparing a hands-on OpenStack deployment:
Prepare two servers:
192.168.36.144 (controller node)
192.168.36.147 (compute node)
A production environment needs an NTP time-sync server.
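The NTP requirement above can be sketched with chrony, the default time-sync tool on CentOS 7 (the upstream server name below is an assumption; substitute your own time source, or point the compute node at the controller):

```shell
# Install and enable chrony on both nodes (ntpd works too).
yum install -y chrony
# Add an upstream server (hypothetical host; use your own NTP source).
echo "server ntp.aliyun.com iburst" >> /etc/chrony.conf
systemctl enable chronyd && systemctl restart chronyd
chronyc sources   # verify which time source was selected
```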
II. Deploying the OpenStack Base Environment
1. Disable SELinux
[root@openstack ~]# vim /etc/selinux/config
[root@openstack ~]# egrep -v "^$|^#" /etc/selinux/config
SELINUX=disabled
SELINUXTYPE=targeted

2. Disable the firewall
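The same edit can be made non-interactively; a minimal sketch, assuming the stock CentOS 7 /etc/selinux/config layout:

```shell
# Rewrite the SELINUX= line in place instead of editing by hand.
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
# Drop the running kernel to permissive so no reboot is needed right away.
setenforce 0
getenforce
```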
[root@openstack ~]# systemctl stop firewalld.service
[root@openstack ~]# systemctl disable firewalld.service
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
III. Installing the OpenStack Repositories

  1. Install the EPEL repository
    [root@openstack ~]# rpm -ivh http://mirrors.aliyun.com/epel/epel-release-latest-7.noarch.rpm
    [root@openstack ~]# yum clean all
    [root@openstack ~]# yum makecache

2. Install the OpenStack repository
[root@openstack ~]# yum install -y centos-release-openstack-queens
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile

  • base: mirror.lzu.edu.cn ...

3. Install the OpenStack client
[root@openstack ~]# yum install -y python-openstackclient

4. Install the OpenStack SELinux management package
[root@openstack ~]# yum install -y openstack-selinux
IV. MySQL Database Deployment
1. Install MySQL
[root@openstack ~]# yum install -y mariadb mariadb-server python2-PyMySQL

2. Edit the MySQL configuration file
[root@openstack ~]# vim /etc/my.cnf.d/openstack.cnf
[mysqld]
bind-address = 192.168.36.144        # IP address to listen on (this host's IP)
default-storage-engine = innodb      # default storage engine
innodb_file_per_table = on           # one tablespace file per table
collation-server = utf8_general_ci   # default server collation
character-set-server = utf8          # default server character set
max_connections = 4096               # maximum connections; tune for production
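The same settings can be applied non-interactively with crudini, a small INI editor shipped in EPEL (using it here is an assumption; editing the file by hand works just as well):

```shell
yum install -y crudini
CNF=/etc/my.cnf.d/openstack.cnf
crudini --set $CNF mysqld bind-address 192.168.36.144
crudini --set $CNF mysqld default-storage-engine innodb
crudini --set $CNF mysqld innodb_file_per_table on
crudini --set $CNF mysqld collation-server utf8_general_ci
crudini --set $CNF mysqld character-set-server utf8
crudini --set $CNF mysqld max_connections 4096
```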

3. Start MySQL Server and enable it at boot
[root@openstack ~]# systemctl enable mariadb.service
Created symlink from /etc/systemd/system/multi-user.target.wants/mariadb.service to /usr/lib/systemd/system/mariadb.service.
[root@openstack ~]# systemctl start mariadb.service
[root@openstack ~]# netstat -nlp|grep mysql
tcp 0 0 192.168.36.144:3306 0.0.0.0:* LISTEN 19383/mysqld
unix 2 [ ACC ] STREAM LISTENING 66948 19383/mysqld /var/lib/mysql/mysql.sock

4. Run the database security setup
[root@openstack ~]# mysql_secure_installation

5. Log in to MySQL and create the databases and users each module needs
■ Keystone database
MariaDB [(none)]> CREATE DATABASE keystone;
Query OK, 1 row affected (0.00 sec)

■ Keystone user
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'keystone';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone';
Query OK, 0 rows affected (0.00 sec)

■ Glance database
MariaDB [(none)]> CREATE DATABASE glance;
Query OK, 1 row affected (0.00 sec)

■ Glance user
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'glance';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'glance';
Query OK, 0 rows affected (0.00 sec)

■ Nova databases
MariaDB [(none)]> CREATE DATABASE nova;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> CREATE DATABASE nova_api;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> CREATE DATABASE nova_cell0;
Query OK, 1 row affected (0.00 sec)

■ Nova user
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'nova';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'nova';
Query OK, 0 rows affected (0.00 sec)

■ Neutron database
MariaDB [(none)]> CREATE DATABASE neutron;
Query OK, 1 row affected (0.00 sec)

■ Neutron user
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'neutron';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron';
Query OK, 0 rows affected (0.00 sec)
■ Cinder database
MariaDB [(none)]> CREATE DATABASE cinder;
Query OK, 1 row affected (0.00 sec)

■ Cinder user
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'cinder';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'cinder';
Query OK, 0 rows affected (0.00 sec)
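All of the CREATE/GRANT statements above follow one pattern, so they can be generated in a loop; a sketch (passwords equal usernames as in these notes, so change both in production):

```shell
# db:user pairs; the nova user owns three databases.
for pair in keystone:keystone glance:glance nova:nova nova_api:nova \
            nova_cell0:nova neutron:neutron cinder:cinder; do
  db=${pair%%:*}; user=${pair##*:}
  mysql -uroot -e "CREATE DATABASE IF NOT EXISTS \`$db\`;
    GRANT ALL PRIVILEGES ON \`$db\`.* TO '$user'@'localhost' IDENTIFIED BY '$user';
    GRANT ALL PRIVILEGES ON \`$db\`.* TO '$user'@'%' IDENTIFIED BY '$user';"
done
```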
V. RabbitMQ Message Broker
1. Install RabbitMQ
[root@openstack ~]# yum install -y rabbitmq-server
2. Enable RabbitMQ at boot and start it

[root@openstack ~]# systemctl enable rabbitmq-server.service
Created symlink from /etc/systemd/system/multi-user.target.wants/rabbitmq-server.service to /usr/lib/systemd/system/rabbitmq-server.service.
[root@openstack ~]# systemctl start rabbitmq-server.service

3. Add the openstack user
[root@openstack ~]# rabbitmqctl add_user openstack openstack
Creating user "openstack"  # user: openstack, password: openstack

4. Grant permissions to the openstack user just created
[root@openstack ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/"
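To confirm the user and its permissions took effect, something like:

```shell
rabbitmqctl list_users                 # openstack should appear alongside guest
rabbitmqctl list_permissions -p /      # openstack should show ".*" ".*" ".*"
```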

5. Enable the web management plugin
[root@openstack ~]# rabbitmq-plugins list
Configured: E = explicitly enabled; e = implicitly enabled
| Status: * = running on rabbit@openstack
|/
[ ] amqp_client 3.6.16
[ ] cowboy 1.0.4
[ ] cowlib 1.0.2
[ ] rabbitmq_amqp1_0 3.6.16
[ ] rabbitmq_auth_backend_ldap 3.6.16
[ ] rabbitmq_auth_mechanism_ssl 3.6.16
[ ] rabbitmq_consistent_hash_exchange 3.6.16
[ ] rabbitmq_event_exchange 3.6.16
[ ] rabbitmq_federation 3.6.16
[ ] rabbitmq_federation_management 3.6.16
[ ] rabbitmq_jms_topic_exchange 3.6.16
[ ] rabbitmq_management 3.6.16
[ ] rabbitmq_management_agent 3.6.16
[ ] rabbitmq_management_visualiser 3.6.16
[ ] rabbitmq_mqtt 3.6.16
[ ] rabbitmq_random_exchange 3.6.16
[ ] rabbitmq_recent_history_exchange 3.6.16
[ ] rabbitmq_sharding 3.6.16
[ ] rabbitmq_shovel 3.6.16
[ ] rabbitmq_shovel_management 3.6.16
[ ] rabbitmq_stomp 3.6.16
[ ] rabbitmq_top 3.6.16
[ ] rabbitmq_tracing 3.6.16
[ ] rabbitmq_trust_store 3.6.16
[ ] rabbitmq_web_dispatch 3.6.16
[ ] rabbitmq_web_mqtt 3.6.16
[ ] rabbitmq_web_mqtt_examples 3.6.16
[ ] rabbitmq_web_stomp 3.6.16
[ ] rabbitmq_web_stomp_examples 3.6.16
[ ] sockjs 0.3.4

[root@openstack ~]# rabbitmq-plugins enable rabbitmq_management
The following plugins have been enabled:
amqp_client
cowlib
cowboy
rabbitmq_web_dispatch
rabbitmq_management_agent
rabbitmq_management
Applying plugin configuration to rabbit@openstack... started 6 plugins.

6. Check the listening port
[root@openstack ~]# netstat -luntp|grep 15672
tcp 0 0 0.0.0.0:15672 0.0.0.0:* LISTEN 19627/beam.smp

7. Log in to the web UI

Default user: guest
Default password: guest
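Since RabbitMQ 3.3 the guest account can only log in from localhost, so when browsing from another machine you may need to give the openstack user the administrator tag; a sketch:

```shell
rabbitmqctl set_user_tags openstack administrator
# Then probe the management API from anywhere on the network:
curl -s -u openstack:openstack http://192.168.36.144:15672/api/overview
```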

VI. Installing and Configuring Keystone
Keystone (the OpenStack Identity Service) handles authentication, service policy, and service tokens within OpenStack; it implements the OpenStack Identity API. Keystone acts as a service bus, or the registry of the whole OpenStack framework: every other service registers its endpoint (the URL at which it is reached) with Keystone, and any call between services must pass Keystone authentication to obtain the target service's endpoint and locate it.
1. Install Keystone
[root@openstack ~]# yum install -y openstack-keystone httpd mod_wsgi memcached python-memcached

2. Enable Memcached at boot and start it
[root@openstack ~]# systemctl enable memcached.service
Created symlink from /etc/systemd/system/multi-user.target.wants/memcached.service to /usr/lib/systemd/system/memcached.service.
[root@openstack ~]# systemctl start memcached.service

3. Configure Memcached to accept remote connections
[root@openstack ~]# ss -luntp|grep memcache
tcp LISTEN 0 128 127.0.0.1:11211 *:* users:(("memcached",pid=21260,fd=26))
[root@openstack ~]# vim /etc/sysconfig/memcached

Change it as follows:

[root@openstack ~]# cat /etc/sysconfig/memcached
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 192.168.36.144,::1"  # change this to the local IP
[root@openstack ~]# systemctl restart memcached.service
[root@openstack ~]# ss -luntp|grep memcache
tcp LISTEN 0 128 192.168.36.144:11211 *:* users:(("memcached",pid=22003,fd=26))

4. Configure the Keystone database
[root@openstack ~]# vim /etc/keystone/keystone.conf
[database]
connection = mysql+pymysql://keystone:keystone@192.168.36.144/keystone
[token]
provider = fernet

5. Sync the database
[root@openstack ~]# su -s /bin/sh -c "keystone-manage db_sync" keystone
Check that the keystone database now contains tables:
[root@openstack ~]# mysql -ukeystone -pkeystone -e "use keystone;show tables;"
+-----------------------------+
| Tables_in_keystone |
+-----------------------------+
| access_token |
| application_credential |
| application_credential_role |
| assignment |
| config_register |
| consumer |
| credential |
| endpoint |
| endpoint_group |
| federated_user |
| federation_protocol |
| group |
| id_mapping |
| identity_provider |
| idp_remote_ids |
| implied_role |
| limit |
| local_user |
| mapping |
| migrate_version |
| nonlocal_user |
| password |
| policy |
| policy_association |
| project |
| project_endpoint |
| project_endpoint_group |
| project_tag |
| region |
| registered_limit |
| request_token |
| revocation_event |
| role |
| sensitive_config |
| service |
| service_provider |
| system_assignment |
| token |
| trust |
| trust_role |
| user |
| user_group_membership |
| user_option |
| whitelisted_config |
+-----------------------------+
6. Initialize the Fernet keys
[root@openstack ~]# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
[root@openstack ~]# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

7. Bootstrap Keystone

In Queens a single port (5000) serves all interfaces; in earlier releases 5000 handled the regular API while 35357 was reserved for administration. Replace ADMIN_PASS here with your own password.

[root@openstack keystone]# keystone-manage bootstrap --bootstrap-password admin \
  --bootstrap-admin-url http://192.168.36.144:35357/v3/ \
  --bootstrap-internal-url http://192.168.36.144:35357/v3/ \
  --bootstrap-public-url http://192.168.36.144:5000/v3/ \
  --bootstrap-region-id RegionOne

Note: I am using the legacy admin port here; you can also use 5000 for everything.

Check in the database that Keystone was bootstrapped successfully:
MariaDB [keystone]> select * from user\G
*************************** 1. row ***************************
id: c4e7064e332c4767b106cdac8686c4da
extra: {}
enabled: 1
default_project_id: NULL
created_at: 2020-02-28 15:37:32
last_active_at: NULL
domain_id: default
1 row in set (0.00 sec)
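The user table above carries no name column in Queens; login names live in the local_user table. To see the bootstrap admin by name, a query along these lines:

```shell
mysql -ukeystone -pkeystone keystone -e \
  "SELECT u.id, lu.name FROM user u JOIN local_user lu ON lu.user_id = u.id;"
```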

MariaDB [keystone]> select * from role\G
*************************** 1. row ***************************
id: 0dafb48f5c1040e79b264d5f4fdcfdef
name: admin
extra: {}
domain_id: <<null>>
1 row in set (0.00 sec)

MariaDB [keystone]> select * from endpoint\G
*************************** 1. row ***************************
id: 30e8f1d7693e4af0ad765051ae06b136
legacy_endpoint_id: NULL
interface: internal
service_id: a555348470a24e07b1f332f8163d410d
url: http://192.168.36.144:35357/v3/
extra: {}
enabled: 1
region_id: RegionOne
*************************** 2. row ***************************
id: 7fb6157e5f42484cab94cc089394db7c
legacy_endpoint_id: NULL
interface: public
service_id: a555348470a24e07b1f332f8163d410d
url: http://192.168.36.144:5000/v3/
extra: {}
enabled: 1
region_id: RegionOne
*************************** 3. row ***************************
id: d2ecafb21b1f42d6ba72db1a73d439b9
legacy_endpoint_id: NULL
interface: admin
service_id: a555348470a24e07b1f332f8163d410d
url: http://192.168.36.144:35357/v3/
extra: {}
enabled: 1
region_id: RegionOne
3 rows in set (0.00 sec)
VII. Configuring the Apache HTTP Server
[root@openstack keystone]# cd /etc/httpd/conf/
[root@openstack conf]# ls -l
total 28
-rw-r--r-- 1 root root 11753 Aug 6 2019 httpd.conf
-rw-r--r-- 1 root root 13077 Aug 8 2019 magic
1. Back up the configuration file
[root@openstack conf]# cp -a httpd.conf httpd.conf.back

2. Configure httpd.conf
[root@openstack conf]# vim httpd.conf

Configure as follows:

ServerName 192.168.36.144:80

3. Create a symlink
[root@openstack conf]# ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
[root@openstack conf]# ls -l /etc/httpd/conf.d/wsgi-keystone.conf
lrwxrwxrwx 1 root root 38 Feb 29 00:23 /etc/httpd/conf.d/wsgi-keystone.conf -> /usr/share/keystone/wsgi-keystone.conf
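Before starting Apache it is worth validating the merged configuration:

```shell
apachectl configtest   # should print "Syntax OK" once ServerName and the symlink are in place
```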

4. Keystone listens on two ports
[root@openstack conf]# cat /etc/httpd/conf.d/wsgi-keystone.conf
Listen 5000
Listen 35357

5. Start the Apache service
[root@openstack conf]# systemctl enable httpd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/httpd.service to /usr/lib/systemd/system/httpd.service.
[root@openstack conf]# systemctl start httpd.service
[root@openstack conf]# netstat -nultp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:25672 0.0.0.0:* LISTEN 19627/beam.smp
tcp 0 0 192.168.36.144:3306 0.0.0.0:* LISTEN 19383/mysqld
tcp 0 0 192.168.36.144:11211 0.0.0.0:* LISTEN 22003/memcached
tcp 0 0 0.0.0.0:4369 0.0.0.0:* LISTEN 1/systemd
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 8910/sshd
tcp 0 0 0.0.0.0:15672 0.0.0.0:* LISTEN 19627/beam.smp
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 9165/master
tcp6 0 0 :::5000 :::* LISTEN 27781/httpd
tcp6 0 0 :::5672 :::* LISTEN 19627/beam.smp
tcp6 0 0 ::1:11211 :::* LISTEN 22003/memcached
tcp6 0 0 :::80 :::* LISTEN 27781/httpd
tcp6 0 0 :::22 :::* LISTEN 8910/sshd
tcp6 0 0 ::1:25 :::* LISTEN 9165/master
tcp6 0 0 :::35357 :::* LISTEN 27781/httpd

VIII. Configuring the admin Account
1. Export the admin credentials
[root@openstack conf]# export OS_USERNAME=admin
[root@openstack conf]# export OS_PASSWORD=admin
[root@openstack conf]# export OS_PROJECT_NAME=admin
[root@openstack conf]# export OS_USER_DOMAIN_NAME=Default
[root@openstack conf]# export OS_PROJECT_DOMAIN_NAME=Default
[root@openstack conf]# export OS_AUTH_URL=http://192.168.36.144:35357/v3
[root@openstack conf]# export OS_IDENTITY_API_VERSION=3

2. Verify
[root@openstack conf]# openstack role list
+----------------------------------+-------+
| ID | Name |
+----------------------------------+-------+
| 0dafb48f5c1040e79b264d5f4fdcfdef | admin |
+----------------------------------+-------+
[root@openstack conf]# openstack user list
+----------------------------------+-------+
| ID | Name |
+----------------------------------+-------+
| c4e7064e332c4767b106cdac8686c4da | admin |
+----------------------------------+-------+
[root@openstack conf]# openstack service list
+----------------------------------+----------+----------+
| ID | Name | Type |
+----------------------------------+----------+----------+
| a555348470a24e07b1f332f8163d410d | keystone | identity |
+----------------------------------+----------+----------+
[root@openstack conf]# openstack endpoint list
+----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------+
| ID | Region | Service Name | Service Type | Enabled | Interface | URL |
+----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------+
| 30e8f1d7693e4af0ad765051ae06b136 | RegionOne | keystone | identity | True | internal | http://192.168.36.144:35357/v3/ |
| 7fb6157e5f42484cab94cc089394db7c | RegionOne | keystone | identity | True | public | http://192.168.36.144:5000/v3/ |
| d2ecafb21b1f42d6ba72db1a73d439b9 | RegionOne | keystone | identity | True | admin | http://192.168.36.144:35357/v3/ |
+----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------+

3. Create projects and the demo user
1. Create the demo project
[root@openstack conf]# openstack project create --domain default --description "Demo Project" demo
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Demo Project |
| domain_id | default |
| enabled | True |
| id | 39aef50f504f4849ac1bcc9d8d01b105 |
| is_domain | False |
| name | demo |
| parent_id | default |
| tags | [] |
+-------------+----------------------------------+

2. Create the demo user
[root@openstack conf]# openstack user create --domain default --password demo demo
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | d2f73c50bfb1494985a5b9a8a551819c |
| name | demo |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+

3. Create a user role
[root@openstack conf]# openstack role create user
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | None |
| id | e1fc4d52b7c24c4a89e59cf2a13a099b |
| name | user |
+-----------+----------------------------------+

4. Add the user role to the demo project and user
[root@openstack conf]# openstack role add --project demo --user demo user

5. Create the Service project
[root@openstack conf]# openstack project create --domain default --description "Service Project" service
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Service Project |
| domain_id | default |
| enabled | True |
| id | 82378e3cba2f4e6789d82894a6580da7 |
| is_domain | False |
| name | service |
| parent_id | default |
| tags | [] |
+-------------+----------------------------------+
Verify:
[root@openstack conf]# openstack project list
+----------------------------------+---------+
| ID | Name |
+----------------------------------+---------+
| 623edcd216e74e6c830227c51d639b42 | admin |
| 82378e3cba2f4e6789d82894a6580da7 | service |
+----------------------------------+---------+

6. Create the glance user
[root@openstack conf]# openstack user create --domain default --password glance glance
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | 40d138688c3c46ab9470b1a0361f4325 |
| name | glance |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
[root@openstack conf]# openstack role add --project service --user glance admin

7. Create the nova user
[root@openstack conf]# openstack user create --domain default --password nova nova
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | 73d853685bb048f8b74087b8d224dc05 |
| name | nova |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
[root@openstack conf]# openstack role add --project service --user nova admin

8. Create the placement user
[root@openstack conf]# openstack user create --domain default --password placement placement
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | 07210a62564f40719e710eb1bfbe0b0f |
| name | placement |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
[root@openstack conf]# openstack role add --project service --user placement admin

9. Create the neutron user
[root@openstack conf]# openstack user create --domain default --password neutron neutron
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | ba608bf9d9214c2ba198fc703be81305 |
| name | neutron |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
[root@openstack conf]# openstack role add --project service --user neutron admin

10. Create the cinder user
[root@openstack conf]# openstack user create --domain default --password cinder cinder
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | 89ab655cd72d4bb495016efdc0721895 |
| name | cinder |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
[root@openstack conf]# openstack role add --project service --user cinder admin
Verify:
[root@openstack conf]# openstack user list
+----------------------------------+-----------+
| ID | Name |
+----------------------------------+-----------+
| 07210a62564f40719e710eb1bfbe0b0f | placement |
| 40d138688c3c46ab9470b1a0361f4325 | glance |
| 73d853685bb048f8b74087b8d224dc05 | nova |
| 89ab655cd72d4bb495016efdc0721895 | cinder |
| ba608bf9d9214c2ba198fc703be81305 | neutron |
| c4e7064e332c4767b106cdac8686c4da | admin |
| d2f73c50bfb1494985a5b9a8a551819c | demo |
+----------------------------------+-----------+
[root@openstack conf]# openstack project list
+----------------------------------+---------+
| ID | Name |
+----------------------------------+---------+
| 39aef50f504f4849ac1bcc9d8d01b105 | demo |
| 623edcd216e74e6c830227c51d639b42 | admin |
| 82378e3cba2f4e6789d82894a6580da7 | service |
+----------------------------------+---------+
[root@openstack conf]# openstack role list
+----------------------------------+-------+
| ID | Name |
+----------------------------------+-------+
| 0dafb48f5c1040e79b264d5f4fdcfdef | admin |
| e1fc4d52b7c24c4a89e59cf2a13a099b | user |
+----------------------------------+-------+
IX. Verifying Keystone
1. Unset the temporary OS_AUTH_URL and OS_PASSWORD environment variables:
[root@openstack conf]# unset OS_AUTH_URL OS_PASSWORD

2. As the admin user, request an auth token
[root@openstack conf]# openstack --os-auth-url http://192.168.36.144:35357/v3 \
  --os-project-domain-name default --os-user-domain-name default \
  --os-project-name admin --os-username admin token issue
Password: (enter the user's password here: admin)
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires | 2020-02-29T10:04:24+0000 |
| id | gAAAAABeWikYocUTIqa-V1dxacqJEhhya0BYbtk7RSDTH9N0l-ui6df70rcqBznx5LQp4VYzPOpxS0VLt7aowNzzy4G5J09Ofj0T3g1Lhlsv-tXDB04kRZOSYLohg2u36KXMhp5ZktXy5zGdjPftvEptUmqjYiNTc6f25ZGVFRiHSRcYBdu8Auo |
| project_id | 623edcd216e74e6c830227c51d639b42 |
| user_id | c4e7064e332c4767b106cdac8686c4da |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

3. As the demo user, request an auth token
[root@openstack conf]# openstack --os-auth-url http://192.168.36.144:5000/v3 \
  --os-project-domain-name default --os-user-domain-name default \
  --os-project-name demo --os-username demo token issue
Password:
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires | 2020-02-29T10:09:01+0000 |
| id | gAAAAABeWiotUfbj-VpABm85F407jMbS_2ZDilEVrtHdxkHP-EpOXPJQQfDp1jSVAhdAc8K_OBHkZ6i2eiuTMS-Bn4vG9TJa25TUt63A14y8c44KoH8v_HszlEQAN10CwYT1XNElUw7KPq75mvGpc3dgXOWuRI4SxGZoWUWqadzHwDQubYTZeSA |
| project_id | 39aef50f504f4849ac1bcc9d8d01b105 |
| user_id | d2f73c50bfb1494985a5b9a8a551819c |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
X. Creating Environment Variable Scripts
1. The admin user
[root@openstack conf]# vim /root/admin-openstack.sh
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_AUTH_URL=http://192.168.36.144:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
Verify:
[root@openstack ~]# source admin-openstack.sh
[root@openstack ~]# openstack token issue
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires | 2020-02-29T10:29:02+0000 |
| id | gAAAAABeWi7eVlXtqIEhklTETx4ETE2x2NwLDhrS1XkKAaHLM6n95DcOIon0x8kj0rH1leFprK2-ZhxUHwfCwWVT5idtWsK96ZiG5uk_pU-7ybIgS0Dsx4jkQXCHqunu2mxqspT10_hO-wswBGcHd_aODalaCGFOueTQCw6iMZk_MoY9qoC_PYE |
| project_id | 623edcd216e74e6c830227c51d639b42 |
| user_id | c4e7064e332c4767b106cdac8686c4da |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

2. The demo user
[root@openstack conf]# vim /root/demo-openstack.sh
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=demo
export OS_AUTH_URL=http://192.168.36.144:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
Verify:
[root@openstack ~]# source demo-openstack.sh
[root@openstack ~]# openstack token issue
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires | 2020-02-29T10:31:01+0000 |
| id | gAAAAABeWi9VDzrny73OkkarCY4m4jOS0_M4gy8Pk2JxVMumzn_zrwun4m5IoF5B9jKhzNTjOwWSlQ3nOULX3T87UqmcV_oXT3Wno1eczM5QqVAjdmjuzBUVIS5ZpOWTB8ZNFMvfv0G6BlhLpPOueCNq7BOUcthGJbJ_gMUVq3m8f1kBIx1VB3c |
| project_id | 39aef50f504f4849ac1bcc9d8d01b105 |
| user_id | d2f73c50bfb1494985a5b9a8a551819c |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
XI. Configuring Glance
Glance is the OpenStack image service: it discovers, registers, and retrieves virtual machine images, and is itself one of the OpenStack components (projects). The image service exposes a REST API. Glance images can be stored on the local filesystem or on OpenStack object storage; by default they are local files under /var/lib/glance/images/.
1. Install Glance
[root@openstack ~]# yum install -y openstack-glance

2. Create the Glance service entity
[root@openstack ~]# openstack service create --name glance \
  --description "OpenStack Image" image
You are not authorized to perform the requested action: identity:create_service. (HTTP 403) (Request-ID: req-f93de763-1fef-469c-85d5-f62c2d2a65ae)
If you hit the error above, the fix is to source the admin credentials first:
[root@openstack ~]# source admin-openstack.sh
[root@openstack ~]# openstack service create --name glance --description "OpenStack Image" image
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Image |
| enabled | True |
| id | 9c1bc45bce5947839db7572a04cb3536 |
| name | glance |
| type | image |
+-------------+----------------------------------+

3. Create the image service API endpoints:
[root@openstack ~]# openstack endpoint create --region RegionOne \
  image public http://192.168.36.144:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 81ef73ec95974f3193e25d9fbdbe3072 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 9c1bc45bce5947839db7572a04cb3536 |
| service_name | glance |
| service_type | image |
| url | http://192.168.36.144:9292 |
+--------------+----------------------------------+
[root@openstack ~]# openstack endpoint create --region RegionOne \
  image internal http://192.168.36.144:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 10a29fecf67145268e3ce0b7c22e82c7 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 9c1bc45bce5947839db7572a04cb3536 |
| service_name | glance |
| service_type | image |
| url | http://192.168.36.144:9292 |
+--------------+----------------------------------+
[root@openstack ~]# openstack endpoint create --region RegionOne \
  image admin http://192.168.36.144:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | e68824d060944cd1b56b25e48a2d4d8d |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 9c1bc45bce5947839db7572a04cb3536 |
| service_name | glance |
| service_type | image |
| url | http://192.168.36.144:9292 |
+--------------+----------------------------------+
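The three endpoints can be double-checked with a filtered list:

```shell
openstack endpoint list --service image
# Expect three rows (public, internal, admin), all pointing at port 9292.
```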

4. Configure the Glance database in glance-api.conf and glance-registry.conf
Edit /etc/glance/glance-api.conf
[root@openstack ~]# vim /etc/glance/glance-api.conf
[database]
connection = mysql+pymysql://glance:glance@192.168.36.144/glance
Configure Glance to talk to Keystone:
[keystone_authtoken]
auth_uri = http://192.168.36.144:5000
auth_url = http://192.168.36.144:35357
memcached_servers = 192.168.36.144:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = glance

[paste_deploy]
flavor = keystone
Edit /etc/glance/glance-registry.conf
[root@openstack ~]# vim /etc/glance/glance-registry.conf
[database]
connection = mysql+pymysql://glance:glance@192.168.36.144/glance
Configure Glance to talk to Keystone:
[keystone_authtoken]
auth_uri = http://192.168.36.144:5000
auth_url = http://192.168.36.144:35357
memcached_servers = 192.168.36.144:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = glance

[paste_deploy]
flavor = keystone

5. Configure the Glance image store
[root@linux-node1 ~]# vim /etc/glance/glance-api.conf
[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/

6. Sync the database
[root@openstack ~]# su -s /bin/sh -c "glance-manage db_sync" glance
Check the Glance database:
[root@openstack ~]# mysql -uglance -pglance -e "use glance;show tables;"
+----------------------------------+
| Tables_in_glance |
+----------------------------------+
| alembic_version |
| image_locations |
| image_members |
| image_properties |
| image_tags |
| images |
| metadef_namespace_resource_types |
| metadef_namespaces |
| metadef_objects |
| metadef_properties |
| metadef_resource_types |
| metadef_tags |
| migrate_version |
| task_info |
| tasks |
+----------------------------------+

7. Enable Glance at boot and start it
[root@openstack ~]# systemctl enable openstack-glance-api.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-glance-api.service to /usr/lib/systemd/system/openstack-glance-api.service.
[root@openstack ~]# systemctl enable openstack-glance-registry.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-glance-registry.service to /usr/lib/systemd/system/openstack-glance-registry.service.
[root@openstack ~]# systemctl start openstack-glance-api.service
[root@openstack ~]# systemctl start openstack-glance-registry.service
[root@openstack ~]# ss -nlt
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 *:9191 *:*
LISTEN 0 128 *:25672 *:*
LISTEN 0 128 192.168.36.144:3306 *:*
LISTEN 0 128 192.168.36.144:11211 *:*
LISTEN 0 128 *:9292 *:*

8. Test Glance status
[root@openstack ~]# source admin-openstack.sh
[root@openstack ~]# openstack image list
Note: success if it returns without an error (the list is empty for now).

9. Glance images. When first standing up an OpenStack platform, before you have built any images of your own, you can test with a small experimental Linux image.
[root@openstack ~]# cd /usr/local/src
[root@openstack src]# wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img
[root@openstack ~]# source admin-openstack.sh
[root@openstack src]# openstack image create "cirros" --disk-format qcow2 \
  --container-format bare --file cirros-0.3.5-x86_64-disk.img --public
+------------------+------------------------------------------------------+
| Field | Value |
+------------------+------------------------------------------------------+
| checksum | f8ab98ff5e73ebab884d80c9dc9c7290 |
| container_format | bare |
| created_at | 2020-02-29T13:29:15Z |
| disk_format | qcow2 |
| file | /v2/images/23f9a497-35dc-4d00-8c40-59bc5f440728/file |
| id | 23f9a497-35dc-4d00-8c40-59bc5f440728 |
| min_disk | 0 |
| min_ram | 0 |
| name | cirros |
| owner | 623edcd216e74e6c830227c51d639b42 |
| protected | False |
| schema | /v2/schemas/image |
| size | 13267968 |
| status | active |
| tags | |
| updated_at | 2020-02-29T13:29:16Z |
| virtual_size | None |
| visibility | public |
+------------------+------------------------------------------------------+

Note: the cirros-0.3.5-x86_64-disk.img image is in the src directory, so no absolute path was given. If the image is elsewhere, use the absolute path, e.g. /usr/local/src/cirros-0.3.5-x86_64-disk.img
[root@openstack src]# openstack image list
+--------------------------------------+--------+--------+
| ID | Name | Status |
+--------------------------------------+--------+--------+
| 23f9a497-35dc-4d00-8c40-59bc5f440728 | cirros | active |
+--------------------------------------+--------+--------+

10. Glance service registration
For other services to be able to use Glance, it must be registered in Keystone. Remember to source the admin environment variables first.
[root@openstack ~]# openstack service create --name glance --description "OpenStack Image service" image
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Image service |
| enabled | True |
| id | d67bb2e3c2bc48dd9c2c9fb0953935c8 |
| name | glance |
| type | image |
+-------------+----------------------------------+
[root@openstack ~]# openstack endpoint create --region RegionOne image public http://192.168.36.144:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 83020da1cb58436aaf15c137d92470e1 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | d67bb2e3c2bc48dd9c2c9fb0953935c8 |
| service_name | glance |
| service_type | image |
| url | http://192.168.36.144:9292 |
+--------------+----------------------------------+
[root@openstack ~]# openstack endpoint create --region RegionOne image internal http://192.168.36.144:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 42a38babdf344b06b38fb481bd6866cb |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | d67bb2e3c2bc48dd9c2c9fb0953935c8 |
| service_name | glance |
| service_type | image |
| url | http://192.168.36.144:9292 |
+--------------+----------------------------------+
[root@openstack ~]# openstack endpoint create --region RegionOne image admin http://192.168.36.144:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 9b34915161274824b233c95cd405071b |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | d67bb2e3c2bc48dd9c2c9fb0953935c8 |
| service_name | glance |
| service_type | image |
| url | http://192.168.36.144:9292 |
+--------------+----------------------------------+
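The three endpoint-create calls above differ only in the interface name (public, internal, admin), so they can be generated in a loop. A dry-run sketch that only prints the equivalent commands instead of contacting the cloud:

```shell
# Dry-run sketch: print the three Glance endpoint registrations rather than executing them.
# Drop the "echo" to actually run them against a live Keystone.
cmds=$(for iface in public internal admin; do
    echo "openstack endpoint create --region RegionOne image $iface http://192.168.36.144:9292"
done)
echo "$cmds"
```

All three interfaces point at the same URL here because this lab runs everything on one node; in production the internal and admin URLs often differ.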

XII. Nova Service Configuration
Nova is one of the two earliest OpenStack modules (the other being object storage). It is the project that virtualizes compute resources, and it also has more components than any other OpenStack project. In small and medium deployments, every component except nova-compute is usually installed on one server, called the control node, while nova-compute runs on a separate server, called the compute node.
1) Control node installation and configuration
1. Install the control node packages
[root@openstack ~]# yum install -y openstack-nova-api openstack-nova-placement-api \
  openstack-nova-conductor openstack-nova-console \
  openstack-nova-novncproxy openstack-nova-scheduler

2. Nova database configuration
[root@openstack ~]# vim /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
[api_database]
connection=mysql+pymysql://nova:nova@192.168.36.144/nova_api
[database]
connection=mysql+pymysql://nova:nova@192.168.36.144/nova
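Both connection lines follow one pattern: mysql+pymysql://USER:PASSWORD@HOST/DBNAME. A small sketch that composes such a URL from variables (the values mirror the nova_api entry above):

```shell
# Compose a SQLAlchemy connection URL of the form used throughout these config files
DB_USER=nova
DB_PASS=nova
DB_HOST=192.168.36.144
DB_NAME=nova_api
conn="mysql+pymysql://${DB_USER}:${DB_PASS}@${DB_HOST}/${DB_NAME}"
echo "$conn"    # prints mysql+pymysql://nova:nova@192.168.36.144/nova_api
```

The same pattern recurs for the glance and neutron databases later; only the user, password, and database name change.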

3. RabbitMQ configuration
[root@openstack ~]# vim /etc/nova/nova.conf
[DEFAULT]
transport_url=rabbit://openstack:openstack@192.168.36.144

4. Keystone-related configuration
[root@openstack ~]# vim /etc/nova/nova.conf
[api]
auth_strategy=keystone
[keystone_authtoken]
auth_uri = http://192.168.36.144:5000
auth_url = http://192.168.36.144:35357
memcached_servers = 192.168.36.144:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova

5. Disable Nova's built-in firewall (Neutron handles this instead)
[root@openstack ~]# vim /etc/nova/nova.conf
[DEFAULT]
use_neutron=true
firewall_driver=nova.virt.firewall.NoopFirewallDriver

6. VNC configuration
[root@openstack ~]# vim /etc/nova/nova.conf
[vnc]
enabled=true
server_listen=0.0.0.0
server_proxyclient_address=192.168.36.144

7. Glance settings
[root@openstack ~]# vim /etc/nova/nova.conf
[glance]
api_servers= http://192.168.36.144:9292

8. In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
lock_path=/var/lib/nova/tmp

9. Placement settings
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://192.168.36.144:35357/v3
username = placement
password = placement

10. Edit 00-nova-placement-api.conf
[root@openstack ~]# vim /etc/httpd/conf.d/00-nova-placement-api.conf

Append at the end of the file:

<Directory /usr/bin>
   <IfVersion >= 2.4>
      Require all granted
   </IfVersion>
   <IfVersion < 2.4>
      Order allow,deny
      Allow from all
   </IfVersion>
</Directory>

Then restart httpd so the change takes effect:
[root@openstack ~]# systemctl restart httpd
Note: if this step is skipped, instances created later will show ERROR in the Status column:
[root@openstack ~]# openstack server list
+--------------------------------------+---------------+--------+-------------------------+--------+---------+
| ID                                   | Name          | Status | Networks                | Image  | Flavor  |
+--------------------------------------+---------------+--------+-------------------------+--------+---------+
| 885c6d02-e560-4515-889a-4d3c1b544f7f | demo-instance | ERROR  | provider=192.168.36.107 | cirros | m1.nano |
+--------------------------------------+---------------+--------+-------------------------+--------+---------+

11. Sync the nova_api database
[root@openstack ~]# su -s /bin/sh -c "nova-manage api_db sync" nova
■ Register the cell0 database:
[root@openstack ~]# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
■ Create the cell1 cell:
[root@openstack ~]# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
71613854-7549-4a32-ab99-79e2a1ef2d69

12. Sync the nova database
[root@openstack ~]# su -s /bin/sh -c "nova-manage db sync" nova

13. Verify that cell0 and cell1 are registered correctly
[root@openstack ~]# nova-manage cell_v2 list_cells
+-------+--------------------------------------+----------------------------------------+-----------------------------------------------------+
| Name  | UUID                                 | Transport URL                          | Database Connection                                 |
+-------+--------------------------------------+----------------------------------------+-----------------------------------------------------+
| cell0 | 00000000-0000-0000-0000-000000000000 | none:///                               | mysql+pymysql://nova:****@192.168.36.144/nova_cell0 |
| cell1 | 71613854-7549-4a32-ab99-79e2a1ef2d69 | rabbit://openstack:****@192.168.36.144 | mysql+pymysql://nova:****@192.168.36.144/nova       |
+-------+--------------------------------------+----------------------------------------+-----------------------------------------------------+
Check:
[root@openstack ~]# mysql -unova -pnova -e " use nova;show tables;"
+--------------------------------------------+
| Tables_in_nova |
+--------------------------------------------+
| agent_builds |
| aggregate_hosts |
| aggregate_metadata |
... (output truncated)
[root@openstack ~]# mysql -unova -pnova -e " use nova_api;show tables;"
+------------------------------+
| Tables_in_nova_api |
+------------------------------+
| aggregate_hosts |
| aggregate_metadata |
| aggregates |
| allocations |
| build_requests |
| cell_mappings |
| consumers |
... (output truncated)

14. Start the Nova services
[root@openstack ~]# systemctl enable openstack-nova-api.service \
  openstack-nova-consoleauth.service \
  openstack-nova-scheduler.service \
  openstack-nova-conductor.service \
  openstack-nova-novncproxy.service

Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-api.service to /usr/lib/systemd/system/openstack-nova-api.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-consoleauth.service to /usr/lib/systemd/system/openstack-nova-consoleauth.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-scheduler.service to /usr/lib/systemd/system/openstack-nova-scheduler.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-conductor.service to /usr/lib/systemd/system/openstack-nova-conductor.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-novncproxy.service to /usr/lib/systemd/system/openstack-nova-novncproxy.service.
[root@openstack ~]# systemctl start openstack-nova-api.service \
  openstack-nova-consoleauth.service \
  openstack-nova-scheduler.service \
  openstack-nova-conductor.service \
  openstack-nova-novncproxy.service

15. Nova service registration
[root@openstack ~]# openstack service create --name nova --description "OpenStack Compute" compute
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Compute |
| enabled | True |
| id | e0e0d32eeb91448bad93e850cdc3914f |
| name | nova |
| type | compute |
+-------------+----------------------------------+
[root@openstack ~]# openstack endpoint create --region RegionOne compute public http://192.168.36.144:8774/v2.1
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 10e392d78c3042d1ad018a918e01a4ab |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | e0e0d32eeb91448bad93e850cdc3914f |
| service_name | nova |
| service_type | compute |
| url | http://192.168.36.144:8774/v2.1 |
+--------------+----------------------------------+
[root@openstack ~]# openstack endpoint create --region RegionOne compute internal http://192.168.36.144:8774/v2.1
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | dfa8f33322d04fa7aa966494931fb2e7 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | e0e0d32eeb91448bad93e850cdc3914f |
| service_name | nova |
| service_type | compute |
| url | http://192.168.36.144:8774/v2.1 |
+--------------+----------------------------------+
[root@openstack ~]# openstack endpoint create --region RegionOne compute admin http://192.168.36.144:8774/v2.1
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | fef6776b22ab4d3ba81e72ffd2b43c41 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | e0e0d32eeb91448bad93e850cdc3914f |
| service_name | nova |
| service_type | compute |
| url | http://192.168.36.144:8774/v2.1 |
+--------------+----------------------------------+
[root@openstack ~]# openstack service create --name placement --description "Placement API" placement
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Placement API |
| enabled | True |
| id | 35d9ec113b6f44b8b6645b45f6016e2e |
| name | placement |
| type | placement |
+-------------+----------------------------------+
[root@openstack ~]# openstack endpoint create --region RegionOne placement public http://192.168.36.144:8778
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | ef019c7b32d346b29d48c60afc4e93bd |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 35d9ec113b6f44b8b6645b45f6016e2e |
| service_name | placement |
| service_type | placement |
| url | http://192.168.36.144:8778 |
+--------------+----------------------------------+
[root@openstack ~]# openstack endpoint create --region RegionOne placement internal http://192.168.36.144:8778
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 24201ba9fec94994aa43af100bb04926 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 35d9ec113b6f44b8b6645b45f6016e2e |
| service_name | placement |
| service_type | placement |
| url | http://192.168.36.144:8778 |
+--------------+----------------------------------+
[root@openstack ~]# openstack endpoint create --region RegionOne placement admin http://192.168.36.144:8778
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | d1fa99a05c3d4c5f8e2eac6d0aa22e26 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 35d9ec113b6f44b8b6645b45f6016e2e |
| service_name | placement |
| service_type | placement |
| url | http://192.168.36.144:8778 |
+--------------+----------------------------------+
[root@openstack ~]# openstack host list
+-----------+-------------+----------+
| Host Name | Service | Zone |
+-----------+-------------+----------+
| openstack | consoleauth | internal |
| openstack | scheduler | internal |
| openstack | conductor | internal |
+-----------+-------------+----------+

[root@openstack ~]# openstack service list
+----------------------------------+-----------+-----------+
| ID | Name | Type |
+----------------------------------+-----------+-----------+
| 35d9ec113b6f44b8b6645b45f6016e2e | placement | placement |
| a555348470a24e07b1f332f8163d410d | keystone | identity |
| d67bb2e3c2bc48dd9c2c9fb0953935c8 | glance | image |
| e0e0d32eeb91448bad93e850cdc3914f | nova | compute |
+----------------------------------+-----------+-----------+
Note: a mistakenly registered service can be removed with:
Command: openstack service list         # view the service list
Command: openstack service delete <ID>  # delete the unwanted service

2) Nova compute node installation and configuration

1. Install the EPEL repository
[root@opens ~]# rpm -ivh http://mirrors.aliyun.com/epel/epel-release-latest-7.noarch.rpm
[root@opens ~]# yum clean all
[root@opens ~]# yum makecache

2. Install the Nova compute packages
[root@opens ~]# yum install -y openstack-nova-compute sysfsutils

3. Copy nova.conf from the control node to the compute node
[root@openstack ~]# scp /etc/nova/nova.conf 192.168.36.147:/etc/nova/nova.conf

4. Delete the database configuration the compute node does not need (remove these lines):
[api_database]

connection=mysql+pymysql://nova:nova@192.168.36.144/nova_api

[database]

connection=mysql+pymysql://nova:nova@192.168.36.144/nova
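Compute nodes talk to the database only through nova-conductor, which is why these lines go. Rather than deleting them by hand, they can be commented out with sed; a sketch demonstrated on a scratch copy (the real file is /etc/nova/nova.conf on the compute node):

```shell
# Reproduce the edit on a scratch copy of the relevant nova.conf lines
cat > /tmp/nova.conf.demo <<'EOF'
[api_database]
connection=mysql+pymysql://nova:nova@192.168.36.144/nova_api
[database]
connection=mysql+pymysql://nova:nova@192.168.36.144/nova
EOF
# Comment out every line that starts with "connection="
sed -i 's/^connection=/#connection=/' /tmp/nova.conf.demo
grep -c '^#connection=' /tmp/nova.conf.demo    # prints 2
```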

5. Adjust the VNC configuration. The compute node must listen on all IPs, and novncproxy must point at the control node:
[vnc]
enabled=true
server_listen = 0.0.0.0
server_proxyclient_address=192.168.36.147
novncproxy_base_url=http://192.168.36.144:6080/vnc_auto.html

6. Check whether the compute node supports hardware acceleration for virtual machines
[root@opens ~]# egrep -c '(vmx|svm)' /proc/cpuinfo
2
A result of one or more means the CPU exposes hardware virtualization (vmx/svm). If it returns 0, or the node is itself a virtual machine without nested KVM (as in this lab), tell Nova to fall back to software emulation in nova.conf:
[root@opens ~]# vim /etc/nova/nova.conf
[libvirt]
virt_type=qemu
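The decision above can be scripted. A sketch with a hypothetical helper pick_virt_type that reads a cpuinfo-style file and chooses kvm when the vmx/svm flags are present and qemu otherwise, demonstrated on sample files rather than the live /proc/cpuinfo:

```shell
# Hypothetical helper: choose Nova's virt_type from CPU flags
pick_virt_type() {
    if grep -Eq '(vmx|svm)' "$1"; then
        echo kvm      # hardware acceleration available
    else
        echo qemu     # fall back to software emulation
    fi
}

# Deterministic demo using sample flag lines instead of /proc/cpuinfo
printf 'flags : fpu vme de pse vmx\n' > /tmp/cpuinfo.hw
printf 'flags : fpu de pse\n'         > /tmp/cpuinfo.nohw
pick_virt_type /tmp/cpuinfo.hw      # prints kvm
pick_virt_type /tmp/cpuinfo.nohw    # prints qemu
```

On a real compute node you would pass /proc/cpuinfo and write the result into the [libvirt] section.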

7. Start nova-compute
[root@opens ~]# systemctl enable libvirtd.service openstack-nova-compute.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-compute.service to /usr/lib/systemd/system/openstack-nova-compute.service.
[root@opens ~]# systemctl start libvirtd.service openstack-nova-compute.service
The following error appears because the nova user cannot read the config file:
oslo_config.cfg.ConfigFilesPermissionDeniedError: Failed to open some config files: /etc/no
Fix: grant the nova group access to the file
[root@opens ~]# ls -l /etc/nova/nova.conf
-rw-r----- 1 root root 372925 Mar 2 19:15 /etc/nova/nova.conf
[root@opens ~]# chown root.nova /etc/nova/nova.conf
[root@opens ~]# ls -l /etc/nova/nova.conf
-rw-r----- 1 root nova 372925 Mar 2 19:15 /etc/nova/nova.conf
Start again:
[root@opens ~]# systemctl start libvirtd.service openstack-nova-compute.service

8. Verify the compute node
[root@opens ~]# openstack host list
-bash: openstack: command not found
[root@opens ~]# yum install -y python2-openstackclient
[root@opens ~]# openstack host list
Missing value auth-url required for auth plugin password
The error above occurs because no credentials are loaded.
Fix: copy the admin credential file admin-openstack.sh from the control node:
[root@openstack ~]# scp admin-openstack.sh 192.168.36.147:/root/admin-openstack.sh
[root@opens ~]# source admin-openstack.sh
[root@opens ~]# openstack host list
+-----------+-------------+----------+
| Host Name | Service | Zone |
+-----------+-------------+----------+
| openstack | consoleauth | internal |
| openstack | scheduler | internal |
| openstack | conductor | internal |
| opens | compute | nova |
+-----------+-------------+----------+

The same can be checked from the control node:
[root@openstack ~]# openstack service list
+----------------------------------+-----------+-----------+
| ID | Name | Type |
+----------------------------------+-----------+-----------+
| 35d9ec113b6f44b8b6645b45f6016e2e | placement | placement |
| a555348470a24e07b1f332f8163d410d | keystone | identity |
| d67bb2e3c2bc48dd9c2c9fb0953935c8 | glance | image |
| e0e0d32eeb91448bad93e850cdc3914f | nova | compute |
+----------------------------------+-----------+-----------+

9. Map the compute node into the cell database
[root@openstack ~]# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting computes from cell 'cell1': 71613854-7549-4a32-ab99-79e2a1ef2d69
Checking host mapping for compute host 'opens': ad9b14b6-3d90-4c31-aad0-8d287edefd58
Creating host mapping for compute host 'opens': ad9b14b6-3d90-4c31-aad0-8d287edefd58
Found 1 unmapped computes in cell: 71613854-7549-4a32-ab99-79e2a1ef2d69
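discover_hosts must be re-run every time a new compute node is added. Alternatively, Nova's scheduler can do this periodically; a config fragment for the control node's nova.conf (300 seconds is an example interval, tune it to taste):

```ini
[scheduler]
# Automatically discover new compute hosts every 300 seconds,
# so nova-manage cell_v2 discover_hosts need not be run by hand
discover_hosts_in_cells_interval = 300
```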

XIII. Neutron Installation and Configuration
OpenStack Networking (Neutron) is one of the core OpenStack projects. It evolved from the early nova-network component into a project of its own and provides virtual networking for the cloud environment. nova-network was implemented simply, using the Linux kernel's bridge directly; with so few layers of abstraction it was simple and stable, but it supported only one plugin (the Linux bridge) and only a few topologies (flat and VLAN).
Networking option 1: provider (public) networks
1. Install Neutron
[root@openstack ~]# yum install -y openstack-neutron openstack-neutron-ml2 \
  openstack-neutron-linuxbridge ebtables

2. Configure the Neutron service file /etc/neutron/neutron.conf
[root@openstack ~]# vim /etc/neutron/neutron.conf
Database configuration:
[database]
connection = mysql+pymysql://neutron:neutron@192.168.36.144:3306/neutron

3. Keystone connection configuration
[DEFAULT]
auth_strategy = keystone
[keystone_authtoken]
auth_uri = http://192.168.36.144:5000
auth_url = http://192.168.36.144:35357
memcached_servers = 192.168.36.144:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron

4. RabbitMQ settings
[DEFAULT]
transport_url = rabbit://openstack:openstack@192.168.36.144

5. Neutron core network configuration
[DEFAULT]
core_plugin = ml2
service_plugins =

6. Configure Neutron to notify Nova of network topology changes
[DEFAULT]
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
[nova]
auth_url = http://192.168.36.144:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova

7. In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

8. Neutron ML2 configuration
[root@openstack ~]# vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,vlan,gre,vxlan,geneve                 #multiple drivers may be enabled, so enable them all
tenant_network_types = flat,vlan,gre,vxlan,geneve         #multiple network types may be enabled as well
mechanism_drivers = linuxbridge,openvswitch,l2population  #mechanism drivers; the open-source ones are linuxbridge and openvswitch
extension_drivers = port_security,qos                     #enable the port security extension driver
[ml2_type_flat]
flat_networks = provider                                  #the flat provider network
[securitygroup]
enable_ipset = True                                       #enable ipset

9. Neutron Linuxbridge configuration
[root@openstack ~]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:eth0
[vxlan]
enable_vxlan = False                                      #disable vxlan networking
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
enable_security_group = True

10. Neutron DHCP agent configuration
[root@openstack ~]# vim /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True

11. Neutron metadata agent configuration
[root@openstack ~]# vim /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_host = 192.168.36.144
metadata_proxy_shared_secret = unixhot.com

12. Neutron settings in nova.conf (configure the compute service to use the network service)
[root@openstack ~]# vim /etc/nova/nova.conf
[neutron]
url = http://192.168.36.144:9696
auth_url = http://192.168.36.144:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
service_metadata_proxy = True
metadata_proxy_shared_secret = unixhot.com

13. The network service initialization scripts expect a symlink /etc/neutron/plugin.ini pointing to the ML2 plugin configuration file /etc/neutron/plugins/ml2/ml2_conf.ini
[root@openstack ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

14. Sync the database:
[root@openstack ~]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
Check:
[root@openstack ~]# mysql -uneutron -pneutron -e "use neutron;show tables;"
+-----------------------------------------+
| Tables_in_neutron |
+-----------------------------------------+
| address_scopes |
| agents |
| alembic_version |
| allowedaddresspairs |
... (output truncated)

15. Restart the Compute API service
[root@openstack ~]# systemctl restart openstack-nova-api.service

16. Start the network services and enable them at boot.
[root@openstack ~]# systemctl enable neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-server.service to /usr/lib/systemd/system/neutron-server.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-linuxbridge-agent.service to /usr/lib/systemd/system/neutron-linuxbridge-agent.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-dhcp-agent.service to /usr/lib/systemd/system/neutron-dhcp-agent.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-metadata-agent.service to /usr/lib/systemd/system/neutron-metadata-agent.service.
[root@openstack ~]# systemctl start neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service

17. Neutron service registration
[root@openstack ~]# openstack service create --name neutron --description "OpenStack Networking" network
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Networking |
| enabled | True |
| id | b9b3881574a24c90be02daf155d5aeaf |
| name | neutron |
| type | network |
+-------------+----------------------------------+
[root@openstack ~]# openstack endpoint create --region RegionOne network public http://192.168.36.144:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 2a48ade9c3e74a0e934f25d4005f73c7 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | b9b3881574a24c90be02daf155d5aeaf |
| service_name | neutron |
| service_type | network |
| url | http://192.168.36.144:9696 |
+--------------+----------------------------------+
[root@openstack ~]# openstack endpoint create --region RegionOne network internal http://192.168.36.144:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 52799f4eea0d4839acfa950ba7a007ee |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | b9b3881574a24c90be02daf155d5aeaf |
| service_name | neutron |
| service_type | network |
| url | http://192.168.36.144:9696 |
+--------------+----------------------------------+
[root@openstack ~]# openstack endpoint create --region RegionOne network admin http://192.168.36.144:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 18eb2c21109e4151aafc621e1e428971 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | b9b3881574a24c90be02daf155d5aeaf |
| service_name | neutron |
| service_type | network |
| url | http://192.168.36.144:9696 |
+--------------+----------------------------------+
Check:
[root@openstack ~]# openstack network agent list
+--------------------------------------+--------------------+-----------+-------------------+-------+-------+---------------------------+
| ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
+--------------------------------------+--------------------+-----------+-------------------+-------+-------+---------------------------+
| 77da6d1e-8313-423d-9479-f6cf8bb973ea | Linux bridge agent | openstack | None | :-) | UP | neutron-linuxbridge-agent |
| 8cbc197b-c2a1-4282-b04a-c44863379091 | Metadata agent | openstack | None | :-) | UP | neutron-metadata-agent |
| baae1bab-895d-4c44-ac84-b05ad920ab08 | DHCP agent | openstack | nova | :-) | UP | neutron-dhcp-agent |
+--------------------------------------+--------------------+-----------+-------------------+-------+-------+---------------------------+

If a service was registered incorrectly, remove the endpoint with:
[root@openstack ~]# openstack endpoint list
[root@openstack ~]# openstack endpoint delete <ID>
XIV. Neutron Compute Node Deployment
1. Install packages
[root@opens ~]# yum install -y openstack-neutron openstack-neutron-linuxbridge ebtables ipset

2. Keystone connection configuration
[DEFAULT]
auth_strategy = keystone
[keystone_authtoken]
auth_uri = http://192.168.36.144:5000
auth_url = http://192.168.36.144:35357
memcached_servers = 192.168.36.144:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron

3. RabbitMQ settings
[DEFAULT]
transport_url = rabbit://openstack:openstack@192.168.36.144  #note: this must go under [DEFAULT]; the file contains several transport_url entries
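Because neutron.conf carries transport_url keys under more than one section, it is worth confirming the edit landed under [DEFAULT]. A sketch on a scratch copy (the real file is /etc/neutron/neutron.conf):

```shell
# Print the section heading that precedes each transport_url line
cat > /tmp/neutron.conf.demo <<'EOF'
[DEFAULT]
transport_url = rabbit://openstack:openstack@192.168.36.144
[oslo_messaging_rabbit]
EOF
awk '/^\[/{s=$0} /^transport_url/{print s}' /tmp/neutron.conf.demo    # prints [DEFAULT]
```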

4. Lock path
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

5. LinuxBridge configuration (copy it from the control node)
[root@openstack ~]# scp /etc/neutron/plugins/ml2/linuxbridge_agent.ini 192.168.36.147:/etc/neutron/plugins/ml2/linuxbridge_agent.ini

6. Configure the compute node's nova.conf
[neutron]
url = http://192.168.36.144:9696
auth_url = http://192.168.36.144:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron

7. Restart the compute service
[root@opens ~]# systemctl restart openstack-nova-compute.service

8. Start the linuxbridge agent on the compute node
[root@opens ~]# systemctl enable neutron-linuxbridge-agent.service
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-linuxbridge-agent.service to /usr/lib/systemd/system/neutron-linuxbridge-agent.service.
[root@opens ~]# systemctl start neutron-linuxbridge-agent.service

9. Check the compute node
[root@opens ~]# source admin-openstack.sh
[root@opens ~]# openstack network agent list
+--------------------------------------+--------------------+-----------+-------------------+-------+-------+---------------------------+
| ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
+--------------------------------------+--------------------+-----------+-------------------+-------+-------+---------------------------+
| 253d95f1-a6a1-467e-a5f6-2ac1d271d43f | Linux bridge agent | opens | None | :-) | UP | neutron-linuxbridge-agent |
| 77da6d1e-8313-423d-9479-f6cf8bb973ea | Linux bridge agent | openstack | None | :-) | UP | neutron-linuxbridge-agent |
| 8cbc197b-c2a1-4282-b04a-c44863379091 | Metadata agent | openstack | None | :-) | UP | neutron-metadata-agent |
| baae1bab-895d-4c44-ac84-b05ad920ab08 | DHCP agent | openstack | nova | :-) | UP | neutron-dhcp-agent |
+--------------------------------------+--------------------+-----------+-------------------+-------+-------+---------------------------+

Note: if no errors have occurred up to this point, you are ready to create instances.

XV. Creating a Private Cloud Instance
The components installed and configured so far are everything needed to launch a virtual machine:

MySQL: data storage for every service
RabbitMQ: the message bus connecting the services
Keystone: authentication and service registration between services
Glance: image management for virtual machines
Nova: compute resources for virtual machines
Neutron: network resources for virtual machines

1. Create a network
[root@openstack ~]# source admin-openstack.sh
[root@openstack ~]# openstack network create --share --external \
  --provider-physical-network provider \
  --provider-network-type flat provider
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | UP |
| availability_zone_hints | |
| availability_zones | |
| created_at | 2020-03-04T13:22:24Z |
| description | |
| dns_domain | None |
| id | c3ec6542-fd72-412e-8eb9-863d1d3ecec4 |
| ipv4_address_scope | None |
| ipv6_address_scope | None |
| is_default | None |
| is_vlan_transparent | None |
| mtu | 1500 |
| name | provider |
| port_security_enabled | True |
| project_id | 623edcd216e74e6c830227c51d639b42 |
| provider:network_type | flat |
| provider:physical_network | provider |
| provider:segmentation_id | None |
| qos_policy_id | None |
| revision_number | 4 |
| router:external | External |
| segments | None |
| shared | True |
| status | ACTIVE |
| subnets | |
| tags | |
| updated_at | 2020-03-04T13:22:24Z |
+---------------------------+--------------------------------------+

2. Create a subnet on the network
[root@openstack ~]# openstack subnet create --network provider \
  --allocation-pool start=192.168.36.130,end=192.168.36.200 \
  --dns-nameserver 114.114.114.114 \
  --dns-nameserver 8.8.8.8 \
  --dns-nameserver 192.168.36.2 \
  --gateway 192.168.36.2 \
  --subnet-range 192.168.36.0/24 provider-subnet
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| allocation_pools | 192.168.36.100-192.168.36.200 |
| cidr | 192.168.36.0/24 |
| created_at | 2020-03-04T13:42:52Z |
| description | |
| dns_nameservers | 114.114.114.114 |
| enable_dhcp | True |
| gateway_ip | 192.168.36.2 |
| host_routes | |
| id | 90cfcd48-df4b-4ad5-a126-7c89b6d5a91b |
| ip_version | 4 |
| ipv6_address_mode | None |
| ipv6_ra_mode | None |
| name | provider-subnet |
| network_id | c3ec6542-fd72-412e-8eb9-863d1d3ecec4 |
| prefix_length | None |
| project_id | 623edcd216e74e6c830227c51d639b42 |
| revision_number | 0 |
| segment_id | None |
| service_types | |
| subnetpool_id | None |
| tags | |
| updated_at | 2020-03-04T13:42:52Z |
+-------------------+--------------------------------------+
[root@openstack ~]# ifconfig
brqc3ec6542-fd: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.36.144 netmask 255.255.255.0 broadcast 192.168.36.255
Note: this is the bridge interface created by OpenStack.

3. Create a flavor (instance type)
[root@openstack ~]# openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano
    +----------------------------+---------+
    | Field | Value |
    +----------------------------+---------+
    | OS-FLV-DISABLED:disabled | False |
    | OS-FLV-EXT-DATA:ephemeral | 0 |
    | disk | 1 |
    | id | 0 |
    | name | m1.nano |
    | os-flavor-access:is_public | True |
    | properties | |
    | ram | 64 |
    | rxtx_factor | 1.0 |
    | swap | |
    | vcpus | 1 |
    +----------------------------+---------+

4. Create a key pair
[root@openstack ~]# source demo-openstack.sh
[root@openstack ~]# ssh-keygen -q -N ""
Enter file in which to save the key (/root/.ssh/id_rsa): #press Enter
[root@openstack ~]# openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey
+-------------+-------------------------------------------------+
| Field | Value |
+-------------+-------------------------------------------------+
| fingerprint | d9:2b:36:a9:66:72:79:ce:c2:3a:27:c3:fd:4d:0d:c2 |
| name | mykey |
| user_id | d2f73c50bfb1494985a5b9a8a551819c |
+-------------+-------------------------------------------------+
[root@openstack ~]# openstack keypair list
+-------+-------------------------------------------------+
| Name | Fingerprint |
+-------+-------------------------------------------------+
| mykey | d9:2b:36:a9:66:72:79:ce:c2:3a:27:c3:fd:4d:0d:c2 |
+-------+-------------------------------------------------+

5. Add security group rules
[root@openstack ~]# openstack security group rule create --proto icmp default
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| created_at | 2020-03-04T14:08:28Z |
| description | |
| direction | ingress |
| ether_type | IPv4 |
| id | 72710515-81a1-4cd3-bc91-3996dcd20320 |
| name | None |
| port_range_max | None |
| port_range_min | None |
| project_id | 39aef50f504f4849ac1bcc9d8d01b105 |
| protocol | icmp |
| remote_group_id | None |
| remote_ip_prefix | 0.0.0.0/0 |
| revision_number | 0 |
| security_group_id | d3c94eb3-bba8-4d3b-8a23-c0bc5fdbf1fd |
| updated_at | 2020-03-04T14:08:28Z |
+-------------------+--------------------------------------+
[root@openstack ~]# openstack security group rule create --proto tcp --dst-port 22 default
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| created_at | 2020-03-04T14:08:59Z |
| description | |
| direction | ingress |
| ether_type | IPv4 |
| id | 5473830d-6b86-4965-8692-b2339bf94d5a |
| name | None |
| port_range_max | 22 |
| port_range_min | 22 |
| project_id | 39aef50f504f4849ac1bcc9d8d01b105 |
| protocol | tcp |
| remote_group_id | None |
| remote_ip_prefix | 0.0.0.0/0 |
| revision_number | 0 |
| security_group_id | d3c94eb3-bba8-4d3b-8a23-c0bc5fdbf1fd |
| updated_at | 2020-03-04T14:08:59Z |
+-------------------+--------------------------------------+

6. Launch an instance. First confirm the available flavors:
[root@openstack ~]# source demo-openstack.sh
[root@openstack ~]# openstack flavor list
+----+---------+-----+------+-----------+-------+-----------+
| ID | Name | RAM | Disk | Ephemeral | VCPUs | Is Public |
+----+---------+-----+------+-----------+-------+-----------+
| 0 | m1.nano | 64 | 1 | 0 | 1 | True |
+----+---------+-----+------+-----------+-------+-----------+

7. List the available images
[root@openstack ~]# openstack image list
+--------------------------------------+--------+--------+
| ID | Name | Status |
+--------------------------------------+--------+--------+
| 23f9a497-35dc-4d00-8c40-59bc5f440728 | cirros | active |
+--------------------------------------+--------+--------+

8. List the available networks
[root@openstack ~]# openstack network list
+--------------------------------------+----------+--------------------------------------+
| ID | Name | Subnets |
+--------------------------------------+----------+--------------------------------------+
| c3ec6542-fd72-412e-8eb9-863d1d3ecec4 | provider | 90cfcd48-df4b-4ad5-a126-7c89b6d5a91b |
+--------------------------------------+----------+--------------------------------------+

9. List the available security groups
[root@openstack ~]# openstack security group list
+--------------------------------------+---------+------------------------+----------------------------------+
| ID | Name | Description | Project |
+--------------------------------------+---------+------------------------+----------------------------------+
| d3c94eb3-bba8-4d3b-8a23-c0bc5fdbf1fd | default | Default security group | 39aef50f504f4849ac1bcc9d8d01b105 |
+--------------------------------------+---------+------------------------+----------------------------------+

10. Create the instance
[root@openstack ~]# openstack server create --flavor m1.nano --image cirros \
  --nic net-id=c3ec6542-fd72-412e-8eb9-863d1d3ecec4 --security-group default \
  --key-name mykey demo-instance
+-----------------------------+-----------------------------------------------+
| Field | Value |
+-----------------------------+-----------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | |
| OS-EXT-STS:power_state | NOSTATE |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | None |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | |
| adminPass | M7LnhrB6qwMp |
| config_drive | |
| created | 2020-03-04T14:15:31Z |
| flavor | m1.nano (0) |
| hostId | |
| id | a90c4288-a12f-492b-bb97-4f94672a3c3a |
| image | cirros (23f9a497-35dc-4d00-8c40-59bc5f440728) |
| key_name | mykey |
| name | demo-instance |
| progress | 0 |
| project_id | 39aef50f504f4849ac1bcc9d8d01b105 |
| properties | |
| security_groups | name='d3c94eb3-bba8-4d3b-8a23-c0bc5fdbf1fd' |
| status | BUILD |
| updated | 2020-03-04T14:15:32Z |
| user_id | d2f73c50bfb1494985a5b9a8a551819c |
| volumes_attached | |
+-----------------------------+-----------------------------------------------+

Note: when specifying the network you must use its ID, not its name.

11. Check the instance and get its console URL
[root@openstack ~]# openstack server list
+--------------------------------------+---------------+--------+-------------------------+--------+---------+
| ID | Name | Status | Networks | Image | Flavor |
+--------------------------------------+---------------+--------+-------------------------+--------+---------+
| 885c6d02-e560-4515-889a-4d3c1b544f7f | demo-instance | ACTIVE | provider=192.168.36.107 | cirros | m1.nano |
+--------------------------------------+---------------+--------+-------------------------+--------+---------+
[root@openstack ~]# openstack console url show demo-instance
+-------+-------------------------------------------------------------------------------------+
| Field | Value |
+-------+-------------------------------------------------------------------------------------+
| type | novnc |
| url | http://192.168.36.147:6080/vnc_auto.html?token=94ca0df8-0b09-4e31-a74f-8166184f742f |
+-------+-------------------------------------------------------------------------------------+

16. Installing and configuring Horizon
Horizon is OpenStack's dashboard: a Django-based web application that gives users and administrators a browser interface to the other OpenStack services.
1. Install Horizon
[root@opens ~]# yum install -y openstack-dashboard
Because Keystone and Horizon both run under the httpd service, we install Horizon on the compute node to avoid unnecessary conflicts.

2. Configure Horizon
[root@opens ~]# vim /etc/openstack-dashboard/local_settings

# Allow all hosts to access the dashboard
ALLOWED_HOSTS = ['*', ]

# Keystone host
OPENSTACK_HOST = "192.168.36.144"

# API versions
OPENSTACK_API_VERSIONS = {
    "data-processing": 1.1,
    "identity": 3,
    "image": 2,
    "volume": 2,
    "compute": 2,
}

# Enable multi-domain support
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

# Default domain
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = 'Default'

# Keystone endpoint
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

# Default role for users created through the dashboard
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

# Store sessions in Memcached
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '192.168.36.144:11211',
    }
}

# Allow setting instance passwords from the web UI
OPENSTACK_HYPERVISOR_FEATURES = {
    'can_set_mount_point': True,
    'can_set_password': True,
    'requires_keypair': True,
    'enable_quotas': True
}

# Time zone
TIME_ZONE = "Asia/Shanghai"

# Disable the advanced self-service networking features
OPENSTACK_NEUTRON_NETWORK = {
    ...
    'enable_router': False,
    'enable_quotas': False,
    'enable_distributed_router': False,
    'enable_ha_router': False,
    'enable_lb': False,
    'enable_firewall': False,
    'enable_vpn': False,
    'enable_fip_topology_check': False,
}
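Since local_settings is plain Python, a quick syntax check before restarting httpd catches typos early. A small sketch, demonstrated on a throwaway file; on the real node, point py_compile at /etc/openstack-dashboard/local_settings instead:

```shell
#!/bin/bash
# Write a miniature settings file and byte-compile it; py_compile fails
# loudly on any Python syntax error, which is all we want to check here.
cat > /tmp/local_settings_demo.py <<'EOF'
ALLOWED_HOSTS = ['*', ]
OPENSTACK_HOST = "192.168.36.144"
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
EOF
python3 -m py_compile /tmp/local_settings_demo.py && echo "syntax OK"
```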

3. Restart httpd and enable it at boot
[root@opens ~]# systemctl restart httpd.service
[root@opens ~]# systemctl enable httpd.service

4. Open the dashboard in a browser
http://192.168.36.147/dashboard/

Note: if the page is not reachable, check /etc/httpd/conf.d/openstack-dashboard.conf
[root@opens ~]# vim /etc/httpd/conf.d/openstack-dashboard.conf

The configuration should contain:

WSGIDaemonProcess dashboard
WSGIProcessGroup dashboard
WSGISocketPrefix run/wsgi
# Add the following line:
WSGIApplicationGroup %{GLOBAL}

WSGIScriptAlias /dashboard /usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi
Alias /dashboard/static /usr/share/openstack-dashboard/static

<Directory /usr/share/openstack-dashboard/openstack_dashboard/wsgi>
  Options All
  AllowOverride All
  Require all granted
</Directory>

<Directory /usr/share/openstack-dashboard/static>
  Options All
  AllowOverride All
  Require all granted
</Directory>

17. Creating a cloud host from the web UI
(The original steps here were dashboard screenshots. The workflow mirrors the CLI steps above: choose a flavor, image, network, security group and key pair, then launch.)

18. Installing openstack-nova-compute on the control node (a deeper look at OpenStack)
1. Install openstack-nova-compute
[root@openstack ~]# yum install -y openstack-nova-compute sysfsutils

2. Edit /etc/nova/nova.conf
Adjust the VNC settings: the compute service must listen on all IPs, and novncproxy_base_url must be an address clients can reach
[vnc]
enabled=true
server_listen = 0.0.0.0
server_proxyclient_address=192.168.36.144
novncproxy_base_url=http://192.168.36.144:6080/vnc_auto.html

Check whether the compute node supports hardware acceleration for virtual machines:
[root@openstack ~]# egrep -c '(vmx|svm)' /proc/cpuinfo
2

A non-zero count means the CPU exposes hardware-virtualization flags; a 0 means KVM cannot be used. Our hosts are themselves virtual machines, so virt_type is set to qemu here in nova.conf:
[root@openstack ~]# vim /etc/nova/nova.conf
[libvirt]
virt_type = qemu

Because this host doubles as the control node, keep the database settings in nova.conf; on dedicated compute nodes the [database] configuration must be removed or commented out.
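The acceleration check above can be folded into a small helper that prints the value to put under [libvirt]. A sketch (it only echoes the setting, it does not edit nova.conf):

```shell
#!/bin/bash
# Count CPU virtualization flags; 0 means no hardware acceleration,
# so Nova must fall back to plain QEMU emulation.
count=$(grep -Ec '(vmx|svm)' /proc/cpuinfo 2>/dev/null)
count=${count:-0}
if [ "$count" -ge 1 ]; then
    echo "virt_type = kvm"   # hardware acceleration available
else
    echo "virt_type = qemu"  # full emulation fallback
fi
```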

3. Start openstack-nova-compute
[root@openstack ~]# systemctl enable libvirtd.service openstack-nova-compute.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-compute.service to /usr/lib/systemd/system/openstack-nova-compute.service.
[root@openstack ~]# systemctl start libvirtd.service openstack-nova-compute.service

4. Installing openstack-nova-compute on another host
Configure the EPEL repo
[root@yjs ~]# rpm -ivh http://mirrors.aliyun.com/epel/epel-release-latest-7.noarch.rpm

Install the OpenStack repo
[root@yjs ~]# yum install -y centos-release-openstack-queens

Install openstack-nova-compute
[root@yjs ~]# yum install -y openstack-nova-compute sysfsutils python2-openstackclient

(python2-openstackclient provides the openstack command-line client)

Configure nova.conf (copy it from the control node)
[root@openstack ~]# scp -rp /etc/nova/nova.conf 192.168.36.148:/etc/nova/nova.conf
[root@yjs ~]# vim /etc/nova/nova.conf

Comment out (or delete) the database connections:

#connection=mysql+pymysql://nova:nova@192.168.36.144/nova
#connection=mysql+pymysql://nova:nova@192.168.36.144/nova_api

Configure the [vnc] section
[vnc]
enabled=true
server_listen=0.0.0.0
server_proxyclient_address=192.168.36.148
novncproxy_base_url=http://192.168.36.148:6080/vnc_auto.html

[libvirt]
virt_type=qemu

Start the services and enable them at boot
[root@yjs ~]# systemctl enable libvirtd.service openstack-nova-compute.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-compute.service to /usr/lib/systemd/system/openstack-nova-compute.service.
[root@yjs ~]# systemctl start libvirtd.service openstack-nova-compute.service

Check virtualization support
[root@yjs ~]# egrep -c '(vmx|svm)' /proc/cpuinfo
1

A non-zero count means the CPU exposes virtualization flags.

Verify the new compute node registered successfully
[root@openstack ~]# nova service-list
+--------------------------------------+------------------+-----------+----------+---------+-------+----------------------------+-----------------+-------------+
| Id | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason | Forced down |
+--------------------------------------+------------------+-----------+----------+---------+-------+----------------------------+-----------------+-------------+
| b4e7b7d8-f751-45bc-bb1b-edbb2efc4c81 | nova-consoleauth | openstack | internal | enabled | up | 2020-03-08T05:05:02.000000 | - | False |
| 4f882461-12cd-480d-888f-d7b2dc3eceb7 | nova-scheduler | openstack | internal | enabled | up | 2020-03-08T05:05:00.000000 | - | False |
| a7bfbc12-a1e3-4a9a-af22-a60b48d49164 | nova-conductor | openstack | internal | enabled | up | 2020-03-08T05:05:00.000000 | - | False |
| 841e2299-a355-4d5a-80cc-7735dbdbf36a | nova-compute | opens | nova | enabled | up | 2020-03-08T05:05:07.000000 | - | False |
| b9ef142f-95b6-49e6-965b-4fbf41e0a194 | nova-compute | openstack | nova | enabled | up | 2020-03-08T05:04:58.000000 | - | False |
| ca034203-f5e2-41e1-b353-a21ec4eda7f1 | nova-compute | yjs | nova | enabled | up | 2020-03-08T05:04:59.000000 | - | False |
+--------------------------------------+------------------+-----------+----------+---------+-------+----------------------------+-----------------+-------------+

19. Building a CentOS image
Official guide: https://docs.openstack.org/image-guide/centos-image.html
1. [root@openstack /]# mkdir -p /iso/cdrom/
Download a CentOS ISO and upload it to /iso/cdrom/:
[root@openstack /]# ll /iso/cdrom/
total 4481024
-rw-r--r-- 1 root root 4588568576 Mar 8 15:51 CentOS-7-x86_64-DVD-1810.iso

2. Create the image
[root@openstack /]# yum install virt-install libvirt -y
[root@openstack ~]# systemctl enable libvirtd
[root@openstack ~]# systemctl start libvirtd

Create the disk:
[root@kvm ~]# qemu-img create -f qcow2 /tmp/centos.qcow2 10G
Formatting '/tmp/centos.qcow2', fmt=qcow2 size=10737418240 encryption=off cluster_size=65536 lazy_refcounts=off
Create the installer VM:
[root@kvm ~]# virt-install --virt-type kvm --name centos --ram 1024 \
  --disk /tmp/centos.qcow2,format=qcow2 \
  --network network=default \
  --graphics vnc,listen=0.0.0.0 --noautoconsole \
  --os-type=linux --os-variant=centos7.0 \
  --cdrom=/iso/cdrom/CentOS-7-x86_64-DVD-1810.iso

Install the OS through VNC
Connect to the VM with a VNC client and run through the installer; it is a standard CentOS installation, so the details are not repeated here.
10. Start the image
[root@kvm ~]# virsh start centos
[root@kvm ~]# virsh list --all
 Id    Name      State
----------------------------
 3     centos    running

11. Configure the image: install base packages and set up the yum repos
yum install net-tools vim gcc gcc-c++ bash-completion wget -y
Disable SELinux in /etc/selinux/config:
SELINUX=disabled

12. Add an init script (note: this script has problems and is not used for now)
vim /root/init.sh

The script:

#!/bin/bash
if [ ! -d /root/.ssh ]; then
    mkdir -p /root/.ssh
    chmod 700 /root/.ssh
fi
# Fetch public key using HTTP
ATTEMPTS=30
FAILED=0
while [ ! -f /root/.ssh/authorized_keys ]; do
    curl -f http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key \
        > /tmp/metadata-key 2>/dev/null
    if [ $? -eq 0 ]; then
        cat /tmp/metadata-key >> /root/.ssh/authorized_keys
        chmod 0600 /root/.ssh/authorized_keys
        restorecon /root/.ssh/authorized_keys
        rm -f /tmp/metadata-key
        echo "Successfully retrieved public key from instance metadata"
        echo ""
        echo "AUTHORIZED KEYS"
        echo ""
        cat /root/.ssh/authorized_keys
        echo "*****************"
    fi
done

13. Pull the script into the guest image (the script was created on the host)
scp 192.168.36.149:/root/init.sh /root/init.sh

14. Make it executable and run it at boot
chmod +x /root/init.sh
mv /root/init.sh /opt
echo "/bin/bash /opt/init.sh" >> /etc/rc.local
chmod +x /etc/rc.d/rc.local
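One concrete problem with the script: ATTEMPTS and FAILED are declared but never used, so if the metadata service never answers, the while loop spins forever at boot. A bounded variant as a sketch; fetch_key here is a hypothetical stand-in for the curl call against 169.254.169.254:

```shell
#!/bin/bash
# Bounded retry: stop after $ATTEMPTS failed fetches instead of looping forever.
ATTEMPTS=30
FAILED=0
KEYFILE=/tmp/demo_authorized_keys

fetch_key() {
    # Stand-in for: curl -f http://169.254.169.254/.../openssh-key
    # Always fails here so the retry limit is exercised.
    return 1
}

rm -f "$KEYFILE"
while [ ! -f "$KEYFILE" ] && [ "$FAILED" -lt "$ATTEMPTS" ]; do
    if fetch_key; then
        touch "$KEYFILE"    # the real script appends the key and fixes perms
    else
        FAILED=$((FAILED + 1))
    fi
done
echo "stopped after $FAILED failed attempts"
```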
15. Upload the image to Glance
[root@openstack ~]# source admin-openstack.sh
[root@openstack ~]# openstack image create "CentOS-7-x86_64" \
  --file /tmp/centos.qcow2 \
  --disk-format qcow2 --container-format bare \
  --public
+------------------+------------------------------------------------------+
| Field | Value |
+------------------+------------------------------------------------------+
| checksum | af760d1e4eb38372ce0e01623d04e639 |
| container_format | bare |
| created_at | 2020-03-09T15:01:37Z |
| disk_format | qcow2 |
| file | /v2/images/29b0e29c-b561-4dd0-bb3b-7b78d64a2fc9/file |
| id | 29b0e29c-b561-4dd0-bb3b-7b78d64a2fc9 |
| min_disk | 0 |
| min_ram | 0 |
| name | CentOS-7-x86_64 |
| owner | 623edcd216e74e6c830227c51d639b42 |
| protected | False |
| schema | /v2/schemas/image |
| size | 1516175360 |
| status | active |
| tags | |
| updated_at | 2020-03-09T15:02:05Z |
| virtual_size | None |
| visibility | public |
+------------------+------------------------------------------------------+

16. Launch a cloud host from the new image
http://192.168.36.147/dashboard

Log in as the demo user and create the instance (the launch workflow was covered above).
Once it is running, open its console from the dashboard.

20. Installing and configuring Cinder (control node)
Cinder is OpenStack's block-storage component: it provides persistent volumes for virtual machines. Starting with the Folsom release, Cinder replaced the old nova-volume service as the block-storage service of the OpenStack platform. Cinder supports a wide range of open-source and commercial storage systems; its configuration file lists drivers for Nexenta, GlusterFS, NFS, Ceph, iSCSI and more.
cinder-api
Accepts API requests and routes them to cinder-volume for execution.
cinder-volume
Interacts directly with the block-storage backends and with processes such as cinder-scheduler, either directly or through the message queue. It responds to read/write requests sent to the block-storage service to maintain state, and talks to a variety of storage providers through its driver architecture.
cinder-scheduler daemon
Picks the optimal storage node on which to create a volume, similar in role to nova-scheduler.
cinder-backup daemon
Backs up volumes of any type to a backup storage provider; like cinder-volume, it talks to multiple storage providers through drivers.
Message queue
Routes information between the block-storage processes.
1. Install Cinder
[root@openstack ~]# yum install -y openstack-cinder

2. Edit /etc/cinder/cinder.conf: configure the database
[database]
connection = mysql+pymysql://cinder:cinder@192.168.36.144/cinder

3. Configure RabbitMQ message-queue access
[DEFAULT]
transport_url = rabbit://openstack:openstack@192.168.36.144

4. In the [DEFAULT] and [keystone_authtoken] sections, configure Keystone authentication
[DEFAULT]
auth_strategy = keystone

[keystone_authtoken]
auth_uri = http://192.168.36.144:5000
auth_url = http://192.168.36.144:35357
memcached_servers = 192.168.36.144:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = cinder

5. In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

6. Initialize the block-storage database
[root@openstack ~]# su -s /bin/sh -c "cinder-manage db sync" cinder
Option "logdir" from group "DEFAULT" is deprecated. Use option "log-dir" from group "DEFAULT".

Verify the tables were created:
MariaDB [(none)]> use cinder;
Database changed
MariaDB [cinder]> show tables;
+----------------------------+
| Tables_in_cinder |
+----------------------------+
| attachment_specs |
| backup_metadata |
| backups |
| cgsnapshots |
| clusters |
| consistencygroups |
| driver_initiator_data |
| encryption |

7. Configure Nova to use the block-storage service; note this must be added to nova.conf on all nodes:
[cinder]
os_region_name = RegionOne

8. Restart the nova-api service
[root@openstack ~]# systemctl restart openstack-nova-api.service

9. Start the Cinder services and enable them at boot.
[root@openstack ~]# systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-cinder-api.service to /usr/lib/systemd/system/openstack-cinder-api.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-cinder-scheduler.service to /usr/lib/systemd/system/openstack-cinder-scheduler.service.
[root@openstack ~]# systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service

10. Register the Cinder service and endpoints
[root@openstack ~]# openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Block Storage |
| enabled | True |
| id | 3c3fa49d348d4921aaca0eaaa4487d4b |
| name | cinderv2 |
| type | volumev2 |
+-------------+----------------------------------+

[root@openstack ~]# openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Block Storage |
| enabled | True |
| id | 03473a4d0e454f8f9c4859fc1f372653 |
| name | cinderv3 |
| type | volumev3 |
+-------------+----------------------------------+

[root@openstack ~]# openstack endpoint create --region RegionOne \
  volumev2 public http://192.168.36.144:8776/v2/%\(project_id\)s
+--------------+----------------------------------------------+
| Field | Value |
+--------------+----------------------------------------------+
| enabled | True |
| id | e361215ea4384441912fa0109862c8c8 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 3c3fa49d348d4921aaca0eaaa4487d4b |
| service_name | cinderv2 |
| service_type | volumev2 |
| url | http://192.168.36.144:8776/v2/%(project_id)s |
+--------------+----------------------------------------------+

[root@openstack ~]# openstack endpoint create --region RegionOne \
  volumev2 internal http://192.168.36.144:8776/v2/%\(project_id\)s
+--------------+----------------------------------------------+
| Field | Value |
+--------------+----------------------------------------------+
| enabled | True |
| id | d8030c8532ea4c92b77bc67757ec403c |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 3c3fa49d348d4921aaca0eaaa4487d4b |
| service_name | cinderv2 |
| service_type | volumev2 |
| url | http://192.168.36.144:8776/v2/%(project_id)s |
+--------------+----------------------------------------------+

[root@openstack ~]# openstack endpoint create --region RegionOne \
  volumev2 admin http://192.168.36.144:8776/v2/%\(project_id\)s
+--------------+----------------------------------------------+
| Field | Value |
+--------------+----------------------------------------------+
| enabled | True |
| id | 4b96610262f843b591ff8b06270a5824 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 3c3fa49d348d4921aaca0eaaa4487d4b |
| service_name | cinderv2 |
| service_type | volumev2 |
| url | http://192.168.36.144:8776/v2/%(project_id)s |
+--------------+----------------------------------------------+

[root@openstack ~]# openstack endpoint create --region RegionOne \
  volumev3 public http://192.168.36.144:8776/v3/%\(project_id\)s
+--------------+----------------------------------------------+
| Field | Value |
+--------------+----------------------------------------------+
| enabled | True |
| id | 6d89ce6177384ea599e7e05b26e470a0 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 03473a4d0e454f8f9c4859fc1f372653 |
| service_name | cinderv3 |
| service_type | volumev3 |
| url | http://192.168.36.144:8776/v3/%(project_id)s |
+--------------+----------------------------------------------+

[root@openstack ~]# openstack endpoint create --region RegionOne \
  volumev3 internal http://192.168.36.144:8776/v3/%\(project_id\)s
+--------------+----------------------------------------------+
| Field | Value |
+--------------+----------------------------------------------+
| enabled | True |
| id | 57d992eee42f4e76a31d55d0430ab800 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 03473a4d0e454f8f9c4859fc1f372653 |
| service_name | cinderv3 |
| service_type | volumev3 |
| url | http://192.168.36.144:8776/v3/%(project_id)s |
+--------------+----------------------------------------------+

[root@openstack ~]# openstack endpoint create --region RegionOne \
  volumev3 admin http://192.168.36.144:8776/v3/%\(project_id\)s
+--------------+----------------------------------------------+
| Field | Value |
+--------------+----------------------------------------------+
| enabled | True |
| id | 5e0f62ef9cae4da796fe6711f04a0cdb |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 03473a4d0e454f8f9c4859fc1f372653 |
| service_name | cinderv3 |
| service_type | volumev3 |
| url | http://192.168.36.144:8776/v3/%(project_id)s |
+--------------+----------------------------------------------+
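The six endpoint-create calls differ only in the API version and the interface, so they can be generated in a loop. A sketch that just prints the commands; drop the echo (with admin credentials loaded) to run them for real:

```shell
#!/bin/bash
# Emit one endpoint-create command per (service, interface) pair.
gen_endpoint_cmds() {
    local svc ver iface
    for svc in volumev2 volumev3; do
        ver=${svc#volume}                      # v2 or v3
        for iface in public internal admin; do
            echo "openstack endpoint create --region RegionOne" \
                 "$svc $iface http://192.168.36.144:8776/$ver/%(project_id)s"
        done
    done
}
gen_endpoint_cmds
```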

[root@openstack ~]# openstack service list
+----------------------------------+-----------+-----------+
| ID | Name | Type |
+----------------------------------+-----------+-----------+
| 03473a4d0e454f8f9c4859fc1f372653 | cinderv3 | volumev3 |
| 35d9ec113b6f44b8b6645b45f6016e2e | placement | placement |
| 3c3fa49d348d4921aaca0eaaa4487d4b | cinderv2 | volumev2 |
| a555348470a24e07b1f332f8163d410d | keystone | identity |
| b9b3881574a24c90be02daf155d5aeaf | neutron | network |
| d67bb2e3c2bc48dd9c2c9fb0953935c8 | glance | image |
| e0e0d32eeb91448bad93e850cdc3914f | nova | compute |
+----------------------------------+-----------+-----------+

21. Installing and configuring Cinder (storage node)
1. Install the LVM packages
[root@opens ~]# yum install -y lvm2 device-mapper-persistent-data

2. Start the LVM metadata service and enable it at boot
[root@opens ~]# systemctl enable lvm2-lvmetad.service
[root@opens ~]# systemctl start lvm2-lvmetad.service

3. Create an LVM physical volume on /dev/sdb (first add a second disk to the virtual machine)

[root@opens ~]# fdisk -l
Disk /dev/sdb: 32.2 GB, 32212254720 bytes, 62914560 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

[root@opens ~]# pvcreate /dev/sdb
Physical volume "/dev/sdb" successfully created.

4. Create a volume group named cinder-volumes
[root@opens ~]# vgcreate cinder-volumes /dev/sdb
Volume group "cinder-volumes" successfully created

5. In the devices section, add a filter that accepts /dev/sda (the system disk) and /dev/sdb and rejects every other device
[root@opens ~]# vim /etc/lvm/lvm.conf
devices {
    filter = [ "a/sda/", "a/sdb/", "r/.*/" ]
}
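How that filter behaves: LVM scans the patterns left to right and the first matching accept ("a") or reject ("r") entry wins; a device matching no entry is accepted. A hypothetical re-implementation of the matching logic, for illustration only (not LVM's actual code):

```shell
#!/bin/bash
# First-match-wins evaluation of an LVM-style device filter.
filter_accepts() {
    local dev="$1" rule action pattern
    for rule in "a:sda" "a:sdb" "r:.*"; do
        action=${rule%%:*}
        pattern=${rule#*:}
        if echo "$dev" | grep -Eq "$pattern"; then
            [ "$action" = "a" ]   # exit status: 0 = accepted, 1 = rejected
            return
        fi
    done
    return 0   # no pattern matched: LVM accepts the device
}

filter_accepts /dev/sdb && echo "/dev/sdb accepted"
filter_accepts /dev/vdb || echo "/dev/vdb rejected by the catch-all rule"
```

This is why the catch-all "r/.*/" must come last: putting it first would reject every device, including /dev/sdb.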

6. Install and configure Cinder
[root@opens ~]# yum install openstack-cinder targetcli python-keystone -y

Sync the configuration from the control node: most settings on the storage node are identical, so copy the control node's cinder.conf and make small changes on top of it.
[root@openstack ~]# scp -rp /etc/cinder/cinder.conf 192.168.36.147:/etc/cinder/cinder.conf
Configure the database
[database]
connection = mysql+pymysql://cinder:cinder@192.168.36.144/cinder

Configure RabbitMQ message-queue access
[DEFAULT]
transport_url = rabbit://openstack:openstack@192.168.36.144

In the [DEFAULT] and [keystone_authtoken] sections, configure Keystone authentication
[DEFAULT]
auth_strategy = keystone

[keystone_authtoken]
auth_uri = http://192.168.36.144:5000
auth_url = http://192.168.36.144:35357
memcached_servers = 192.168.36.144:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = cinder

In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

Configure the Cinder backend: add an [lvm] section using the LVM driver.

The section does not exist yet, so append it at the end of the file:

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm
volume_backend_name = iSCSI-Storage

In the [DEFAULT] section, enable the LVM backend:
[DEFAULT]
enabled_backends = lvm

In the [DEFAULT] section, set the image service API location
[DEFAULT]
glance_api_servers = http://192.168.36.144:9292

Start the volume service and its dependencies and enable them at boot
[root@opens ~]# systemctl enable openstack-cinder-volume.service target.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-cinder-volume.service to /usr/lib/systemd/system/openstack-cinder-volume.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/target.service to /usr/lib/systemd/system/target.service.
[root@opens ~]# systemctl start openstack-cinder-volume.service target.service

[root@openstack ~]# source admin-openstack.sh
[root@openstack ~]# cinder service-list
+------------------+-----------+------+---------+-------+----------------------------+-----------------+
| Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+------------------+-----------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | openstack | nova | enabled | up | 2020-03-11T07:43:36.000000 | - |
| cinder-volume | opens@lvm | nova | enabled | down | 2020-03-11T07:31:54.000000 | - |
+------------------+-----------+------+---------+-------+----------------------------+-----------------+

The cinder-volume service shows down; troubleshoot as follows:
[root@opens ~]# vgs
WARNING: Device for PV sZedj5-VAcz-Gak4-9Z6c-hMtc-8wuA-2fKdIu not found or rejected by a filter.
WARNING: Device for PV sZedj5-VAcz-Gak4-9Z6c-hMtc-8wuA-2fKdIu not found or rejected by a filter.
Couldn't find device with uuid sZedj5-VAcz-Gak4-9Z6c-hMtc-8wuA-2fKdIu.
VG #PV #LV #SN Attr VSize VFree
centos 1 2 0 wz--n- <49.61g 0
cinder-volumes 1 0 0 wz-pn- <30.00g <30.00g
/dev/sdb was not passing the LVM filter.

[root@opens ~]# vim /etc/lvm/lvm.conf
devices {
    filter = [ "a/sda/", "a/sdb/", "r/.*/" ]
}

Then restart the volume service:
[root@opens ~]# systemctl restart openstack-cinder-volume.service

[root@openstack ~]# cinder service-list
+------------------+-----------+------+---------+-------+----------------------------+-----------------+
| Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+------------------+-----------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | openstack | nova | enabled | up | 2020-03-11T07:54:56.000000 | - |
| cinder-volume | opens@lvm | nova | enabled | up | 2020-03-11T07:54:59.000000 | - |
+------------------+-----------+------+---------+-------+----------------------------+-----------------+

posted @ 2020-05-11 21:11 海上月