Deploying OpenStack Mitaka

Introduction

OpenStack is a free and open-source software project licensed under the Apache License, originally developed and launched jointly by NASA (the US National Aeronautics and Space Administration) and Rackspace.

OpenStack is an open-source cloud computing management platform project made up of several major components that work together. It supports almost every type of cloud environment, and its goal is a cloud management platform that is simple to deploy, massively scalable, feature-rich, and standardized. OpenStack delivers an Infrastructure-as-a-Service (IaaS) solution through a set of complementary services, each of which exposes an API for integration.

OpenStack is a cloud platform management project rather than a single piece of software: it can be used to manage the large resource pools of an entire data center, and it contains many sub-projects.

OpenStack covers three major areas: compute, networking, and storage.

 OpenStack's main goal is to simplify the management and allocation of resources: compute, networking, and storage are virtualized into three resource pools, so whoever needs compute, network, or storage capacity can be served from them, and everything is exposed through APIs that clients interact with.

 OpenStack's design largely follows Amazon's, so you can think of OpenStack as an open-source version of AWS: much of it was modeled on Amazon, and many of its APIs are compatible with Amazon's.

OpenStack architecture:

Service name           Project     Description
Dashboard              Horizon     Web management interface built with Django on top of the OpenStack APIs
Compute                Nova        Provides pools of compute resources through virtualization
Networking             Neutron     Manages network resources for virtual machines
Storage:
  Object Storage       Swift       Object storage, suited to "write once, read many" workloads
  Block Storage        Cinder      Block storage, provides pools of storage resources
Shared Services:
  Identity Service     Keystone    Authentication and authorization
  Image Service        Glance      Registration and storage of virtual machine images
  Telemetry            Ceilometer  Monitoring, data collection, and metering
Higher-level Services:
  Orchestration        Heat        Automated deployment and orchestration
  Database Service     Trove       Database-as-a-service

Note: every one of these services exists to serve the VM; whatever resources a virtual machine needs, these services provide.

  Services can be divided into two broad categories: providers, which offer a service, and consumers, which use it.

  OpenStack, then, can be thought of as a framework, or a management platform.

Preparing the OpenStack deployment environment

The deployment follows the Chinese-language installation guide at: https://docs.openstack.org/mitaka/zh_CN/install-guide-rdo/

When installing CentOS 7, select "Install CentOS 7" at the boot menu, press TAB, append net.ifnames=0  biosdevname=0, and press Enter. This restores the CentOS 6 style network interface names such as eth0 and eth1.

1. Two CentOS 7.1 hosts with the following IP addresses:

  Controller node: 192.168.182.170    hostname linux-node1.goser.com

  Compute node: 192.168.182.171    hostname linux-node2.goser.com

2. Disable SELinux and the firewalld service

  Edit /etc/selinux/config and set SELINUX=disabled; also run setenforce 0 to turn SELinux off immediately

  Keep the firewall from starting at boot: systemctl disable  firewalld.service

  Stop the firewall: systemctl stop firewalld.service
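
  If you prefer to script the persistent change instead of editing the file by hand, a minimal equivalent (assuming the stock /etc/selinux/config layout) is:

sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
setenforce 0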

3. Install the Aliyun EPEL repository and set up an openstack-mitaka yum repository

 OpenStack moves quickly and the RDO repositories for most releases have been retired, so we have to create a yum repository by hand that points at the OpenStack RPM package location. The configuration is as follows:

vim /etc/yum.repos.d/rdo-release.repo

[openstack-mitaka]
name=OpenStack mitaka Repository
baseurl=http://vault.centos.org/7.3.1611/cloud/x86_64/openstack-mitaka/
enabled=1
gpgcheck=0
gpgkey=

 The yum package locations for the various OpenStack releases are:

  • http://vault.centos.org/7.3.1611/cloud/x86_64/
  • https://repos.fedorapeople.org/repos/openstack/EOL/
  • https://repos.fedorapeople.org/repos/openstack/EOL/openstack-juno/
  • https://mirrors.aliyun.com/centos/7/cloud/x86_64/

 Download the Aliyun CentOS and EPEL repository files:

  • wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
  • wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo

4. Set up the hosts file so the controller and compute nodes are consistent

vim   /etc/hosts

192.168.182.170     linux-node1    linux-node1.goser.com   
192.168.182.171     linux-node2    linux-node2.goser.com

5. Install a time synchronization tool on the controller and compute nodes, and add a cron job

yum install ntpdate -y
ntpdate time1.aliyun.com
timedatectl set-timezone Asia/Shanghai

crontab -e
*/5 * * * *  /usr/sbin/ntpdate time1.aliyun.com >/dev/null 2>&1

 Run a one-off time sync on the controller and compute nodes: ntpdate  time1.aliyun.com

6. Install the OpenStack component packages on the controller node

  1) Install the OpenStack client

   yum install -y python-openstackclient

  2) Install the OpenStack SELinux management package

   yum install -y openstack-selinux

  3) Install MySQL (MariaDB)

   yum install -y mariadb mariadb-server python2-PyMySQL

  4) Install RabbitMQ

   yum install -y rabbitmq-server

  5) Install keystone, apache, and memcache

   yum install -y openstack-keystone httpd mod_wsgi memcached python-memcached

  6) Install Glance

   yum install -y openstack-glance

  7) Install nova

   yum install -y openstack-nova-api openstack-nova-cert  openstack-nova-conductor openstack-nova-console  openstack-nova-novncproxy openstack-nova-scheduler

  8) Install neutron

   yum install -y openstack-neutron openstack-neutron-ml2  openstack-neutron-linuxbridge ebtables

  9) Install the dashboard

   yum install openstack-dashboard -y

7. Install the OpenStack component packages on the compute node

  1) Install the OpenStack client

   yum install -y python-openstackclient

  2) Install the OpenStack SELinux management package

   yum install -y openstack-selinux

  3) Install nova-compute

   yum install -y openstack-nova-compute sysfsutils

  4) Install neutron

   yum install -y openstack-neutron openstack-neutron-linuxbridge ebtables

Keystone deployment

1. MySQL database configuration

  Create and edit /etc/my.cnf.d/openstack.cnf

 vim  /etc/my.cnf.d/openstack.cnf

[mysqld]
bind-address = 192.168.182.170
default-storage-engine = innodb
innodb_file_per_table
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8

  Start the database service and enable it at boot:

systemctl enable mariadb.service
systemctl start mariadb.service

  To secure the database service, run the ``mysql_secure_installation`` script. In particular, set a suitable password for the database root user.

mysql_secure_installation

Enter current password for root (enter for none):    # press Enter; the current root password is empty
Set root password? [Y/n] y
New password: 
Re-enter new password: 
Remove anonymous users? [Y/n] y
Disallow root login remotely? [Y/n] y
Remove test database and access to it? [Y/n] y
Reload privilege tables now? [Y/n] y

  Log in to the database as root and create the databases and users for each OpenStack component

[root@linux-node1 ~]# mysql -uroot  -p
create database keystone;
grant all on keystone.* to 'keystone'@'localhost' identified by 'keystone';
grant all on keystone.* to 'keystone'@'%' identified by 'keystone';
create database glance;
grant all on glance.* to 'glance'@'localhost' identified by 'glance';
grant all on glance.* to 'glance'@'%' identified by 'glance';
create database nova;
grant all on nova.* to 'nova'@'localhost' identified by 'nova'; 
grant all on nova.* to 'nova'@'%' identified by 'nova';
create database nova_api;
grant all on nova_api.* to 'nova'@'localhost' identified by 'nova';
grant all on nova_api.* to 'nova'@'%' identified by 'nova';
create database neutron;
grant all on neutron.* to 'neutron'@'localhost' identified by 'neutron';
grant all on neutron.* to 'neutron'@'%' identified by 'neutron';
create database cinder;
grant all on cinder.* to 'cinder'@'localhost' identified by 'cinder';
grant all on cinder.* to 'cinder'@'%' identified by 'cinder';

2. RabbitMQ configuration

  Start the message queue service and enable it at boot; RabbitMQ then listens on port 5672

systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service

  Add an openstack user to RabbitMQ:

rabbitmqctl add_user openstack  openstack

  Grant the ``openstack`` user configure, write, and read permissions:

rabbitmqctl set_permissions openstack ".*" ".*" ".*"

  Enable the RabbitMQ web management plugin; the management UI listens on port 15672

rabbitmq-plugins enable rabbitmq_management

  Browse to http://192.168.182.170:15672 and give the newly created openstack account the administrator tag so it can log in to RabbitMQ's management UI
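
  The same tag can also be set from the command line instead of the web UI:

rabbitmqctl set_user_tags openstack administrator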

3. Start the memcached service, which will store the tokens for the keystone setup that follows

  Start the Memcached service and enable it at boot.

systemctl enable memcached.service
systemctl start memcached.service

  To change the memcached service's port or connection limits, edit the memcached file:

[root@linux-node1 ~]# cat /etc/sysconfig/memcached 
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 127.0.0.1,::1"
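
  Note that the services configured later point at memcached_servers = 192.168.182.170:11211, while the OPTIONS line above binds memcached to loopback only. One way to reconcile the two (a sketch; adjust to your interface layout) is to add the controller address to the listen list and restart the service:

OPTIONS="-l 127.0.0.1,::1,192.168.182.170"

systemctl restart memcached.service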

4. Configuring the keystone identity service

  Generate a random value to use as the administration token during initial configuration.

  Creating users requires authenticating to keystone, but authentication requires an existing user, so this random token is used to bootstrap the first authenticated users

[root@linux-node1 ~]# openssl rand -hex 10
c105bfe5af14d431eb03 

  Edit the /etc/keystone/keystone.conf file as follows:

[root@linux-node1 ~]# grep  '^[a-z]' /etc/keystone/keystone.conf 
admin_token = c105bfe5af14d431eb03
connection = mysql+pymysql://keystone:keystone@192.168.182.170/keystone
servers = 192.168.182.170:11211
provider = fernet
driver = memcache
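
  Since grep '^[a-z]' strips the INI section headers, here is where each of those options lives (following the standard Mitaka keystone.conf layout):

[DEFAULT]
admin_token = c105bfe5af14d431eb03

[database]
connection = mysql+pymysql://keystone:keystone@192.168.182.170/keystone

[memcache]
servers = 192.168.182.170:11211

[token]
provider = fernet
driver = memcache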

  Initialize the keystone database

su -s /bin/sh -c "keystone-manage db_sync" keystone

  This generates the keystone log file

[root@linux-node1 ~]# ll  /var/log/keystone/keystone.log
-rw-r--r-- 1 keystone keystone 4402 Nov 27 11:15 /var/log/keystone/keystone.log

  Initialize the Fernet keys:

keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone

  This generates the fernet keys as follows:

[root@linux-node1 ~]# tree  /etc/keystone/
/etc/keystone/
├── default_catalog.templates
├── fernet-keys
│   ├── 0
│   └── 1
├── keystone.conf
├── keystone-paste.ini
├── logging.conf
├── policy.json
└── sso_callback_template.html

  Verify that the keystone database initialization succeeded

mysql  -h 192.168.182.170  -ukeystone -pkeystone -e 'use keystone;show tables;'

  Edit the ``/etc/httpd/conf/httpd.conf`` file and set the ``ServerName`` option to the controller node's IP

ServerName 192.168.182.170:80

  Create the file /etc/httpd/conf.d/wsgi-keystone.conf with the following content as keystone's entry point

Listen 5000
Listen 35357

<VirtualHost *:5000>
    WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-public
    WSGIScriptAlias / /usr/bin/keystone-wsgi-public
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    ErrorLogFormat "%{cu}t %M"
    ErrorLog /var/log/httpd/keystone-error.log
    CustomLog /var/log/httpd/keystone-access.log combined

    <Directory /usr/bin>
        Require all granted
    </Directory>
</VirtualHost>

<VirtualHost *:35357>
    WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-admin
    WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    ErrorLogFormat "%{cu}t %M"
    ErrorLog /var/log/httpd/keystone-error.log
    CustomLog /var/log/httpd/keystone-access.log combined

    <Directory /usr/bin>
        Require all granted
    </Directory>
</VirtualHost>

  Start the Apache HTTP service and enable it at boot

systemctl enable httpd.service
systemctl start httpd.service

  Configure the environment variables for keystone authentication

export OS_TOKEN=c105bfe5af14d431eb03
export OS_URL=http://192.168.182.170:35357/v3
export OS_IDENTITY_API_VERSION=3

  Create the domain, projects, users, and roles

#Create the admin project, user, and role
openstack domain create --description "Default Domain" default

openstack project create --domain default  --description "Admin Project" admin

openstack user create --domain default --password-prompt admin
User Password:
Repeat User Password:

openstack role create admin

openstack role add --project admin --user admin admin
#Create the demo project, user, and the user role
openstack project create --domain default  --description "Demo Project" demo

openstack user create --domain default   --password-prompt demo
User Password:
Repeat User Password:

openstack role create user

openstack role add --project demo --user demo user
#Create the service project and the glance, nova, and neutron users
openstack project create --domain default  --description "Service Project" service

openstack user create --domain default --password-prompt glance
openstack role add  --project service --user glance admin

openstack user create --domain default --password-prompt nova
openstack role add  --project service --user nova admin

openstack user create --domain default --password-prompt neutron
openstack role add  --project service --user neutron admin

  Create the service entity and API endpoints

openstack service create --name keystone --description "OpenStack Identity" identity
openstack endpoint create --region RegionOne identity public http://192.168.182.170:5000/v3
openstack endpoint create --region RegionOne identity internal http://192.168.182.170:5000/v3
openstack endpoint create --region RegionOne identity admin http://192.168.182.170:5000/v3 

  Unset the ``OS_TOKEN`` and ``OS_URL`` environment variables, then verify that the admin user name and password can authenticate against keystone

unset OS_TOKEN OS_URL

openstack --os-auth-url http://192.168.182.170:35357/v3 \
--os-project-domain-name default --os-user-domain-name default \
--os-project-name admin --os-username admin token issue
+------------+------------------------------------------------------------------+
| Field      | Value                                                            |
+------------+------------------------------------------------------------------+
| expires    | 2017-11-27T04:38:13.000000Z                                      |
| id         | gAAAAABaG4ilD-                                                   |
|            | ZK2Prj5CdmZMg5LOs4jeIm7TM12waM6R7FgcxRb8ma3wchSp6KzduGT3         |
|            | -2k1upBgSkjBpXtvLlJdBnKNbHJUS-zL0z11YBj7Xnqg_jxrjCA_kbLj2kaye0bc |
|            | JYhWdiamz9yx1_gNSPq0hASUpQTBFIaRx0rT9JAZKe7Y0jj60                |
| project_id | ddd5995f068240c095880b9f260d47d4                                 |
| user_id    | 86267f2ed51841459848a5978b4c3b85                                 |
+------------+------------------------------------------------------------------+

  Verify that the demo user name and password can authenticate against keystone

openstack --os-auth-url http://192.168.182.170:35357/v3 \
--os-project-domain-name default --os-user-domain-name default \
--os-project-name demo --os-username demo token issue
+------------+------------------------------------------------------------------+
| Field      | Value                                                            |
+------------+------------------------------------------------------------------+
| expires    | 2017-11-27T04:38:58.000000Z                                      |
| id         | gAAAAABaG4jTcynjr8YiMEpr3fTARCu_4KUovTY1C6WCOo5fXjMZE4ihbbJLCOMf |
|            | DhlOa_UXsfg8ndIR8zcUpliiVBFxBqNaXS6DktjziLBvhPivfUB7XyIgtczF83K4 |
|            | mjbKfzTKVomHogdisRXX4c6bV_9hlgxrjV-kDPzT7eJBMPw_UPHRCwc          |
| project_id | 7f2a8340be8e4b7e8d80656b031b4a9c                                 |
| user_id    | 7af19952e19c49d5aca5d348bb538dc4                                 |
+------------+------------------------------------------------------------------+

  Create the OpenStack client environment script for admin

[root@linux-node1 ~]# vim admin-openrc
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_AUTH_URL=http://192.168.182.170:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

  Create the OpenStack client environment script for demo

[root@linux-node1 ~]# vim  demo-openrc
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=demo
export OS_AUTH_URL=http://192.168.182.170:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

  Source admin-openrc and verify that an authentication token can be requested

source  admin-openrc

[root@linux-node1 ~]# openstack token issue
+------------+------------------------------------------------------------------+
| Field      | Value                                                            |
+------------+------------------------------------------------------------------+
| expires    | 2017-11-27T04:43:14.000000Z                                      |
| id         | gAAAAABaG4nSxX3ILfHNjU-t                                         |
|            | -CvXYm1w7CtdCARdX8br5gE7NmPoxzJ5yi2LwrI226FJ2pVOmXV-RguJ-        |
|            | RAZlylmvnL7SDR9mg6r8z_eNFo5KWjkgsmNW2p-                          |
|            | DqmN9Piejd_9KB24De78-oRlbXuu7mhflaXimBTogbwmv5PekLMJzfkq98kX_ug  |
| project_id | ddd5995f068240c095880b9f260d47d4                                 |
| user_id    | 86267f2ed51841459848a5978b4c3b85                                 |
+------------+------------------------------------------------------------------+

  Source demo-openrc and verify that an authentication token can be requested

[root@linux-node1 ~]# source demo-openrc 
[root@linux-node1 ~]# openstack token issue
+------------+------------------------------------------------------------------+
| Field      | Value                                                            |
+------------+------------------------------------------------------------------+
| expires    | 2017-11-27T04:43:32.000000Z                                      |
| id         | gAAAAABaG4nkrMEYjVZEq7SOyhrMh7Pu_UDvg0fM7UrJDM_lWYb8LsqHLE1zvRRF |
|            | JbGOAAlTtM06J6fcIoJAYsXytsMOYvkHaxLhdS5M7S74T3jurwXDeT2OmLywXpYt |
|            | JuG2BGM5EEADwNx2_jRDuUCDzWIUxqZYdV1ZVj7CgUPuRTAePuO19jc          |
| project_id | 7f2a8340be8e4b7e8d80656b031b4a9c                                 |
| user_id    | 7af19952e19c49d5aca5d348bb538dc4                                 |
+------------+------------------------------------------------------------------+

  Listing users, roles, projects, API endpoints, and domains

[root@linux-node1 ~]# source admin-openrc 

[root@linux-node1 ~]# openstack domain  list
+----------------------------------+---------+---------+----------------+
| ID                               | Name    | Enabled | Description    |
+----------------------------------+---------+---------+----------------+
| 3a17eaf8dbaa41b8b418aa63d261ec32 | default | True    | Default Domain |
+----------------------------------+---------+---------+----------------+
[root@linux-node1 ~]# openstack  project  list
+----------------------------------+---------+
| ID                               | Name    |
+----------------------------------+---------+
| 1d5005582d754ac7b4e694ad151e872d | service |
| 7f2a8340be8e4b7e8d80656b031b4a9c | demo    |
| ddd5995f068240c095880b9f260d47d4 | admin   |
+----------------------------------+---------+
[root@linux-node1 ~]# openstack  role  list
+----------------------------------+-------+
| ID                               | Name  |
+----------------------------------+-------+
| 05b9619746ae4cf7ae5fc7512068855a | admin |
| 9573abd4805344bf967db8ae27404898 | user  |
+----------------------------------+-------+
[root@linux-node1 ~]# openstack  user list
+----------------------------------+---------+
| ID                               | Name    |
+----------------------------------+---------+
| 51000edaa45a4af78c610312b06e43c4 | neutron |
| 7af19952e19c49d5aca5d348bb538dc4 | demo    |
| 86267f2ed51841459848a5978b4c3b85 | admin   |
| d01de7324d9c4b8588829910ef4e7934 | glance  |
| d4eee347d53a4593a3fa580a6aa9dab7 | nova    |
+----------------------------------+---------+
[root@linux-node1 ~]# openstack endpoint  list
+-----------------------+-----------+--------------+--------------+---------+-----------+-----------------------+
| ID                    | Region    | Service Name | Service Type | Enabled | Interface | URL                   |
+-----------------------+-----------+--------------+--------------+---------+-----------+-----------------------+
| 4daab8c26a1b4fe69d37d | RegionOne | keystone     | identity     | True    | internal  | http://192.168.182.17 |
| 31f23f6850a           |           |              |              |         |           | 0:5000/v3             |
| 7704b0fabe3744c2a1f3c | RegionOne | keystone     | identity     | True    | public    | http://192.168.182.17 |
| 192b77453ee           |           |              |              |         |           | 0:5000/v3             |
| e6d0bcb9aaa844b0ac71e | RegionOne | keystone     | identity     | True    | admin     | http://192.168.182.17 |
| c521afd05b3           |           |              |              |         |           | 0:5000/v3             |
+-----------------------+-----------+--------------+--------------+---------+-----------+-----------------------+

Image service (Glance) deployment

   Edit the /etc/glance/glance-api.conf file as follows:

[root@linux-node1 ~]# grep  '^[a-z]' /etc/glance/glance-api.conf 
connection = mysql+pymysql://glance:glance@192.168.182.170/glance
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images
auth_uri = http://192.168.182.170:5000
auth_url = http://192.168.182.170:35357
memcached_servers = 192.168.182.170:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = glance
flavor = keystone
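
  As above, grep '^[a-z]' hides the section headers; in the standard Mitaka glance-api.conf these options belong to the following sections (glance-registry.conf below uses the same [database], [keystone_authtoken], and [paste_deploy] layout):

[database]
connection = mysql+pymysql://glance:glance@192.168.182.170/glance

[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images

[keystone_authtoken]
auth_uri = http://192.168.182.170:5000
auth_url = http://192.168.182.170:35357
memcached_servers = 192.168.182.170:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = glance

[paste_deploy]
flavor = keystone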

  Edit the ``/etc/glance/glance-registry.conf`` file as follows

[root@linux-node1 ~]# grep  '^[a-z]' /etc/glance/glance-registry.conf 
connection = mysql+pymysql://glance:glance@192.168.182.170/glance
auth_uri = http://192.168.182.170:5000
auth_url = http://192.168.182.170:35357
memcached_servers = 192.168.182.170:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = glance
flavor = keystone

  Populate the glance image service database

su -s /bin/sh -c "glance-manage db_sync" glance

  Verify that the glance database initialization succeeded

[root@linux-node1 ~]# mysql  -h 192.168.182.170  -uglance -pglance -e 'use glance;show tables;'  

  Start the image services and configure them to start at boot:

systemctl enable openstack-glance-api.service openstack-glance-registry.service
systemctl start openstack-glance-api.service  openstack-glance-registry.service

  Create the ``glance`` service entity and the image service API endpoints

[root@linux-node1 ~]# source admin-openrc 

openstack service create --name glance --description "OpenStack Image" image
openstack endpoint create --region RegionOne image public http://192.168.182.170:9292
openstack endpoint create --region RegionOne   image internal http://192.168.182.170:9292
openstack endpoint create --region RegionOne   image admin http://192.168.182.170:9292  

  Verify the setup

[root@linux-node1 ~]# source admin-openrc 

#Download the source image:
wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
#Upload the image to the image service in QCOW2 disk format and bare container format, and make it public so all projects can use it
openstack image create "cirros"  --file cirros-0.3.4-x86_64-disk.img  --disk-format qcow2 --container-format bare   --public

[root@linux-node1 ~]# openstack image  list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| bf7adea9-d4ec-4b33-aac4-e0f0e4c6964e | cirros | active |
+--------------------------------------+--------+--------+
[root@linux-node1 ~]# glance  image-list
+--------------------------------------+--------+
| ID                                   | Name   |
+--------------------------------------+--------+
| bf7adea9-d4ec-4b33-aac4-e0f0e4c6964e | cirros |
+--------------------------------------+--------+
#The resulting image file lives in the following location, determined by the configuration file
[root@linux-node1 ~]# ll /var/lib/glance/images/
-rw-r----- 1 glance glance 13287936 Nov 27 13:42 /var/lib/glance/images/bf7adea9-d4ec-4b33-aac4-e0f0e4c6964e

Compute service (Nova) deployment

1. Install and configure the controller node

  Edit the ``/etc/nova/nova.conf`` file and complete the following

[root@linux-node1 ~]# grep  '^[a-z]' /etc/nova/nova.conf 
enabled_apis=osapi_compute,metadata
auth_strategy=keystone
firewall_driver=nova.virt.firewall.NoopFirewallDriver
use_neutron=true
rpc_backend=rabbit
connection=mysql+pymysql://nova:nova@192.168.182.170/nova_api
connection=mysql+pymysql://nova:nova@192.168.182.170/nova
api_servers=http://192.168.182.170:9292
auth_uri = http://192.168.182.170:5000
auth_url = http://192.168.182.170:35357
memcached_servers = 192.168.182.170:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
lock_path=/var/lib/nova/tmp
rabbit_host=192.168.182.170
rabbit_port=5672
rabbit_userid=openstack
rabbit_password=openstack
vncserver_listen=192.168.182.170
vncserver_proxyclient_address=192.168.182.170
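
  For reference, the sections these options belong to in the standard Mitaka nova.conf (hidden by the grep above):

[DEFAULT]: enabled_apis, auth_strategy, firewall_driver, use_neutron, rpc_backend
[api_database]: connection = mysql+pymysql://nova:nova@192.168.182.170/nova_api
[database]: connection = mysql+pymysql://nova:nova@192.168.182.170/nova
[glance]: api_servers
[keystone_authtoken]: auth_uri, auth_url, memcached_servers, auth_type, and the domain/project/user/password settings
[oslo_concurrency]: lock_path
[oslo_messaging_rabbit]: rabbit_host, rabbit_port, rabbit_userid, rabbit_password
[vnc]: vncserver_listen, vncserver_proxyclient_address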

  Sync the Compute databases

su -s /bin/sh -c "nova-manage api_db sync" nova
su -s /bin/sh -c "nova-manage db sync" nova

  Verify that the database sync succeeded

[root@linux-node1 ~]# mysql  -h 192.168.182.170  -unova -pnova -e 'use nova_api;show tables;'

[root@linux-node1 ~]# mysql  -h 192.168.182.170  -unova -pnova -e 'use nova;show tables;'

  Start the Compute services and enable them at boot

systemctl enable openstack-nova-api.service  openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl start openstack-nova-api.service  openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

  Create the nova service entity and API endpoints

[root@linux-node1 ~]# source admin-openrc 

openstack service create --name nova  --description "OpenStack Compute" compute
openstack endpoint create --region RegionOne compute public http://192.168.182.170:8774/v2.1/%\(tenant_id\)s
openstack endpoint create --region RegionOne   compute internal http://192.168.182.170:8774/v2.1/%\(tenant_id\)s
openstack endpoint create --region RegionOne   compute admin http://192.168.182.170:8774/v2.1/%\(tenant_id\)s   

  Verify the setup

[root@linux-node1 ~]# openstack  host list
+-----------------------+-------------+----------+
| Host Name             | Service     | Zone     |
+-----------------------+-------------+----------+
| linux-node1.goser.com | consoleauth | internal |
| linux-node1.goser.com | conductor   | internal |
| linux-node1.goser.com | scheduler   | internal |
+-----------------------+-------------+----------+

  Copy nova.conf to the compute node; its configuration is almost identical to the controller's and only needs minor adjustments

#First, back up the existing nova.conf on the compute node
[root@linux-node2 nova]# mv nova.conf nova.conf.bak
#Then push the controller's nova.conf to the compute node with scp
[root@linux-node1 nova]# scp  nova.conf 192.168.182.171:/etc/nova

2. Install and configure the compute node

  Change the ownership of the nova.conf file pushed over from the controller node

[root@linux-node2 nova]# chown root.nova nova.conf

  Modify the configuration as follows: the database connections are removed, and the novncproxy settings and the virtualization type are added

[root@linux-node2 nova]# grep  '^[a-z]' nova.conf
enabled_apis=osapi_compute,metadata
auth_strategy=keystone
firewall_driver=nova.virt.firewall.NoopFirewallDriver
use_neutron=true
rpc_backend=rabbit
api_servers=http://192.168.182.170:9292
auth_uri = http://192.168.182.170:5000
auth_url = http://192.168.182.170:35357
memcached_servers = 192.168.182.170:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
virt_type=kvm
lock_path=/var/lib/nova/tmp
rabbit_host=192.168.182.170
rabbit_port=5672
rabbit_userid=openstack
rabbit_password=openstack
enabled=true
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=192.168.182.171
novncproxy_base_url=http://192.168.182.170:6080/vnc_auto.html

  Start the compute service and its dependencies, and enable them at boot

systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service

3. Verify compute on the controller and compute nodes:

[root@linux-node1 ~]# source admin-openrc 

[root@linux-node1 ~]# openstack  host list
+-----------------------+-------------+----------+
| Host Name             | Service     | Zone     |
+-----------------------+-------------+----------+
| linux-node1.goser.com | consoleauth | internal |
| linux-node1.goser.com | conductor   | internal |
| linux-node1.goser.com | scheduler   | internal |
| linux-node2.goser.com | compute     | nova     |
+-----------------------+-------------+----------+
[root@linux-node1 ~]# nova service-list
+----+------------------+-----------------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host                  | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+-----------------------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-consoleauth | linux-node1.goser.com | internal | enabled | up    | 2017-11-27T06:33:33.000000 | -               |
| 2  | nova-conductor   | linux-node1.goser.com | internal | enabled | up    | 2017-11-27T06:33:33.000000 | -               |
| 3  | nova-scheduler   | linux-node1.goser.com | internal | enabled | up    | 2017-11-27T06:33:33.000000 | -               |
| 6  | nova-compute     | linux-node2.goser.com | nova     | enabled | up    | 2017-11-27T06:33:34.000000 | -               |
+----+------------------+-----------------------+----------+---------+-------+----------------------------+-----------------+
[root@linux-node1 ~]# nova image-list
+--------------------------------------+--------+--------+--------+
| ID                                   | Name   | Status | Server |
+--------------------------------------+--------+--------+--------+
| bf7adea9-d4ec-4b33-aac4-e0f0e4c6964e | cirros | ACTIVE |        |
+--------------------------------------+--------+--------+--------+

Network service (Neutron) deployment

1. Install and configure the controller node

   Edit the ``/etc/neutron/neutron.conf`` file and complete the following

[root@linux-node1 ~]# grep '^[a-z]'  /etc/neutron/neutron.conf 
auth_strategy = keystone
core_plugin = ml2
service_plugins =
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
rpc_backend = rabbit
connection = mysql+pymysql://neutron:neutron@192.168.182.170/neutron
auth_uri = http://192.168.182.170:5000
auth_url = http://192.168.182.170:35357
memcached_servers = 192.168.182.170:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
auth_url = http://192.168.182.170:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova
lock_path = /var/lib/neutron/tmp 
rabbit_host = 192.168.182.170
rabbit_port = 5672
rabbit_userid = openstack
rabbit_password = openstack
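
  Section map for the options above (standard Mitaka neutron.conf; the compute node's neutron.conf later follows the same layout minus [database] and [nova]):

[DEFAULT]: auth_strategy, core_plugin, service_plugins, notify_nova_on_port_status_changes, notify_nova_on_port_data_changes, rpc_backend
[database]: connection
[keystone_authtoken]: the first auth block (username = neutron)
[nova]: the second auth block (username = nova, region_name = RegionOne)
[oslo_concurrency]: lock_path
[oslo_messaging_rabbit]: rabbit_host, rabbit_port, rabbit_userid, rabbit_password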

  Edit the ``/etc/neutron/plugins/ml2/ml2_conf.ini`` file and complete the following

[root@linux-node1 ml2]# grep '^[a-z]'  ml2_conf.ini 
type_drivers = flat,vlan,gre,vxlan,geneve
tenant_network_types = 
mechanism_drivers = linuxbridge,openvswitch
extension_drivers = port_security
flat_networks = public
enable_ipset = true

  Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and complete the following

[root@linux-node1 ml2]# grep '^[a-z]'  linuxbridge_agent.ini 
physical_interface_mappings = public:eth0
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
enable_security_group = true
enable_vxlan = false
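
  Section map for the two plugin files above (standard Mitaka layout):

# ml2_conf.ini
[ml2]: type_drivers, tenant_network_types, mechanism_drivers, extension_drivers
[ml2_type_flat]: flat_networks
[securitygroup]: enable_ipset

# linuxbridge_agent.ini
[linux_bridge]: physical_interface_mappings
[securitygroup]: firewall_driver, enable_security_group
[vxlan]: enable_vxlan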

  Edit the ``/etc/neutron/dhcp_agent.ini`` file and complete the following

[root@linux-node1 neutron]# grep '^[a-z]'  dhcp_agent.ini 
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true

  Edit the ``/etc/neutron/metadata_agent.ini`` file and complete the following

[root@linux-node1 neutron]# grep '^[a-z]'  metadata_agent.ini 
nova_metadata_ip = 192.168.182.170
metadata_proxy_shared_secret = goser

  Edit the ``/etc/nova/nova.conf`` file again and add the neutron configuration items; the added part is

[neutron]
url = http://192.168.182.170:9696
auth_url = http://192.168.182.170:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
service_metadata_proxy=true
metadata_proxy_shared_secret = goser

  The final version of the modified nova.conf:

[root@linux-node1 ~]# grep '^[a-z]'  /etc/nova/nova.conf 
enabled_apis=osapi_compute,metadata
auth_strategy=keystone
firewall_driver=nova.virt.firewall.NoopFirewallDriver
use_neutron=true
rpc_backend=rabbit
connection=mysql+pymysql://nova:nova@192.168.182.170/nova_api
connection=mysql+pymysql://nova:nova@192.168.182.170/nova
api_servers=http://192.168.182.170:9292
auth_uri = http://192.168.182.170:5000
auth_url = http://192.168.182.170:35357
memcached_servers = 192.168.182.170:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
url = http://192.168.182.170:9696
auth_url = http://192.168.182.170:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
service_metadata_proxy=true
metadata_proxy_shared_secret = goser
lock_path=/var/lib/nova/tmp
rabbit_host=192.168.182.170
rabbit_port=5672
rabbit_userid=openstack
rabbit_password=openstack
vncserver_listen=192.168.182.170
vncserver_proxyclient_address=192.168.182.170

  The networking service initialization scripts expect a symbolic link ``/etc/neutron/plugin.ini`` pointing to the ML2 plugin configuration file ``/etc/neutron/plugins/ml2/ml2_conf.ini``

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

  Sync the neutron database

su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

  Verify that the neutron database sync succeeded

[root@linux-node1 ~]# mysql  -h 192.168.182.170  -uneutron -pneutron -e 'use neutron;show tables;' 

  Restart the nova API service, since nova.conf was modified

systemctl restart openstack-nova-api.service

  Start the networking services and enable them at boot

systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service

  Create the ``neutron`` service entity and the networking API endpoints

[root@linux-node1 ~]# source admin-openrc 

openstack service create --name neutron  --description "OpenStack Networking" network
openstack endpoint create --region RegionOne network public http://192.168.182.170:9696
openstack endpoint create --region RegionOne network internal http://192.168.182.170:9696
openstack endpoint create --region RegionOne network admin http://192.168.182.170:9696

  Verify the neutron setup on the controller node

[root@linux-node1 ~]# neutron  agent-list
+-----------+------------+-----------+-------------------+-------+----------------+-----------+
| id        | agent_type | host      | availability_zone | alive | admin_state_up | binary    |
+-----------+------------+-----------+-------------------+-------+----------------+-----------+
| 1b99c789- | Linux      | linux-nod |                   | :-)   | True           | neutron-l |
| a062-4913 | bridge     | e1.goser. |                   |       |                | inuxbridg |
| -8622-cba | agent      | com       |                   |       |                | e-agent   |
| 9946632f5 |            |           |                   |       |                |           |
| df345724  | Metadata   | linux-nod |                   | :-)   | True           | neutron-  |
| -5a1a-    | agent      | e1.goser. |                   |       |                | metadata- |
| 45bd-     |            | com       |                   |       |                | agent     |
| ba0c-1030 |            |           |                   |       |                |           |
| 547b3180  |            |           |                   |       |                |           |
| e0d93019- | DHCP agent | linux-nod | nova              | :-)   | True           | neutron-  |
| 3703      |            | e1.goser. |                   |       |                | dhcp-     |
| -413c-    |            | com       |                   |       |                | agent     |
| a34b-e4a8 |            |           |                   |       |                |           |
| 4417db84  |            |           |                   |       |                |           |
+-----------+------------+-----------+-------------------+-------+----------------+-----------+

2. Install and configure the compute node

  Edit the ``/etc/neutron/neutron.conf`` file and complete the following

[root@linux-node2 neutron]# grep  '^[a-z]'  neutron.conf            
auth_strategy = keystone
rpc_backend = rabbit
auth_uri = http://192.168.182.170:5000
auth_url = http://192.168.182.170:35357
memcached_servers = 192.168.182.170:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
lock_path = /var/lib/neutron/tmp
rabbit_host = 192.168.182.170
rabbit_port = 5672
rabbit_userid = openstack
rabbit_password = openstack

  Edit the ``/etc/nova/nova.conf`` file again and complete the following

[root@linux-node2 ~]# grep  '^[a-z]'  /etc/nova/nova.conf
enabled_apis=osapi_compute,metadata
auth_strategy=keystone
firewall_driver=nova.virt.firewall.NoopFirewallDriver
use_neutron=true
rpc_backend=rabbit
api_servers=http://192.168.182.170:9292
auth_uri = http://192.168.182.170:5000
auth_url = http://192.168.182.170:35357
memcached_servers = 192.168.182.170:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
virt_type=kvm
url = http://192.168.182.170:9696
auth_url = http://192.168.182.170:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
lock_path=/var/lib/nova/tmp
rabbit_host=192.168.182.170
rabbit_port=5672
rabbit_userid=openstack
rabbit_password=openstack
enabled=true
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=192.168.182.171
novncproxy_base_url=http://192.168.182.170:6080/vnc_auto.html

  Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and complete the following

[root@linux-node2 ml2]# grep  '^[a-z]'  linuxbridge_agent.ini 
physical_interface_mappings = public:eth0
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
enable_security_group = true
enable_vxlan = false

  Restart the nova compute service, since nova.conf has been modified

systemctl restart openstack-nova-compute.service

  Start the Linux bridge agent and enable it at boot

systemctl enable neutron-linuxbridge-agent.service
systemctl start neutron-linuxbridge-agent.service

3. Verify neutron on the controller and compute nodes

[root@linux-node1 ~]# neutron  agent-list
+----------------+----------------+----------------+-------------------+-------+----------------+------------------+
| id             | agent_type     | host           | availability_zone | alive | admin_state_up | binary           |
+----------------+----------------+----------------+-------------------+-------+----------------+------------------+
| 1b99c789-a062- | Linux bridge   | linux-node1.go |                   | :-)   | True           | neutron-         |
| 4913-8622-cba9 | agent          | ser.com        |                   |       |                | linuxbridge-     |
| 946632f5       |                |                |                   |       |                | agent            |
| df345724-5a1a- | Metadata agent | linux-node1.go |                   | :-)   | True           | neutron-         |
| 45bd-ba0c-     |                | ser.com        |                   |       |                | metadata-agent   |
| 1030547b3180   |                |                |                   |       |                |                  |
| e0d93019-3703  | DHCP agent     | linux-node1.go | nova              | :-)   | True           | neutron-dhcp-    |
| -413c-a34b-    |                | ser.com        |                   |       |                | agent            |
| e4a84417db84   |                |                |                   |       |                |                  |
| fddc2478-0e8d- | Linux bridge   | linux-node2.go |                   | :-)   | True           | neutron-         |
| 47a9-a114-06b7 | agent          | ser.com        |                   |       |                | linuxbridge-     |
| 6ede8d9d       |                |                |                   |       |                | agent            |
+----------------+----------------+----------------+-------------------+-------+----------------+------------------+

Creating an instance

1. Create the provider network

  To create the network, first run  source  admin-openrc

[root@linux-node1 ~]# neutron net-create --shared --provider:physical_network public --provider:network_type flat public-net

  Create a subnet on the network

[root@linux-node1 ~]# neutron subnet-create --name public-subnet --allocation-pool start=192.168.182.100,end=192.168.182.200 --dns-nameserver 202.96.209.5 --gateway 192.168.182.2 public-net 192.168.182.0/24

  List the network and subnet just created

[root@linux-node1 ~]# neutron subnet-list
+-------------------------------------+---------------+------------------+--------------------------------------+
| id                                  | name          | cidr             | allocation_pools                     |
+-------------------------------------+---------------+------------------+--------------------------------------+
| 824a128a-5a8a-479f-bdea-            | public-subnet | 192.168.182.0/24 | {"start": "192.168.182.100", "end":  |
| 9342df50eb9c                        |               |                  | "192.168.182.200"}                   |
+-------------------------------------+---------------+------------------+--------------------------------------+
[root@linux-node1 ~]# neutron  net-list
+--------------------------------------+------------+-------------------------------------------------------+
| id                                   | name       | subnets                                               |
+--------------------------------------+------------+-------------------------------------------------------+
| f251b001-5a41-4e21-8e27-df9ee4b19050 | public-net | 824a128a-5a8a-479f-bdea-9342df50eb9c 192.168.182.0/24 |
+--------------------------------------+------------+-------------------------------------------------------+

  Create the m1.nano flavor

openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano

  Generate a key pair and source the ``demo`` tenant credentials; the demo user will be used to create the virtual machine

[root@linux-node1 ~]# source demo-openrc 

[root@linux-node1 ~]# ssh-keygen -q -N ""
Enter file in which to save the key (/root/.ssh/id_rsa):   # press Enter
[root@linux-node1 ~]# openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey
[root@linux-node1 ~]# openstack keypair list
+-------+-------------------------------------------------+
| Name  | Fingerprint                                     |
+-------+-------------------------------------------------+
| mykey | af:ae:9a:98:d9:9b:b0:60:b3:ff:c8:8f:e2:98:24:94 |
+-------+-------------------------------------------------+

  Add security group rules

[root@linux-node1 ~]# openstack security group rule create --proto icmp default
[root@linux-node1 ~]# openstack security group rule create --proto tcp --dst-port 22 default

2. Launch an instance

  Before creating an instance, make sure the commands below run cleanly, and that the clocks on the controller and compute nodes are in sync; run ntpdate time1.aliyun.com first if necessary

[root@linux-node1 ~]# source admin-openrc 

[root@linux-node1 ~]# nova service-list
[root@linux-node1 ~]# neutron agent-list
[root@linux-node1 ~]# nova  image-list
[root@linux-node1 ~]# openstack host list
[root@linux-node1 ~]# openstack network list

  Next, switch to the demo user environment to create a virtual machine

  To launch an instance you must specify at least a flavor, image name, network, security group, key pair, and instance name.

  First make sure the following commands run without problems

source demo-openrc 
openstack flavor list
openstack image list
openstack network list
openstack security group list

#First get the network id; it is needed when creating the virtual machine
[root@linux-node1 ~]# openstack network list
+--------------------------------------+------------+--------------------------------------+
| ID                                   | Name       | Subnets                              |
+--------------------------------------+------------+--------------------------------------+
| f251b001-5a41-4e21-8e27-df9ee4b19050 | public-net | 824a128a-5a8a-479f-bdea-9342df50eb9c |
+--------------------------------------+------------+--------------------------------------+
#Now create the virtual machine
openstack server create --flavor m1.nano --image cirros --nic net-id=f251b001-5a41-4e21-8e27-df9ee4b19050 --security-group default  --key-name mykey myfirst-instance

  Check on the virtual machine

[root@linux-node1 ~]# openstack  server list
+--------------------------------------+-------------------+--------+----------------------------+
| ID                                   | Name              | Status | Networks                   |
+--------------------------------------+-------------------+--------+----------------------------+
| e2678be8-0936-48a0-9cda-84ed64fcc1fa | myfirst-instance | ACTIVE | public-net=192.168.182.101 |
+--------------------------------------+-------------------+--------+----------------------------+

  Once the instance is created and fully booted, log in directly over ssh:  [root@linux-node1 ~]# ssh  cirros@192.168.182.101

  Alternatively, the instance's web console address can be obtained with the following command

[root@linux-node1 ~]# openstack console url show myfirst-instance
+-------+--------------------------------------------------------------------------------------+
| Field | Value                                                                                |
+-------+--------------------------------------------------------------------------------------+
| type  | novnc                                                                                |
| url   | http://192.168.182.170:6080/vnc_auto.html?token=64252e94-9b3f-4e81-8ef2-7ca9e622df8d |
+-------+--------------------------------------------------------------------------------------+

  You can then open http://192.168.182.170:6080/vnc_auto.html?token=64252e94-9b3f-4e81-8ef2-7ca9e622df8d in a browser to log in to and manage the virtual machine

Dashboard deployment

  Install the package:

yum install openstack-dashboard -y

  Edit the /etc/openstack-dashboard/local_settings file and complete the following

vim  /etc/openstack-dashboard/local_settings 

OPENSTACK_HOST = "192.168.182.170"
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "default"
ALLOWED_HOSTS = ['*', ]
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}
OPENSTACK_NEUTRON_NETWORK = {
    ...
    'enable_router': False,
    'enable_quotas': False,
    'enable_distributed_router': False,
    'enable_ha_router': False,
    'enable_lb': False,
    'enable_firewall': False,
    'enable_vpn': False,
    'enable_fip_topology_check': False,
}
TIME_ZONE = "Asia/Shanghai"
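
  The official Mitaka guide also stores dashboard sessions in memcached; to match it, add the following to local_settings as well (pointing at the controller's memcached):

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
         'LOCATION': '192.168.182.170:11211',
    }
}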

  Restart the web server and the session storage service

systemctl restart httpd.service memcached.service

  Verify the setup

  Log in at http://192.168.182.170/dashboard as the admin or demo user to manage instances through the OpenStack web UI

   Virtual machines created through the dashboard are stored on the compute node in the following location:

[root@linux-node2 ~]# cd  /var/lib/nova/instances
[root@linux-node2 instances]# tree
.
├── 5a1333d4-7db1-4e2a-b545-095ca11838cd
│   ├── console.log
│   ├── disk
│   ├── disk.info
│   └── libvirt.xml
├── 649269b0-598c-4395-990a-ac733129d3aa
│   ├── console.log
│   ├── disk
│   ├── disk.info
│   └── libvirt.xml
├── _base
│   └── 14783b13e921637b36ef0bc614911ebcb64fb0f3
├── compute_nodes
└── locks
    ├── nova-14783b13e921637b36ef0bc614911ebcb64fb0f3
    └── nova-storage-registry-lock

Block storage service (Cinder) deployment

1. Install and configure the controller node

  Create the cinder database and grant the cinder user access

  This step was already completed during the keystone deployment

  Install the package

yum install openstack-cinder -y

  Edit /etc/cinder/cinder.conf and complete the following

[root@linux-node1 ~]# grep '^[a-z]' /etc/cinder/cinder.conf                                           
glance_host = 192.168.182.170
auth_strategy = keystone
iscsi_ip_address = 192.168.182.171
rpc_backend = rabbit
connection = mysql+pymysql://cinder:cinder@192.168.182.170/cinder
auth_uri = http://192.168.182.170:5000
auth_url = http://192.168.182.170:35357
memcached_servers = 192.168.182.170:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = cinder
lock_path = /var/lib/cinder/tmp
rabbit_host = 192.168.182.170
rabbit_port = 5672
rabbit_userid = openstack
rabbit_password = openstack

  Edit the /etc/nova/nova.conf file and add the following to it

#Note: this must go in the [cinder] section
[cinder]
os_region_name = RegionOne

  Initialize the cinder database

su -s /bin/sh -c "cinder-manage db sync" cinder

  Verify that the cinder database sync succeeded

[root@linux-node1 ~]# mysql  -h 192.168.182.170  -ucinder -pcinder -e 'use cinder;show tables;' 

  Restart the nova API service, since the nova.conf configuration file was modified

systemctl restart openstack-nova-api.service

  Start the block storage services and enable them at boot

systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service

  Create a cinder user and grant it the admin role

[root@linux-node1 ~]# source admin-openrc

openstack user create --domain default --password-prompt cinder
User Password:
Repeat User Password:

openstack role add --project service --user cinder admin

  Create the cinder and cinderv2 service entities and the endpoints for each API version

openstack service create --name cinder --description "OpenStack Block Storage" volume
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2

openstack endpoint create --region RegionOne volume public http://192.168.182.170:8776/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne volume internal http://192.168.182.170:8776/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne volume admin http://192.168.182.170:8776/v1/%\(tenant_id\)s

openstack endpoint create --region RegionOne volumev2 public http://192.168.182.170:8776/v2/%\(tenant_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://192.168.182.170:8776/v2/%\(tenant_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://192.168.182.170:8776/v2/%\(tenant_id\)s

  Verify the setup

[root@linux-node1 ~]# cinder  service-list                  
+------------------+-----------------------+------+---------+-------+----------------------------+-----------------+
|      Binary      |          Host         | Zone |  Status | State |         Updated_at         | Disabled Reason |
+------------------+-----------------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | linux-node1.goser.com | nova | enabled |   up  | 2017-11-29T08:10:55.000000 |        -        |
+------------------+-----------------------+------+---------+-------+----------------------------+-----------------+

2. Install and configure a storage node

  Shut down the linux-node2 compute node and attach a 20 GB disk to it

  To serve LVM logical volumes, first install the LVM toolset

yum install lvm2 -y

  Start the LVM metadata service and enable it at boot

systemctl enable lvm2-lvmetad.service
systemctl start lvm2-lvmetad.service

  Create the LVM physical volume /dev/sdb

pvcreate /dev/sdb

  Create the LVM volume group cinder-volumes

vgcreate cinder-volumes /dev/sdb
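
  You can confirm the physical volume and volume group with the standard LVM reporting commands:

pvs
vgs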

  Edit the ``/etc/lvm/lvm.conf`` file and complete the following so that LVM scans only the devices that hold ``cinder-volumes``

vim /etc/lvm/lvm.conf

#In the ``devices`` section, add a filter that accepts the ``/dev/sdb`` device and rejects all others (if the operating system disk also sits on LVM, accept that device as well, e.g. "a/sda/")
devices {
...
filter = [ "a/sdb/", "r/.*/"]

  Install the cinder packages

yum install openstack-cinder targetcli python-keystone -y

  Edit /etc/cinder/cinder.conf and complete the following

[root@linux-node2 cinder]# grep '^[a-z]'  /etc/cinder/cinder.conf
glance_host = 192.168.182.170
glance_api_servers = http://192.168.182.170:9292
auth_strategy = keystone
enabled_backends = lvm
iscsi_ip_address = 192.168.182.171
rpc_backend = rabbit
connection = mysql+pymysql://cinder:cinder@192.168.182.170/cinder
auth_uri = http://192.168.182.170:5000
auth_url = http://192.168.182.170:35357
memcached_servers = 192.168.182.170:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = cinder
lock_path = /var/lib/cinder/tmp
rabbit_host = 192.168.182.170
rabbit_port = 5672
rabbit_userid = openstack
rabbit_password = openstack
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm
volume_backend_name=ISCSI-Storage
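
  Section map for the options above (a sketch following the Mitaka guide; the [lvm] section name matches enabled_backends = lvm):

[DEFAULT]: glance_host, glance_api_servers, auth_strategy, enabled_backends, rpc_backend
[database]: connection
[keystone_authtoken]: the auth block (username = cinder)
[oslo_concurrency]: lock_path
[oslo_messaging_rabbit]: rabbit_host, rabbit_port, rabbit_userid, rabbit_password
[lvm]: volume_driver, volume_group, iscsi_protocol, iscsi_helper, iscsi_ip_address, volume_backend_name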

  Start the block storage volume service and its dependencies, and enable them at boot

systemctl enable openstack-cinder-volume.service target.service
systemctl start openstack-cinder-volume.service target.service

3. Verify cinder on the controller and storage nodes

[root@linux-node1 ~]# source  admin-openrc 

[root@linux-node1 cinder]# cinder  service-list
+------------------+-------------------------------+------+---------+-------+----------------------------+-----------------+
|      Binary      |              Host             | Zone |  Status | State |         Updated_at         | Disabled Reason |
+------------------+-------------------------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler |     linux-node1.goser.com     | nova | enabled |   up  | 2017-11-29T12:19:15.000000 |        -        |
|  cinder-volume   |   linux-node2.goser.com@lvm   | nova | enabled |   up  | 2017-11-29T12:19:15.000000 |        -        |
+------------------+-------------------------------+------+---------+-------+----------------------------+-----------------+

  Next, create a logical volume from the dashboard's Volumes page, then attach it to the virtual machine that needs it through the instance's manage-attachments action.

  You could also attach storage to virtual machines over NFS or GlusterFS, but that kind of network or distributed storage is rarely used for this in practice: IO performance takes a hit, and it is less stable.

Deploying a CentOS 6 instance

  Download a qcow2-format CentOS image and upload it; qcow2 images work in OpenStack directly, with no format conversion needed. They can be downloaded from http://cloud.centos.org/centos, which hosts qcow2 images for CentOS 5/6/7

  Download the CentOS 6.6 qcow2 image and create an image from it with openstack

#Alternatively, download it first and upload it with rz -y
[root@linux-node1 ~]# wget  http://cloud.centos.org/centos/6.6/images/CentOS-6-x86_64-GenericCloud-1701.qcow2
[root@linux-node1 ~]# source  admin-openrc
[root@linux-node1 ~]# openstack  image create "centos6.6-x86_64" --file CentOS-6-x86_64-GenericCloud-1701.qcow2 --disk-format qcow2 --container-format bare --public 
[root@linux-node1 ~]# openstack image list

  Create a flavor for CentOS instances

openstack flavor create --id 6 --vcpus 1 --ram 512 --disk 8 centos6

  Then create the CentOS 6 virtual machine from the dashboard

  When creating the virtual machine, add a shell script under the configuration options; it runs as soon as the instance is successfully created. The script is:

#!/bin/sh
sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/g' /etc/ssh/sshd_config
sed -i 's/PasswordAuthentication no/PasswordAuthentication yes/g' /etc/ssh/sshd_config
/bin/cp -f /home/centos/.ssh/authorized_keys  /root/.ssh/
/etc/init.d/sshd restart
/usr/bin/sudo /usr/bin/passwd centos<<EOF
123456
123456
EOF
/usr/bin/sudo /usr/bin/passwd root<<EOF
passwOrd
passwOrd
EOF 

   After the CentOS 6 virtual machine is created, the default user is centos; no password is published upstream. Logging in with the passwords set in the shell script shows that only the centos password takes effect; the root password does not, presumably because the image disables root logins. Log in as centos, change the root password with sudo passwd root, and logging in as root with the new password then works. Even though you cannot ssh in as root with a password right after creation, you can still connect to the virtual machine directly from the OpenStack controller node using the key pair; from the compute node you would have to ssh with a password, or copy the controller node's private key for passwordless login.

  A common approach in production is to build your own CentOS image with the oz tool, so next we use oz to build a CentOS 6 image

  Install the oz packages

yum install -y oz libguestfs-tools

  With the CentOS 6.6 installation media attached as a CD-ROM, change to the directory where the image will be stored (cd  /usr/local/src/), then create an ISO with dd: dd if=/dev/cdrom of=centos6-x86_64.iso

  Create the TDL file

vim  centos6.tdl
<template>
<name>centos6-x86_64</name>
<description>centos6-x86_64 template</description>
<os>
<name>CentOS-6</name>
<version>6</version>
<arch>x86_64</arch>
<rootpw>goser</rootpw>
<install type='iso'>
<iso>file:///usr/local/src/centos6-x86_64.iso</iso>
</install>
</os>
<commands>
<command name='console'>
sed -i 's/ rhgb//g' /boot/grub/grub.conf
sed -i 's/ quiet//g' /boot/grub/grub.conf
sed -i 's/ console=tty0 / serial=tty0 console=ttyS0,115200n8 /g' /boot/grub/grub.conf
</command>
<command name='update'>
sed -i '/^HWADDR/d' /etc/sysconfig/network-scripts/ifcfg-eth0
sed -i '/^UUID/d' /etc/sysconfig/network-scripts/ifcfg-eth0
echo -n > /etc/udev/rules.d/70-persistent-net.rules
echo -n > /lib/udev/rules.d/75-persistent-net-generator.rules
sed -i '/^SELINUX=/s/enforcing/disabled/' /etc/selinux/config
</command>
</commands>
</template>

  Create the kickstart file

[root@linux-node1 src]# vim  centos.ks 
install
text
lang en_US.UTF-8
keyboard us
zerombr
bootloader --location=mbr --driveorder=sda --append="crashkernel=auto rhgb quiet"
network --onboot=yes --device=eth1 --bootproto=dhcp --noipv6 --hostname=CentOS6
timezone --utc Asia/Shanghai
authconfig --enableshadow --passalgo=sha512
rootpw goser 
clearpart --all --initlabel
firstboot --disable
selinux --disabled
firewall --disabled
logging --level=info
reboot
 
%packages
@base
@compat-libraries
@debugging
@development
tree
nmap
dos2unix
sysstat
lrzsz
telnet
 
%post
ssh-keygen -f /root/.ssh/id_rsa -N ""
cat >/root/.ssh/authorized_keys<<EOF
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC33Dp/5kDwuj6Y32SyTXl65efmar6d2ajfuTQmqncqfFNE5wig/iEHsmWHO7b2vbs+4+O5Gzgm8klVJGhFdAUm5AtIbn62aDm5jALUacGWB/VJS/MUeFXNVg9F//jPLj240bApzA56SN5P2dYZYeenqihgsfWzkQBCq8diJD3ZpvreJIWjZSQOF7YQYBOeD4yenlDJwDvOHAcqj9FzazaOQt9tBAyltnCCeeMpiDOpZmWrF/ByLUwlvIj6KlK8DFwkyV9uW3NjewmgQtTZC07R9OagQ6kD0nX0FCe5XRGdeoqaJhLtqWMPHbB3tBhIJVfCBohnUjz8I5eTD3Pi4sDb root@linux-node1.goser.com
EOF
%end

  Edit the /etc/oz/oz.cfg configuration file and change the output format to qcow2; the default raw format produces files that are too large and cannot be compressed

[root@linux-node1 src]# vim  /etc/oz/oz.cfg 
[paths]
output_dir = /var/lib/libvirt/images
data_dir = /var/lib/oz
screenshot_dir = /var/lib/oz/screenshots
# sshprivkey = /etc/oz/id_rsa-icicle-gen

[libvirt]
uri = qemu:///system
#image_type = raw
image_type = qcow2
# type = kvm
# bridge_name = virbr0
# cpus = 1
# memory = 1024

[cache]
original_media = yes
modified_media = no
jeos = no

[icicle]
safe_generation = no

  Build the OpenStack image with the oz tool

[root@linux-node1 src]# oz-install -p -u -d3 -a centos.ks centos6.tdl 

  Upload the image to Glance

[root@linux-node1 ~]# source  admin-openrc
[root@linux-node1 ~]# openstack  image create "oz-for-centos6.6" --file CentOS-6-x86_64.dsk --disk-format qcow2 --container-format bare --public

  Create a flavor for the CentOS instance

openstack flavor create --id 7 --vcpus 1 --ram 512 --disk 15 centos6-oz

  Then create the CentOS 6 virtual machine from the dashboard. A virtual machine created this way accepts root ssh logins, and also supports key-based logins shared from the OpenStack controller node.

 
