Learning OpenStack (8)
I. A First Look at OpenStack
1.1 What OpenStack Is
OpenStack is a suite of open-source software projects that lets enterprises and service providers build and run their own cloud compute and storage infrastructure. Rackspace and NASA were the two key early contributors: Rackspace donated its "Cloud Files" platform code, which became OpenStack Object Storage, and NASA contributed its "Nebula" platform, which formed the rest of OpenStack. Today the OpenStack Foundation has more than 150 members, including well-known companies such as Canonical, Dell, and Citrix.
1.2 The Major OpenStack Components
1.2.1 How the components fit together (diagram)

1.2.2 A look at each component
- OpenStack Identity (Keystone)
Keystone provides authentication and access-policy services for all OpenStack components. It works through its own REST interface (based on the Identity API) and mainly (but not exclusively) authenticates and authorizes Swift, Glance, and Nova. In practice, authorization means validating that each action request really comes from a legitimate source.
Keystone supports two ways of authenticating: username/password and token-based. Beyond that, Keystone provides three services:
a. Token service: holds the authorization information of authenticated users
b. Catalog service: holds the list of services each user may legitimately use
c. Policy service: lets Keystone grant specific access rights to a user or group
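The token service can be pictured with a minimal sketch (a toy model, not Keystone's implementation): issue a UUID-style token tied to the user's authorization data, then validate it until it expires.

```python
import time
import uuid

TOKEN_TTL = 3600  # seconds; Keystone's token lifetime is configurable

# token id -> authorization info held by the token service
_tokens = {}

def issue_token(user, project, roles):
    """Issue a UUID token carrying the user's authorization info."""
    token_id = uuid.uuid4().hex
    _tokens[token_id] = {
        "user": user,
        "project": project,
        "roles": roles,
        "expires": time.time() + TOKEN_TTL,
    }
    return token_id

def validate_token(token_id):
    """Return the authorization info if the token exists and has not expired."""
    info = _tokens.get(token_id)
    if info is None or info["expires"] < time.time():
        return None
    return info

tok = issue_token("admin", "admin", ["admin"])
print(validate_token(tok)["roles"])  # ['admin']
```

The real token service also persists tokens (to SQL or memcached, as configured later in this article) and supports revocation; the sketch only shows the issue/validate pairing.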
Authentication service concepts
1) Keystone by way of a hotel analogy
User: the hotel guest
Credentials: the guest's ID card
Authentication: checking that ID card
Token: the room key card
Project: the group the guest belongs to
Service: a category of service the hotel offers, e.g. dining or entertainment
Endpoint: one concrete service, e.g. the barbecue stand or the badminton court
Role: the VIP tier; the higher the tier, the more privileges
2) The Keystone components in detail
a. Endpoint: every OpenStack service (Nova, Swift, Glance, ...) listens on its own port at its own URL; these entry points are called endpoints.
b. User: an identity that Keystone authorizes. Note: a user represents an individual; OpenStack grants services to users. A user holds credentials and may be assigned to one or more tenants; after authentication, a separate token is issued for each tenant.
c. Service: broadly speaking, any component that connects to or is managed through Keystone is called a service; Glance, for example, is a service from Keystone's point of view.
d. Role: to keep things secure, what a particular user may do is determined by the roles associated with that user. Note: a role is a set of rights within a tenant that allows a given user to access or perform particular operations. Roles are logical groupings of permissions, so a common set of permissions can be grouped once and bound to the users associated with a given tenant.
e. Project (tenant): a project with a full set of service endpoints and members holding specific roles. Note: a tenant maps to a Nova "project-id", and in Object Storage a tenant can own multiple containers. Depending on the installation, a tenant may represent a customer, an account, an organization, or a project.
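The user/project/role relationship described above boils down to role grants scoped per project. A toy model (not Keystone's actual schema) makes the scoping concrete:

```python
# A toy model of role assignments: a grant binds a user to a role
# within exactly one project, so the same user can hold different
# roles in different projects.
assignments = set()  # {(user, project, role)}

def grant(user, project, role):
    assignments.add((user, project, role))

def allowed(user, project, role):
    return (user, project, role) in assignments

# mirrors: openstack role add --project admin --user admin admin
grant("admin", "admin", "admin")
grant("demo", "demo", "user")

print(allowed("demo", "demo", "user"))   # True
print(allowed("demo", "admin", "user"))  # False: roles are scoped per project
```

This is why the walkthrough below always names a project when adding a role: the grant means nothing outside its project.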
- OpenStack Dashboard (Horizon)
Horizon is a web control panel for managing and operating OpenStack services. It can manage instances and images, create key pairs, attach volumes to instances, operate Swift containers, and more. Users can also reach an instance directly from the dashboard through a console or VNC. In short, Horizon offers:
a. Instance management: create and terminate instances, view console logs, connect over VNC, attach volumes, etc.
b. Access and security: create security groups, manage key pairs, assign floating IPs, etc.
c. Flavors: tune the virtual-hardware templates to different preferences
d. Image management: edit or delete images
e. View the service catalog
f. Manage users, quotas, and project usage
g. User management: create users, etc.
h. Volume management: create volumes and snapshots
i. Object storage: create and delete containers and objects
j. Download per-project environment variable files
- OpenStack Compute (Nova)
Nova at a glance

API: receives and responds to external requests; supports both the OpenStack API and the EC2 API
The nova-api component implements the RESTful API and is the only way into Nova from the outside. It accepts external requests and passes them on to the other service components through the message queue. Since it is also EC2-API compatible, Nova can be managed day to day with EC2 tooling.
Cert: handles certificates for authentication
Scheduler: places instances on hosts
The Nova Scheduler decides which host (compute node) a virtual machine is created on. It normally does this by filtering the candidate compute nodes and then weighing the survivors.
1) Filtering
Start from the full, unfiltered host list, then apply the filter properties to keep only the compute nodes that satisfy the request.
2) Weighing
After filtering, compute a weight for each remaining host and, for each instance to be created, pick a host according to the weighing policy.
Note: by default OpenStack does not support creating an instance on an explicitly chosen compute node.
For more on Nova, see the Nova filter scheduler documentation.
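The filter-then-weigh flow above can be sketched in a few lines of Python. This is a toy model: the filter and weigher here are illustrative stand-ins, not Nova's actual pluggable filter/weigher classes.

```python
# Candidate compute nodes with their free resources (made-up numbers).
hosts = [
    {"name": "node1", "free_ram_mb": 4096, "free_disk_gb": 80},
    {"name": "node2", "free_ram_mb": 1024, "free_disk_gb": 200},
    {"name": "node3", "free_ram_mb": 8192, "free_disk_gb": 40},
]

def ram_filter(host, req):
    # 1) filtering: drop hosts that cannot fit the requested VM
    return host["free_ram_mb"] >= req["ram_mb"]

def disk_filter(host, req):
    return host["free_disk_gb"] >= req["disk_gb"]

def weigh(host):
    # 2) weighing: prefer the host with the most free RAM
    return host["free_ram_mb"]

def schedule(req):
    survivors = [h for h in hosts if ram_filter(h, req) and disk_filter(h, req)]
    if not survivors:
        raise RuntimeError("No valid host found")
    return max(survivors, key=weigh)["name"]

print(schedule({"ram_mb": 2048, "disk_gb": 20}))  # node3
```

node2 is dropped by the RAM filter and node3 wins the weighing, which is exactly the two-phase decision the scheduler section describes.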
Conductor: the middle layer through which compute nodes access the database
Consoleauth: authorizes console sessions
Novncproxy: the noVNC proxy
- OpenStack Object Storage (Swift)
Swift provides distributed, durable virtual object storage for OpenStack, similar to Amazon Web Services' S3. Swift can store objects across many nodes at scale, has built-in redundancy and failover management, and can also serve archives and media streams. It is particularly efficient at both very large data (gigabytes) and very large object counts.
Swift features
- Massive object storage
- Storage of large files (objects)
- Data redundancy management
- Archiving: handling large data sets
- Data containers for virtual machines and cloud applications
- Media streaming
- Secure object storage
- Backup and archiving
- Good scalability
Swift components
- Swift account
- Swift container
- Swift object
- Swift proxy
- The Swift ring
The Swift proxy server
Users always interact with Swift through its API via the proxy server, the gatekeeper that receives outside requests. It looks up the proper location of each entity and routes requests there.
The proxy also handles failures: when an entity has failed over, it reroutes repeated requests to the entity's new location.
The Swift object server
The object server is a simple blob store that handles storing, retrieving, and deleting object data on local storage. Objects are kept as ordinary binary files on the filesystem, with their metadata in extended file attributes (xattrs). Note: the xattr format is supported on Linux by ext3/4, XFS, Btrfs, JFS, and ReiserFS, though not every combination (XFS, JFS, ReiserFS, Reiser4, ZFS) has been thoroughly validated for this use; XFS is generally considered the best choice today.
The Swift container server
The container server lists the objects in a container. By default the object listings are stored as SQLite files (they can also be kept in MySQL, which is what the installation below uses). The container server also tracks how many objects a container holds and how much storage space it consumes.
The Swift account server
The account server works like the container server, except that it lists the containers owned by an account.
The ring (index ring)
A ring records where objects are physically stored in Swift. It is a virtual mapping from an entity's name to its real physical storage location, much like an index service for locating entities across the cluster. "Entity" here means an account, a container, or an object, and each of the three has its own ring.
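The name-to-location mapping a ring provides can be sketched with a tiny consistent-hashing model. This is far simpler than Swift's real ring (which assigns partitions to devices across zones via a ring builder); the partition power, device names, and placement rule here are all illustrative.

```python
import hashlib

POWER = 4                      # 2**4 = 16 partitions; real clusters use e.g. 2**18
devices = ["sdb1", "sdc1", "sdd1", "sde1"]
REPLICAS = 3                   # Swift's default replica count

def partition(path):
    """Map an /account/container/object path to a partition number
    by taking the top POWER bits of its md5 hash."""
    digest = hashlib.md5(path.encode()).hexdigest()
    return int(digest, 16) >> (128 - POWER)

def devices_for(path):
    """Toy placement: put the partition's replicas on consecutive
    devices starting at a position derived from the partition."""
    part = partition(path)
    start = part % len(devices)
    return [devices[(start + i) % len(devices)] for i in range(REPLICAS)]

print(devices_for("/AUTH_demo/photos/cat.jpg"))  # 3 distinct devices
```

The key property the sketch shares with the real ring: the mapping is deterministic, so any proxy can compute where an entity lives without asking a central index.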
- OpenStack Block Storage (Cinder)
API service: accepts and handles REST requests and puts them on the RabbitMQ queue; Cinder exposes the Volume API v2.
Scheduler service: responds to requests, reads and writes the block-storage database to maintain state, and interacts with the other processes through the message queue or directly with the underlying storage hardware or software. Its driver architecture lets it talk to many different storage providers.
Volume service: runs on the storage nodes and manages the storage space. Each storage node runs one Volume service, and several such nodes together form a storage resource pool; the driver architecture is what allows storage of different types and models to be supported.
- OpenStack Image service (Glance)
Glance is made up of three main parts: glance-api, glance-registry, and the image store.
glance-api: accepts requests to create, delete, and read images
glance-registry: the image registration service for the cloud
- OpenStack Networking (Neutron)
Not covered in detail here; a full walkthrough follows later.
II. Environment Preparation
2.1 Machines
This lab uses VMware virtual machines, as follows:
- Controller node
hostname: linux-node1.oldboyedu.com
IP: 192.168.56.11, NIC eth0 (NAT)
OS and hardware: CentOS 7.1, 2 GB RAM, 50 GB disk
- Compute node
hostname: linux-node2.oldboyedu.com
IP: 192.168.56.12, NIC eth0 (NAT)
OS and hardware: CentOS 7.1, 2 GB RAM, 50 GB disk
2.2 OpenStack Releases
This article uses the latest release, L (Liberty); the other releases are shown in the figure below.
2.3 Installing the Packages
2.3.1 Controller node
- Base
yum install http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm -y
yum install centos-release-openstack-liberty -y
yum install python-openstackclient -y
- MySQL
yum install mariadb mariadb-server MySQL-python -y
- RabbitMQ
yum install rabbitmq-server -y
- Keystone
yum install openstack-keystone httpd mod_wsgi memcached python-memcached -y
- Glance
yum install openstack-glance python-glance python-glanceclient -y
- Nova
yum install openstack-nova-api openstack-nova-cert openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler python-novaclient -y
- Neutron
yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge python-neutronclient ebtables ipset -y
- Dashboard
yum install openstack-dashboard -y
2.3.2 Compute node
- Base
yum install -y http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm
yum install centos-release-openstack-liberty -y
yum install python-openstackclient -y
- Nova linux-node2.example.com
yum install openstack-nova-compute sysfsutils -y
- Neutron
yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset -y
III. Hands-On with OpenStack: the Controller Node
3.1 Time synchronization on CentOS 7 with chrony
Install chrony
[root@linux-node1 ~]# yum install -y chrony
Edit its configuration file to allow clients on the local network
[root@linux-node1 ~]# vim /etc/chrony.conf
allow 192.168/16
Enable chronyd at boot, then start it
[root@linux-node1 ~]# systemctl enable chronyd.service
[root@linux-node1 ~]# systemctl start chronyd.service
Set the CentOS 7 time zone
[root@linux-node1 ~]# timedatectl set-timezone Asia/Shanghai
Check the time zone and time
[root@linux-node1 ~]# timedatectl status
      Local time: Tue 2015-12-15 12:19:55 CST
  Universal time: Tue 2015-12-15 04:19:55 UTC
        RTC time: Sun 2015-12-13 15:35:33
       Time zone: Asia/Shanghai (CST, +0800)
     NTP enabled: yes
NTP synchronized: no
 RTC in local TZ: no
      DST active: n/a
[root@linux-node1 ~]# date
Tue Dec 15 12:19:57 CST 2015
3.2 Getting Started with MySQL
Every OpenStack component except Horizon uses a database. This article uses MySQL, which on CentOS 7 ships as MariaDB by default.
Copy the sample configuration file into place
[root@linux-node1 ~]# cp /usr/share/mysql/my-medium.cnf /etc/my.cnf
Edit the MySQL configuration (add the following under the [mysqld] section), then start MySQL
[root@linux-node1 ~]# vim /etc/my.cnf
[mysqld]
default-storage-engine = innodb      # default storage engine
innodb_file_per_table                # one tablespace file per table
collation-server = utf8_general_ci   # default collation
init-connect = 'SET NAMES utf8'      # character set for connections
character-set-server = utf8          # default character set for new databases
Enable MariaDB at boot, then start it
[root@linux-node1 ~]# systemctl enable mariadb.service
ln -s '/usr/lib/systemd/system/mariadb.service' '/etc/systemd/system/multi-user.target.wants/mariadb.service'
[root@linux-node1 ~]# systemctl start mariadb.service
Set the MySQL root password
[root@linux-node1 ~]# mysql_secure_installation
Create each component's database and grant privileges
[root@linux-node1 ~]# mysql -uroot -p123456
Run the following SQL
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'keystone';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone';
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'glance';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'glance';
CREATE DATABASE nova;
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova';
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'neutron';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron';
CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'cinder';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'cinder';
3.3 The RabbitMQ Message Queue
SOA: a service-oriented architecture is a component model that ties an application's functional units ("services") together through well-defined interfaces and contracts between them. The interfaces are defined in a neutral way, independent of the hardware platform, operating system, and programming language a service is implemented in, so services built on very different systems can interact in one uniform, generic way.
OpenStack adopts this SOA approach. Thanks to SOA's loose coupling, each component is deployed independently, components act as each other's consumers and providers, and they communicate through a message queue (OpenStack supports RabbitMQ, ZeroMQ, and Qpid). If one service goes down, the others do not go down with it.
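The decoupling a message queue buys can be shown in miniature with Python's standard-library queue (an in-process stand-in for RabbitMQ, not how OpenStack actually wires it): the producer only enqueues, the consumer only dequeues, and neither calls the other directly.

```python
import queue
import threading

# The queue plays RabbitMQ's role between an "API" producer and a worker.
bus = queue.Queue()

def api(request):
    bus.put(request)              # producer: fire and forget

results = []

def worker():
    while True:
        msg = bus.get()
        if msg is None:           # shutdown sentinel
            break
        results.append("handled " + msg)

t = threading.Thread(target=worker)
t.start()
api("boot instance")
api("delete instance")
bus.put(None)
t.join()
print(results)  # ['handled boot instance', 'handled delete instance']
```

Because the only contract is the message format, either side can be restarted or scaled out without the other noticing, which is the property the paragraph above attributes to OpenStack's SOA design.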
Enable RabbitMQ at boot, then start it
[root@linux-node1 ~]# systemctl enable rabbitmq-server.service
ln -s '/usr/lib/systemd/system/rabbitmq-server.service' '/etc/systemd/system/multi-user.target.wants/rabbitmq-server.service'
[root@linux-node1 ~]# systemctl start rabbitmq-server.service
Create a RabbitMQ user and grant it permissions
[root@linux-node1 ~]# rabbitmqctl add_user openstack openstack
[root@linux-node1 ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Enable RabbitMQ's web management plugin
[root@linux-node1 ~]# rabbitmq-plugins enable rabbitmq_management
Restart RabbitMQ
[root@linux-node1 ~]# systemctl restart rabbitmq-server.service
Check RabbitMQ's ports: 5672 is the service port, 15672 the web management port, and 25672 the clustering port
[root@linux-node1 ~]# netstat -lntup | grep 5672
tcp   0  0 0.0.0.0:25672   0.0.0.0:*   LISTEN   52448/beam
tcp   0  0 0.0.0.0:15672   0.0.0.0:*   LISTEN   52448/beam
tcp6  0  0 :::5672         :::*        LISTEN   52448/beam
In the web UI, add the openstack user and set its permissions; the first login must use the guest account with the guest password.
Set the user's role to administrator, and set a password for openstack.
If you want to monitor RabbitMQ, you can use the API shown in the figure below.

3.4 Keystone
Edit the Keystone configuration file
[root@linux-node1 opt]# vim /etc/keystone/keystone.conf
admin_token = 863d35676a5632e846d9    # bootstrap token for creating users before any exist; generate it randomly with openssl
connection = mysql://keystone:keystone@192.168.56.11/keystone    # database connection: the three "keystone"s are the component, the MySQL user, and the database name
Switch to the keystone user and import the keystone database
[root@linux-node1 opt]# su -s /bin/sh -c "keystone-manage db_sync" keystone
[root@linux-node1 keystone]# cd /var/log/keystone/
[root@linux-node1 keystone]# ll
total 8
-rw-r--r-- 1 keystone keystone 7064 Dec 15 14:43 keystone.log
(Import the database as the keystone user so the log file stays writable by keystone at startup; if you run the import as root, Keystone will not be able to start as the keystone user.)
Other keystone.conf changes (the numbers are line numbers in the sample file):
31: verbose = true                   # enable debug output
1229: servers = 192.168.57.11:11211  # memcached servers
1634: driver = sql                   # enable the default SQL driver
1827: provider = uuid                # use the UUID token provider
1832: driver = memcache              # store tokens generated from user/password logins in memcached for high performance
Verify the changes
[root@linux-node1 keystone]# grep -n "^[a-Z]" /etc/keystone/keystone.conf
12:admin_token = 863d35676a5632e846d9
31:verbose = true
419:connection = mysql://keystone:keystone@192.168.56.11/keystone
1229:servers = 192.168.57.11:11211
1634:driver = sql
1827:provider = uuid
1832:driver = memcache
Check that the database import succeeded
MariaDB [keystone]> show tables;
+------------------------+
| Tables_in_keystone     |
+------------------------+
| access_token           |
| assignment             |
| config_register        |
| consumer               |
| credential             |
| domain                 |
| endpoint               |
| endpoint_group         |
| federation_protocol    |
| group                  |
| id_mapping             |
| identity_provider      |
| idp_remote_ids         |
| mapping                |
| migrate_version        |
| policy                 |
| policy_association     |
| project                |
| project_endpoint       |
| project_endpoint_group |
| region                 |
| request_token          |
| revocation_event       |
| role                   |
| sensitive_config       |
| service                |
| service_provider       |
| token                  |
| trust                  |
| trust_role             |
| user                   |
| user_group_membership  |
| whitelisted_config     |
+------------------------+
33 rows in set (0.00 sec)
Add an Apache wsgi-keystone config; port 5000 serves the public API and 35357 the admin API
[root@linux-node1 keystone]# cat /etc/httpd/conf.d/wsgi-keystone.conf
Listen 5000
Listen 35357
<VirtualHost *:5000>
    WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-public
    WSGIScriptAlias / /usr/bin/keystone-wsgi-public
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    <IfVersion >= 2.4>
      ErrorLogFormat "%{cu}t %M"
    </IfVersion>
    ErrorLog /var/log/httpd/keystone-error.log
    CustomLog /var/log/httpd/keystone-access.log combined
    <Directory /usr/bin>
        <IfVersion >= 2.4>
            Require all granted
        </IfVersion>
        <IfVersion < 2.4>
            Order allow,deny
            Allow from all
        </IfVersion>
    </Directory>
</VirtualHost>
<VirtualHost *:35357>
    WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-admin
    WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    <IfVersion >= 2.4>
      ErrorLogFormat "%{cu}t %M"
    </IfVersion>
    ErrorLog /var/log/httpd/keystone-error.log
    CustomLog /var/log/httpd/keystone-access.log combined
    <Directory /usr/bin>
        <IfVersion >= 2.4>
            Require all granted
        </IfVersion>
        <IfVersion < 2.4>
            Order allow,deny
            Allow from all
        </IfVersion>
    </Directory>
</VirtualHost>
Set Apache's ServerName; if it is not set, the Keystone service will be affected
[root@linux-node1 httpd]# vim conf/httpd.conf
ServerName 192.168.56.11:80
Enable and start memcached and httpd (Keystone now runs under httpd)
[root@linux-node1 httpd]# systemctl enable memcached httpd
ln -s '/usr/lib/systemd/system/memcached.service' '/etc/systemd/system/multi-user.target.wants/memcached.service'
ln -s '/usr/lib/systemd/system/httpd.service' '/etc/systemd/system/multi-user.target.wants/httpd.service'
[root@linux-node1 httpd]# systemctl start memcached httpd
Check which ports httpd is listening on
[root@linux-node1 httpd]# netstat -lntup | grep httpd
tcp6  0  0 :::5000    :::*   LISTEN   70482/httpd
tcp6  0  0 :::80      :::*   LISTEN   70482/httpd
tcp6  0  0 :::35357   :::*   LISTEN   70482/httpd
Now create users by connecting to Keystone. There are two ways to pass credentials: as command-line options (see openstack --help) or as environment variables (env). Below we use environment variables, setting the token, the API endpoint, and the API version (a pattern that suits an SOA setup):
[root@linux-node1 ~]# export OS_TOKEN=863d35676a5632e846d9
[root@linux-node1 ~]# export OS_URL=http://192.168.56.11:35357/v3
[root@linux-node1 ~]# export OS_IDENTITY_API_VERSION=3
Create the admin project
[root@linux-node1 httpd]# openstack project create --domain default --description "Admin Project" admin
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Admin Project                    |
| domain_id   | default                          |
| enabled     | True                             |
| id          | 45ec9f72892c404897d0f7d6668d7a53 |
| is_domain   | False                            |
| name        | admin                            |
| parent_id   | None                             |
+-------------+----------------------------------+
Create the admin user and set its password (use a strong one in production)
[root@linux-node1 httpd]# openstack user create --domain default --password-prompt admin
User Password:
Repeat User Password:
+-----------+----------------------------------+
| Field     | Value                            |
+-----------+----------------------------------+
| domain_id | default                          |
| enabled   | True                             |
| id        | bb6d73c0b07246fb8f26025bb72c06a1 |
| name      | admin                            |
+-----------+----------------------------------+
Create the admin role
[root@linux-node1 httpd]# openstack role create admin
+-------+----------------------------------+
| Field | Value                            |
+-------+----------------------------------+
| id    | b0bd00e6164243ceaa794db3250f267e |
| name  | admin                            |
+-------+----------------------------------+
Add the admin user to the admin project with the admin role, tying role, project, and user together
[root@linux-node1 httpd]# openstack role add --project admin --user admin admin
Create an ordinary user demo and a demo project with the plain user role, and tie them together
[root@linux-node1 httpd]# openstack project create --domain default --description "Demo Project" demo
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Demo Project                     |
| domain_id   | default                          |
| enabled     | True                             |
| id          | 4a213e53e4814685859679ff1dcb559f |
| is_domain   | False                            |
| name        | demo                             |
| parent_id   | None                             |
+-------------+----------------------------------+
[root@linux-node1 httpd]# openstack user create --domain default --password=demo demo
+-----------+----------------------------------+
| Field     | Value                            |
+-----------+----------------------------------+
| domain_id | default                          |
| enabled   | True                             |
| id        | eb29c091e0ec490cbfa5d11dc2388766 |
| name      | demo                             |
+-----------+----------------------------------+
[root@linux-node1 httpd]# openstack role create user
+-------+----------------------------------+
| Field | Value                            |
+-------+----------------------------------+
| id    | 4b36460ef1bd42daaf67feb19a8a55cf |
| name  | user                             |
+-------+----------------------------------+
[root@linux-node1 httpd]# openstack role add --project demo --user demo user
Create a service project; it will hold the service users for Nova, Neutron, Glance, and the other components
[root@linux-node1 httpd]# openstack project create --domain default --description "Service Project" service
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Service Project                  |
| domain_id   | default                          |
| enabled     | True                             |
| id          | 0399778f38934986a923c96d8dc92073 |
| is_domain   | False                            |
| name        | service                          |
| parent_id   | None                             |
+-------------+----------------------------------+
Review the users, roles, and projects just created
[root@linux-node1 httpd]# openstack user list
+----------------------------------+-------+
| ID                               | Name  |
+----------------------------------+-------+
| bb6d73c0b07246fb8f26025bb72c06a1 | admin |
| eb29c091e0ec490cbfa5d11dc2388766 | demo  |
+----------------------------------+-------+
[root@linux-node1 httpd]# openstack project list
+----------------------------------+---------+
| ID                               | Name    |
+----------------------------------+---------+
| 0399778f38934986a923c96d8dc92073 | service |
| 45ec9f72892c404897d0f7d6668d7a53 | admin   |
| 4a213e53e4814685859679ff1dcb559f | demo    |
+----------------------------------+---------+
[root@linux-node1 httpd]# openstack role list
+----------------------------------+-------+
| ID                               | Name  |
+----------------------------------+-------+
| 4b36460ef1bd42daaf67feb19a8a55cf | user  |
| b0bd00e6164243ceaa794db3250f267e | admin |
+----------------------------------+-------+
Register the Keystone service itself: even though Keystone is the registration service, it too needs to be registered
Create the keystone identity service
[root@linux-node1 httpd]# openstack service create --name keystone --description "OpenStack Identity" identity
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Identity               |
| enabled     | True                             |
| id          | 46228b6dae2246008990040bbde371c3 |
| name        | keystone                         |
| type        | identity                         |
+-------------+----------------------------------+
Create the three endpoint types: public (externally visible), internal (for internal use), and admin (for administration)
[root@linux-node1 httpd]# openstack endpoint create --region RegionOne identity public http://192.168.56.11:5000/v2.0
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 1143dcd58b6848a1890c3f2b9bf101d5 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 46228b6dae2246008990040bbde371c3 |
| service_name | keystone                         |
| service_type | identity                         |
| url          | http://192.168.56.11:5000/v2.0   |
+--------------+----------------------------------+
[root@linux-node1 httpd]# openstack endpoint create --region RegionOne identity internal http://192.168.56.11:5000/v2.0
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 496f648007a04e5fbe99b62ed8a76acd |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 46228b6dae2246008990040bbde371c3 |
| service_name | keystone                         |
| service_type | identity                         |
| url          | http://192.168.56.11:5000/v2.0   |
+--------------+----------------------------------+
[root@linux-node1 httpd]# openstack endpoint create --region RegionOne identity admin http://192.168.56.11:35357/v2.0
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 28283cbf90b5434ba7a8780fac9308df |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 46228b6dae2246008990040bbde371c3 |
| service_name | keystone                         |
| service_type | identity                         |
| url          | http://192.168.56.11:35357/v2.0  |
+--------------+----------------------------------+
List the endpoints just created
[root@linux-node1 httpd]# openstack endpoint list
+----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------+
| ID                               | Region    | Service Name | Service Type | Enabled | Interface | URL                             |
+----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------+
| 1143dcd58b6848a1890c3f2b9bf101d5 | RegionOne | keystone     | identity     | True    | public    | http://192.168.56.11:5000/v2.0  |
| 28283cbf90b5434ba7a8780fac9308df | RegionOne | keystone     | identity     | True    | admin     | http://192.168.56.11:35357/v2.0 |
| 496f648007a04e5fbe99b62ed8a76acd | RegionOne | keystone     | identity     | True    | internal  | http://192.168.56.11:5000/v2.0  |
+----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------+
Connect to Keystone and request a token. Now that a user and password exist, stop using the bootstrap token, which means the token environment variables must be unset
[root@linux-node1 httpd]# unset OS_TOKEN
[root@linux-node1 httpd]# unset OS_URL
[root@linux-node1 httpd]# openstack --os-auth-url http://192.168.56.11:35357/v3 --os-project-domain-id default --os-user-domain-id default --os-project-name admin --os-username admin --os-auth-type password token issue
Password:
+------------+----------------------------------+
| Field      | Value                            |
+------------+----------------------------------+
| expires    | 2015-12-16T17:45:52.926050Z      |
| id         | ba1d3c403bf34759b239176594001f8b |
| project_id | 45ec9f72892c404897d0f7d6668d7a53 |
| user_id    | bb6d73c0b07246fb8f26025bb72c06a1 |
+------------+----------------------------------+
Put the admin and demo credentials into environment files and make them executable; from now on, just source the right file before running commands
[root@linux-node1 ~]# cat admin-openrc.sh
export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=admin
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_AUTH_URL=http://192.168.56.11:35357/v3
export OS_IDENTITY_API_VERSION=3
[root@linux-node1 ~]# cat demo-openrc.sh
export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=demo
export OS_TENANT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=demo
export OS_AUTH_URL=http://192.168.56.11:5000/v3
export OS_IDENTITY_API_VERSION=3
[root@linux-node1 ~]# chmod +x demo-openrc.sh
[root@linux-node1 ~]# chmod +x admin-openrc.sh
[root@linux-node1 ~]# source admin-openrc.sh
[root@linux-node1 ~]# openstack token issue
+------------+----------------------------------+
| Field      | Value                            |
+------------+----------------------------------+
| expires    | 2015-12-16T17:54:06.632906Z      |
| id         | ade4b0c451b94255af1e96736555db75 |
| project_id | 45ec9f72892c404897d0f7d6668d7a53 |
| user_id    | bb6d73c0b07246fb8f26025bb72c06a1 |
+------------+----------------------------------+
3.5 Deploying Glance
Edit the glance-api and glance-registry configs, then sync the database
[root@linux-node1 glance]# vim glance-api.conf
538 connection=mysql://glance:glance@192.168.56.11/glance
[root@linux-node1 glance]# vim glance-registry.conf
363 connection=mysql://glance:glance@192.168.56.11/glance
[root@linux-node1 glance]# su -s /bin/sh -c "glance-manage db_sync" glance
No handlers could be found for logger "oslo_config.cfg"   (this warning can be ignored)
Check the tables imported into the glance database
MariaDB [(none)]> use glance;
Database changed
MariaDB [glance]> show tables;
+----------------------------------+
| Tables_in_glance                 |
+----------------------------------+
| artifact_blob_locations          |
| artifact_blobs                   |
| artifact_dependencies            |
| artifact_properties              |
| artifact_tags                    |
| artifacts                        |
| image_locations                  |
| image_members                    |
| image_properties                 |
| image_tags                       |
| images                           |
| metadef_namespace_resource_types |
| metadef_namespaces               |
| metadef_objects                  |
| metadef_properties               |
| metadef_resource_types           |
| metadef_tags                     |
| migrate_version                  |
| task_info                        |
| tasks                            |
+----------------------------------+
20 rows in set (0.00 sec)
Hook Glance up to Keystone; as far as Keystone is concerned, every service needs its own user to connect with
[root@linux-node1 ~]# source admin-openrc.sh
[root@linux-node1 ~]# openstack user create --domain default --password=glance glance
+-----------+----------------------------------+
| Field     | Value                            |
+-----------+----------------------------------+
| domain_id | default                          |
| enabled   | True                             |
| id        | f4c340ba02bf44bf83d5c3ccfec77359 |
| name      | glance                           |
+-----------+----------------------------------+
[root@linux-node1 ~]# openstack role add --project service --user glance admin
Edit the glance-api config to tie in Keystone and MySQL (numbers are line numbers in the sample file)
[root@linux-node1 glance]# vim glance-api.conf
978 auth_uri = http://192.168.56.11:5000
979 auth_url = http://192.168.56.11:35357
980 auth_plugin = password
981 project_domain_id = default
982 user_domain_id = default
983 project_name = service
984 username = glance
985 password = glance
1485 flavor=keystone
491 notification_driver = noop                          # the image service does not need the message queue
642 default_store=file                                  # store images as files
701 filesystem_store_datadir=/var/lib/glance/images/    # where images are kept
363 verbose=True                                        # enable verbose output
Edit the glance-registry config the same way, tying in Keystone and MySQL
[root@linux-node1 glance]# vim glance-registry.conf
188:verbose=True
316:notification_driver =noop
767 auth_uri = http://192.168.56.11:5000
768 auth_url = http://192.168.56.11:35357
769 auth_plugin = password
770 project_domain_id = default
771 user_domain_id = default
772 project_name = service
773 username = glance
774 password = glance
1256:flavor=keystone
Check the modified Glance settings
[root@linux-node1 ~]# grep -n '^[a-z]' /etc/glance/glance-api.conf
363:verbose=True
491:notification_driver = noop
538:connection=mysql://glance:glance@192.168.56.11/glance
642:default_store=file
701:filesystem_store_datadir=/var/lib/glance/images/
978:auth_uri = http://192.168.56.11:5000
979:auth_url = http://192.168.56.11:35357
980:auth_plugin = password
981:project_domain_id = default
982:user_domain_id = default
983:project_name = service
984:username = glance
985:password = glance
1485:flavor=keystone
[root@linux-node1 ~]# grep -n '^[a-z]' /etc/glance/glance-registry.conf
188:verbose=True
316:notification_driver =noop
363:connection=mysql://glance:glance@192.168.56.11/glance
767:auth_uri = http://192.168.56.11:5000
768:auth_url = http://192.168.56.11:35357
769:auth_plugin = password
770:project_domain_id = default
771:user_domain_id = default
772:project_name = service
773:username = glance
774:password = glance
1256:flavor=keystone
Enable the Glance services at boot, then start them
[root@linux-node1 ~]# systemctl enable openstack-glance-api
ln -s '/usr/lib/systemd/system/openstack-glance-api.service' '/etc/systemd/system/multi-user.target.wants/openstack-glance-api.service'
[root@linux-node1 ~]# systemctl enable openstack-glance-registry
ln -s '/usr/lib/systemd/system/openstack-glance-registry.service' '/etc/systemd/system/multi-user.target.wants/openstack-glance-registry.service'
[root@linux-node1 ~]# systemctl start openstack-glance-api
[root@linux-node1 ~]# systemctl start openstack-glance-registry
Check the ports Glance is using: 9191 is glance-registry, 9292 is glance-api
[root@linux-node1 ~]# netstat -lntup | egrep "9191|9292"
tcp   0  0 0.0.0.0:9191   0.0.0.0:*   LISTEN   13180/python2
tcp   0  0 0.0.0.0:9292   0.0.0.0:*   LISTEN   13162/python2
Register the Glance service in Keystone so the other services are allowed to call it
[root@linux-node1 ~]# source admin-openrc.sh
[root@linux-node1 ~]# openstack service create --name glance --description "OpenStack Image service" image
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Image service          |
| enabled     | True                             |
| id          | cc8b4b4c712f47aa86e2d484c20a65c8 |
| name        | glance                           |
| type        | image                            |
+-------------+----------------------------------+
[root@linux-node1 ~]# openstack endpoint create --region RegionOne image public http://192.168.56.11:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 56cf6132fef14bfaa01c380338f485a6 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | cc8b4b4c712f47aa86e2d484c20a65c8 |
| service_name | glance                           |
| service_type | image                            |
| url          | http://192.168.56.11:9292        |
+--------------+----------------------------------+
[root@linux-node1 ~]# openstack endpoint create --region RegionOne image internal http://192.168.56.11:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 8005e8fcd85f4ea281eb9591c294e760 |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | cc8b4b4c712f47aa86e2d484c20a65c8 |
| service_name | glance                           |
| service_type | image                            |
| url          | http://192.168.56.11:9292        |
+--------------+----------------------------------+
[root@linux-node1 ~]# openstack endpoint create --region RegionOne image admin http://192.168.56.11:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 2b55d6db62eb47e9b8993d23e36111e0 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | cc8b4b4c712f47aa86e2d484c20a65c8 |
| service_name | glance                           |
| service_type | image                            |
| url          | http://192.168.56.11:9292        |
+--------------+----------------------------------+
Add the Glance API version to both the admin and demo environment files so other services know which version to use; be sure to run this in the directory containing admin-openrc.sh
[root@linux-node1 ~]# echo "export OS_IMAGE_API_VERSION=2" | tee -a admin-openrc.sh demo-openrc.sh
export OS_IMAGE_API_VERSION=2
[root@linux-node1 ~]# tail -1 admin-openrc.sh
export OS_IMAGE_API_VERSION=2
[root@linux-node1 ~]# tail -1 demo-openrc.sh
export OS_IMAGE_API_VERSION=2
If the following works, Glance is configured correctly; the list is empty only because no image has been uploaded yet
[root@linux-node1 ~]# glance image-list
+----+------+
| ID | Name |
+----+------+
+----+------+
Download a test image
[root@linux-node1 ~]# wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
--2015-12-17 02:12:55--  http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
Resolving download.cirros-cloud.net (download.cirros-cloud.net)... 69.163.241.114
Connecting to download.cirros-cloud.net (download.cirros-cloud.net)|69.163.241.114|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 13287936 (13M) [text/plain]
Saving to: 'cirros-0.3.4-x86_64-disk.img'
100%[======================================>] 13,287,936  127KB/s  in 71s
2015-12-17 02:14:08 (183 KB/s) - 'cirros-0.3.4-x86_64-disk.img' saved [13287936/13287936]
Upload the image to Glance; run this from the directory the image was downloaded into
[root@linux-node1 ~]# glance image-create --name "cirros" --file cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare --visibility public --progress
[=============================>] 100%
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | ee1eca47dc88f4879d8a229cc70a07c6     |
| container_format | bare                                 |
| created_at       | 2015-12-16T18:16:46Z                 |
| disk_format      | qcow2                                |
| id               | 4b36361f-1946-4026-b0cb-0f7073d48ade |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | cirros                               |
| owner            | 45ec9f72892c404897d0f7d6668d7a53     |
| protected        | False                                |
| size             | 13287936                             |
| status           | active                               |
| tags             | []                                   |
| updated_at       | 2015-12-16T18:16:47Z                 |
| virtual_size     | None                                 |
| visibility       | public                               |
+------------------+--------------------------------------+
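The checksum field in the output above is the md5 of the uploaded file, so you can verify a downloaded image against it yourself. A small sketch (the filename is the one downloaded above; the expected checksum is taken from the table):

```python
import hashlib

def image_checksum(path, chunk_size=65536):
    """Compute the md5 of an image file, reading in chunks so large
    images do not have to fit in memory."""
    md5 = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            md5.update(chunk)
    return md5.hexdigest()

# Usage, assuming the image from the wget step is in the current directory:
# image_checksum("cirros-0.3.4-x86_64-disk.img")
# should match the checksum Glance reported: ee1eca47dc88f4879d8a229cc70a07c6
```

If the values differ, the download or upload was corrupted and the image should be fetched again.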
Verify the uploaded image
[root@linux-node1 ~]# glance image-list
+--------------------------------------+--------+
| ID                                   | Name   |
+--------------------------------------+--------+
| 4b36361f-1946-4026-b0cb-0f7073d48ade | cirros |
+--------------------------------------+--------+
[root@linux-node1 ~]# cd /var/lib/glance/images/
[root@linux-node1 images]# ls
4b36361f-1946-4026-b0cb-0f7073d48ade   (matches the image ID above)
3.6 Deploying Nova on the Controller Node
Create the nova user, add it to the service project, and grant it the admin role
[root@linux-node1 ~]# source admin-openrc.sh
[root@linux-node1 ~]# openstack user create --domain default --password=nova nova
+-----------+----------------------------------+
| Field     | Value                            |
+-----------+----------------------------------+
| domain_id | default                          |
| enabled   | True                             |
| id        | 73659413d2a842dc82971a0fc531e7b9 |
| name      | nova                             |
+-----------+----------------------------------+
[root@linux-node1 ~]# openstack role add --project service --user nova admin
Edit nova.conf; the resulting settings are as follows (numbers are line numbers in the sample file)
[root@linux-node1 ~]# grep -n "^[a-Z]" /etc/nova/nova.conf
61:rpc_backend=rabbit                       # use the RabbitMQ message queue
124:my_ip=192.168.56.11                     # a variable for easy reuse below
268:enabled_apis=osapi_compute,metadata     # disable the EC2 API
425:auth_strategy=keystone                  # authenticate via Keystone (note: this one is in the [DEFAULT] section)
1053:network_api_class=nova.network.neutronv2.api.API   # use Neutron for networking; the dots mirror the module path
1171:linuxnet_interface_driver=nova.network.linux_net.NeutronLinuxBridgeInterfaceDriver   # the class formerly named LinuxBridgeInterfaceDriver is now NeutronLinuxBridgeInterfaceDriver
1331:security_group_api=neutron             # security groups handled by Neutron
1370:debug=true
1374:verbose=True
1760:firewall_driver = nova.virt.firewall.NoopFirewallDriver   # disable Nova's own firewall
1828:vncserver_listen= $my_ip               # VNC listen address
1832:vncserver_proxyclient_address= $my_ip  # proxy client address
2213:connection=mysql://nova:nova@192.168.56.11/nova
2334:host=$my_ip                            # Glance address
2546:auth_uri = http://192.168.56.11:5000
2547:auth_url = http://192.168.56.11:35357
2548:auth_plugin = password
2549:project_domain_id = default
2550:user_domain_id = default
2551:project_name = service                 # use the service project
2552:username = nova
2553:password = nova
3807:lock_path=/var/lib/nova/tmp            # lock path
3970:rabbit_host=192.168.56.11              # RabbitMQ host
3974:rabbit_port=5672                       # RabbitMQ port
3986:rabbit_userid=openstack                # RabbitMQ user
3990:rabbit_password=openstack              # RabbitMQ password
Sync the database and check the result
[root@linux-node1 ~]# su -s /bin/sh -c "nova-manage db sync" nova
MariaDB [nova]> use nova;
Database changed
MariaDB [nova]> show tables;
+--------------------------------------------+
| Tables_in_nova                             |
+--------------------------------------------+
| agent_builds                               |
| aggregate_hosts                            |
| aggregate_metadata                         |
| aggregates                                 |
| block_device_mapping                       |
| bw_usage_cache                             |
| cells                                      |
| certificates                               |
| compute_nodes                              |
| console_pools                              |
| consoles                                   |
| dns_domains                                |
| fixed_ips                                  |
| floating_ips                               |
| instance_actions                           |
| instance_actions_events                    |
| instance_extra                             |
| instance_faults                            |
| instance_group_member                      |
| instance_group_policy                      |
| instance_groups                            |
| instance_id_mappings                       |
| instance_info_caches                       |
| instance_metadata                          |
| instance_system_metadata                   |
| instance_type_extra_specs                  |
| instance_type_projects                     |
| instance_types                             |
| instances                                  |
| key_pairs                                  |
| migrate_version                            |
| migrations                                 |
| networks                                   |
| pci_devices                                |
| project_user_quotas                        |
| provider_fw_rules                          |
| quota_classes                              |
| quota_usages                               |
| quotas                                     |
| reservations                               |
| s3_images                                  |
| security_group_default_rules               |
| security_group_instance_association        |
| security_group_rules                       |
| security_groups                            |
| services                                   |
| shadow_agent_builds                        |
| shadow_aggregate_hosts                     |
| shadow_aggregate_metadata                  |
| shadow_aggregates                          |
| shadow_block_device_mapping                |
| shadow_bw_usage_cache                      |
| shadow_cells                               |
| shadow_certificates                        |
| shadow_compute_nodes                       |
| shadow_console_pools                       |
| shadow_consoles                            |
| shadow_dns_domains                         |
| shadow_fixed_ips                           |
| shadow_floating_ips                        |
| shadow_instance_actions                    |
| shadow_instance_actions_events             |
| shadow_instance_extra                      |
| shadow_instance_faults                     |
| shadow_instance_group_member               |
| shadow_instance_group_policy               |
| shadow_instance_groups                     |
| shadow_instance_id_mappings                |
| shadow_instance_info_caches                |
| shadow_instance_metadata                   |
| shadow_instance_system_metadata            |
| shadow_instance_type_extra_specs           |
| shadow_instance_type_projects              |
| shadow_instance_types                      |
| shadow_instances                           |
| shadow_key_pairs                           |
| shadow_migrate_version                     |
| shadow_migrations                          |
| shadow_networks                            |
| shadow_pci_devices                         |
| shadow_project_user_quotas                 |
| shadow_provider_fw_rules                   |
| shadow_quota_classes                       |
| shadow_quota_usages                        |
| shadow_quotas                              |
| shadow_reservations                        |
| shadow_s3_images                           |
| shadow_security_group_default_rules        |
| shadow_security_group_instance_association |
| shadow_security_group_rules                |
| shadow_security_groups                     |
| shadow_services                            |
| shadow_snapshot_id_mappings                |
| shadow_snapshots                           |
| shadow_task_log                            |
| shadow_virtual_interfaces                  |
| shadow_volume_id_mappings                  |
| shadow_volume_usage_cache                  |
| snapshot_id_mappings                       |
| snapshots                                  |
| tags                                       |
| task_log                                   |
| virtual_interfaces                         |
| volume_id_mappings                         |
| volume_usage_cache                         |
+--------------------------------------------+
105 rows in set (0.01 sec)
Start all of the nova services:
```bash
[root@linux-node1 ~]# systemctl enable openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
[root@linux-node1 ~]# systemctl start openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
```
Register nova with keystone, then verify that the controller node's nova services are configured correctly:
```bash
[root@linux-node1 ~]# openstack service create --name nova --description "OpenStack Compute" compute
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Compute                |
| enabled     | True                             |
| id          | f5873e5f21994da882599c9866e28d55 |
| name        | nova                             |
| type        | compute                          |
+-------------+----------------------------------+
[root@linux-node1 ~]# openstack endpoint create --region RegionOne compute public http://192.168.56.11:8774/v2/%\(tenant_id\)s
+--------------+--------------------------------------------+
| Field        | Value                                      |
+--------------+--------------------------------------------+
| enabled      | True                                       |
| id           | 23e9132aeb3a4dcb8689aa1933ad7301           |
| interface    | public                                     |
| region       | RegionOne                                  |
| region_id    | RegionOne                                  |
| service_id   | f5873e5f21994da882599c9866e28d55           |
| service_name | nova                                       |
| service_type | compute                                    |
| url          | http://192.168.56.11:8774/v2/%(tenant_id)s |
+--------------+--------------------------------------------+
[root@linux-node1 ~]# openstack endpoint create --region RegionOne compute internal http://192.168.56.11:8774/v2/%\(tenant_id\)s
+--------------+--------------------------------------------+
| Field        | Value                                      |
+--------------+--------------------------------------------+
| enabled      | True                                       |
| id           | 1d67f3630a0f413e9d6ff53bcc657fb6           |
| interface    | internal                                   |
| region       | RegionOne                                  |
| region_id    | RegionOne                                  |
| service_id   | f5873e5f21994da882599c9866e28d55           |
| service_name | nova                                       |
| service_type | compute                                    |
| url          | http://192.168.56.11:8774/v2/%(tenant_id)s |
+--------------+--------------------------------------------+
[root@linux-node1 ~]# openstack endpoint create --region RegionOne compute admin http://192.168.56.11:8774/v2/%\(tenant_id\)s
+--------------+--------------------------------------------+
| Field        | Value                                      |
+--------------+--------------------------------------------+
| enabled      | True                                       |
| id           | b7f7c210becc4e54b76bb454966582e4           |
| interface    | admin                                      |
| region       | RegionOne                                  |
| region_id    | RegionOne                                  |
| service_id   | f5873e5f21994da882599c9866e28d55           |
| service_name | nova                                       |
| service_type | compute                                    |
| url          | http://192.168.56.11:8774/v2/%(tenant_id)s |
+--------------+--------------------------------------------+
[root@linux-node1 ~]# openstack host list
+---------------------------+-------------+----------+
| Host Name                 | Service     | Zone     |
+---------------------------+-------------+----------+
| linux-node1.oldboyedu.com | conductor   | internal |
| linux-node1.oldboyedu.com | consoleauth | internal |
| linux-node1.oldboyedu.com | cert        | internal |
| linux-node1.oldboyedu.com | scheduler   | internal |
+---------------------------+-------------+----------+
```
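The endpoint URLs registered above embed `%(tenant_id)s`, which is a Python mapping-style placeholder; at request time it is replaced by the caller's project (tenant) ID. A minimal sketch of that substitution, using the tenant ID that appears in the sample output later in this section:

```python
# Keystone endpoint URLs use Python %-style mapping placeholders.
template = "http://192.168.56.11:8774/v2/%(tenant_id)s"

# At request time the placeholder is filled in with the caller's
# project ID (value taken from the sample output in this document).
url = template % {"tenant_id": "45ec9f72892c404897d0f7d6668d7a53"}
print(url)
# → http://192.168.56.11:8774/v2/45ec9f72892c404897d0f7d6668d7a53
```

This is also why the parentheses are escaped as `%\(tenant_id\)s` on the command line: the escaping protects them from the shell, so the literal placeholder reaches keystone.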
3.7 Deploying the Nova compute node
- Diagram of nova-compute
nova-compute generally runs on the compute nodes; it receives requests over the Message Queue and manages the VM life cycle.
nova-compute manages KVM through libvirt, Xen through the XenAPI, and so on.
- Configure time synchronization
Edit its configuration file:
```bash
[root@linux-node1 ~]# vim /etc/chrony.conf
server 192.168.56.11 iburst    # keep only this one server entry, i.e. the controller node
```
Enable chronyd at boot, then start it:
```bash
[root@linux-node1 ~]# systemctl enable chronyd.service
[root@linux-node1 ~]# systemctl start chronyd.service
```
Set the time zone on CentOS 7:
```bash
[root@linux-node1 ~]# timedatectl set-timezone Asia/Shanghai
```
Check the time zone and time:
```bash
[root@linux-node ~]# timedatectl status
      Local time: Fri 2015-12-18 00:12:26 CST
  Universal time: Thu 2015-12-17 16:12:26 UTC
        RTC time: Sun 2015-12-13 15:32:36
        Timezone: Asia/Shanghai (CST, +0800)
     NTP enabled: yes
NTP synchronized: no
 RTC in local TZ: no
      DST active: n/a
[root@linux-node1 ~]# date
Fri Dec 18 00:12:43 CST 2015
```
- Deploy the compute node
Edit the compute node's configuration file, reusing the controller node's file as a starting point:
```bash
[root@linux-node1 ~]# scp /etc/nova/nova.conf 192.168.56.12:/etc/nova/    # run the scp on the controller node
```
The filtered contents after editing:
```bash
[root@linux-node ~]# grep -n '^[a-Z]' /etc/nova/nova.conf
61:rpc_backend=rabbit
124:my_ip=192.168.56.12                  # change to this node's own IP
268:enabled_apis=osapi_compute,metadata
425:auth_strategy=keystone
1053:network_api_class=nova.network.neutronv2.api.API
1171:linuxnet_interface_driver=nova.network.linux_net.NeutronLinuxBridgeInterfaceDriver
1331:security_group_api=neutron
1370:debug=true
1374:verbose=True
1760:firewall_driver = nova.virt.firewall.NoopFirewallDriver
1820:novncproxy_base_url=http://192.168.56.11:6080/vnc_auto.html   # IP and port of novncproxy
1828:vncserver_listen=0.0.0.0            # VNC listens on 0.0.0.0
1832:vncserver_proxyclient_address= $my_ip
1835:vnc_enabled=true                    # enable VNC
1838:vnc_keymap=en-us                    # English keyboard layout
2213:connection=mysql://nova:nova@192.168.56.11/nova
2334:host=192.168.56.11
2546:auth_uri = http://192.168.56.11:5000
2547:auth_url = http://192.168.56.11:35357
2548:auth_plugin = password
2549:project_domain_id = default
2550:user_domain_id = default
2551:project_name = service
2552:username = nova
2553:password = nova
2727:virt_type=kvm                       # use KVM; needs CPU support, check with grep "vmx" /proc/cpuinfo
3807:lock_path=/var/lib/nova/tmp
3970:rabbit_host=192.168.56.11
3974:rabbit_port=5672
3986:rabbit_userid=openstack
3990:rabbit_password=openstack
```
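Edits like the ones above are easy to sanity-check programmatically, since nova.conf is INI-format. A minimal sketch using Python's `configparser`; the fragment below is an illustrative excerpt, not the full file (on a real node you would read `/etc/nova/nova.conf`):

```python
import configparser

# Hypothetical excerpt of nova.conf for illustration only.
sample = """
[DEFAULT]
rpc_backend = rabbit
my_ip = 192.168.56.12
auth_strategy = keystone

[libvirt]
virt_type = kvm
"""

cfg = configparser.ConfigParser()
cfg.read_string(sample)

# The compute node must advertise its own IP, not the controller's.
print(cfg["DEFAULT"]["my_ip"])      # → 192.168.56.12
print(cfg["libvirt"]["virt_type"])  # → kvm
```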
Start libvirt and nova-compute on the compute node:
```bash
[root@linux-node ~]# systemctl enable libvirtd openstack-nova-compute
ln -s '/usr/lib/systemd/system/openstack-nova-compute.service' '/etc/systemd/system/multi-user.target.wants/openstack-nova-compute.service'
[root@linux-node ~]# systemctl start libvirtd openstack-nova-compute
```
- On the controller node, list the registered hosts; the compute entry at the bottom is the newly registered host:
```bash
[root@linux-node1 ~]# openstack host list
+---------------------------+-------------+----------+
| Host Name                 | Service     | Zone     |
+---------------------------+-------------+----------+
| linux-node1.oldboyedu.com | conductor   | internal |
| linux-node1.oldboyedu.com | consoleauth | internal |
| linux-node1.oldboyedu.com | cert        | internal |
| linux-node1.oldboyedu.com | scheduler   | internal |
| linux-node.oldboyedu.com  | compute     | nova     |
+---------------------------+-------------+----------+
```
On the controller node, verify that nova can talk to glance, and that nova can reach keystone:
```bash
[root@linux-node1 ~]# nova image-list
+--------------------------------------+--------+--------+--------+
| ID                                   | Name   | Status | Server |
+--------------------------------------+--------+--------+--------+
| 4b36361f-1946-4026-b0cb-0f7073d48ade | cirros | ACTIVE |        |
+--------------------------------------+--------+--------+--------+
[root@linux-node1 ~]# nova endpoints
WARNING: keystone has no endpoint in! Available endpoints for this service:
+-----------+----------------------------------+
| keystone  | Value                            |
+-----------+----------------------------------+
| id        | 1143dcd58b6848a1890c3f2b9bf101d5 |
| interface | public                           |
| region    | RegionOne                        |
| region_id | RegionOne                        |
| url       | http://192.168.56.11:5000/v2.0   |
+-----------+----------------------------------+
+-----------+----------------------------------+
| keystone  | Value                            |
+-----------+----------------------------------+
| id        | 28283cbf90b5434ba7a8780fac9308df |
| interface | admin                            |
| region    | RegionOne                        |
| region_id | RegionOne                        |
| url       | http://192.168.56.11:35357/v2.0  |
+-----------+----------------------------------+
+-----------+----------------------------------+
| keystone  | Value                            |
+-----------+----------------------------------+
| id        | 496f648007a04e5fbe99b62ed8a76acd |
| interface | internal                         |
| region    | RegionOne                        |
| region_id | RegionOne                        |
| url       | http://192.168.56.11:5000/v2.0   |
+-----------+----------------------------------+
WARNING: nova has no endpoint in! Available endpoints for this service:
+-----------+---------------------------------------------------------------+
| nova      | Value                                                         |
+-----------+---------------------------------------------------------------+
| id        | 1d67f3630a0f413e9d6ff53bcc657fb6                              |
| interface | internal                                                      |
| region    | RegionOne                                                     |
| region_id | RegionOne                                                     |
| url       | http://192.168.56.11:8774/v2/45ec9f72892c404897d0f7d6668d7a53 |
+-----------+---------------------------------------------------------------+
+-----------+---------------------------------------------------------------+
| nova      | Value                                                         |
+-----------+---------------------------------------------------------------+
| id        | 23e9132aeb3a4dcb8689aa1933ad7301                              |
| interface | public                                                        |
| region    | RegionOne                                                     |
| region_id | RegionOne                                                     |
| url       | http://192.168.56.11:8774/v2/45ec9f72892c404897d0f7d6668d7a53 |
+-----------+---------------------------------------------------------------+
+-----------+---------------------------------------------------------------+
| nova      | Value                                                         |
+-----------+---------------------------------------------------------------+
| id        | b7f7c210becc4e54b76bb454966582e4                              |
| interface | admin                                                         |
| region    | RegionOne                                                     |
| region_id | RegionOne                                                     |
| url       | http://192.168.56.11:8774/v2/45ec9f72892c404897d0f7d6668d7a53 |
+-----------+---------------------------------------------------------------+
WARNING: glance has no endpoint in! Available endpoints for this service:
+-----------+----------------------------------+
| glance    | Value                            |
+-----------+----------------------------------+
| id        | 2b55d6db62eb47e9b8993d23e36111e0 |
| interface | admin                            |
| region    | RegionOne                        |
| region_id | RegionOne                        |
| url       | http://192.168.56.11:9292        |
+-----------+----------------------------------+
+-----------+----------------------------------+
| glance    | Value                            |
+-----------+----------------------------------+
| id        | 56cf6132fef14bfaa01c380338f485a6 |
| interface | public                           |
| region    | RegionOne                        |
| region_id | RegionOne                        |
| url       | http://192.168.56.11:9292        |
+-----------+----------------------------------+
+-----------+----------------------------------+
| glance    | Value                            |
+-----------+----------------------------------+
| id        | 8005e8fcd85f4ea281eb9591c294e760 |
| interface | internal                         |
| region    | RegionOne                        |
| region_id | RegionOne                        |
| url       | http://192.168.56.11:9292        |
+-----------+----------------------------------+
```
3.8 Deploying the Neutron service
Register the neutron service:
```bash
[root@linux-node1 ~]# source admin-openrc.sh
[root@linux-node1 ~]# openstack service create --name neutron --description "OpenStack Networking" network
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Networking             |
| enabled     | True                             |
| id          | e698fc8506634b05b250e9fdd8205565 |
| name        | neutron                          |
| type        | network                          |
+-------------+----------------------------------+
[root@linux-node1 ~]# openstack endpoint create --region RegionOne network public http://192.168.56.11:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 3cf4a13ec1b94e66a47e27bfccd95318 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | e698fc8506634b05b250e9fdd8205565 |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://192.168.56.11:9696        |
+--------------+----------------------------------+
[root@linux-node1 ~]# openstack endpoint create --region RegionOne network internal http://192.168.56.11:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 5cd1e54d14f046dda2f7bf45b418f54c |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | e698fc8506634b05b250e9fdd8205565 |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://192.168.56.11:9696        |
+--------------+----------------------------------+
[root@linux-node1 ~]# openstack endpoint create --region RegionOne network admin http://192.168.56.11:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 2c68cb45730d470691e6a3f0656eff03 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | e698fc8506634b05b250e9fdd8205565 |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://192.168.56.11:9696        |
+--------------+----------------------------------+
```
Create the neutron user, add it to the service project, and grant it the admin role:
```bash
[root@linux-node1 config]# openstack user create --domain default --password=neutron neutron
+-----------+----------------------------------+
| Field     | Value                            |
+-----------+----------------------------------+
| domain_id | default                          |
| enabled   | True                             |
| id        | 5143854f317541d68efb8bba8b2539fc |
| name      | neutron                          |
+-----------+----------------------------------+
[root@linux-node1 config]# openstack role add --project service --user neutron admin
```
Edit the neutron configuration file:
```bash
[root@linux-node1 ~]# grep -n "^[a-Z]" /etc/neutron/neutron.conf
20:state_path = /var/lib/neutron
60:core_plugin = ml2                           # the core plugin is ml2
77:service_plugins = router                    # the service plugin is router
92:auth_strategy = keystone
360:notify_nova_on_port_status_changes = True  # notify nova when port status changes
364:notify_nova_on_port_data_changes = True
367:nova_url = http://192.168.56.11:8774/v2
573:rpc_backend=rabbit
717:auth_uri = http://192.168.56.11:5000
718:auth_url = http://192.168.56.11:35357
719:auth_plugin = password
720:project_domain_id = default
721:user_domain_id = default
722:project_name = service
723:username = neutron
724:password = neutron
737:connection = mysql://neutron:neutron@192.168.56.11:3306/neutron
780:auth_url = http://192.168.56.11:35357
781:auth_plugin = password
782:project_domain_id = default
783:user_domain_id = default
784:region_name = RegionOne
785:project_name = service
786:username = nova
787:password = nova
818:lock_path = $state_path/lock
998:rabbit_host = 192.168.56.11
1002:rabbit_port = 5672
1014:rabbit_userid = openstack
1018:rabbit_password = openstack
```
Edit the ml2 configuration file (ml2 is covered in detail later):
```bash
[root@linux-node1 ~]# grep "^[a-Z]" /etc/neutron/plugins/ml2/ml2_conf.ini
type_drivers = flat,vlan,gre,vxlan,geneve      # available type drivers
tenant_network_types = vlan,gre,vxlan,geneve   # tenant network types
mechanism_drivers = openvswitch,linuxbridge    # supported backend drivers
extension_drivers = port_security              # port security
flat_networks = physnet1                       # a single flat network (same network as the host)
enable_ipset = True
```
Edit the linuxbridge configuration file:
```bash
[root@linux-node1 ~]# grep -n "^[a-Z]" /etc/neutron/plugins/ml2/linuxbridge_agent.ini
9:physical_interface_mappings = physnet1:eth0  # map physnet1 to the eth0 NIC
16:enable_vxlan = false                        # disable vxlan
51:prevent_arp_spoofing = True
57:firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
61:enable_security_group = True
```
Edit the dhcp configuration file:
```bash
[root@linux-node1 ~]# grep -n "^[a-Z]" /etc/neutron/dhcp_agent.ini
27:interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
31:dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq   # use Dnsmasq as the DHCP service
52:enable_isolated_metadata = true
```
Edit the metadata_agent.ini configuration file:
```bash
[root@linux-node1 config]# grep -n "^[a-Z]" /etc/neutron/metadata_agent.ini
4:auth_uri = http://192.168.56.11:5000
5:auth_url = http://192.168.56.11:35357
6:auth_region = RegionOne
7:auth_plugin = password
8:project_domain_id = default
9:user_domain_id = default
10:project_name = service
11:username = neutron
12:password = neutron
29:nova_metadata_ip = 192.168.56.11
52:metadata_proxy_shared_secret = neutron
```
On the controller node, add the neutron settings to nova's configuration; adding the following to the neutron section is enough:
```bash
3033:url = http://192.168.56.11:9696
3034:auth_url = http://192.168.56.11:35357
3035:auth_plugin = password
3036:project_domain_id = default
3037:user_domain_id = default
3038:region_name = RegionOne
3039:project_name = service
3040:username = neutron
3041:password = neutron
3043:service_metadata_proxy = True
3044:metadata_proxy_shared_secret = neutron
```
Create the ml2 symlink:
```bash
[root@linux-node1 config]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
```
Sync the neutron database and check the result:
```bash
[root@linux-node1 config]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
MariaDB [(none)]> use neutron;
Database changed
MariaDB [neutron]> show tables;
+-----------------------------------------+|Tables_in_neutron|+-----------------------------------------+| address_scopes || agents || alembic_version || allowedaddresspairs || arista_provisioned_nets || arista_provisioned_tenants || arista_provisioned_vms || brocadenetworks || brocadeports || cisco_csr_identifier_map || cisco_hosting_devices || cisco_ml2_apic_contracts || cisco_ml2_apic_host_links || cisco_ml2_apic_names || cisco_ml2_n1kv_network_bindings || cisco_ml2_n1kv_network_profiles || cisco_ml2_n1kv_policy_profiles || cisco_ml2_n1kv_port_bindings || cisco_ml2_n1kv_profile_bindings || cisco_ml2_n1kv_vlan_allocations || cisco_ml2_n1kv_vxlan_allocations || cisco_ml2_nexus_nve || cisco_ml2_nexusport_bindings || cisco_port_mappings || cisco_router_mappings || consistencyhashes || csnat_l3_agent_bindings || default_security_group || dnsnameservers || dvr_host_macs || embrane_pool_port || externalnetworks || extradhcpopts || firewall_policies || firewall_rules || firewalls || flavors || flavorserviceprofilebindings || floatingips || ha_router_agent_port_bindings || ha_router_networks || ha_router_vrid_allocations || healthmonitors || ikepolicies || ipallocationpools || ipallocations || ipamallocationpools || ipamallocations || ipamavailabilityranges || ipamsubnets || ipavailabilityranges || ipsec_site_connections || ipsecpeercidrs || ipsecpolicies || lsn || lsn_port || maclearningstates || members || meteringlabelrules || meteringlabels || ml2_brocadenetworks || ml2_brocadeports || ml2_dvr_port_bindings || ml2_flat_allocations || ml2_geneve_allocations || ml2_geneve_endpoints || ml2_gre_allocations || ml2_gre_endpoints || ml2_network_segments || ml2_nexus_vxlan_allocations || ml2_nexus_vxlan_mcast_groups || ml2_port_binding_levels || ml2_port_bindings || ml2_ucsm_port_profiles || ml2_vlan_allocations || ml2_vxlan_allocations || ml2_vxlan_endpoints || multi_provider_networks || networkconnections || networkdhcpagentbindings || networkgatewaydevicereferences || networkgatewaydevices || networkgateways || networkqueuemappings || networkrbacs || networks || networksecuritybindings || neutron_nsx_network_mappings || neutron_nsx_port_mappings || neutron_nsx_router_mappings || neutron_nsx_security_group_mappings || nexthops || nsxv_edge_dhcp_static_bindings || nsxv_edge_vnic_bindings || nsxv_firewall_rule_bindings || nsxv_internal_edges || nsxv_internal_networks || nsxv_port_index_mappings || nsxv_port_vnic_mappings || nsxv_router_bindings || nsxv_router_ext_attributes || nsxv_rule_mappings || nsxv_security_group_section_mappings || nsxv_spoofguard_policy_network_mappings || nsxv_tz_network_bindings || nsxv_vdr_dhcp_bindings || nuage_net_partition_router_mapping || nuage_net_partitions || nuage_provider_net_bindings || nuage_subnet_l2dom_mapping || ofcfiltermappings || ofcnetworkmappings || ofcportmappings || ofcroutermappings || ofctenantmappings || packetfilters || poolloadbalanceragentbindings || poolmonitorassociations || pools || poolstatisticss || portbindingports || portinfos || portqueuemappings || ports || portsecuritybindings || providerresourceassociations || qos_bandwidth_limit_rules || qos_network_policy_bindings || qos_policies || qos_port_policy_bindings || qosqueues || quotas || quotausages || reservations || resourcedeltas || router_extra_attributes || routerl3agentbindings || routerports || routerproviders || routerroutes || routerrules || routers || securitygroupportbindings || securitygrouprules || securitygroups || serviceprofiles || sessionpersistences || subnetpoolprefixes || subnetpools || subnetroutes || subnets || tz_network_bindings || vcns_router_bindings || vips || vpnservices |+-----------------------------------------+
155 rows in set (0.00 sec)
```
Restart nova-api, then start the neutron services:
```bash
[root@linux-node1 config]# systemctl restart openstack-nova-api
[root@linux-node1 config]# systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
[root@linux-node1 config]# systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
```
Check the neutron agents:
```bash
[root@linux-node1 config]# neutron agent-list
+--------------------------------------+--------------------+---------------------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host                      | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+---------------------------+-------+----------------+---------------------------+
| 5a9a522f-e2dc-42dc-ab37-b26da0bfe416 | Metadata agent     | linux-node1.oldboyedu.com |       | True           | neutron-metadata-agent    |
| 8ba06bd7-896c-47aa-a733-8a9a9822361c | DHCP agent         | linux-node1.oldboyedu.com |       | True           | neutron-dhcp-agent        |
| f16eef03-4592-4352-8d5e-c08fb91dc983 | Linux bridge agent | linux-node1.oldboyedu.com |       | True           | neutron-linuxbridge-agent |
+--------------------------------------+--------------------+---------------------------+-------+----------------+---------------------------+
```
Now deploy neutron on the compute node. The files can be copied over with scp as-is, with no changes:
```bash
[root@linux-node1 config]# scp /etc/neutron/neutron.conf 192.168.56.12:/etc/neutron/
[root@linux-node1 config]# scp /etc/neutron/plugins/ml2/linuxbridge_agent.ini 192.168.56.12:/etc/neutron/plugins/ml2/
```
Edit the compute node's nova configuration; adding the following to the neutron section is enough:
```bash
3033:url = http://192.168.56.11:9696
3034:auth_url = http://192.168.56.11:35357
3035:auth_plugin = password
3036:project_domain_id = default
3037:user_domain_id = default
3038:region_name = RegionOne
3039:project_name = service
3040:username = neutron
3041:password = neutron
3043:service_metadata_proxy = True
3044:metadata_proxy_shared_secret = neutron
```
Copy over the linuxbridge_agent file (no changes needed) and create the ml2 symlink:
```bash
[root@linux-node1 ~]# scp /etc/neutron/plugins/ml2/linuxbridge_agent.ini 192.168.56.12:/etc/neutron/plugins/ml2/
[root@linux-node ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
```
Restart nova-compute on the compute node:
```bash
[root@linux-node ml2]# systemctl restart openstack-nova-compute.service
```
Start the linuxbridge-agent service on the compute node:
```bash
[root@linux-node ml2]# systemctl enable neutron-linuxbridge-agent.service
ln -s '/usr/lib/systemd/system/neutron-linuxbridge-agent.service' '/etc/systemd/system/multi-user.target.wants/neutron-linuxbridge-agent.service'
[root@linux-node ml2]# systemctl start neutron-linuxbridge-agent.service
```
Check the neutron agents again; four agents (three on the controller node plus one Linux bridge agent on the compute node) means everything is working:
```bash
[root@linux-node1 config]# neutron agent-list
+--------------------------------------+--------------------+---------------------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host                      | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+---------------------------+-------+----------------+---------------------------+
| 5a9a522f-e2dc-42dc-ab37-b26da0bfe416 | Metadata agent     | linux-node1.oldboyedu.com |       | True           | neutron-metadata-agent    |
| 7d81019e-ca3b-4b32-ae32-c3de9452ef9d | Linux bridge agent | linux-node.oldboyedu.com  |       | True           | neutron-linuxbridge-agent |
| 8ba06bd7-896c-47aa-a733-8a9a9822361c | DHCP agent         | linux-node1.oldboyedu.com |       | True           | neutron-dhcp-agent        |
| f16eef03-4592-4352-8d5e-c08fb91dc983 | Linux bridge agent | linux-node1.oldboyedu.com |       | True           | neutron-linuxbridge-agent |
+--------------------------------------+--------------------+---------------------------+-------+----------------+---------------------------+
```
4. Creating a virtual machine
Diagram the network, then create a real bridged network.
Create a single flat network (named flat): the network type is flat, it is shared (--shared), and its provider is physnet1, which is bound to eth0:
```bash
[root@linux-node1 ~]# source admin-openrc.sh
[root@linux-node1 ~]# neutron net-create flat --shared --provider:physical_network physnet1 --provider:network_type flat
Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 7a3c7391-cea7-47eb-a0ef-f7b18010c984 |
| mtu                       | 0                                    |
| name                      | flat                                 |
| port_security_enabled     | True                                 |
| provider:network_type     | flat                                 |
| provider:physical_network | physnet1                             |
| provider:segmentation_id  |                                      |
| router:external           | False                                |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | 45ec9f72892c404897d0f7d6668d7a53     |
+---------------------------+--------------------------------------+
```
Create a subnet, named flat-subnet, under the network created above, and set its DNS server and gateway:
```bash
[root@linux-node1 ~]# neutron subnet-create flat 192.168.56.0/24 --name flat-subnet --allocation-pool start=192.168.56.100,end=192.168.56.200 --dns-nameserver 192.168.56.2 --gateway 192.168.56.2
Created a new subnet:
+-------------------+------------------------------------------------------+
| Field             | Value                                                |
+-------------------+------------------------------------------------------+
| allocation_pools  | {"start": "192.168.56.100", "end": "192.168.56.200"} |
| cidr              | 192.168.56.0/24                                      |
| dns_nameservers   | 192.168.56.2                                         |
| enable_dhcp       | True                                                 |
| gateway_ip        | 192.168.56.2                                         |
| host_routes       |                                                      |
| id                | 6841c8ae-78f6-44e2-ab74-7411108574c2                 |
| ip_version        | 4                                                    |
| ipv6_address_mode |                                                      |
| ipv6_ra_mode      |                                                      |
| name              | flat-subnet                                          |
| network_id        | 7a3c7391-cea7-47eb-a0ef-f7b18010c984                 |
| subnetpool_id     |                                                      |
| tenant_id         | 45ec9f72892c404897d0f7d6668d7a53                     |
+-------------------+------------------------------------------------------+
```
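The subnet above carves a DHCP allocation pool (.100 through .200) out of 192.168.56.0/24. Python's `ipaddress` module can verify the pool boundaries and count the assignable addresses; a small sketch using the values from the command above:

```python
import ipaddress

cidr = ipaddress.ip_network("192.168.56.0/24")
start = ipaddress.ip_address("192.168.56.100")
end = ipaddress.ip_address("192.168.56.200")

# Both pool boundaries must fall inside the subnet's CIDR.
assert start in cidr and end in cidr

# Number of addresses DHCP can hand out from this pool (inclusive).
pool_size = int(end) - int(start) + 1
print(pool_size)  # → 101
```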
List the network and subnet just created:
```bash
[root@linux-node1 ~]# neutron net-list
+--------------------------------------+------+------------------------------------------------------+
| id                                   | name | subnets                                              |
+--------------------------------------+------+------------------------------------------------------+
| 7a3c7391-cea7-47eb-a0ef-f7b18010c984 | flat | 6841c8ae-78f6-44e2-ab74-7411108574c2 192.168.56.0/24 |
+--------------------------------------+------+------------------------------------------------------+
```
Note: before creating a VM, make sure no other DHCP server is running on this network, since a single network cannot have more than one DHCP server.
Now create the VM. To be able to log in to it, first generate a key pair and add the public key to OpenStack:
```bash
[root@linux-node1 ~]# source demo-openrc.sh
[root@linux-node1 ~]# ssh-keygen -q -N ""
Enter file in which to save the key (/root/.ssh/id_rsa):
[root@linux-node1 ~]# nova keypair-add --pub-key .ssh/id_rsa.pub mykey
[root@linux-node1 ~]# nova keypair-list
+-------+-------------------------------------------------+
| Name  | Fingerprint                                     |
+-------+-------------------------------------------------+
| mykey | 9f:25:57:44:45:a3:6d:0d:4b:e7:ca:3a:9c:67:32:6f |
+-------+-------------------------------------------------+
[root@linux-node1 ~]# ls .ssh/
id_rsa  id_rsa.pub  known_hosts
```
Add rules to the default security group to allow ICMP and TCP port 22:
```bash
[root@linux-node1 ~]# nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| icmp        | -1        | -1      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+
[root@linux-node1 ~]# nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp         | 22        | 22      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+
```
Before creating the VM, confirm the flavor (the equivalent of an EC2 instance type), the image (an EC2 AMI), the network (an EC2 VPC), and the security group (an EC2 sg):
```bash
[root@linux-node1 ~]# nova flavor-list
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
[root@linux-node1 ~]# nova image-list
+--------------------------------------+--------+--------+--------+
| ID                                   | Name   | Status | Server |
+--------------------------------------+--------+--------+--------+
| 4b36361f-1946-4026-b0cb-0f7073d48ade | cirros | ACTIVE |        |
+--------------------------------------+--------+--------+--------+
[root@linux-node1 ~]# neutron net-list
+--------------------------------------+------+------------------------------------------------------+
| id                                   | name | subnets                                              |
+--------------------------------------+------+------------------------------------------------------+
| 7a3c7391-cea7-47eb-a0ef-f7b18010c984 | flat | 6841c8ae-78f6-44e2-ab74-7411108574c2 192.168.56.0/24 |
+--------------------------------------+------+------------------------------------------------------+
[root@linux-node1 ~]# nova secgroup-list
+--------------------------------------+---------+------------------------+
| Id                                   | Name    | Description            |
+--------------------------------------+---------+------------------------+
| 2946cecd-0933-45d0-a6e2-0606abe418ee | default | Default security group |
+--------------------------------------+---------+------------------------+
```
Create a VM: flavor m1.tiny, image cirros (downloaded earlier with wget), the network id from neutron net-list, the default security group, the key pair just created, and the name hello-instance:
```bash
[root@linux-node1 ~]# nova boot --flavor m1.tiny --image cirros --nic net-id=7a3c7391-cea7-47eb-a0ef-f7b18010c984 --security-group default --key-name mykey hello-instance
+--------------------------------------+-----------------------------------------------+
| Property                             | Value                                         |
+--------------------------------------+-----------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                        |
| OS-EXT-AZ:availability_zone          |                                               |
| OS-EXT-STS:power_state               | 0                                             |
| OS-EXT-STS:task_state                | scheduling                                    |
| OS-EXT-STS:vm_state                  | building                                      |
| OS-SRV-USG:launched_at               | -                                             |
| OS-SRV-USG:terminated_at             | -                                             |
| accessIPv4                           |                                               |
| accessIPv6                           |                                               |
| adminPass                            | JPp9rX5UBYcW                                  |
| config_drive                         |                                               |
| created                              | 2015-12-17T02:03:38Z                          |
| flavor                               | m1.tiny (1)                                   |
| hostId                               |                                               |
| id                                   | bb71867c-4078-4984-bf5a-f10bd84ba72b          |
| image                                | cirros (4b36361f-1946-4026-b0cb-0f7073d48ade) |
| key_name                             | mykey                                         |
| metadata                             | {}                                            |
| name                                 | hello-instance                                |
| os-extended-volumes:volumes_attached | []                                            |
| progress                             | 0                                             |
| security_groups                      | default                                       |
| status                               | BUILD                                         |
| tenant_id                            | 4a213e53e4814685859679ff1dcb559f              |
| updated                              | 2015-12-17T02:03:41Z                          |
| user_id                              | eb29c091e0ec490cbfa5d11dc2388766              |
+--------------------------------------+-----------------------------------------------+
```
Check the status of the new VM:
```bash
[root@linux-node1 ~]# nova list
+--------------------------------------+----------------+--------+------------+-------------+---------------------+
| ID                                   | Name           | Status | Task State | Power State | Networks            |
+--------------------------------------+----------------+--------+------------+-------------+---------------------+
| bb71867c-4078-4984-bf5a-f10bd84ba72b | hello-instance | ACTIVE | -          | Running     | flat=192.168.56.101 |
+--------------------------------------+----------------+--------+------------+-------------+---------------------+
```
SSH into the new VM:
```bash
[root@linux-node1 ~]# ssh cirros@192.168.56.101
```
Generate a VNC URL so the VM can also be reached from a web browser:
```bash
[root@linux-node1 ~]# nova get-vnc-console hello-instance novnc
+-------+------------------------------------------------------------------------------------+
| Type  | Url                                                                                |
+-------+------------------------------------------------------------------------------------+
| novnc | http://192.168.56.11:6080/vnc_auto.html?token=1af18bea-5a64-490e-8251-29c8bed36125 |
+------
```

5. A deeper look at Neutron

5.1 VM NICs and network bridges
```bash
[root@linux-node1 ~]# ifconfig
brq7a3c7391-ce: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.56.11  netmask 255.255.255.0  broadcast 192.168.56.255
        inet6 fe80::a812:a1ff:fe7b:b829  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:34:98:f2  txqueuelen 0  (Ethernet)
        RX packets 60177  bytes 17278837 (16.4 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 52815  bytes 14671641 (13.9 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::20c:29ff:fe34:98f2  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:34:98:f2  txqueuelen 1000  (Ethernet)
        RX packets 67008  bytes 19169606 (18.2 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 56855  bytes 17779848 (16.9 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 432770  bytes 161810178 (154.3 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 432770  bytes 161810178 (154.3 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
tap34ea740c-a6: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::6c67:5fff:fe56:58a4  prefixlen 64  scopeid 0x20<link>
        ether 6e:67:5f:56:58:a4  txqueuelen 1000  (Ethernet)
        RX packets 75  bytes 8377 (8.1 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1495  bytes 139421 (136.1 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
```
Check the bridge state:
[root@linux-node1 ~]# brctl show
bridge name       bridge id           STP enabled   interfaces
brq7a3c7391-ce    8000.000c293498f2   no            eth0
                                                    tap34ea740c-a6
brq7a3c7391-ce (the bridge) can be thought of as a small switch: every device attached to it can reach eth0 at layer 2 (the data link layer). tap34ea740c-a6 is the instance's virtual NIC attached to that bridge, which is what gives the VM its connectivity.
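The bridge behaves like a learning layer-2 switch: it records which port each source MAC was last seen on and forwards accordingly, flooding frames for unknown destinations. A toy sketch of that logic (purely illustrative; the port names echo the output above, the MAC addresses are just examples):

```python
# Minimal sketch of the MAC-learning behaviour of a Linux bridge
# such as brq7a3c7391-ce. Not real kernel code.

class LearningBridge:
    def __init__(self, ports):
        self.ports = ports          # e.g. ["eth0", "tap34ea740c-a6"]
        self.fdb = {}               # forwarding database: MAC -> port

    def receive(self, in_port, src_mac, dst_mac):
        """Learn the source port, then decide where the frame goes."""
        self.fdb[src_mac] = in_port
        if dst_mac in self.fdb:
            return [self.fdb[dst_mac]]                      # known: one port
        return [p for p in self.ports if p != in_port]      # unknown: flood

br = LearningBridge(["eth0", "tap34ea740c-a6"])
# VM (behind the tap device) sends to a not-yet-learned MAC: flooded to eth0
print(br.receive("tap34ea740c-a6", "fa:16:3e:93:01:0e", "00:0c:29:34:98:f2"))
# The reply arrives on eth0; the VM's MAC is now known, so no flooding
print(br.receive("eth0", "00:0c:29:34:98:f2", "fa:16:3e:93:01:0e"))
```

The second call returns only the tap port, which is exactly why a bridge scales better than a hub: after learning, traffic stops being broadcast.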
5.2 Network Types for Different Scenarios, and OpenStack Network Layering
5.2.1 OpenStack network categories

5.2.2 OpenStack network layering
Networking here is of course built on the OSI seven-layer model, which will not be repeated; this section only covers how Neutron layers its own abstractions.
- Network: in a physical environment we connect machines with switches or hubs to form a network; in the Neutron world, a network likewise connects multiple cloud instances.
- Subnet: in a physical environment a network can be carved into several logical subnets; in Neutron, a subnet likewise belongs to a network.
- Port: physically, every network or subnet has many ports (switch ports, for example) for machines to plug into; in Neutron a port belongs to a subnet, and a cloud instance's NIC maps to a port.
- Router: physically, traffic between different networks or logical subnets must pass through a router; a Neutron router plays the same role, connecting different networks or subnets.
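The network → subnet → port hierarchy above can be sketched as a toy object model. This is illustrative only, not the real Neutron API classes; the names and CIDR are made-up examples matching the lab network:

```python
# Toy model of Neutron's containment hierarchy:
# a Network holds Subnets, a Subnet holds Ports, a VM NIC maps to a Port.
import ipaddress
from dataclasses import dataclass, field

@dataclass
class Port:
    name: str
    ip: str                      # address allocated from the subnet

@dataclass
class Subnet:
    cidr: str
    ports: list = field(default_factory=list)

    def create_port(self, name, ip):
        # a port's address must fall inside its subnet's CIDR
        assert ipaddress.ip_address(ip) in ipaddress.ip_network(self.cidr)
        port = Port(name, ip)
        self.ports.append(port)
        return port

@dataclass
class Network:
    name: str
    subnets: list = field(default_factory=list)

net = Network("flat")
sub = Subnet("192.168.56.0/24")
net.subnets.append(sub)
vm_nic = sub.create_port("tap34ea740c-a6", "192.168.56.101")
print(vm_nic.ip)   # 192.168.56.101
```

The containment check in `create_port` mirrors the real constraint: Neutron will not give a port an address outside its subnet.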
5.3 Four common Neutron network models
- Multiple flat networks
![]()
- Mixed flat and private network
![]()
- Provider router with private networks
![]()
- Per-tenant routers with private networks
![]()
5.4 The main Neutron service components, illustrated

- ML2 (The Modular Layer 2): a plugin that acts as a framework supporting several different layer-2 network technologies at once. Working as a coordinator, ML2 drives the linuxbridge, openvswitch and various commercial plugins, so they can even be used side by side.
- DHCP-agent: assigns IP addresses to instances. An address pool is created before any instance exists, precisely so that addresses can be handed out at boot. The relevant configuration:
  27 interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver    (the dhcp-agent must be configured with the interface_driver matching the plugin in use)
  31 dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
  When an instance boots, allocating and configuring its IP involves a process that records the address in the dnsmasq config, then starts or reloads dnsmasq. Normally OpenStack runs one neutron-dhcp-agent per network, which spawns a single dnsmasq, so even a large network (all of its subnets included) is served by one dnsmasq. In theory, and in practical lab tests, dnsmasq should handle about 1000 DHCP requests per second.
  52 enable_isolated_metadata = true    (enable isolated metadata; explained later)
- L3-agent: neutron-l3-agent, which provides layer-3 forwarding so instances can reach external networks; it is also deployed on the network node.
- LBaaS: Load Balancing as a Service; explained later.
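Conceptually, what the dhcp-agent plus dnsmasq provide is lease management over a per-subnet address pool. A toy allocator showing the idea (this is not Neutron code; the pool boundaries are made-up examples):

```python
# Toy DHCP-style allocator: hand out free addresses from a subnet's pool
# and remember the MAC -> IP lease so renewals get the same address.
import ipaddress

class DhcpPool:
    def __init__(self, cidr, first, last):
        net = ipaddress.ip_network(cidr)
        # keep only host addresses whose offset lies in [first, last]
        self.free = [str(ip) for ip in net.hosts()
                     if first <= int(ip) - int(net.network_address) <= last]
        self.leases = {}   # MAC -> IP

    def request(self, mac):
        if mac in self.leases:          # renewal: same MAC, same lease
            return self.leases[mac]
        ip = self.free.pop(0)           # allocate the next free address
        self.leases[mac] = ip
        return ip

pool = DhcpPool("192.168.56.0/24", first=101, last=110)
print(pool.request("fa:16:3e:93:01:0e"))   # 192.168.56.101
print(pool.request("fa:16:3e:93:01:0e"))   # same MAC, same lease
```

This matches what is observed in the lab: the first instance on the flat network gets 192.168.56.101, and rebooting it keeps the same address.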
六、What Is a Virtual Machine, Really?
To the host, a virtual machine is just a process. libvirt drives KVM to manage the VM, and the virsh tool can be used to manage it as well.
Looking at what an instance actually consists of:
Change to the default instance storage path:
[root@linux-node ~]# cd /var/lib/nova/instances/
[root@linux-node instances]# ls
_base  bb71867c-4078-4984-bf5a-f10bd84ba72b  compute_nodes  locks
- The directory bb71867c-4078-4984-bf5a-f10bd84ba72b is named after the instance's ID (see nova list); its contents are:
- console.log: console output is captured in this file
- disk: the virtual disk. Its backing file is /var/lib/nova/instances/_base/96bfe896f3aaff3091e7e222df51f2545, and it uses copy-on-write: the base image is that backing file, and only changed blocks are written into the disk file
[root@linux-node bb71867c-4078-4984-bf5a-f10bd84ba72b]# file disk
disk: QEMU QCOW Image (v3), has backing file (path /var/lib/nova/instances/_base/96bfe896f3aaff3091e7e222df51f2545), 1073741824 bytes
[root@linux-node bb71867c-4078-4984-bf5a-f10bd84ba72b]# qemu-img info disk
image: disk
file format: qcow2
virtual size: 1.0G (1073741824 bytes)
disk size: 2.3M
cluster_size: 65536
backing file: /var/lib/nova/instances/_base/96bfe896f3aaff3091e7e222df51f254516fee9c
Format specific information:
    compat: 1.1
    lazy refcounts: false
- disk.info: details about the disk file
[root@linux-node bb71867c-4078-4984-bf5a-f10bd84ba72b]# qemu-img info disk.info
image: disk.info
file format: raw
virtual size: 512 (512 bytes)
disk size: 4.0K
- libvirt.xml: the XML that libvirt generates automatically. Don't bother editing it; it is regenerated dynamically every time the instance starts, so changes are simply thrown away.
- compute_nodes records the hostname and a timestamp
[root@linux-node instances]# cat compute_nodes
{"linux-node.oldboyedu.com": 1450560590.116144}
- The locks directory: analogous to the lock files used when writing shell scripts
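The disk/backing-file relationship described above is plain copy-on-write. A toy sketch of the semantics (dict-based and purely illustrative; real qcow2 works on 64 KiB clusters, as the qemu-img output shows):

```python
# Sketch of qcow2-style copy-on-write: reads fall through to the shared
# backing file unless the overlay holds a newer copy; writes only ever
# touch the overlay. Block numbers and contents are made-up examples.

class CowDisk:
    def __init__(self, backing):
        self.backing = backing   # shared base image (treated as read-only)
        self.overlay = {}        # per-VM delta: starts empty, like "disk"

    def read(self, block):
        return self.overlay.get(block, self.backing.get(block))

    def write(self, block, data):
        self.overlay[block] = data   # the base image is never modified

base = {0: "bootloader", 1: "rootfs"}
vm1, vm2 = CowDisk(base), CowDisk(base)
vm1.write(1, "rootfs+changes")
print(vm1.read(1))   # rootfs+changes  (served from vm1's overlay)
print(vm2.read(1))   # rootfs          (still the shared base image)
```

This is why the disk file is only 2.3M while its virtual size is 1.0G: many VMs share one base image in _base, and each instance stores just its own changes.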
Understanding metadata
- metadata
When creating an instance you can set or override its default attributes, such as the hostname, key pair and IP address.
On a newly created instance, query the metadata service; all of the following are populated from metadata:
$ curl http://169.254.169.254/2009-04-04/meta-data
ami-id
ami-launch-index
ami-manifest-path
block-device-mapping/
hostname
instance-action
instance-id
instance-type
local-hostname
local-ipv4
placement/
public-hostname
public-ipv4
public-keys/
reservation-id
security-groups
- Check the routing table:
$ ip ro li
default via 192.168.56.2 dev eth0
169.254.169.254 via 192.168.56.100 dev eth0
192.168.56.0/24 dev eth0  src 192.168.56.101
- On the controller node, list the network namespaces:
[root@linux-node1 ~]# ip netns li
qdhcp-7a3c7391-cea7-47eb-a0ef-f7b18010c984
- Inspect that namespace's interfaces (run ip ad li inside it) and its listening ports:
[root@linux-node1 ~]# ip netns exec qdhcp-7a3c7391-cea7-47eb-a0ef-f7b18010c984 ip ad li
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ns-34ea740c-a6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether fa:16:3e:93:01:0e brd ff:ff:ff:ff:ff:ff
    inet 192.168.56.100/24 brd 192.168.56.255 scope global ns-34ea740c-a6
       valid_lft forever preferred_lft forever
    inet 169.254.169.254/16 brd 169.254.255.255 scope global ns-34ea740c-a6
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe93:10e/64 scope link
       valid_lft forever preferred_lft forever
[root@linux-node1 ~]# ip netns exec qdhcp-7a3c7391-cea7-47eb-a0ef-f7b18010c984 netstat -lntup
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address     State    PID/Program name
tcp        0      0 0.0.0.0:80              0.0.0.0:*           LISTEN   3875/python2
tcp        0      0 192.168.56.100:53       0.0.0.0:*           LISTEN   3885/dnsmasq
tcp        0      0 169.254.169.254:53      0.0.0.0:*           LISTEN   3885/dnsmasq
tcp6       0      0 fe80::f816:3eff:fe93:53 :::*                LISTEN   3885/dnsmasq
udp        0      0 192.168.56.100:53       0.0.0.0:*                    3885/dnsmasq
udp        0      0 169.254.169.254:53      0.0.0.0:*                    3885/dnsmasq
udp        0      0 0.0.0.0:67              0.0.0.0:*                    3885/dnsmasq
udp6       0      0 fe80::f816:3eff:fe93:53 :::*                         3885/dnsmasq
- Summary
The namespace carries the DHCP-assigned address 192.168.56.100 plus a second address, 169.254.169.254, on which an HTTP service listens (it serves DNS and more as well). The namespace provides this because service_metadata_proxy = True is enabled in neutron's dhcp-agent configuration.
So the instance's route to 169.254.169.254 (visible with ip ro li) is pushed via DHCP from that namespace, and the key pair is delivered the same way: at instance creation a curl script written into /etc/rc.local fetches the key pair into the .ssh directory and renames it. Other attributes work similarly.
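At its core the metadata service is just an HTTP server answering well-known paths. A minimal stand-in, served on localhost for demonstration (the keys and values are made-up examples; the real service binds 169.254.169.254 inside the dhcp namespace and serves many more keys):

```python
# Toy metadata service answering /2009-04-04/meta-data/<key>.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

METADATA = {"hostname": "hello-instance", "local-ipv4": "192.168.56.101"}

class MetaHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        prefix = "/2009-04-04/meta-data/"
        key = self.path[len(prefix):] if self.path.startswith(prefix) else None
        if key in METADATA:
            self.send_response(200)
            self.end_headers()
            self.wfile.write(METADATA[key].encode())
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):   # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), MetaHandler)   # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()
url = "http://127.0.0.1:%d/2009-04-04/meta-data/hostname" % server.server_port
print(urllib.request.urlopen(url).read().decode())   # hello-instance
server.shutdown()
```

An instance's cloud-init (or the rc.local curl script mentioned above) does essentially this query against 169.254.169.254 to learn its hostname, keys and addresses.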
七、Dashboard Walkthrough
7.1 Edit the dashboard configuration file
[root@linux-node1 ~]# vim /etc/openstack-dashboard/local_settings
 29 ALLOWED_HOSTS = ['*', 'localhost']            which hosts may access the dashboard; the value is a list
138 OPENSTACK_HOST = "192.168.56.11"              point this at the keystone host
140 OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"      the role created in keystone earlier
108 CACHES = {
109     'default': {
110         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
111         'LOCATION': '192.168.56.11:11211',
112     }
113 }                                             enable the memcached backend
320 TIME_ZONE = "Asia/Shanghai"                   set the time zone
Restart Apache:
[root@linux-node1 ~]# systemctl restart httpd
7.2 Using the dashboard
7.2.1 Log in to the dashboard
Log in with keystone's demo user (only with the admin role can you see all instances)

7.2.2 Delete the previous VM and create a new one
Get familiar with the operations available for each instance state

- Associate floating IP: an EIP
- Attach/detach interface: attach or detach a network interface
- Edit instance: modify the instance's parameters
- Edit security groups: modify the security group's parameters
- Console: the noVNC console
- View log: view console.log
- Stop instance: stop the VM
- Suspend instance: save the instance's state
- Shelve instance: set the instance aside for later
- Resize instance: change its flavor (type)
- Lock/unlock instance: lock or unlock the instance
- Soft reboot instance: a graceful restart (stop, then start)
- Hard reboot instance: like a power-cycle
- Shut off instance: shut the instance down
- Rebuild instance: build an identical instance from scratch
- Terminate instance: delete the instance
7.2.3 Launch an instance

八、Cinder
8.1 The three broad categories of storage
Block storage: hard disks, DAS disk arrays, SAN storage
File storage: NFS, GlusterFS, Ceph (a PB-scale distributed file system), MooseFS (drawback: if the metadata is lost, the VMs on it are gone)
Object storage: Swift, S3
8.2 Deploying cinder on the controller node
Install cinder:
[root@linux-node1 ~]# yum install openstack-cinder python-cinderclient -y
Edit the cinder configuration file; the modified result:
[root@linux-node1 ~]# grep -n "^[a-Z]" /etc/cinder/cinder.conf
421:glance_host = 192.168.56.11              the host running the glance service
536:auth_strategy = keystone
2294:rpc_backend = rabbit                    use the rabbitmq message queue
2516:connection = mysql://cinder:cinder@192.168.56.11/cinder    the mysql address
2641:auth_uri = http://192.168.56.11:5000
2642:auth_url = http://192.168.56.11:35357
2643:auth_plugin = password
2644:project_domain_id = default
2645:user_domain_id = default
2646:project_name = service
2647:username = cinder
2648:password = cinder
2873:lock_path = /var/lib/cinder/tmp         the lock path
3172:rabbit_host = 192.168.56.11             rabbitmq host
3176:rabbit_port = 5672                      rabbitmq port
3188:rabbit_userid = openstack               rabbitmq user
3192:rabbit_password = openstack             rabbitmq password
Edit the nova configuration file:
[root@linux-node1 ~]# vim /etc/nova/nova.conf
2145 os_region_name = RegionOne      tell nova to use cinder
Sync the database:
[root@linux-node1 ~]# su -s /bin/sh -c "cinder-manage db sync" cinder
Verify the import by listing the tables:
[root@linux-node1 ~]# mysql -ucinder -pcinder -e "use cinder;show tables;"
+----------------------------+
| Tables_in_cinder           |
+----------------------------+
| backups                    |
| cgsnapshots                |
| consistencygroups          |
| driver_initiator_data      |
| encryption                 |
| image_volume_cache_entries |
| iscsi_targets              |
| migrate_version            |
| quality_of_service_specs   |
| quota_classes              |
| quota_usages               |
| quotas                     |
| reservations               |
| services                   |
| snapshot_metadata          |
| snapshots                  |
| transfers                  |
| volume_admin_metadata      |
| volume_attachment          |
| volume_glance_metadata     |
| volume_metadata            |
| volume_type_extra_specs    |
| volume_type_projects       |
| volume_types               |
| volumes                    |
+----------------------------+
Create a cinder user, add it to the service project, and grant it the admin role:
[root@linux-node1 ~]# openstack user create --domain default --password-prompt cinder
User Password:
Repeat User Password:      (the password set on line 2648 of the config file)
+-----------+----------------------------------+
| Field     | Value                            |
+-----------+----------------------------------+
| domain_id | default                          |
| enabled   | True                             |
| id        | 096964bd44124624ba7da2e13a4ebd92 |
| name      | cinder                           |
+-----------+----------------------------------+
[root@linux-node1 ~]# openstack role add --project service --user cinder admin
Restart nova-api, then enable and start the cinder services:
[root@linux-node1 ~]# systemctl restart openstack-nova-api.service
[root@linux-node1 ~]# systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-cinder-api.service to /usr/lib/systemd/system/openstack-cinder-api.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-cinder-scheduler.service to /usr/lib/systemd/system/openstack-cinder-scheduler.service.
[root@linux-node1 ~]# systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
Create the service entries (both v1 and v2):
[root@linux-node1 ~]# openstack service create --name cinder --description "OpenStack Block Storage" volume
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Block Storage          |
| enabled     | True                             |
| id          | 57d5d78509dd4ed8b9878d312b8be26d |
| name        | cinder                           |
| type        | volume                           |
+-------------+----------------------------------+
[root@linux-node1 ~]# openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Block Storage          |
| enabled     | True                             |
| id          | bac129a7b6494e73947e83e56145c1c4 |
| name        | cinderv2                         |
| type        | volumev2                         |
+-------------+----------------------------------+
Create endpoints for the three interfaces (admin, internal, public) for both v1 and v2:
[root@linux-node1 ~]# openstack endpoint create --region RegionOne volume public http://192.168.56.11:8776/v1/%\(tenant_id\)s
+--------------+--------------------------------------------+
| Field        | Value                                      |
+--------------+--------------------------------------------+
| enabled      | True                                       |
| id           | 151da63772d7444297c3e0321264eabe           |
| interface    | public                                     |
| region       | RegionOne                                  |
| region_id    | RegionOne                                  |
| service_id   | 57d5d78509dd4ed8b9878d312b8be26d           |
| service_name | cinder                                     |
| service_type | volume                                     |
| url          | http://192.168.56.11:8776/v1/%(tenant_id)s |
+--------------+--------------------------------------------+
[root@linux-node1 ~]# openstack endpoint create --region RegionOne volume internal http://192.168.56.11:8776/v1/%\(tenant_id\)s
+--------------+--------------------------------------------+
| Field        | Value                                      |
+--------------+--------------------------------------------+
| enabled      | True                                       |
| id           | 67b5a787d6784184a296a46e46c66d7a           |
| interface    | internal                                   |
| region       | RegionOne                                  |
| region_id    | RegionOne                                  |
| service_id   | 57d5d78509dd4ed8b9878d312b8be26d           |
| service_name | cinder                                     |
| service_type | volume                                     |
| url          | http://192.168.56.11:8776/v1/%(tenant_id)s |
+--------------+--------------------------------------------+
[root@linux-node1 ~]# openstack endpoint create --region RegionOne volume admin http://192.168.56.11:8776/v1/%\(tenant_id\)s
+--------------+--------------------------------------------+
| Field        | Value                                      |
+--------------+--------------------------------------------+
| enabled      | True                                       |
| id           | 719d5f3b1b034d7fb4fe577ff8f0f9ff           |
| interface    | admin                                      |
| region       | RegionOne                                  |
| region_id    | RegionOne                                  |
| service_id   | 57d5d78509dd4ed8b9878d312b8be26d           |
| service_name | cinder                                     |
| service_type | volume                                     |
| url          | http://192.168.56.11:8776/v1/%(tenant_id)s |
+--------------+--------------------------------------------+
[root@linux-node1 ~]# openstack endpoint create --region RegionOne volumev2 public http://192.168.56.11:8776/v2/%\(tenant_id\)s
+--------------+--------------------------------------------+
| Field        | Value                                      |
+--------------+--------------------------------------------+
| enabled      | True                                       |
| id           | 140ea418e1c842c8ba2669d0eda47577           |
| interface    | public                                     |
| region       | RegionOne                                  |
| region_id    | RegionOne                                  |
| service_id   | bac129a7b6494e73947e83e56145c1c4           |
| service_name | cinderv2                                   |
| service_type | volumev2                                   |
| url          | http://192.168.56.11:8776/v2/%(tenant_id)s |
+--------------+--------------------------------------------+
[root@linux-node1 ~]# openstack endpoint create --region RegionOne volumev2 internal http://192.168.56.11:8776/v2/%\(tenant_id\)s
+--------------+--------------------------------------------+
| Field        | Value                                      |
+--------------+--------------------------------------------+
| enabled      | True                                       |
| id           | e1871461053449a0a9ed1dd93e2de002           |
| interface    | internal                                   |
| region       | RegionOne                                  |
| region_id    | RegionOne                                  |
| service_id   | bac129a7b6494e73947e83e56145c1c4           |
| service_name | cinderv2                                   |
| service_type | volumev2                                   |
| url          | http://192.168.56.11:8776/v2/%(tenant_id)s |
+--------------+--------------------------------------------+
[root@linux-node1 ~]# openstack endpoint create --region RegionOne volumev2 admin http://192.168.56.11:8776/v2/%\(tenant_id\)s
+--------------+--------------------------------------------+
| Field        | Value                                      |
+--------------+--------------------------------------------+
| enabled      | True                                       |
| id           | 1b4f7495b4c5423fa8d541e6d917d3b9           |
| interface    | admin                                      |
| region       | RegionOne                                  |
| region_id    | RegionOne                                  |
| service_id   | bac129a7b6494e73947e83e56145c1c4           |
| service_name | cinderv2                                   |
| service_type | volumev2                                   |
| url          | http://192.168.56.11:8776/v2/%(tenant_id)s |
+--------------+--------------------------------------------+
8.3 Deploying the cinder storage node (here the nova compute node doubles as the storage node)
In this article cinder's backend is iSCSI (playing the role that KVM plays for nova-compute). iSCSI sits on top of LVM: within a predefined VG, each cloud disk created adds an LV, which is then exported via iSCSI.
Add a disk to the storage node.
Check that the disk is visible:
[root@linux-node ~]# fdisk -l

Disk /dev/sda: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000bd159

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     2099199     1048576   83  Linux
/dev/sda2         2099200    35653631    16777216   82  Linux swap / Solaris
/dev/sda3        35653632   104857599    34601984   83  Linux

Disk /dev/sdb: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Create a PV and a VG (named cinder-volumes):
[root@linux-node ~]# pvcreate /dev/sdb
  Physical volume "/dev/sdb" successfully created
[root@linux-node ~]# vgcreate cinder-volumes /dev/sdb
  Volume group "cinder-volumes" successfully created
Add a filter to the LVM configuration so that only /dev/sdb is scanned and everything else is rejected:
[root@linux-node ~]# vim /etc/lvm/lvm.conf
107 filter = [ "a/sdb/", "r/.*/" ]
Install the storage-node packages:
[root@linux-node ~]# yum install openstack-cinder targetcli python-oslo-policy -y
Edit the storage node's configuration file; here we simply copy the controller's file over:
[root@linux-node1 ~]# scp /etc/cinder/cinder.conf 192.168.56.12:/etc/cinder/cinder.conf
[root@linux-node ~]# grep -n "^[a-Z]" /etc/cinder/cinder.conf
421:glance_host = 192.168.56.11
536:auth_strategy = keystone
540:enabled_backends = lvm          the backend name; must match the [lvm] section below (any name would work)
2294:rpc_backend = rabbit
2516:connection = mysql://cinder:cinder@192.168.56.11/cinder
2641:auth_uri = http://192.168.56.11:5000
2642:auth_url = http://192.168.56.11:35357
2643:auth_plugin = password
2644:project_domain_id = default
2645:user_domain_id = default
2646:project_name = service
2647:username = cinder
2648:password = cinder
2873:lock_path = /var/lib/cinder/tmp
3172:rabbit_host = 192.168.56.11
3176:rabbit_port = 5672
3188:rabbit_userid = openstack
3192:rabbit_password = openstack
3414:[lvm]                          appended at the end of the file (this line would not actually match the grep); referenced by enabled_backends on line 540
3415:volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver    use the LVM backend driver
3416:volume_group = cinder-volumes      the VG created just now
3417:iscsi_protocol = iscsi             export volumes over the iSCSI protocol
3418:iscsi_helper = lioadm
Start cinder on the storage node:
[root@linux-node ~]# systemctl enable openstack-cinder-volume.service target.service
ln -s '/usr/lib/systemd/system/openstack-cinder-volume.service' '/etc/systemd/system/multi-user.target.wants/openstack-cinder-volume.service'
ln -s '/usr/lib/systemd/system/target.service' '/etc/systemd/system/multi-user.target.wants/target.service'
[root@linux-node ~]# systemctl start openstack-cinder-volume.service target.service
Check the volume service status (if the hosts are themselves VMs with unsynchronized clocks, this will cause problems):
[root@linux-node1 ~]# cinder service-list
+------------------+------------------------------+------+---------+-------+----------------------------+-----------------+
|      Binary      |             Host             | Zone |  Status | State |         Updated_at         | Disabled Reason |
+------------------+------------------------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler |  linux-node1.oldboyedu.com   | nova | enabled |   up  | 2015-12-25T03:17:31.000000 |        -        |
|  cinder-volume   | linux-node.oldboyedu.com@lvm | nova | enabled |   up  | 2015-12-25T03:17:29.000000 |        -        |
+------------------+------------------------------+------+---------+-------+----------------------------+-----------------+
Create a cloud disk

Attach the cloud disk to an instance; it shows up in the instance's detail page.
Partition and format the attached disk inside the VM. If at some point you no longer want the disk attached, never simply delete it (this really matters in production), or the instance will end up in an error state. Instead, umount it, confirm it is detached, and only then delete the cloud disk from the dashboard.
$ sudo fdisk -l

Disk /dev/vda: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders, total 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *       16065     2088449     1036192+  83  Linux

Disk /dev/vdb: 3221 MB, 3221225472 bytes
16 heads, 63 sectors/track, 6241 cylinders, total 6291456 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/vdb doesn't contain a valid partition table
$ sudo fdisk /dev/vdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0xfb4dbd94.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1):
Using default value 1
First sector (2048-6291455, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-6291455, default 6291455):
Using default value 6291455

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
$ sudo mkfs.ext4 /dev/vdb1
mke2fs 1.42.2 (27-Mar-2012)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
196608 inodes, 786176 blocks
39308 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=805306368
24 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912

Allocating group tables: done
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done
$ sudo mkdir /data
$ sudo mount /dev/vdb1 /data
$ df -h
Filesystem                Size      Used Available Use% Mounted on
/dev                    242.3M         0    242.3M   0% /dev
/dev/vda1                23.2M     18.0M      4.0M  82% /
tmpfs                   245.8M         0    245.8M   0% /dev/shm
tmpfs                   200.0K     72.0K    128.0K  36% /run
/dev/vdb1                 3.0G     68.5M      2.7G   2% /data
To boot a VM from a cloud disk, first create a disk named demo2

九、The VM Creation Flow
Phase 1: user actions
1) Through the dashboard or the CLI, the user sends a username and password to keystone; once validation succeeds, keystone returns an auth token to the dashboard.
2) The dashboard carries that auth token to nova-api and requests creation of a VM.
3) nova-api verifies the dashboard's auth token with keystone.
Phase 2: interaction among the nova components
4) nova-api records the requested VM's details in the database.
5) nova-api sends the request to the message queue as an rpc.call.
6) nova-scheduler takes the message off the queue.
7) nova-scheduler reads the pending VM's details and the compute nodes' state from the database, then makes a scheduling decision.
8) nova-scheduler puts the scheduling result onto the message queue.
9) nova-compute picks up the message nova-scheduler sent to the queue.
10) nova-compute sends a message to nova-conductor through the queue, asking for the VM's details held in the database.
11) nova-conductor takes that message off the queue.
12) nova-conductor reads the pending VM's details from the database.
13) nova-conductor returns the data it fetched to the message queue.
14) nova-compute picks up nova-conductor's reply from the queue.
Phase 3: nova talks to the other services
15) nova-compute calls the glance service with the auth token and the image ID returned from the database.
16) glance validates the token with keystone.
17) Once validated, glance returns the image to nova-compute.
18) nova-compute calls the neutron service with the auth token and the network ID returned from the database.
19) neutron validates the token with keystone.
20) Once validated, neutron returns the network allocation to nova-compute.
21) nova-compute calls the cinder service with the auth token and the volume details returned from the database.
22) cinder validates the token with keystone.
23) Once validated, cinder returns the volume allocation to nova-compute.
Phase 4: nova creates the VM
24) nova-compute drives KVM through libvirt to create the VM from the information gathered, generating its XML on the fly.
25) nova-api keeps polling the database, and the dashboard displays the VM's state.
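Steps 4 through 14 are a classic RPC-over-a-message-queue pattern. A heavily simplified toy of that flow, with an in-process queue standing in for RabbitMQ and a dict standing in for the database (component behaviour is compressed to one line each):

```python
# Toy sketch of the nova-api -> scheduler -> compute message flow.
import queue

mq = queue.Queue()   # stands in for the AMQP message queue
db = {"instance": {"image": "cirros", "host": None}}   # made-up record

def api_request():
    mq.put(("schedule", "instance"))        # step 5: api -> queue

def scheduler():
    topic, inst = mq.get()                  # step 6: take the message
    db[inst]["host"] = "linux-node.oldboyedu.com"   # step 7: pick a host
    mq.put(("boot", inst))                  # step 8: hand off to compute

def compute():
    topic, inst = mq.get()                  # step 9: compute picks it up
    # steps 10-14: compute asks nova-conductor for the DB record rather
    # than touching the database directly; collapsed to a lookup here
    record = db[inst]
    return "booted %s on %s" % (inst, record["host"])

api_request()
scheduler()
print(compute())   # booted instance on linux-node.oldboyedu.com
```

The point of the indirection is decoupling: nova-api never talks to a compute node directly, and nova-compute never opens a database connection, which is exactly why nova-conductor exists.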
Production notes:
1. The first VM on a newly added compute node takes a long time to create: the node has no cached image yet, so it must first copy the glance image into the backing-file directory (/var/lib/nova/instances/_base). A large image naturally takes a long time to transfer; only then is the VM created on top of that backing file (copy-on-write).
2. One cause of VM creation failure is a failed bridge creation: make sure BOOTPROTO in the eth0 network config file is static, not dhcp.
十、Load Balancing as a Service (LBaaS)
10.1 Using neutron-lbaas
[root@linux-node1 ~]# yum install openstack-neutron-lbaas python-neutron-lbaas -y
Install haproxy; OpenStack uses haproxy as the default proxy:
[root@linux-node1 ~]# yum install haproxy -y
Edit the lbaas-agent and neutron configuration files, then restart the neutron services:
[root@linux-node1 ~]# vim /etc/neutron/lbaas_agent.ini
16 interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
31 device_driver = neutron.services.loadbalancer.drivers.haproxy.namespace_driver.HaproxyNSDriver
[root@linux-node1 ~]# vim /etc/neutron/neutron.conf
77 service_plugins = router,lbaas
[root@linux-node1 ~]# grep -n "^[a-Z]" /etc/neutron/neutron_lbaas.conf
64:service_provider=LOADBALANCER:Haproxy:neutron_lbaas.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
[root@linux-node1 ~]# systemctl restart neutron-server
[root@linux-node1 ~]# systemctl enable neutron-lbaas-agent.service
[root@linux-node1 ~]# systemctl start neutron-lbaas-agent.service
Create an HTTP load balancer with LBaaS.
Add an HTTP backend node under this load balancer (this node does not use the cirros image).
Check the network namespaces before and after:
[root@linux-node1 ~]# ip netns li
qlbaas-1f6d0ac9-32ee-496b-a183-7eaa85aeb2db
qdhcp-7a3c7391-cea7-47eb-a0ef-f7b18010c984
[root@linux-node1 ~]# ip netns li
qlbaas-244327fe-a339-4cfd-a7a8-1be95903d3de
qlbaas-1f6d0ac9-32ee-496b-a183-7eaa85aeb2db
qdhcp-7a3c7391-cea7-47eb-a0ef-f7b18010c984
[root@linux-node1 ~]# ip netns exec qdhcp-7a3c7391-cea7-47eb-a0ef-f7b18010c984 netstat -lntup
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address     State    PID/Program name
tcp        0      0 0.0.0.0:80              0.0.0.0:*           LISTEN   3875/python2
tcp        0      0 192.168.56.100:53       0.0.0.0:*           LISTEN   31752/dnsmasq
tcp        0      0 169.254.169.254:53      0.0.0.0:*           LISTEN   31752/dnsmasq
tcp6       0      0 fe80::f816:3eff:fe93:53 :::*                LISTEN   31752/dnsmasq
udp        0      0 192.168.56.100:53       0.0.0.0:*                    31752/dnsmasq
udp        0      0 169.254.169.254:53      0.0.0.0:*                    31752/dnsmasq
udp        0      0 0.0.0.0:67              0.0.0.0:*                    31752/dnsmasq
udp6       0      0 fe80::f816:3eff:fe93:53 :::*                         31752/dnsmasq
Inspect the haproxy configuration generated automatically on the controller node:
[root@linux-node1 ~]# cat /var/lib/neutron/lbaas/244327fe-a339-4cfd-a7a8-1be95903d3de/conf
global
        daemon
        user nobody
        group haproxy
        log /dev/log local0
        log /dev/log local1 notice
        stats socket /var/lib/neutron/lbaas/244327fe-a339-4cfd-a7a8-1be95903d3de/sock mode 0666 level user
defaults
        log global
        retries 3
        option redispatch
        timeout connect 5000
        timeout client 50000
        timeout server 50000
frontend c16c7cf0-089f-4610-9fe2-724abb1bd145
        option tcplog
        bind 192.168.56.200:80
        mode http
        default_backend 244327fe-a339-4cfd-a7a8-1be95903d3de
        maxconn 2
        option forwardfor
backend 244327fe-a339-4cfd-a7a8-1be95903d3de
        mode http
        balance roundrobin
        option forwardfor
        timeout check 30s
        option httpchk GET /
        http-check expect rstatus 200
        stick-table type ip size 10k
        stick on src
        server b6e8f6cc-9b3c-4936-9932-21330536e2fe 192.168.56.108:80 weight 5 check inter 30s fall 10
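The generated config uses "balance roundrobin" with per-server weights. A stripped-down sketch of weighted round-robin selection (this is not haproxy code; the first backend mirrors the config above, the second is a hypothetical extra node added for contrast):

```python
# Toy weighted round-robin: each server appears `weight` times per cycle.
import itertools

def round_robin(servers):
    """servers: list of (address, weight); yields addresses in turn."""
    expanded = [s for s, w in servers for _ in range(w)]
    return itertools.cycle(expanded)

backends = [("192.168.56.108:80", 5),     # the real node, weight 5
            ("192.168.56.109:80", 1)]     # hypothetical second node
picker = round_robin(backends)
print([next(picker) for _ in range(6)])
# five requests go to .108 for every one that goes to .109
```

Note that the config also sets "stick on src", so in practice a given client IP keeps hitting the same backend; round-robin only decides the first assignment.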
Add the VIP, associate a floating IP, and it's done!

十一、Extras
11.1 Setting a password when the image's default password is unknown
Edit the dashboard configuration file and restart the service:
[root@linux-node1 ~]# vim /etc/openstack-dashboard/local_settings
201 OPENSTACK_HYPERVISOR_FEATURES = {
202     'can_set_mount_point': True,
203     'can_set_password': True,
204     'requires_keypair': True,
Edit the nova configuration file on the compute node and restart the service:
[root@linux-node ~]# vim /etc/nova/nova.conf
2735 inject_password=true
[root@linux-node ~]# systemctl restart openstack-nova-compute.service

11.2 Choosing an OpenStack network type
1. Flat: limited in host count (253 usable addresses); plenty for a small private cloud
2. VLAN: capped by the 4096 VLAN ID limit
3. GRE: a layer-3 tunneling protocol that encapsulates and decapsulates traffic in transit; it works only with openvswitch, not linuxbridge. Drawback: lifting layer 2 up into layer 3 costs efficiency
4. VXLAN: a technology driven by VMware that fixes both the VLAN shortage and GRE's poor point-to-point scalability. It encapsulates layer-2 frames in UDP, breaking through the VLAN limit; it requires the L3-agent described earlier
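VXLAN escapes the 4096 limit because its header carries a 24-bit VNI (about 16 million segments) instead of VLAN's 12-bit ID. A sketch of packing and parsing that 8-byte header (layout per RFC 7348: a flags byte with the I-bit set, then reserved bits, then the VNI):

```python
# Pack/unpack the 8-byte VXLAN header that precedes the inner L2 frame.
import struct

def vxlan_header(vni):
    assert 0 <= vni < 2**24        # 24-bit VNI: ~16 million segments
    flags = 0x08 << 24             # I-flag set: "VNI field is valid"
    return struct.pack("!II", flags, vni << 8)   # low byte is reserved

def parse_vni(header):
    _, word = struct.unpack("!II", header)
    return word >> 8

hdr = vxlan_header(5000)           # a segment ID no VLAN could express
print(len(hdr), parse_vni(hdr))    # 8 5000
```

The whole thing rides inside an ordinary UDP datagram, which is why VXLAN traffic crosses routed (layer-3) networks that a plain VLAN trunk never could.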
11.3 Taking a private cloud to production
1) A development/test cloud: second-hand machines are fine
2) A production private cloud
3) Desktop virtualization




