Deploying OpenStack Liberty
Note: what matters most is understanding the OpenStack architecture and how each service and component works.
Environment: CentOS Linux release 7.2.1511 (Core)
Controller node:
Disable the firewall and SELinux.
Use the Aliyun mirror repositories:
[root@linux-node1 ~]#wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
[root@linux-node1 ~]#wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
[root@linux-node1 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.140 linux-node1 linux-node1.example.com
192.168.1.150 linux-node2 linux-node2.example.com
[root@linux-node1 ~]# yum install -y chrony
[root@linux-node1 ~]# grep allow /etc/chrony.conf
allow 192.168/16 --->uncomment this line
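Uncommenting the directive can also be scripted. A minimal sketch, working on a scratch copy (the real file is /etc/chrony.conf):

```shell
# Work on a scratch copy; point sed at /etc/chrony.conf for real use.
printf '#allow 192.168/16\n' > chrony.conf.sample
sed -i 's|^#allow 192.168/16|allow 192.168/16|' chrony.conf.sample
grep '^allow' chrony.conf.sample   # confirm the directive is active
```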
[root@linux-node1 ~]# systemctl enable chronyd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/chronyd.service to /usr/lib/systemd/system/chronyd.service.
[root@linux-node1 ~]# systemctl start chronyd.service
[root@linux-node1 ~]# timedatectl set-timezone Asia/Shanghai
[root@linux-node1 ~]# date
Sat Jul 16 21:07:28 CST 2016 --->the time now matches the physical host!
[root@linux-node1 ~]#yum -y install epel-release
[root@linux-node1 ~]#yum install centos-release-openstack-liberty -y
[root@linux-node1 ~]#yum install python-openstackclient -y
####Installing MariaDB (MySQL)
yum install mariadb mariadb-server MySQL-python -y
[root@linux-node1 ~]# cp /usr/share/mysql/my-medium.cnf /etc/my.cnf
cp: overwrite ‘/etc/my.cnf’? y
Add the following to my.cnf:
[mysqld]
default-storage-engine = innodb
innodb_file_per_table --->one tablespace file per table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8
Enable and start the MariaDB service:
[root@linux-node1 ~]# systemctl enable mariadb.service
Created symlink from /etc/systemd/system/multi-user.target.wants/mariadb.service to /usr/lib/systemd/system/mariadb.service.
[root@linux-node1 ~]# systemctl start mariadb.service
Set the root password:
[root@linux-node1 ~]# mysql_secure_installation
/usr/bin/mysql_secure_installation: line 379: find_mysql_client: command not found
NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
SERVERS IN PRODUCTION USE! PLEASE READ EACH STEP CAREFULLY!
In order to log into MariaDB to secure it, we'll need the current
password for the root user. If you've just installed MariaDB, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.
Enter current password for root (enter for none):
OK, successfully used password, moving on...
Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorisation.
Set root password? [Y/n] Y
New password:
Re-enter new password:
Password updated successfully!
Reloading privilege tables..
... Success!
By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them. This is intended only for testing, and to make the installation
go a bit smoother. You should remove them before moving into a
production environment.
Remove anonymous users? [Y/n] Y
... Success!
Normally, root should only be allowed to connect from 'localhost'. This
ensures that someone cannot guess at the root password from the network.
Disallow root login remotely? [Y/n] Y
... Success!
By default, MariaDB comes with a database named 'test' that anyone can
access. This is also intended only for testing, and should be removed
before moving into a production environment.
Remove test database and access to it? [Y/n] Y
- Dropping test database...
... Success!
- Removing privileges on test database...
... Success!
Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.
Reload privilege tables now? [Y/n] Y
... Success!
Cleaning up...
All done! If you've completed all of the above steps, your MariaDB
installation should now be secure.
Thanks for using MariaDB!
Log in and verify:
[root@linux-node1 ~]# mysql -uroot -p
Enter password: ---->the password is redhat
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 10
Server version: 5.5.47-MariaDB-log MariaDB Server
Copyright (c) 2000, 2015, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
+--------------------+
3 rows in set (0.01 sec)
Database creation and grant statements:
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'keystone';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone';
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'glance';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'glance';
CREATE DATABASE nova;
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova';
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'neutron';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron';
CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'cinder';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'cinder';
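The five databases follow an identical create/grant pattern, so the SQL can be generated in a loop. A sketch (passwords equal to the service name follow this lab's convention only; use strong, unique passwords in production):

```shell
# Generate the CREATE/GRANT statements for all five service databases.
# NOTE: password == service name is a lab convention, not a recommendation.
for svc in keystone glance nova neutron cinder; do
  cat <<EOF
CREATE DATABASE ${svc};
GRANT ALL PRIVILEGES ON ${svc}.* TO '${svc}'@'localhost' IDENTIFIED BY '${svc}';
GRANT ALL PRIVILEGES ON ${svc}.* TO '${svc}'@'%' IDENTIFIED BY '${svc}';
EOF
done > openstack-db.sql
# Then feed the file to MariaDB in one shot:
# mysql -uroot -p < openstack-db.sql
```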
####Installing RabbitMQ:
yum install rabbitmq-server -y
Enable and start the service:
[root@linux-node1 ~]# systemctl enable rabbitmq-server.service
Created symlink from /etc/systemd/system/multi-user.target.wants/rabbitmq-server.service to /usr/lib/systemd/system/rabbitmq-server.service.
[root@linux-node1 ~]# systemctl start rabbitmq-server.service
Create a RabbitMQ user and grant it permissions:
[root@linux-node1 ~]# rabbitmqctl add_user openstack openstack -->both the username and the password are openstack.
Creating user "openstack" ...
[root@linux-node1 ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/" ...
Enable the RabbitMQ web management plugin:
[root@linux-node1 ~]# rabbitmq-plugins enable rabbitmq_management
Restart RabbitMQ:
[root@linux-node1 ~]# systemctl restart rabbitmq-server.service
Check the RabbitMQ ports: 5672 is the AMQP service port, 15672 the web management port, and 25672 the clustering port:
[root@linux-node1 ~]# netstat -lntup |grep 5672
tcp 0 0 0.0.0.0:25672 0.0.0.0:* LISTEN 46395/beam.smp
tcp 0 0 0.0.0.0:15672 0.0.0.0:* LISTEN 46395/beam.smp
tcp6 0 0 :::5672 :::* LISTEN 46395/beam.smp
In the web UI, add an openstack user and set its permissions; the first login must use guest for both the username and the password.
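Alternatively, the openstack user can be granted web-UI access from the CLI. The sketch below only prints the commands for review; drop the echo indirection to run them on the RabbitMQ host:

```shell
# Commands to let the openstack user log into the management UI
# (printed for review only; run them directly on the RabbitMQ host).
{
  echo 'rabbitmqctl set_user_tags openstack administrator'
  echo 'rabbitmqctl list_users'
} | tee rabbitmq-admin-cmds.txt
```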
#####Installing Keystone:
yum -y install openstack-keystone httpd mod_wsgi memcached python-memcached
[root@linux-node1 ~]# openssl rand -hex 10
6e8c0240a9c33190afcc
Edit the configuration file:
[root@linux-node1 ~]# egrep -v "^#|^$|^\[" /etc/keystone/keystone.conf
admin_token = 6e8c0240a9c33190afcc
connection = mysql://keystone:keystone@192.168.1.140/keystone
Switch to the keystone user and populate the keystone database:
[root@linux-node1 ~]# su -s /bin/sh -c "keystone-manage db_sync" keystone
[root@linux-node1 ~]# ll /var/log/keystone/
total 8
-rw-r--r--. 1 keystone keystone 7064 Jul 16 23:12 keystone.log
Verify by connecting to the database:
[root@linux-node1 ~]# mysql -ukeystone -pkeystone
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 14
Server version: 5.5.47-MariaDB-log MariaDB Server
Copyright (c) 2000, 2015, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> use keystone;
Database changed
MariaDB [keystone]> show tables ;
+------------------------+
| Tables_in_keystone |
+------------------------+
| access_token |
| assignment |
| config_register |
| consumer |
| credential |
| domain |
| endpoint |
| endpoint_group |
| federation_protocol |
| group |
| id_mapping |
| identity_provider |
| idp_remote_ids |
| mapping |
| migrate_version |
| policy |
| policy_association |
| project |
| project_endpoint |
| project_endpoint_group |
| region |
| request_token |
| revocation_event |
| role |
| sensitive_config |
| service |
| service_provider |
| token |
| trust |
| trust_role |
| user |
| user_group_membership |
| whitelisted_config |
+------------------------+
33 rows in set (0.00 sec)
Verification succeeded!
[root@linux-node1 ~]# grep ^[a-z] /etc/keystone/keystone.conf
admin_token = 6e8c0240a9c33190afcc
verbose = true ---->enable verbose logging
connection = mysql://keystone:keystone@192.168.1.140/keystone
driver = sql ---->in the [revoke] section; store token revocation events in SQL
provider = uuid ---->in the [token] section; issue UUID tokens
driver = memcache ---->in the [token] section; store tokens in memcached
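Because grep strips the section headers, the same settings are shown below grouped by section. The heredoc writes to a scratch file; check the values and merge them into /etc/keystone/keystone.conf:

```shell
# Minimal keystone.conf fragment with the sections made explicit
# (scratch file only; merge into /etc/keystone/keystone.conf).
cat > keystone-snippet.conf <<'EOF'
[DEFAULT]
admin_token = 6e8c0240a9c33190afcc
verbose = true

[database]
connection = mysql://keystone:keystone@192.168.1.140/keystone

[revoke]
driver = sql

[token]
provider = uuid
driver = memcache
EOF
```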
Add an Apache wsgi-keystone configuration file; port 5000 serves the public API and port 35357 the admin API:
[root@linux-node1 ~]# cat /etc/httpd/conf.d/wsgi-keystone.conf
Listen 5000
Listen 35357
<VirtualHost *:5000>
WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
WSGIProcessGroup keystone-public
WSGIScriptAlias / /usr/bin/keystone-wsgi-public
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
<IfVersion >= 2.4>
ErrorLogFormat "%{cu}t %M"
</IfVersion>
ErrorLog /var/log/httpd/keystone-error.log
CustomLog /var/log/httpd/keystone-access.log combined
<Directory /usr/bin>
<IfVersion >= 2.4>
Require all granted
</IfVersion>
<IfVersion < 2.4>
Order allow,deny
Allow from all
</IfVersion>
</Directory>
</VirtualHost>
<VirtualHost *:35357>
WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
WSGIProcessGroup keystone-admin
WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
<IfVersion >= 2.4>
ErrorLogFormat "%{cu}t %M"
</IfVersion>
ErrorLog /var/log/httpd/keystone-error.log
CustomLog /var/log/httpd/keystone-access.log combined
<Directory /usr/bin>
<IfVersion >= 2.4>
Require all granted
</IfVersion>
<IfVersion < 2.4>
Order allow,deny
Allow from all
</IfVersion>
</Directory>
</VirtualHost>
Configure Apache's ServerName; if it is not set, the keystone service can misbehave:
[root@linux-node1 ~]# grep 'ServerName' /etc/httpd/conf/httpd.conf
# ServerName gives the name and port that the server uses to identify itself.
ServerName 192.168.1.140:80
Enable and start memcached and httpd:
[root@linux-node1 ~]# systemctl enable memcached.service httpd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/memcached.service to /usr/lib/systemd/system/memcached.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/httpd.service to /usr/lib/systemd/system/httpd.service.
[root@linux-node1 ~]# systemctl start memcached.service httpd.service
Check the ports:
[root@linux-node1 ~]# netstat -tnlp |grep httpd
tcp6 0 0 :::5000 :::* LISTEN 50290/httpd
tcp6 0 0 :::80 :::* LISTEN 50290/httpd
tcp6 0 0 :::35357 :::* LISTEN 50290/httpd
Create users and connect to keystone. This can be done two ways: by passing options on the command line (see the client's --help output), or via environment variables. Below we use environment variables to set the admin token, the API URL, and the identity API version:
[root@linux-node1 ~]# export OS_TOKEN=6e8c0240a9c33190afcc
[root@linux-node1 ~]# export OS_URL=http://192.168.1.140:35357/v3
[root@linux-node1 ~]# export OS_IDENTITY_API_VERSION=3
Create the admin project:
[root@linux-node1 ~]# openstack project create --domain default --description "Admin Project" admin
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Admin Project |
| domain_id | default |
| enabled | True |
| id | 8b58ede01bce4892bd9afe132e8d3b9d |
| is_domain | False |
| name | admin |
| parent_id | None |
+-------------+----------------------------------+
Create the admin user and set its password to admin (always use a complex one in production):
[root@linux-node1 ~]# openstack user create --domain default --password-prompt admin
User Password: --->admin
Repeat User Password:
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | 8460cda9179f48d0aadc6d23262aedd0 |
| name | admin |
+-----------+----------------------------------+
Create the admin role:
[root@linux-node1 ~]# openstack role create admin
+-------+----------------------------------+
| Field | Value |
+-------+----------------------------------+
| id | 97bb10021baa4b67b7b86aebc04f3f73 |
| name | admin |
+-------+----------------------------------+
Add the admin user to the admin project with the admin role, tying user, project, and role together:
[root@linux-node1 ~]# openstack role add --project admin --user admin admin
Create a regular user demo, a demo project, and a user role, and link them together:
[root@linux-node1 ~]# openstack project create --domain default --description "Demo Project" demo
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Demo Project |
| domain_id | default |
| enabled | True |
| id | 430b4459d41d426a97a5495fd9a6c8b6 |
| is_domain | False |
| name | demo |
| parent_id | None |
+-------------+----------------------------------+
Create the demo user with the password demo:
[root@linux-node1 ~]# openstack user create --domain default --password=demo demo
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | df20f2d1b944402dad4d232d6a7efe04 |
| name | demo |
+-----------+----------------------------------+
Create the user role for regular users:
[root@linux-node1 ~]# openstack role create user
+-------+----------------------------------+
| Field | Value |
+-------+----------------------------------+
| id | d5444dba6766474eb8dc03aacd163ecc |
| name | user |
+-------+----------------------------------+
Add the demo user to the demo project with the user role:
[root@linux-node1 ~]# openstack role add --project demo --user demo user
Create a service project; it will hold the service users for nova, neutron, glance, and the other components:
[root@linux-node1 ~]# openstack project create --domain default --description "Service Project" service
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Service Project |
| domain_id | default |
| enabled | True |
| id | 184af6c3497c47328a8420a890d8a505 |
| is_domain | False |
| name | service |
| parent_id | None |
+-------------+----------------------------------+
List the users, roles, and projects just created:
[root@linux-node1 ~]# openstack user list
+----------------------------------+-------+
| ID | Name |
+----------------------------------+-------+
| 8460cda9179f48d0aadc6d23262aedd0 | admin |
| df20f2d1b944402dad4d232d6a7efe04 | demo |
+----------------------------------+-------+
[root@linux-node1 ~]# openstack project list
+----------------------------------+---------+
| ID | Name |
+----------------------------------+---------+
| 184af6c3497c47328a8420a890d8a505 | service |
| 430b4459d41d426a97a5495fd9a6c8b6 | demo |
| 8b58ede01bce4892bd9afe132e8d3b9d | admin |
+----------------------------------+---------+
[root@linux-node1 ~]# openstack role list
+----------------------------------+-------+
| ID | Name |
+----------------------------------+-------+
| 97bb10021baa4b67b7b86aebc04f3f73 | admin |
| d5444dba6766474eb8dc03aacd163ecc | user |
+----------------------------------+-------+
Register the keystone service itself. Keystone is the service registry, but it still needs its own entry; create the identity service:
[root@linux-node1 ~]# openstack service create --name keystone --description "OpenStack Identity" identity
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Identity |
| enabled | True |
| id | 7a6b2a29824a44688d3a9a4bf124da11 |
| name | keystone |
| type | identity |
+-------------+----------------------------------+
Create the three endpoint types: public (externally visible), internal (for internal use), and admin (for management):
openstack endpoint create --region RegionOne identity public http://192.168.1.140:5000/v2.0
openstack endpoint create --region RegionOne identity internal http://192.168.1.140:5000/v2.0
openstack endpoint create --region RegionOne identity admin http://192.168.1.140:35357/v2.0
[root@linux-node1 ~]# openstack endpoint create --region RegionOne identity public http://192.168.1.140:5000/v2.0
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | e1acc647cc5742528309b104c1dd7386 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 7a6b2a29824a44688d3a9a4bf124da11 |
| service_name | keystone |
| service_type | identity |
| url | http://192.168.1.140:5000/v2.0 |
+--------------+----------------------------------+
[root@linux-node1 ~]# openstack endpoint create --region RegionOne identity internal http://192.168.1.140:5000/v2.0
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | e519daccb2e6425a8008e95d8136cddb |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 7a6b2a29824a44688d3a9a4bf124da11 |
| service_name | keystone |
| service_type | identity |
| url | http://192.168.1.140:5000/v2.0 |
+--------------+----------------------------------+
[root@linux-node1 ~]# openstack endpoint create --region RegionOne identity admin http://192.168.1.140:35357/v2.0
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 574cea7b35ee43e0bafcbfad39e2d064 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 7a6b2a29824a44688d3a9a4bf124da11 |
| service_name | keystone |
| service_type | identity |
| url | http://192.168.1.140:35357/v2.0 |
+--------------+----------------------------------+
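The three endpoint creates differ only in interface and port, so they can be driven from a loop. The sketch below prints the commands for review; remove the echo to actually execute them:

```shell
# Print (or, without the echo, run) the three identity endpoint creates.
for spec in 'public 5000' 'internal 5000' 'admin 35357'; do
  set -- $spec   # $1 = interface, $2 = port
  echo openstack endpoint create --region RegionOne identity \
       "$1" "http://192.168.1.140:$2/v2.0"
done | tee endpoint-cmds.txt
```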
查看创建的endpoint:
[root@linux-node1 ~]# openstack endpoint list
+----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------+
| ID | Region | Service Name | Service Type | Enabled | Interface | URL |
+----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------+
| 574cea7b35ee43e0bafcbfad39e2d064 | RegionOne | keystone | identity | True | admin | http://192.168.1.140:35357/v2.0 |
| e1acc647cc5742528309b104c1dd7386 | RegionOne | keystone | identity | True | public | http://192.168.1.140:5000/v2.0 |
| e519daccb2e6425a8008e95d8136cddb | RegionOne | keystone | identity | True | internal | http://192.168.1.140:5000/v2.0 |
+----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------+
Note: if an endpoint was created incorrectly, remove it with openstack endpoint delete <id>.
Connect to keystone and request a token. Since users with passwords now exist, the admin token is no longer needed, so the token environment variables must be unset (otherwise they conflict):
[root@linux-node1 ~]# unset OS_TOKEN
[root@linux-node1 ~]# unset OS_URL
[root@linux-node1 ~]# openstack --os-auth-url http://192.168.1.140:35357/v3 --os-project-domain-id default --os-user-domain-id default --os-project-name admin --os-username admin --os-auth-type password token issue
Password: ---->enter the password admin
+------------+----------------------------------+
| Field | Value |
+------------+----------------------------------+
| expires | 2016-07-17T04:33:35.105560Z |
| id | 1ea5321e11064785939557c97324536a |
| project_id | 8b58ede01bce4892bd9afe132e8d3b9d |
| user_id | 8460cda9179f48d0aadc6d23262aedd0 |
+------------+----------------------------------+
Keystone is now set up successfully!
Configure keystone environment variable files to make running commands easier:
Create rc files for the admin and demo users and make them executable; from then on, just source the appropriate file before running commands.
[root@linux-node1 ~]# cat admin-openrc.sh
export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=admin
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_AUTH_URL=http://192.168.1.140:35357/v3
export OS_IDENTITY_API_VERSION=3
[root@linux-node1 ~]# cat demo-openrc.sh
export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=demo
export OS_TENANT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=demo
export OS_AUTH_URL=http://192.168.1.140:5000/v3
export OS_IDENTITY_API_VERSION=3
[root@linux-node1 ~]# chmod +x admin-openrc.sh
[root@linux-node1 ~]# chmod +x demo-openrc.sh
[root@linux-node1 ~]# source admin-openrc.sh
[root@linux-node1 ~]# openstack token issue
+------------+----------------------------------+
| Field | Value |
+------------+----------------------------------+
| expires | 2016-07-17T04:42:08.318360Z |
| id | d4c95a863d854511b5ee9397ba5ecb9d |
| project_id | 8b58ede01bce4892bd9afe132e8d3b9d |
| user_id | 8460cda9179f48d0aadc6d23262aedd0 |
+------------+----------------------------------+
#####Deploying Glance:
Install glance:
[root@linux-node1 ~]# yum -y install openstack-glance python-glance python-glanceclient
Edit the glance-api and glance-registry configuration files, then sync the database:
[root@linux-node1 ~]# grep -n ^connection /etc/glance/glance-api.conf
538:connection=mysql://glance:glance@192.168.1.140/glance
[root@linux-node1 ~]# grep -n ^connection /etc/glance/glance-registry.conf
363:connection=mysql://glance:glance@192.168.1.140/glance
Sync the MySQL database:
[root@linux-node1 ~]# su -s /bin/sh -c "glance-manage db_sync" glance
No handlers could be found for logger "oslo_config.cfg" --->this warning can be ignored
Check the tables imported into the glance database:
[root@linux-node1 ~]# mysql -h 192.168.1.140 -uglance -pglance
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 21
Server version: 5.5.47-MariaDB-log MariaDB Server
Copyright (c) 2000, 2015, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> use glance;
Database changed
MariaDB [glance]> show tables;
+----------------------------------+
| Tables_in_glance |
+----------------------------------+
| artifact_blob_locations |
| artifact_blobs |
| artifact_dependencies |
| artifact_properties |
| artifact_tags |
| artifacts |
| image_locations |
| image_members |
| image_properties |
| image_tags |
| images |
| metadef_namespace_resource_types |
| metadef_namespaces |
| metadef_objects |
| metadef_properties |
| metadef_resource_types |
| metadef_tags |
| migrate_version |
| task_info |
| tasks |
+----------------------------------+
20 rows in set (0.01 sec)
Configure glance to connect to keystone; every service needs its own keystone user:
[root@linux-node1 ~]# source admin-openrc.sh
[root@linux-node1 ~]# openstack user create --domain default --password=glance glance
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | 8f855c54e76545cfa37aafe45a4e903b |
| name | glance |
+-----------+----------------------------------+
Add the glance user to the service project and grant it the admin role:
[root@linux-node1 ~]# openstack role add --project service --user glance admin
Edit the glance-api configuration to integrate keystone and MySQL.
Under [keystone_authtoken], add the following:
auth_uri = http://192.168.1.140:5000
auth_url = http://192.168.1.140:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = glance
password = glance
[root@linux-node1 ~]# grep ^[a-Z] /etc/glance/glance-api.conf
verbose=True
notification_driver = noop
connection=mysql://glance:glance@192.168.1.140/glance
default_store=file
filesystem_store_datadir=/var/lib/glance/images/
auth_uri = http://192.168.1.140:5000
auth_url = http://192.168.1.140:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = glance
password = glance
flavor=keystone
Edit the glance-registry configuration to integrate keystone and MySQL:
[root@linux-node1 ~]# grep ^[a-Z] /etc/glance/glance-registry.conf
connection=mysql://glance:glance@192.168.1.140/glance
auth_uri = http://192.168.1.140:5000
auth_url = http://192.168.1.140:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = glance
password = glance
flavor=keystone
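The [keystone_authtoken] stanza is identical for every service except the username and password, so a small helper can emit it. A sketch (it assumes this lab's username == password convention, which is not for production):

```shell
# Emit the recurring [keystone_authtoken] stanza for a service user.
# NOTE: password == username is this lab's convention only.
authtoken_stanza() {
  local user=$1
  cat <<EOF
[keystone_authtoken]
auth_uri = http://192.168.1.140:5000
auth_url = http://192.168.1.140:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = ${user}
password = ${user}
EOF
}
# Example: generate the glance stanza for review before merging.
authtoken_stanza glance > glance-authtoken.ini
```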
Enable glance at boot and start its services:
[root@linux-node1 ~]# systemctl enable openstack-glance-api.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-glance-api.service to /usr/lib/systemd/system/openstack-glance-api.service.
[root@linux-node1 ~]# systemctl enable openstack-glance-registry.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-glance-registry.service to /usr/lib/systemd/system/openstack-glance-registry.service.
[root@linux-node1 ~]# systemctl start openstack-glance-api.service
[root@linux-node1 ~]# systemctl start openstack-glance-registry.service
Check the ports glance occupies: 9191 is the registry port, 9292 the API port:
[root@linux-node1 ~]# netstat -lntup|egrep "9191|9292"
tcp 0 0 0.0.0.0:9292 0.0.0.0:* LISTEN 60249/python2
tcp 0 0 0.0.0.0:9191 0.0.0.0:* LISTEN 60322/python2
Register the glance service in keystone so that other services are allowed to call it:
[root@linux-node1 ~]# openstack service create --name glance --description "OpenStack Image service" image
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Image service |
| enabled | True |
| id | 1ed6ef3a166f4a4abadb5f8bdfaf7b3f |
| name | glance |
| type | image |
+-------------+----------------------------------+
[root@linux-node1 ~]# openstack endpoint create --region RegionOne image public http://192.168.1.140:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 8175af137c0b48b38e5772ec21de5272 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 1ed6ef3a166f4a4abadb5f8bdfaf7b3f |
| service_name | glance |
| service_type | image |
| url | http://192.168.1.140:9292 |
+--------------+----------------------------------+
[root@linux-node1 ~]# openstack endpoint create --region RegionOne image internal http://192.168.1.140:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 4c515c60b31a4d56b37837e7f0618dde |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 1ed6ef3a166f4a4abadb5f8bdfaf7b3f |
| service_name | glance |
| service_type | image |
| url | http://192.168.1.140:9292 |
+--------------+----------------------------------+
[root@linux-node1 ~]# openstack endpoint create --region RegionOne image admin http://192.168.1.140:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 6485e7c67fc14437845383ccdb11b337 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 1ed6ef3a166f4a4abadb5f8bdfaf7b3f |
| service_name | glance |
| service_type | image |
| url | http://192.168.1.140:9292 |
+--------------+----------------------------------+
Add glance's API version variable to both the admin and demo rc files so other tooling picks it up; be sure to run this in the directory containing admin-openrc.sh:
[root@linux-node1 ~]# echo "export OS_IMAGE_API_VERSION=2" | tee -a admin-openrc.sh demo-openrc.sh
export OS_IMAGE_API_VERSION=2
[root@linux-node1 ~]# tail -1 admin-openrc.sh
export OS_IMAGE_API_VERSION=2
[root@linux-node1 ~]# tail -1 demo-openrc.sh
export OS_IMAGE_API_VERSION=2
If the output looks like the following, glance is configured correctly; the list is empty because no images have been uploaded yet:
[root@linux-node1 ~]# glance image-list
+----+------+
| ID | Name |
+----+------+
+----+------+
Download an image:
[root@linux-node1 ~]# wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
[root@linux-node1 ~]# ls -lh cirros-0.3.4-x86_64-disk.img
-rw-r--r--. 1 root root 13M May 8 2015 cirros-0.3.4-x86_64-disk.img
Upload the image to glance; run this from the directory where the image was downloaded:
[root@linux-node1 ~]# glance image-create --name "cirros" --file cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare --visibility public --progress
[=============================>] 100%
+------------------+--------------------------------------+
| Property | Value |
+------------------+--------------------------------------+
| checksum | ee1eca47dc88f4879d8a229cc70a07c6 |
| container_format | bare |
| created_at | 2016-07-17T09:33:59Z |
| disk_format | qcow2 |
| id | 007d7dfc-aba2-4d50-8e37-723a74d2104e |
| min_disk | 0 |
| min_ram | 0 |
| name | cirros |
| owner | 8b58ede01bce4892bd9afe132e8d3b9d |
| protected | False |
| size | 13287936 |
| status | active |
| tags | [] |
| updated_at | 2016-07-17T09:34:01Z |
| virtual_size | None |
| visibility | public |
+------------------+--------------------------------------+
[root@linux-node1 ~]# ll /var/lib/glance/images/007d7dfc-aba2-4d50-8e37-723a74d2104e
-rw-r-----. 1 glance glance 13287936 Jul 17 17:34 /var/lib/glance/images/007d7dfc-aba2-4d50-8e37-723a74d2104e
[root@linux-node1 ~]# file /var/lib/glance/images/007d7dfc-aba2-4d50-8e37-723a74d2104e
/var/lib/glance/images/007d7dfc-aba2-4d50-8e37-723a74d2104e: QEMU QCOW Image (v2), 41126400 bytes
List the uploaded image:
[root@linux-node1 ~]# glance image-list
+--------------------------------------+--------+
| ID | Name |
+--------------------------------------+--------+
| 007d7dfc-aba2-4d50-8e37-723a74d2104e | cirros |
+--------------------------------------+--------+
######Deploying Nova on the controller node
Install:
[root@linux-node1 ~]# yum -y install openstack-nova-api openstack-nova-cert openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler python-novaclient
Nova configuration:
1) Database
[root@linux-node1 ~]# grep -n ^connection /etc/nova/nova.conf
1742:connection=mysql://nova:nova@192.168.1.140/nova
Sync the database:
[root@linux-node1 ~]# su -s /bin/sh -c "nova-manage db sync" nova
Log into the database to verify:
[root@linux-node1 ~]# mysql -unova -pnova
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 41
Server version: 5.5.47-MariaDB-log MariaDB Server
Copyright (c) 2000, 2015, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> use nova;
Database changed
MariaDB [nova]> show tables;
+--------------------------------------------+
| Tables_in_nova |
+--------------------------------------------+
| agent_builds |
| aggregate_hosts |
| aggregate_metadata |
| aggregates |
| block_device_mapping |
| bw_usage_cache |
| cells |
| certificates |
| compute_nodes |
| console_pools |
| consoles |
| dns_domains |
| fixed_ips |
| floating_ips |
| instance_actions |
| instance_actions_events |
| instance_extra |
| instance_faults |
| instance_group_member |
| instance_group_policy |
| instance_groups |
| instance_id_mappings |
| instance_info_caches |
| instance_metadata |
| instance_system_metadata |
| instance_type_extra_specs |
| instance_type_projects |
| instance_types |
| instances |
| key_pairs |
| migrate_version |
| migrations |
| networks |
| pci_devices |
| project_user_quotas |
| provider_fw_rules |
| quota_classes |
| quota_usages |
| quotas |
| reservations |
| s3_images |
| security_group_default_rules |
| security_group_instance_association |
| security_group_rules |
| security_groups |
| services |
| shadow_agent_builds |
| shadow_aggregate_hosts |
| shadow_aggregate_metadata |
| shadow_aggregates |
| shadow_block_device_mapping |
| shadow_bw_usage_cache |
| shadow_cells |
| shadow_certificates |
| shadow_compute_nodes |
| shadow_console_pools |
| shadow_consoles |
| shadow_dns_domains |
| shadow_fixed_ips |
| shadow_floating_ips |
| shadow_instance_actions |
| shadow_instance_actions_events |
| shadow_instance_extra |
| shadow_instance_faults |
| shadow_instance_group_member |
| shadow_instance_group_policy |
| shadow_instance_groups |
| shadow_instance_id_mappings |
| shadow_instance_info_caches |
| shadow_instance_metadata |
| shadow_instance_system_metadata |
| shadow_instance_type_extra_specs |
| shadow_instance_type_projects |
| shadow_instance_types |
| shadow_instances |
| shadow_key_pairs |
| shadow_migrate_version |
| shadow_migrations |
| shadow_networks |
| shadow_pci_devices |
| shadow_project_user_quotas |
| shadow_provider_fw_rules |
| shadow_quota_classes |
| shadow_quota_usages |
| shadow_quotas |
| shadow_reservations |
| shadow_s3_images |
| shadow_security_group_default_rules |
| shadow_security_group_instance_association |
| shadow_security_group_rules |
| shadow_security_groups |
| shadow_services |
| shadow_snapshot_id_mappings |
| shadow_snapshots |
| shadow_task_log |
| shadow_virtual_interfaces |
| shadow_volume_id_mappings |
| shadow_volume_usage_cache |
| snapshot_id_mappings |
| snapshots |
| tags |
| task_log |
| virtual_interfaces |
| volume_id_mappings |
| volume_usage_cache |
+--------------------------------------------+
105 rows in set (0.00 sec)
2) RabbitMQ settings:
[root@linux-node1 ~]# grep -n ^[r] /etc/nova/nova.conf
1422:rpc_backend=rabbit
2924:rabbit_host=192.168.1.140
2928:rabbit_port=5672
2940:rabbit_userid=openstack
2944:rabbit_password=openstack
3) Keystone settings:
Create the nova user, add it to the service project, and grant it the admin role:
[root@linux-node1 ~]# source admin-openrc.sh
[root@linux-node1 ~]# openstack user create --domain default --password=nova nova
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | a86eea86753343f7a57f8fc740730e96 |
| name | nova |
+-----------+----------------------------------+
[root@linux-node1 ~]# openstack role add --project service --user nova admin
Add the following under the [keystone_authtoken] section of /etc/nova/nova.conf:
auth_uri = http://192.168.1.140:5000
auth_url = http://192.168.1.140:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = nova
password = nova
[root@linux-node1 ~]# grep -n ^[a-z] /etc/nova/nova.conf
198:my_ip=192.168.1.140 --->variable for reuse below
344:enabled_apis=osapi_compute,metadata --->disables the EC2 API
506:auth_strategy=keystone --->use keystone for auth (note: this option lives in the [DEFAULT] section)
838:network_api_class=nova.network.neutronv2.api.API ---> use neutron for networking; the dots mirror the Python module path
930:linuxnet_interface_driver=nova.network.linux_net.NeutronLinuxBridgeInterfaceDriver ---> (the class was formerly named LinuxBridgeInterfaceDriver, now NeutronLinuxBridgeInterfaceDriver)
1064:security_group_api=neutron --->security groups are handled by neutron
1241:firewall_driver=nova.virt.firewall.NoopFirewallDriver --->(no-op firewall driver; let neutron do the filtering)
1423:rpc_backend=rabbit ---> use the rabbitmq message queue
1743:connection=mysql://nova:nova@192.168.1.140/nova --->database connection
1944:host=$my_ip ---> glance host address
2126:auth_uri = http://192.168.1.140:5000
2127:auth_url = http://192.168.1.140:35357
2128:auth_plugin = password
2129:project_domain_id = default
2130:user_domain_id = default
2131:project_name = service --->use the service project
2132:username = nova
2133:password = nova
2752:lock_path=/var/lib/nova/tmp --->lock file path
2933:rabbit_host=192.168.1.140 --->rabbitmq host
2937:rabbit_port=5672 --->rabbitmq port
2949:rabbit_userid=openstack --->rabbitmq user
2953:rabbit_password=openstack --->rabbitmq password
3320:vncserver_listen=$my_ip --->vnc listen address
3325:vncserver_proxyclient_address=$my_ip --->vnc proxy client address
Enable and start all of the nova control services:
[root@linux-node1 ~]# systemctl enable openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
[root@linux-node1 ~]# systemctl start openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
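The two long systemctl lines above can be generated from a single service list, which avoids typos when the list changes. A minimal sketch (the commands are echoed for review here, not executed):

```shell
# Build the enable/start commands for the nova control-plane services
# from one list, instead of typing the long lines by hand.
services="openstack-nova-api openstack-nova-cert openstack-nova-consoleauth"
services="$services openstack-nova-scheduler openstack-nova-conductor openstack-nova-novncproxy"
for action in enable start; do
  cmd="systemctl $action"
  for s in $services; do cmd="$cmd $s.service"; done
  echo "$cmd"   # pipe to sh to actually run it
done
```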
Register nova in keystone, then verify that the controller's nova services are configured correctly:
[root@linux-node1 ~]# openstack service create --name nova --description "OpenStack Compute" compute
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Compute |
| enabled | True |
| id | 95f2f0bdf9bb4b4a94a06bb9bfcbcb53 |
| name | nova |
| type | compute |
+-------------+----------------------------------+
[root@linux-node1 ~]# openstack endpoint create --region RegionOne compute public http://192.168.1.140:8774/v2/%\(tenant_id\)s
+--------------+--------------------------------------------+
| Field | Value |
+--------------+--------------------------------------------+
| enabled | True |
| id | a9baebc1b4dd445e99eef273d00dce24 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 95f2f0bdf9bb4b4a94a06bb9bfcbcb53 |
| service_name | nova |
| service_type | compute |
| url | http://192.168.1.140:8774/v2/%(tenant_id)s |
+--------------+--------------------------------------------+
[root@linux-node1 ~]# openstack endpoint create --region RegionOne compute internal http://192.168.1.140:8774/v2/%\(tenant_id\)s
+--------------+--------------------------------------------+
| Field | Value |
+--------------+--------------------------------------------+
| enabled | True |
| id | a07998f9aa034d66b1deb774d352e455 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 95f2f0bdf9bb4b4a94a06bb9bfcbcb53 |
| service_name | nova |
| service_type | compute |
| url | http://192.168.1.140:8774/v2/%(tenant_id)s |
+--------------+--------------------------------------------+
[root@linux-node1 ~]# openstack endpoint create --region RegionOne compute admin http://192.168.1.140:8774/v2/%\(tenant_id\)s
+--------------+--------------------------------------------+
| Field | Value |
+--------------+--------------------------------------------+
| enabled | True |
| id | a072ecc3091b4562b5de6b0536bb1c0f |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 95f2f0bdf9bb4b4a94a06bb9bfcbcb53 |
| service_name | nova |
| service_type | compute |
| url | http://192.168.1.140:8774/v2/%(tenant_id)s |
+--------------+--------------------------------------------+
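The three endpoint-create calls above differ only in the interface name, so they can be driven by a loop. A sketch that echoes the commands instead of running them (the URL is the one registered above):

```shell
# Generate the public/internal/admin endpoint-create commands for nova.
url='http://192.168.1.140:8774/v2/%(tenant_id)s'
cmds=$(for iface in public internal admin; do
  echo "openstack endpoint create --region RegionOne compute $iface '$url'"
done)
echo "$cmds"   # review first; pipe to sh to execute
```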
[root@linux-node1 ~]# openstack host list
+-------------------------+-------------+----------+
| Host Name | Service | Zone |
+-------------------------+-------------+----------+
| linux-node1.example.com | conductor | internal |
| linux-node1.example.com | consoleauth | internal |
| linux-node1.example.com | scheduler | internal |
| linux-node1.example.com | cert | internal |
+-------------------------+-------------+----------+
Compute node deployment
Disable the firewall and SELinux
[root@linux-node2 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.140 linux-node1 linux-node1.example.com
192.168.1.150 linux-node2 linux-node2.example.com
Use the Aliyun mirror repos:
[root@linux-node2 ~]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
[root@linux-node2 ~]# wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
Base packages:
[root@linux-node2 ~]#yum -y install epel-release
[root@linux-node2 ~]#yum install centos-release-openstack-liberty -y
[root@linux-node2 ~]#yum install python-openstackclient -y
Install Nova on compute node linux-node2.example.com:
[root@linux-node2 ~]# yum install openstack-nova-compute sysfsutils -y
Install Neutron on compute node linux-node2.example.com:
[root@linux-node2 ~]# yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset -y
Time synchronization setup
[root@linux-node2 ~]# yum install -y chrony
[root@linux-node2 ~]# cat /etc/chrony.conf
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server 192.168.1.140 iburst (keep only this one server line, pointing at the controller node; delete all the others)
Enable chrony at boot and start it:
[root@linux-node2 ~]# systemctl enable chronyd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/chronyd.service to /usr/lib/systemd/system/chronyd.service.
[root@linux-node2 ~]# systemctl start chronyd.service
Set the CentOS 7 time zone:
[root@linux-node2 ~]# timedatectl set-timezone Asia/Shanghai
Verify time synchronization:
[root@linux-node2 ~]# chronyc sources
210 Number of sources = 1
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^* linux-node1 3 6 177 35 +37us[ +101us] +/- 42ms
[root@linux-node2 ~]# date
Tue Jul 19 21:03:53 CST 2016
#####Nova compute node deployment
1) nova-compute normally runs on compute nodes; it receives requests over the message queue and manages the VM life cycle.
2) nova-compute manages KVM via libvirt, Xen via XenAPI, and so on.
Configure the compute node by copying the controller's nova.conf straight over, then adjusting it:
[root@linux-node1 ~]# scp /etc/nova/nova.conf root@192.168.1.150:/etc/nova/
[root@linux-node2 ~]# grep -n ^[a-Z] /etc/nova/nova.conf
198:my_ip=192.168.1.150 --->change to this host's IP
344:enabled_apis=osapi_compute,metadata
506:auth_strategy=keystone
838:network_api_class=nova.network.neutronv2.api.API
930:linuxnet_interface_driver=nova.network.linux_net.NeutronLinuxBridgeInterfaceDriver
1064:security_group_api=neutron
1241:firewall_driver=nova.virt.firewall.NoopFirewallDriver
1423:rpc_backend=rabbit
1743:connection=mysql://nova:nova@192.168.1.140/nova
1944:host=192.168.1.140
2126:auth_uri = http://192.168.1.140:5000
2127:auth_url = http://192.168.1.140:35357
2128:auth_plugin = password
2129:project_domain_id = default
2130:user_domain_id = default
2131:project_name = service
2132:username = nova
2133:password = nova
2310:virt_type=kvm --->use KVM; requires CPU virtualization support, check with grep "vmx" /proc/cpuinfo
2752:lock_path=/var/lib/nova/tmp
2933:rabbit_host=192.168.1.140
2937:rabbit_port=5672
2949:rabbit_userid=openstack
2953:rabbit_password=openstack
3311:novncproxy_base_url=http://192.168.1.140:6080/vnc_auto.html --->novncproxy IP address and port
3320:vncserver_listen=0.0.0.0 --->vnc listens on 0.0.0.0
3325:vncserver_proxyclient_address=$my_ip
3329:enabled=true --->enable vnc
3333:keymap=en-us
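Since only a handful of keys differ from the controller's copy, the edits can be scripted with sed. A sketch run against a scratch copy of the relevant lines (adapt the path before pointing it at the real /etc/nova/nova.conf):

```shell
# Demonstrate the per-key edits on a scratch copy (hypothetical temp file).
conf=$(mktemp)
printf '%s\n' 'my_ip=192.168.1.140' 'vncserver_listen=$my_ip' > "$conf"
# change my_ip to this host's IP and make vnc listen on all interfaces
sed -i -e 's/^my_ip=.*/my_ip=192.168.1.150/' \
       -e 's/^vncserver_listen=.*/vncserver_listen=0.0.0.0/' "$conf"
result=$(cat "$conf")
echo "$result"
rm -f "$conf"
```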
Enable and start libvirtd and nova-compute on the compute node:
[root@linux-node2 ~]# systemctl enable libvirtd openstack-nova-compute
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-compute.service to /usr/lib/systemd/system/openstack-nova-compute.service.
[root@linux-node2 ~]# systemctl start libvirtd openstack-nova-compute
On the controller, list the registered hosts; the last entry, compute, is the newly registered host:
[root@linux-node1 ~]# source admin-openrc.sh
[root@linux-node1 ~]# openstack host list
+-------------------------+-------------+----------+
| Host Name | Service | Zone |
+-------------------------+-------------+----------+
| linux-node1.example.com | conductor | internal |
| linux-node1.example.com | consoleauth | internal |
| linux-node1.example.com | scheduler | internal |
| linux-node1.example.com | cert | internal |
| linux-node2.example.com | compute | nova |
+-------------------------+-------------+----------+
On the controller, verify that nova can reach glance:
[root@linux-node1 ~]# nova image-list
+--------------------------------------+--------+--------+--------+
| ID | Name | Status | Server |
+--------------------------------------+--------+--------+--------+
| 007d7dfc-aba2-4d50-8e37-723a74d2104e | cirros | ACTIVE | |
+--------------------------------------+--------+--------+--------+
Verify that nova can reach keystone:
[root@linux-node1 ~]# nova endpoints
WARNING: glance has no endpoint in ! Available endpoints for this service:
+-----------+----------------------------------+
| glance | Value |
+-----------+----------------------------------+
| id | 4c515c60b31a4d56b37837e7f0618dde |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| url | http://192.168.1.140:9292 |
+-----------+----------------------------------+
+-----------+----------------------------------+
| glance | Value |
+-----------+----------------------------------+
| id | 6485e7c67fc14437845383ccdb11b337 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| url | http://192.168.1.140:9292 |
+-----------+----------------------------------+
+-----------+----------------------------------+
| glance | Value |
+-----------+----------------------------------+
| id | 8175af137c0b48b38e5772ec21de5272 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| url | http://192.168.1.140:9292 |
+-----------+----------------------------------+
WARNING: keystone has no endpoint in ! Available endpoints for this service:
+-----------+----------------------------------+
| keystone | Value |
+-----------+----------------------------------+
| id | 574cea7b35ee43e0bafcbfad39e2d064 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| url | http://192.168.1.140:35357/v2.0 |
+-----------+----------------------------------+
+-----------+----------------------------------+
| keystone | Value |
+-----------+----------------------------------+
| id | e1acc647cc5742528309b104c1dd7386 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| url | http://192.168.1.140:5000/v2.0 |
+-----------+----------------------------------+
+-----------+----------------------------------+
| keystone | Value |
+-----------+----------------------------------+
| id | e519daccb2e6425a8008e95d8136cddb |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| url | http://192.168.1.140:5000/v2.0 |
+-----------+----------------------------------+
WARNING: nova has no endpoint in ! Available endpoints for this service:
+-----------+---------------------------------------------------------------+
| nova | Value |
+-----------+---------------------------------------------------------------+
| id | a072ecc3091b4562b5de6b0536bb1c0f |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| url | http://192.168.1.140:8774/v2/8b58ede01bce4892bd9afe132e8d3b9d |
+-----------+---------------------------------------------------------------+
+-----------+---------------------------------------------------------------+
| nova | Value |
+-----------+---------------------------------------------------------------+
| id | a07998f9aa034d66b1deb774d352e455 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| url | http://192.168.1.140:8774/v2/8b58ede01bce4892bd9afe132e8d3b9d |
+-----------+---------------------------------------------------------------+
+-----------+---------------------------------------------------------------+
| nova | Value |
+-----------+---------------------------------------------------------------+
| id | a9baebc1b4dd445e99eef273d00dce24 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| url | http://192.168.1.140:8774/v2/8b58ede01bce4892bd9afe132e8d3b9d |
+-----------+---------------------------------------------------------------+
#####Neutron service deployment
Naming history: Nova-Network --------> Quantum ----------> Neutron
1) Controller node deployment:
[root@linux-node1 ~]# yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge python-neutronclient ebtables ipset -y
Register the neutron service:
[root@linux-node1 ~]# source admin-openrc.sh
[root@linux-node1 ~]# openstack service create --name neutron --description "OpenStack Networking" network
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Networking |
| enabled | True |
| id | fddbd5b7fa324e199a385df839d0931f |
| name | neutron |
| type | network |
+-------------+----------------------------------+
[root@linux-node1 ~]# openstack endpoint create --region RegionOne network public http://192.168.1.140:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | b8c03535c9d04e6395e93ce98fe92e28 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | fddbd5b7fa324e199a385df839d0931f |
| service_name | neutron |
| service_type | network |
| url | http://192.168.1.140:9696 |
+--------------+----------------------------------+
[root@linux-node1 ~]# openstack endpoint create --region RegionOne network internal http://192.168.1.140:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | d4c9a741ddc8474b8ff285db1bb4768c |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | fddbd5b7fa324e199a385df839d0931f |
| service_name | neutron |
| service_type | network |
| url | http://192.168.1.140:9696 |
+--------------+----------------------------------+
[root@linux-node1 ~]# openstack endpoint create --region RegionOne network admin http://192.168.1.140:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | d120ed30b1404867a5728bb8adecda25 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | fddbd5b7fa324e199a385df839d0931f |
| service_name | neutron |
| service_type | network |
| url | http://192.168.1.140:9696 |
+--------------+----------------------------------+
Create the neutron user, add it to the service project, and grant it the admin role:
[root@linux-node1 ~]# openstack user create --domain default --password=neutron neutron
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | 8298804f466d4fe099f96c9f45e50a8a |
| name | neutron |
+-----------+----------------------------------+
[root@linux-node1 ~]# openstack role add --project service --user neutron admin
Edit the neutron configuration file:
[root@linux-node1 ~]# grep -n ^[a-Z] /etc/neutron/neutron.conf
20:state_path = /var/lib/neutron
60:core_plugin = ml2
77:service_plugins = router
92:auth_strategy = keystone
360:notify_nova_on_port_status_changes = True
364:notify_nova_on_port_data_changes = True
367:nova_url = http://192.168.1.140:8774/v2
573:rpc_backend=rabbit
726:auth_uri = http://192.168.1.140:5000
727:auth_url = http://192.168.1.140:35357
728:auth_plugin = password
729:project_domain_id = default
730:user_domain_id = default
731:project_name = service
732:username = neutron
733:password = neutron
741:connection = mysql://neutron:neutron@192.168.1.140:3306/neutron
784:auth_url = http://192.168.1.140:35357
785:auth_plugin = password
786:project_domain_id = default
787:user_domain_id = default
788:region_name = RegionOne
789:project_name = service
790:username = nova
791:password = nova
827:lock_path = $state_path/lock
1007:rabbit_host = 192.168.1.140
1011:rabbit_port = 5672
1023:rabbit_userid = openstack
1027:rabbit_password = openstack
Edit the ML2 configuration file (ML2 is covered in more detail later):
[root@linux-node1 ~]# grep -n ^[a-Z] /etc/neutron/plugins/ml2/ml2_conf.ini
5:type_drivers = flat,vlan,gre,vxlan,geneve --->available type drivers
12:tenant_network_types = vlan,gre,vxlan,geneve --->tenant network types
18:mechanism_drivers = openvswitch,linuxbridge --->supported mechanism drivers
27:extension_drivers = port_security --->port security extension
67:flat_networks = physnet1 --->a single flat network (on the same network as the host)
120:enable_ipset = True
Edit the linuxbridge agent configuration file:
[root@linux-node1 ~]# grep -n ^[a-Z] /etc/neutron/plugins/ml2/linuxbridge_agent.ini
9:physical_interface_mappings = physnet1:eth0 --->map physnet1 to NIC eth0
16:enable_vxlan = false --->disable vxlan
56:prevent_arp_spoofing = True
61:firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
66:enable_security_group = True
Edit the DHCP agent configuration file:
[root@linux-node1 ~]# grep -n ^[a-Z] /etc/neutron/dhcp_agent.ini
27:interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
34:dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
55:enable_isolated_metadata = true
Edit metadata_agent.ini:
[root@linux-node1 ~]# grep -n ^[a-Z] /etc/neutron/metadata_agent.ini
4:auth_uri = http://192.168.1.140:5000
5:auth_url = http://192.168.1.140:35357
6:auth_region = RegionOne
7:auth_plugin = password
8:project_domain_id = default
9:user_domain_id = default
10:project_name = service
11:username = neutron
12:password = neutron
29:nova_metadata_ip = 192.168.1.140
52:metadata_proxy_shared_secret = neutron
On the controller, add the neutron settings to nova; put the following under the [neutron] section of nova.conf:
url = http://192.168.1.140:9696
auth_url = http://192.168.1.140:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
Check the result:
[root@linux-node1 ~]# grep -n "^[a-Z]" /etc/nova/nova.conf --->some lines omitted!
2560:url = http://192.168.1.140:9696
2561:auth_url = http://192.168.1.140:35357
2562:auth_plugin = password
2563:project_domain_id = default
2564:user_domain_id = default
2565:region_name = RegionOne
2566:project_name = service
2567:username = neutron
2568:password = neutron
2575:service_metadata_proxy=true
2578:metadata_proxy_shared_secret = neutron
Create the ml2 symlink:
[root@linux-node1 ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
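A quick way to confirm the symlink is correct is readlink; demonstrated here against a scratch directory (on the node the paths are under /etc/neutron):

```shell
# Create a plugin.ini -> ml2_conf.ini symlink in a scratch dir and verify it.
d=$(mktemp -d)
touch "$d/ml2_conf.ini"
ln -s "$d/ml2_conf.ini" "$d/plugin.ini"
target=$(readlink "$d/plugin.ini")
echo "$target"   # should end in ml2_conf.ini
rm -rf "$d"
```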
Sync the neutron database and check the result:
[root@linux-node1 ~]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
[root@linux-node1 ~]# mysql -uneutron -pneutron
MariaDB [(none)]> use neutron;
Database changed
MariaDB [neutron]> show tables;
+-----------------------------------------+
| Tables_in_neutron |
+-----------------------------------------+
| address_scopes |
| agents |
| alembic_version |
| allowedaddresspairs |
| arista_provisioned_nets |
| arista_provisioned_tenants |
| arista_provisioned_vms |
| brocadenetworks |
| brocadeports |
| cisco_csr_identifier_map |
| cisco_hosting_devices |
| cisco_ml2_apic_contracts |
| cisco_ml2_apic_host_links |
| cisco_ml2_apic_names |
| cisco_ml2_n1kv_network_bindings |
| cisco_ml2_n1kv_network_profiles |
| cisco_ml2_n1kv_policy_profiles |
| cisco_ml2_n1kv_port_bindings |
| cisco_ml2_n1kv_profile_bindings |
| cisco_ml2_n1kv_vlan_allocations |
| cisco_ml2_n1kv_vxlan_allocations |
| cisco_ml2_nexus_nve |
| cisco_ml2_nexusport_bindings |
| cisco_port_mappings |
| cisco_router_mappings |
| consistencyhashes |
| csnat_l3_agent_bindings |
| default_security_group |
| dnsnameservers |
| dvr_host_macs |
| embrane_pool_port |
| externalnetworks |
| extradhcpopts |
| firewall_policies |
| firewall_rules |
| firewalls |
| flavors |
| flavorserviceprofilebindings |
| floatingips |
| ha_router_agent_port_bindings |
| ha_router_networks |
| ha_router_vrid_allocations |
| healthmonitors |
| ikepolicies |
| ipallocationpools |
| ipallocations |
| ipamallocationpools |
| ipamallocations |
| ipamavailabilityranges |
| ipamsubnets |
| ipavailabilityranges |
| ipsec_site_connections |
| ipsecpeercidrs |
| ipsecpolicies |
| lsn |
| lsn_port |
| maclearningstates |
| members |
| meteringlabelrules |
| meteringlabels |
| ml2_brocadenetworks |
| ml2_brocadeports |
| ml2_dvr_port_bindings |
| ml2_flat_allocations |
| ml2_geneve_allocations |
| ml2_geneve_endpoints |
| ml2_gre_allocations |
| ml2_gre_endpoints |
| ml2_network_segments |
| ml2_nexus_vxlan_allocations |
| ml2_nexus_vxlan_mcast_groups |
| ml2_port_binding_levels |
| ml2_port_bindings |
| ml2_ucsm_port_profiles |
| ml2_vlan_allocations |
| ml2_vxlan_allocations |
| ml2_vxlan_endpoints |
| multi_provider_networks |
| networkconnections |
| networkdhcpagentbindings |
| networkgatewaydevicereferences |
| networkgatewaydevices |
| networkgateways |
| networkqueuemappings |
| networkrbacs |
| networks |
| networksecuritybindings |
| neutron_nsx_network_mappings |
| neutron_nsx_port_mappings |
| neutron_nsx_router_mappings |
| neutron_nsx_security_group_mappings |
| nexthops |
| nsxv_edge_dhcp_static_bindings |
| nsxv_edge_vnic_bindings |
| nsxv_firewall_rule_bindings |
| nsxv_internal_edges |
| nsxv_internal_networks |
| nsxv_port_index_mappings |
| nsxv_port_vnic_mappings |
| nsxv_router_bindings |
| nsxv_router_ext_attributes |
| nsxv_rule_mappings |
| nsxv_security_group_section_mappings |
| nsxv_spoofguard_policy_network_mappings |
| nsxv_tz_network_bindings |
| nsxv_vdr_dhcp_bindings |
| nuage_net_partition_router_mapping |
| nuage_net_partitions |
| nuage_provider_net_bindings |
| nuage_subnet_l2dom_mapping |
| ofcfiltermappings |
| ofcnetworkmappings |
| ofcportmappings |
| ofcroutermappings |
| ofctenantmappings |
| packetfilters |
| poolloadbalanceragentbindings |
| poolmonitorassociations |
| pools |
| poolstatisticss |
| portbindingports |
| portinfos |
| portqueuemappings |
| ports |
| portsecuritybindings |
| providerresourceassociations |
| qos_bandwidth_limit_rules |
| qos_network_policy_bindings |
| qos_policies |
| qos_port_policy_bindings |
| qosqueues |
| quotas |
| quotausages |
| reservations |
| resourcedeltas |
| router_extra_attributes |
| routerl3agentbindings |
| routerports |
| routerproviders |
| routerroutes |
| routerrules |
| routers |
| securitygroupportbindings |
| securitygrouprules |
| securitygroups |
| serviceprofiles |
| sessionpersistences |
| subnetpoolprefixes |
| subnetpools |
| subnetroutes |
| subnets |
| tz_network_bindings |
| vcns_router_bindings |
| vips |
| vpnservices |
+-----------------------------------------+
155 rows in set (0.01 sec)
Restart nova-api, then enable and start the neutron services:
[root@linux-node1 ~]# systemctl restart openstack-nova-api
[root@linux-node1 ~]# systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
[root@linux-node1 ~]# systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
Check the neutron agents:
[root@linux-node1 ~]# neutron agent-list
+--------------------------------------+--------------------+-------------------------+-------+----------------+---------------------------+
| id | agent_type | host | alive | admin_state_up | binary |
+--------------------------------------+--------------------+-------------------------+-------+----------------+---------------------------+
| 744dd33f-9fb0-4f64-96e8-62192fbce369 | DHCP agent | linux-node1.example.com | :-) | True | neutron-dhcp-agent |
| 7fdd47bf-2fe6-420f-94b4-098cf905eead | Linux bridge agent | linux-node1.example.com | :-) | True | neutron-linuxbridge-agent |
| d81831bc-ab31-466a-ae7c-189fca72548a | Metadata agent | linux-node1.example.com | :-) | True | neutron-metadata-agent |
+--------------------------------------+--------------------+-------------------------+-------+----------------+---------------------------+
Now deploy neutron on the compute node; the configuration files are copied over unchanged.
Install Neutron on compute node linux-node2.example.com:
[root@linux-node2 ~]# yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset -y
[root@linux-node1 config]#scp /etc/neutron/neutron.conf 192.168.1.150:/etc/neutron/
[root@linux-node1 config]#scp /etc/neutron/plugins/ml2/linuxbridge_agent.ini 192.168.1.150:/etc/neutron/plugins/ml2/
Edit nova.conf on the compute node by adding the following under the [neutron] section:
3033:url = http://192.168.1.140:9696
3034:auth_url = http://192.168.1.140:35357
3035:auth_plugin = password
3036:project_domain_id = default
3037:user_domain_id = default
3038:region_name = RegionOne
3039:project_name = service
3040:username = neutron
3041:password = neutron
3043:service_metadata_proxy = True
3044:metadata_proxy_shared_secret = neutron
Copy the ml2_conf.ini file (no changes needed) and create the ml2 symlink:
[root@linux-node1 ~]# scp /etc/neutron/plugins/ml2/ml2_conf.ini 192.168.1.150:/etc/neutron/plugins/ml2/
[root@linux-node2 ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
Restart nova-compute on the compute node:
[root@linux-node2 ml2]# systemctl restart openstack-nova-compute.service
Enable and start the linuxbridge agent on the compute node:
[root@linux-node2 ml2]# systemctl enable neutron-linuxbridge-agent.service
[root@linux-node2 ml2]# systemctl start neutron-linuxbridge-agent.service
On the controller, check the neutron agents again; four entries (three on the controller, one on the compute node) means success:
[root@linux-node1 ~]# neutron agent-list
+--------------------------------------+--------------------+-------------------------+-------+----------------+---------------------------+
| id | agent_type | host | alive | admin_state_up | binary |
+--------------------------------------+--------------------+-------------------------+-------+----------------+---------------------------+
| 744dd33f-9fb0-4f64-96e8-62192fbce369 | DHCP agent | linux-node1.example.com | :-) | True | neutron-dhcp-agent |
| 7fdd47bf-2fe6-420f-94b4-098cf905eead | Linux bridge agent | linux-node1.example.com | :-) | True | neutron-linuxbridge-agent |
| d81831bc-ab31-466a-ae7c-189fca72548a | Metadata agent | linux-node1.example.com | :-) | True | neutron-metadata-agent |
| db4809de-3738-49c6-92b5-214327027aa5 | Linux bridge agent | linux-node2.example.com | :-) | True | neutron-linuxbridge-agent |
+--------------------------------------+--------------------+-------------------------+-------+----------------+---------------------------+
Note:
If all four agents do not appear, check the permissions of the files copied to the compute node:
[root@linux-node2 ~]# systemctl start neutron-linuxbridge-agent.service
[root@linux-node2 ~]# systemctl status neutron-linuxbridge-agent.service
● neutron-linuxbridge-agent.service - OpenStack Neutron Linux Bridge Agent
Loaded: loaded (/usr/lib/systemd/system/neutron-linuxbridge-agent.service; enabled; vendor preset: disabled)
Active: active (running) since Sun 2016-07-24 16:03:09 CST; 5s ago
Main PID: 6082 (neutron-linuxbr)
CGroup: /system.slice/neutron-linuxbridge-agent.service
└─6082 /usr/bin/python2 /usr/bin/neutron-linuxbridge-agent --config-file /usr/share/neutron/neutron-dist.co...
Jul 24 16:03:10 linux-node2.example.com neutron-linuxbridge-agent[6082]: server = manager.get_server()
Jul 24 16:03:10 linux-node2.example.com neutron-linuxbridge-agent[6082]: File "/usr/lib64/python2.7/multiprocessing/...er
Jul 24 16:03:10 linux-node2.example.com neutron-linuxbridge-agent[6082]: self._authkey, self._serializer)
Jul 24 16:03:10 linux-node2.example.com neutron-linuxbridge-agent[6082]: File "/usr/lib64/python2.7/multiprocessing/...__
Jul 24 16:03:10 linux-node2.example.com neutron-linuxbridge-agent[6082]: self.listener = Listener(address=address, b...6)
Jul 24 16:03:10 linux-node2.example.com neutron-linuxbridge-agent[6082]: File "/usr/lib/python2.7/site-packages/oslo...__
Jul 24 16:03:10 linux-node2.example.com neutron-linuxbridge-agent[6082]: self._socket.bind(address)
Jul 24 16:03:10 linux-node2.example.com neutron-linuxbridge-agent[6082]: File "/usr/lib64/python2.7/socket.py", line...th
Jul 24 16:03:10 linux-node2.example.com neutron-linuxbridge-agent[6082]: return getattr(self._sock,name)(*args)
Jul 24 16:03:10 linux-node2.example.com neutron-linuxbridge-agent[6082]: socket.error: [Errno 13] Permission denied
Fix:
[root@linux-node2 ml2]# cd /etc/neutron/
[root@linux-node2 neutron]# chown -R root.neutron plugins/
[root@linux-node2 neutron]# cd /etc/neutron/plugins/ml2/
[root@linux-node2 ml2]# ll
total 12
-rw-r-----. 1 root neutron 2775 Jul 24 15:39 linuxbridge_agent.ini
-rw-r-----. 1 root root 4870 Jul 24 16:14 ml2_conf.ini ---->note the group owner is root
[root@linux-node2 ml2]# chown root.neutron ml2_conf.ini ---->fix the group
Start neutron-linuxbridge-agent.service again and re-check its status; no errors means it is OK:
[root@linux-node2 ~]# systemctl start neutron-linuxbridge-agent.service
[root@linux-node2 ~]# systemctl status neutron-linuxbridge-agent.service
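The failure above came down to config files whose group owner kept the neutron user from reading them. A small hypothetical helper to spot-check group ownership (demonstrated on a scratch file, whose group is deliberately not 'neutron'):

```shell
# Warn about files whose group owner is not the expected one.
check_group() {
  dir=$1; want=$2
  for f in "$dir"/*.ini; do
    [ -e "$f" ] || continue
    grp=$(stat -c '%G' "$f")    # GNU stat: print group owner name
    [ "$grp" = "$want" ] || echo "WRONG GROUP: $f owned by $grp, want $want"
  done
}
d=$(mktemp -d)
touch "$d/ml2_conf.ini"          # scratch file; its group will not be 'neutron'
warnings=$(check_group "$d" neutron)
echo "$warnings"
rm -rf "$d"
```

On the compute node this would be called as `check_group /etc/neutron/plugins/ml2 neutron`.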
Create a virtual machine
Create a single flat network named flat: network type flat, shared, provider network physnet1 (which is bound to eth0):
[root@linux-node1 ~]# source admin-openrc.sh
[root@linux-node1 ~]# neutron net-create flat --shared --provider:physical_network physnet1 --provider:network_type flat
Created a new network:
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | True |
| id | 3d5d33dd-3f66-4d72-a7c9-21bf66be9a43 |
| mtu | 0 |
| name | flat |
| port_security_enabled | True |
| provider:network_type | flat |
| provider:physical_network | physnet1 |
| provider:segmentation_id | |
| router:external | False |
| shared | True |
| status | ACTIVE |
| subnets | |
| tenant_id | 8b58ede01bce4892bd9afe132e8d3b9d |
+---------------------------+--------------------------------------+
Create a subnet named flat-subnet on the network above, with an allocation pool, DNS server, and gateway:
[root@linux-node1 ~]# neutron subnet-create flat 192.168.1.0/24 --name flat-subnet --allocation-pool start=192.168.1.200,end=192.168.1.220 --dns-nameserver 192.168.1.1 --gateway 192.168.1.1
Created a new subnet:
+-------------------+----------------------------------------------------+
| Field | Value |
+-------------------+----------------------------------------------------+
| allocation_pools | {"start": "192.168.1.200", "end": "192.168.1.220"} |
| cidr | 192.168.1.0/24 |
| dns_nameservers | 192.168.1.1 |
| enable_dhcp | True |
| gateway_ip | 192.168.1.1 |
| host_routes | |
| id | 53c35424-226c-44a4-9e87-850faac21b23 |
| ip_version | 4 |
| ipv6_address_mode | |
| ipv6_ra_mode | |
| name | flat-subnet |
| network_id | 3d5d33dd-3f66-4d72-a7c9-21bf66be9a43 |
| subnetpool_id | |
| tenant_id | 8b58ede01bce4892bd9afe132e8d3b9d |
+-------------------+----------------------------------------------------+
View the created network and subnet:
[root@linux-node1 ~]# neutron net-list
+--------------------------------------+------+-----------------------------------------------------+
| id | name | subnets |
+--------------------------------------+------+-----------------------------------------------------+
| 3d5d33dd-3f66-4d72-a7c9-21bf66be9a43 | flat | 53c35424-226c-44a4-9e87-850faac21b23 192.168.1.0/24 |
+--------------------------------------+------+-----------------------------------------------------+
[root@linux-node1 ~]# neutron subnet-list
+--------------------------------------+-------------+----------------+----------------------------------------------------+
| id | name | cidr | allocation_pools |
+--------------------------------------+-------------+----------------+----------------------------------------------------+
| 53c35424-226c-44a4-9e87-850faac21b23 | flat-subnet | 192.168.1.0/24 | {"start": "192.168.1.200", "end": "192.168.1.220"} |
+--------------------------------------+-------------+----------------+----------------------------------------------------+
Note: a network must not have more than one DHCP server, so before creating instances make sure any other DHCP service on this network is disabled.
Now boot an instance. To be able to log in to it over SSH, first generate a key pair and add the public key to OpenStack:
[root@linux-node1 ~]# source demo-openrc.sh
[root@linux-node1 ~]# ssh-keygen -q -N ""
Enter file in which to save the key (/root/.ssh/id_rsa): --->press Enter to accept the default
[root@linux-node1 ~]# nova keypair-add --pub-key .ssh/id_rsa.pub mykey
[root@linux-node1 ~]# nova keypair-list
+-------+-------------------------------------------------+
| Name | Fingerprint |
+-------+-------------------------------------------------+
| mykey | e4:54:ce:9c:94:de:77:f6:d1:0c:cd:12:10:85:4e:95 |
+-------+-------------------------------------------------+
[root@linux-node1 ~]# ls .ssh/
id_rsa id_rsa.pub known_hosts
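As an optional sanity check (not part of the original transcript): the fingerprint printed by `nova keypair-list` is the MD5 fingerprint of the public key, so it can be compared against the local file. A sketch using a throwaway key so it is self-contained:

```shell
# Generate a throwaway RSA key pair and print its MD5 fingerprint, the same
# format nova keypair-list shows. (-E md5 needs OpenSSH >= 6.8; on the
# OpenSSH shipped with CentOS 7.2, plain `ssh-keygen -l` already prints MD5.)
TMP_KEY=$(mktemp -u)
ssh-keygen -q -t rsa -N "" -f "$TMP_KEY"
FP=$(ssh-keygen -l -E md5 -f "${TMP_KEY}.pub")
echo "$FP"
rm -f "$TMP_KEY" "${TMP_KEY}.pub"
```

To check the real key, run the same `ssh-keygen -l` against `.ssh/id_rsa.pub` and compare with the Fingerprint column above.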
Add rules to the default security group to allow ICMP and open TCP port 22:
[root@linux-node1 ~]# nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range | Source Group |
+-------------+-----------+---------+-----------+--------------+
| icmp | -1 | -1 | 0.0.0.0/0 | |
+-------------+-----------+---------+-----------+--------------+
[root@linux-node1 ~]# nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp | 22 | 22 | 0.0.0.0/0 | |
+-------------+-----------+---------+-----------+--------------+
Before booting an instance, confirm the flavor (the equivalent of an EC2 instance type), the image (like an EC2 AMI), the network (like an EC2 VPC), and the security group (like an EC2 SG):
[root@linux-node1 ~]# nova flavor-list
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1 | m1.tiny | 512 | 1 | 0 | | 1 | 1.0 | True |
| 2 | m1.small | 2048 | 20 | 0 | | 1 | 1.0 | True |
| 3 | m1.medium | 4096 | 40 | 0 | | 2 | 1.0 | True |
| 4 | m1.large | 8192 | 80 | 0 | | 4 | 1.0 | True |
| 5 | m1.xlarge | 16384 | 160 | 0 | | 8 | 1.0 | True |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
Note: m1.tiny is used here.
[root@linux-node1 ~]# nova image-list
+--------------------------------------+--------+--------+--------+
| ID | Name | Status | Server |
+--------------------------------------+--------+--------+--------+
| 34b9a6a2-5d97-4ef5-b434-b09003417d0a | cirros | ACTIVE | |
+--------------------------------------+--------+--------+--------+
[root@linux-node1 ~]# neutron net-list
+--------------------------------------+------+-----------------------------------------------------+
| id | name | subnets |
+--------------------------------------+------+-----------------------------------------------------+
| 38ec5d30-41b4-4cd1-a36d-753f6eb686b6 | flat | 447c0b10-66d1-45ca-a607-696d99bd041a 192.168.1.0/24 |
+--------------------------------------+------+-----------------------------------------------------+
[root@linux-node1 ~]# nova secgroup-list
+--------------------------------------+---------+------------------------+
| Id | Name | Description |
+--------------------------------------+---------+------------------------+
| f0ada6c2-702f-4205-b473-6eda66a1c0c6 | default | Default security group |
+--------------------------------------+---------+------------------------+
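Copying the network UUID into the boot command below by hand is error-prone. Assuming the table layout printed by `neutron net-list` stays stable, the id can be extracted with awk; the table is inlined here as sample data so the sketch is self-contained:

```shell
# Sample `neutron net-list` table (an illustrative copy of the output above).
net_list='+--------------------------------------+------+----------+
| id                                   | name | subnets  |
+--------------------------------------+------+----------+
| 38ec5d30-41b4-4cd1-a36d-753f6eb686b6 | flat | 447c0b10 |
+--------------------------------------+------+----------+'
# Split on "|": field 2 is the id, field 3 the name; strip the padding spaces.
NET_ID=$(printf '%s\n' "$net_list" | awk -F'|' '$3 ~ /flat/ {gsub(/ /,"",$2); print $2}')
echo "$NET_ID"
```

Against a live cloud, pipe the real output instead: `NET_ID=$(neutron net-list | awk -F'|' '$3 ~ /flat/ {gsub(/ /,"",$2); print $2}')`.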
Boot an instance named hello-instance: flavor m1.tiny, the cirros image (downloaded earlier with wget), the network id from `neutron net-list`, the default security group, and the key pair created above:
[root@linux-node1 ~]# nova boot --flavor m1.tiny --image cirros --nic net-id=38ec5d30-41b4-4cd1-a36d-753f6eb686b6 --security-group default --key-name mykey hello-instance
+--------------------------------------+-----------------------------------------------+
| Property | Value |
+--------------------------------------+-----------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | - |
| OS-SRV-USG:terminated_at | - |
| accessIPv4 | |
| accessIPv6 | |
| adminPass | LEhEKB2namKm |
| config_drive | |
| created | 2016-11-20T03:45:39Z |
| flavor | m1.tiny (1) |
| hostId | |
| id | d64d7ede-e417-4977-bae9-d56a9678ac8e |
| image | cirros (34b9a6a2-5d97-4ef5-b434-b09003417d0a) |
| key_name | mykey |
| metadata | {} |
| name | hello-instance |
| os-extended-volumes:volumes_attached | [] |
| progress | 0 |
| security_groups | default |
| status | BUILD |
| tenant_id | fad08c0bdd9f4befbd2a5e5803641e50 |
| updated | 2016-11-20T03:45:40Z |
| user_id | 691e3dc88812467e925660e81da04432 |
+--------------------------------------+-----------------------------------------------+
Check the status of the new instance:
[root@linux-node1 ~]# nova list
+--------------------------------------+----------------+--------+------------+-------------+--------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+----------------+--------+------------+-------------+--------------------+
| d64d7ede-e417-4977-bae9-d56a9678ac8e | hello-instance | ERROR | - | NOSTATE | flat=192.168.1.201 |
+--------------------------------------+----------------+--------+------------+-------------+--------------------+
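An ERROR status like the one above means the scheduler or a compute node failed to build the instance, and it will not be reachable until the cause is fixed and the instance is rebuilt. `nova show hello-instance` exposes the reason in a fault field; assuming the usual output format, the message can be pulled out like this (the sample line is illustrative, not taken from this deployment):

```shell
# Illustrative fault line as it appears in `nova show` output when an instance
# lands in ERROR state ("No valid host" is a common scheduler failure).
fault_line='| fault | {"message": "No valid host was found.", "code": 500} |'
FAULT_MSG=$(printf '%s\n' "$fault_line" | sed -n 's/.*"message": "\([^"]*\)".*/\1/p')
echo "$FAULT_MSG"
```

With a live cloud you would run `nova show hello-instance | grep fault` and also check /var/log/nova/nova-compute.log on the compute node.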
SSH into the instance:
[root@linux-node1 ~]# ssh cirros@192.168.1.201
Generate a VNC URL with nova to connect to the instance from a web browser:
[root@linux-node1 ~]# nova get-vnc-console hello-instance novnc
Dashboard demo
1) Edit the dashboard configuration file:
Install it first (the dashboard can be installed on any node):
yum -y install openstack-dashboard
# vim /etc/openstack-dashboard/local_settings
29 ALLOWED_HOSTS = ['*', 'localhost'] --->which hosts may access the dashboard; '*' allows all
108 CACHES = {
109 'default': {
110 'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
111 'LOCATION': '192.168.1.240:11211',
112 }
113 } --->enable memcached as the cache backend
138 OPENSTACK_HOST = "192.168.1.240" --->set to the keystone host address
140 OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user" --->the role created earlier in keystone
320 TIME_ZONE = "Asia/Shanghai" --->set the time zone (note: "Asia/ShangHai" with a capital H is invalid)
Restart Apache:
[root@linux-node1 ~]# systemctl restart httpd
2) Using the dashboard
Log in to the dashboard
Log in as the keystone demo user (only with the admin role can you see all instances).
Open http://192.168.1.240/dashboard in a browser.
