Win10 + VirtualBox + OpenStack Mitaka

First, there is nothing much to demonstrate about installing VirtualBox: download it from the official site (https://www.virtualbox.org/wiki/Downloads), or grab an older build from (https://www.virtualbox.org/wiki/Download_Old_Builds).

Next, set up VirtualBox's network.

One thing to watch here: the contents of the IP address fields must be deleted entirely, then retyped after switching to an English input method.

Next, configure the Host-Only network.

Confirm below that DHCP is not enabled.

Now it's time to install Ubuntu.

Click New to create a virtual machine, choose Linux as the type and Ubuntu (64 bit) as the version.

The installation process itself is not shown here, but when configuring the network, use the settings shown below.

Adapter 2 is configured as follows:

Next, add storage: select the ubuntu-14.04.5-server-amd64.iso image downloaded earlier (download address: http://mirrors.aliyun.com/ubuntu-releases/14.04/ubuntu-14.04.5-server-amd64.iso).

After clicking "OK", start the virtual machine to begin the installation.

Language: English (press Enter)

Ubuntu: Install Ubuntu Server (press Enter)

Keep pressing Enter through the next prompts until:

Since NAT is used for outbound Internet access, select eth0 here. After pressing Enter, choose "Cancel"; pressing Enter raises a warning, which you can ignore by clicking "Continue". You will then be prompted to configure the network; choose manual configuration and press Enter:

IP address: 10.0.3.10

Netmask: 255.255.255.0

Gateway: 10.0.3.1

Name server addresses: 114.114.114.114

Hostname: controller

Domain name: leave blank and just press Enter.

Full name for the new user: openstack

Username for your account: openstack

Choose a password for the new user: 123456

Re-enter password to verify: 123456

Use weak password? Select "Yes" and press Enter.

Encrypt your home directory? Select "No" and press Enter.

Next, confirm the time zone. If Shanghai is detected, select "Yes" to continue; if not, select "No" and choose Shanghai from the list.

In Partition disks, choose "Guided - use entire disk" and press Enter twice; when the confirmation shown below appears, select "Yes" and press Enter.

Configure the package manager: leave the HTTP proxy unset, select Continue, and press Enter.

For both Configuring apt steps, just press Enter to move on.

Configuring tasksel: choose "No automatic updates", press Enter, then select OpenSSH server for installation.

Once the installation completes, the system reboots automatically. After the reboot, shut the VM down and clone it:

Select "Full clone".

Next, configure the system environment. Select the virtual machine you just created and click Start, then open this URL (https://github.com/JiYou/openstack-m/blob/master/os/interfaces), which is a sample NIC configuration file. Now view and edit the NIC configuration file interfaces:

openstack@controller:~$ cat /etc/network/interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet static
        address 10.0.3.10
        netmask 255.255.255.0
        network 10.0.3.0
        broadcast 10.0.3.255
        gateway 10.0.3.1
        # dns-* options are implemented by the resolvconf package, if installed
        dns-nameservers 114.114.114.114
auto eth1
iface eth1 inet static
        address 192.168.56.10
        netmask 255.255.255.0
        gateway 192.168.56.1
        dns-nameservers 114.114.114.114

Reboot the system for the changes to take effect, then test the connection with Xshell, PuTTY, or another remote management tool; I'm using Git Bash here:

xueji@xueji MINGW64 ~
$ ssh openstack@192.168.56.10
The authenticity of host '192.168.56.10 (192.168.56.10)' can't be established.
ECDSA key fingerprint is SHA256:DvbqAHwl6bcmX3FcvaJZ1REpRR8Oup89ST+a8WFBY7Y.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.56.10' (ECDSA) to the list of known hosts.
openstack@192.168.56.10's password:
Welcome to Ubuntu 14.04.5 LTS (GNU/Linux 4.4.0-31-generic x86_64)

 * Documentation:  https://help.ubuntu.com/

  System information as of Tue Jan 15 00:56:14 CST 2019

  System load:  0.11               Processes:           100
  Usage of /:   0.6% of 193.78GB   Users logged in:     0
  Memory usage: 2%                 IP address for eth0: 10.0.3.10
  Swap usage:   0%                 IP address for eth1: 192.168.56.10

  Graph this data and manage this system at:
    https://landscape.canonical.com/

186 packages can be updated.
0 updates are security updates.

New release '16.04.5 LTS' available.
Run 'do-release-upgrade' to upgrade to it.

Last login: Tue Jan 15 00:56:14 2019
openstack@controller:~$ ifconfig

Login succeeded.

Next, prepare the OpenStack packages.

openstack@controller:~$ sudo -s
[sudo] password for openstack:
root@controller:~# apt-get update
root@controller:~# apt-get install -y software-properties-common
root@controller:~# add-apt-repository cloud-archive:mitaka
 Ubuntu Cloud Archive for OpenStack Mitaka
 More info: https://wiki.ubuntu.com/ServerTeam/CloudArchive
Press [ENTER] to continue or ctrl-c to cancel adding it
#  press Enter
Reading package lists...
Building dependency tree...
Reading state information...
The following NEW packages will be installed:
  ubuntu-cloud-keyring
0 upgraded, 1 newly installed, 0 to remove and 177 not upgraded.
Need to get 5,086 B of archives.
After this operation, 34.8 kB of additional disk space will be used.
Get:1 http://us.archive.ubuntu.com/ubuntu/ trusty/universe ubuntu-cloud-keyring all 2012.08.14 [5,086 B]
Fetched 5,086 B in 0s (11.0 kB/s)
Selecting previously unselected package ubuntu-cloud-keyring.
(Reading database ... 58744 files and directories currently installed.)
Preparing to unpack .../ubuntu-cloud-keyring_2012.08.14_all.deb ...
Unpacking ubuntu-cloud-keyring (2012.08.14) ...
Setting up ubuntu-cloud-keyring (2012.08.14) ...
Importing ubuntu-cloud.archive.canonical.com keyring
OK
Processing ubuntu-cloud.archive.canonical.com removal keyring
gpg: /etc/apt/trustdb.gpg: trustdb created
OK

root@controller:~# apt-get update && apt-get dist-upgrade
root@controller:~# apt-get install -y python-openstackclient

Installing NTP and MySQL

root@controller:~# hostname -I
10.0.3.10 192.168.56.10
root@controller:~# tail -n -2 /etc/hosts
10.0.3.10 controller
192.168.56.10 controller

root@controller:~# apt-get install -y chrony
root@controller:~# vim /etc/chrony/chrony.conf
# Comment out the following four lines, then add "server controller iburst" beneath them
#server 0.debian.pool.ntp.org offline minpoll 8
#server 1.debian.pool.ntp.org offline minpoll 8
#server 2.debian.pool.ntp.org offline minpoll 8
#server 3.debian.pool.ntp.org offline minpoll 8
server controller iburst

root@controller:~# chronyc sources
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^? controller                    0   7     0   10y     +0ns[   +0ns] +/-    0ns


Install MySQL (MariaDB)
root@controller:~# apt-get install -y mariadb-server python-pymysql
Enter 123456 in the MySQL root password dialog that pops up.
root@controller:~# cd /etc/mysql/
root@controller:/etc/mysql# ls
conf.d  debian.cnf  debian-start  my.cnf
root@controller:/etc/mysql# cp my.cnf{,.bak}
root@controller:/etc/mysql# vim my.cnf
[mysqld]   # add the following lines under this section
default-storage-engine = innodb
innodb_file_per_table
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8

bind-address            = 0.0.0.0  # originally 127.0.0.1
Restart MySQL
root@controller:/etc/mysql# service mariadb restart
mariadb: unrecognized service
root@controller:/etc/mysql# service mysql restart
 * Stopping MariaDB database server mysqld                            [ OK ]
 * Starting MariaDB database server mysqld                            [ OK ]
 * Checking for corrupt, not cleanly closed and upgrade needing tables.
Secure the installation
root@controller:/etc/mysql# mysql_secure_installation
/usr/bin/mysql_secure_installation: 393: /usr/bin/mysql_secure_installation: find_mysql_client: not found

NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
      SERVERS IN PRODUCTION USE!  PLEASE READ EACH STEP CAREFULLY!

In order to log into MariaDB to secure it, we'll need the current
password for the root user.  If you've just installed MariaDB, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.

Enter current password for root (enter for none):
OK, successfully used password, moving on...

Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorisation.

You already have a root password set, so you can safely answer 'n'.

Change the root password? [Y/n] n
 ... skipping.

By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them.  This is intended only for testing, and to make the installation
go a bit smoother.  You should remove them before moving into a
production environment.

Remove anonymous users? [Y/n] n
 ... skipping.

Normally, root should only be allowed to connect from 'localhost'.  This
ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n] n
 ... skipping.

By default, MariaDB comes with a database named 'test' that anyone can
access.  This is also intended only for testing, and should be removed
before moving into a production environment.

Remove test database and access to it? [Y/n] n
 ... skipping.

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n] y
 ... Success!

Cleaning up...

All done!  If you've completed all of the above steps, your MariaDB
installation should now be secure.

Thanks for using MariaDB!
Test the connection
root@controller:/etc/mysql# mysql -uroot -p123456
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 30
Server version: 5.5.61-MariaDB-1ubuntu0.14.04.1 (Ubuntu)

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
+--------------------+
3 rows in set (0.00 sec)

MariaDB [(none)]> \q
Bye
root@controller:/etc/mysql# mysql -uroot -p123456 -h10.0.3.10
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 31
Server version: 5.5.61-MariaDB-1ubuntu0.14.04.1 (Ubuntu)

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
+--------------------+
3 rows in set (0.00 sec)

MariaDB [(none)]> \q
Bye
root@controller:/etc/mysql# mysql -uroot -p123456 -h192.168.56.10
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 32
Server version: 5.5.61-MariaDB-1ubuntu0.14.04.1 (Ubuntu)

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
+--------------------+
3 rows in set (0.00 sec)

MariaDB [(none)]> \q
Bye

root@controller:/etc/mysql# mysql -uroot -p123456 -h127.0.0.1
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 33
Server version: 5.5.61-MariaDB-1ubuntu0.14.04.1 (Ubuntu)

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
+--------------------+
3 rows in set (0.00 sec)

MariaDB [(none)]> \q
Bye

Installing MongoDB

root@controller:~# apt-get install -y mongodb-server mongodb-clients python-pymongo
root@controller:~# cp /etc/mongodb.conf{,.bak}
root@controller:~# vim /etc/mongodb.conf

bind_ip = 0.0.0.0 # originally 127.0.0.1
smallfiles = true  # add this line
root@controller:~# service mongodb stop
mongodb stop/waiting
root@controller:~# ls /var/lib/mongodb/journal/
# If this directory contains any files starting with prealloc, delete them all
root@controller:~# service mongodb start
mongodb start/running, process 5275
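The prealloc cleanup described above is just a glob delete performed while the service is stopped. A minimal sketch of the same operation on a scratch directory (the /tmp/journal.demo path is made up for illustration; on the real host the directory is /var/lib/mongodb/journal/ and mongodb must be stopped first):

```shell
# Scratch directory standing in for /var/lib/mongodb/journal/ (demo path)
mkdir -p /tmp/journal.demo
touch /tmp/journal.demo/prealloc.0 /tmp/journal.demo/prealloc.1
# Remove every prealloc.* file; -f keeps rm quiet even if none exist
rm -f /tmp/journal.demo/prealloc.*
ls /tmp/journal.demo
# (no output: the files are gone)
```

Because of -f, the step is safe to rerun even when the journal directory holds no prealloc files.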

Installing RabbitMQ

root@controller:~# apt-get install -y rabbitmq-server
Add the openstack user
root@controller:~# rabbitmqctl add_user openstack 123456
Creating user "openstack" ...
Grant the "openstack" user configure, write, and read permissions
root@controller:~# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/" ...

Installing Memcached

root@controller:~# apt-get install -y memcached python-memcache
root@controller:~# cp /etc/memcached.conf{,.bak}
root@controller:~# vim /etc/memcached.conf

-l 0.0.0.0 # originally 127.0.0.1
Restart memcached
root@controller:~# service memcached restart
Restarting memcached: memcached.
root@controller:~# service memcached status
 * memcached is running
root@controller:~# ps aux | grep memcached
memcache  6975  0.0  0.0  63264  2612 ?        Sl   03:08   0:00 /usr/bin/memcached -m 64 -p 11211 -u memcache -l 0.0.0.0
root      6994  0.0  0.0  11760  2120 pts/0    S+   03:09   0:00 grep --color=auto memcached
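Edits like the bind-address and -l changes above can also be scripted with sed instead of vim. A minimal sketch on a scratch copy (the /tmp/memcached.conf.demo path is hypothetical; on the real host you would back up and then target /etc/memcached.conf):

```shell
# Scratch file standing in for /etc/memcached.conf (demo path)
printf -- '-l 127.0.0.1\n' > /tmp/memcached.conf.demo
# Rewrite the listen address so memcached accepts remote connections
sed -i 's/^-l 127\.0\.0\.1$/-l 0.0.0.0/' /tmp/memcached.conf.demo
cat /tmp/memcached.conf.demo
# -> -l 0.0.0.0
```

The same pattern applies to the my.cnf and mongodb.conf edits; anchoring the pattern with ^ and $ avoids touching commented-out copies of the line.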


Installing Keystone

root@controller:~# mysql -uroot -p123456
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 34
Server version: 5.5.61-MariaDB-1ubuntu0.14.04.1 (Ubuntu)

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> create database keystone;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> grant all privileges on keystone.* to 'keystone'@'localhost' identified by '123456';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> grant all privileges on keystone.* to 'keystone'@'%' identified by '123456';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> \q
Bye
root@controller:~# mysql -ukeystone -p123456 -h 127.0.0.1
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 35
Server version: 5.5.61-MariaDB-1ubuntu0.14.04.1 (Ubuntu)

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| keystone           |
+--------------------+
2 rows in set (0.00 sec)

MariaDB [(none)]> \q
Bye
root@controller:~# mysql -ukeystone -p123456 -h 10.0.3.10
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 36
Server version: 5.5.61-MariaDB-1ubuntu0.14.04.1 (Ubuntu)

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| keystone           |
+--------------------+
2 rows in set (0.00 sec)

MariaDB [(none)]> \q
Bye
root@controller:~# mysql -ukeystone -p123456 -h 192.168.56.10
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 37
Server version: 5.5.61-MariaDB-1ubuntu0.14.04.1 (Ubuntu)

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| keystone           |
+--------------------+
2 rows in set (0.00 sec)

MariaDB [(none)]> \q
Bye
# All connections work
Next, install the Keystone packages
root@controller:~# echo "manual" > /etc/init/keystone.override
root@controller:~# apt-get install keystone apache2 libapache2-mod-wsgi

Configure keystone.conf
root@controller:~# cp /etc/keystone/keystone.conf{,.bak}
root@controller:~# vim /etc/keystone/keystone.conf

[DEFAULT]
admin_token = 123456

[database]
connection = mysql+pymysql://keystone:123456@controller/keystone

[token]
provider = fernet
# Sync the database
root@controller:~#  su -s /bin/sh -c "keystone-manage db_sync" keystone
Initialize the Fernet keys
root@controller:~# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
2019-01-15 03:43:34.134 9896 INFO keystone.token.providers.fernet.utils [-] [fernet_tokens] key_repository does not appear to exist; attempting to create it
2019-01-15 03:43:34.135 9896 INFO keystone.token.providers.fernet.utils [-] Created a new key: /etc/keystone/fernet-keys/0
2019-01-15 03:43:34.135 9896 INFO keystone.token.providers.fernet.utils [-] Starting key rotation with 1 key files: ['/etc/keystone/fernet-keys/0']
2019-01-15 03:43:34.135 9896 INFO keystone.token.providers.fernet.utils [-] Current primary key is: 0
2019-01-15 03:43:34.136 9896 INFO keystone.token.providers.fernet.utils [-] Next primary key will be: 1
2019-01-15 03:43:34.136 9896 INFO keystone.token.providers.fernet.utils [-] Promoted key 0 to be the primary: 1
2019-01-15 03:43:34.137 9896 INFO keystone.token.providers.fernet.utils [-] Created a new key: /etc/keystone/fernet-keys/0
root@controller:~# echo $?
0

Configure Apache HTTP
root@controller:~# cp /etc/apache2/apache2.conf{,.bak}
root@controller:~# vim /etc/apache2/apache2.conf
root@controller:~# grep 'ServerName' /etc/apache2/apache2.conf
ServerName controller  # add this line at the end of the file
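If you prefer to script this step, the append can be made idempotent so rerunning it never adds a duplicate ServerName line. A sketch on a scratch file (the /tmp path is made up; the real target is /etc/apache2/apache2.conf):

```shell
# Scratch file standing in for /etc/apache2/apache2.conf (demo path)
printf '# Apache demo config\n' > /tmp/apache2.conf.demo
# Append ServerName only if an exact matching line is not already present
grep -qx 'ServerName controller' /tmp/apache2.conf.demo || \
    echo 'ServerName controller' >> /tmp/apache2.conf.demo
tail -n 1 /tmp/apache2.conf.demo
# -> ServerName controller
```

Running the grep-or-append line a second time changes nothing, which makes it safe to put in a provisioning script.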

Next, create the wsgi-keystone.conf file
root@controller:~# vim /etc/apache2/sites-available/wsgi-keystone.conf
Listen 5000
Listen 35357

<VirtualHost *:5000>
    WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-public
    WSGIScriptAlias / /usr/bin/keystone-wsgi-public
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    ErrorLogFormat "%{cu}t %M"
    ErrorLog /var/log/apache2/keystone.log
    CustomLog /var/log/apache2/keystone_access.log combined

    <Directory /usr/bin>
        Require all granted
    </Directory>
</VirtualHost>

<VirtualHost *:35357>
    WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-admin
    WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    ErrorLogFormat "%{cu}t %M"
    ErrorLog /var/log/apache2/keystone.log
    CustomLog /var/log/apache2/keystone_access.log combined

    <Directory /usr/bin>
        Require all granted
    </Directory>
</VirtualHost>

Enable the Identity service virtual hosts

root@controller:~# ln -s /etc/apache2/sites-available/wsgi-keystone.conf /etc/apache2/sites-enabled

Restart Apache

root@controller:~# service apache2 restart
 * Restarting web server apache2                                      [ OK ]
root@controller:~# rm -rf /var/lib/keystone/keystone.db
root@controller:~# lsof -i:5000
COMMAND   PID     USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
apache2 10151     root    6u  IPv6  27660      0t0  TCP *:5000 (LISTEN)
apache2 10164 www-data    6u  IPv6  27660      0t0  TCP *:5000 (LISTEN)
apache2 10165 www-data    6u  IPv6  27660      0t0  TCP *:5000 (LISTEN)
root@controller:~# lsof -i:35357
COMMAND   PID     USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
apache2 10151     root    8u  IPv6  27664      0t0  TCP *:35357 (LISTEN)
apache2 10164 www-data    8u  IPv6  27664      0t0  TCP *:35357 (LISTEN)
apache2 10165 www-data    8u  IPv6  27664      0t0  TCP *:35357 (LISTEN)

Install python-openstackclient

root@controller:~# apt-get install -y python-openstackclient

Set up the rootrc environment

root@controller:~# vim rootrc
root@controller:~# cat rootrc
export OS_TOKEN=123456
export OS_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export PS1="rootrc@\u@\h:\w\$"

# Load the rootrc environment
root@controller:~# source rootrc

Registering services with Keystone

Note that port 35357 is generally for administrator use, while port 5000 is the one published to external users.

Create the service entity and API endpoints

adminrc@root@controller:~$source rootrc
rootrc@root@controller:~$openstack service create --name keystone --description "OpenStack Identity" identity
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Identity               |
| enabled     | True                             |
| id          | 7052e2715c874ae18dc520ec21026a34 |
| name        | keystone                         |
| type        | identity                         |
+-------------+----------------------------------+
rootrc@root@controller:~$openstack endpoint create --region RegionOne identity internal http://controller:5000/v3
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | ac731860b374450484034b024e643004 |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 7052e2715c874ae18dc520ec21026a34 |
| service_name | keystone                         |
| service_type | identity                         |
| url          | http://controller:5000/v3        |
+--------------+----------------------------------+
rootrc@root@controller:~$openstack endpoint create --region RegionOne identity public http://controller:5000/v3
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | d1f7296477a748ef82ad4970580d50b2 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 7052e2715c874ae18dc520ec21026a34 |
| service_name | keystone                         |
| service_type | identity                         |
| url          | http://controller:5000/v3        |
+--------------+----------------------------------+
rootrc@root@controller:~$openstack endpoint create --region RegionOne identity admin http://controller:35357/v3
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | df4eb1f2b08f474fa7b83ef979ebd0fb |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 7052e2715c874ae18dc520ec21026a34 |
| service_name | keystone                         |
| service_type | identity                         |
| url          | http://controller:35357/v3       |
+--------------+----------------------------------+


Next, create the domain, projects, users, and roles

rootrc@root@controller:~$openstack domain create --description "Default Domain" default
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Default Domain                   |
| enabled     | True                             |
| id          | 1495769d2bbb44d192eee4c9b2f91ca3 |
| name        | default                          |
+-------------+----------------------------------+
rootrc@root@controller:~$openstack project create --domain default --description "Admin Project" admin
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Admin Project                    |
| domain_id   | 1495769d2bbb44d192eee4c9b2f91ca3 |
| enabled     | True                             |
| id          | 29577090a0e8466ab49cc30a4305f5f8 |
| is_domain   | False                            |
| name        | admin                            |
| parent_id   | 1495769d2bbb44d192eee4c9b2f91ca3 |
+-------------+----------------------------------+
rootrc@root@controller:~$openstack user create --domain default --password admin admin
+-----------+----------------------------------+
| Field     | Value                            |
+-----------+----------------------------------+
| domain_id | 1495769d2bbb44d192eee4c9b2f91ca3 |
| enabled   | True                             |
| id        | 653177098fac40a28734093706299e66 |
| name      | admin                            |
+-----------+----------------------------------+
rootrc@root@controller:~$openstack  role create admin
+-----------+----------------------------------+
| Field     | Value                            |
+-----------+----------------------------------+
| domain_id | None                             |
| id        | 6abd897a6f134b8ea391377d1617a2f8 |
| name      | admin                            |
+-----------+----------------------------------+
rootrc@root@controller:~$openstack role add --project admin --user admin admin
rootrc@root@controller:~$         # no output is the best sign of success

Create the service and demo projects

rootrc@root@controller:~$openstack project create --domain default --description "Service Project" service
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Service Project                  |
| domain_id   | 1495769d2bbb44d192eee4c9b2f91ca3 |
| enabled     | True                             |
| id          | 006a1ed36a0e4cbd8947d853b79d522c |
| is_domain   | False                            |
| name        | service                          |
| parent_id   | 1495769d2bbb44d192eee4c9b2f91ca3 |
+-------------+----------------------------------+
rootrc@root@controller:~$openstack project create --domain default --description "Demo Project" demo
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Demo Project                     |
| domain_id   | 1495769d2bbb44d192eee4c9b2f91ca3 |
| enabled     | True                             |
| id          | ffc560f6a2604c3896df922115c6fc2a |
| is_domain   | False                            |
| name        | demo                             |
| parent_id   | 1495769d2bbb44d192eee4c9b2f91ca3 |
+-------------+----------------------------------+
rootrc@root@controller:~$openstack user create --domain default --password demo demo
+-----------+----------------------------------+
| Field     | Value                            |
+-----------+----------------------------------+
| domain_id | 1495769d2bbb44d192eee4c9b2f91ca3 |
| enabled   | True                             |
| id        | c4de9fac882740838aa26e9119b30cb9 |
| name      | demo                             |
+-----------+----------------------------------+
rootrc@root@controller:~$openstack role create user
+-----------+----------------------------------+
| Field     | Value                            |
+-----------+----------------------------------+
| domain_id | None                             |
| id        | e69817f50d6448fe888a64e51e025351 |
| name      | user                             |
+-----------+----------------------------------+
rootrc@root@controller:~$openstack role add --project demo --user demo user
rootrc@root@controller:~$echo $?
0

Verifying adminrc

rootrc@root@controller:~$vim adminrc
rootrc@root@controller:~$cat adminrc
unset OS_TOKEN
unset OS_URL
unset OS_IDENTITY_API_VERSION

export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
export PS1="adminrc@\u@\h:\w\$"

Load the adminrc environment and try to obtain a Keystone token

rootrc@root@controller:~$source adminrc
adminrc@root@controller:~$openstack token issue
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field      | Value                                                                                                                                                                                   |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires    | 2019-01-14T21:33:20.000000Z                                                                                                                                                             |
| id         | gAAAAABcPPIQK270ipb9EgRW7feWYLunIVPaX9cTjhvgvTvMmpG8j8K_AkwPv5UL4WUFFzfDnO30A7WflnaOyufilAi7DCmbQ2YLlsGuAzgbCRYooV5pIJTkuqbhmRJDmFX068zliOri_rXL2CsTq9um3UtCPnOj7-7LxmXcFm5LwsP6OyzY4Ts |
| project_id | 29577090a0e8466ab49cc30a4305f5f8                                                                                                                                                        |
| user_id    | 653177098fac40a28734093706299e66                                                                                                                                                        |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
adminrc@root@controller:~$date
Tue Jan 15 04:34:10 CST 2019

Verifying demorc

adminrc@root@controller:~$vim demorc
adminrc@root@controller:~$cat demorc
unset OS_TOKEN
unset OS_URL
unset OS_IDENTITY_API_VERSION

export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=demo
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
export PS1="demorc@\u@\h:\w\$"

Obtain a token for the demo user

adminrc@root@controller:~$source demorc
demorc@root@controller:~$openstack token issue
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field      | Value                                                                                                                                                                                   |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires    | 2019-01-14T21:40:50.000000Z                                                                                                                                                             |
| id         | gAAAAABcPPPSLXi6E581bb8P0MpmHOLg-p0_vt9YLNWXn6feHLF6QONWq3Ny8JT4ceOvkKiv5TltLA4WRyn6XghcvZn-X0tuhOl07Eh6KXxGiGtEwgZyPFO-AFhykXims1FH0Tz4lp-fI_ExelOAcT50OFeKC3bB5vlGlYgR0pmdiVj8L73Boiw |
| project_id | ffc560f6a2604c3896df922115c6fc2a                                                                                                                                                        |
| user_id    | c4de9fac882740838aa26e9119b30cb9                                                                                                                                                        |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
demorc@root@controller:~$date
Tue Jan 15 04:40:56 CST 2019

Installing the Glance service

demorc@root@controller:~$mysql -uroot -p123456
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 45
Server version: 5.5.61-MariaDB-1ubuntu0.14.04.1 (Ubuntu)

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> create database glance;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> grant all privileges on glance.* to 'glance'@'localhost' identified by '123456';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> grant all privileges on glance.* to 'glance'@'%' identified by '123456';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> \q
Bye
demorc@root@controller:~$source adminrc
adminrc@root@controller:~$

rootrc@root@controller:~$source adminrc
adminrc@root@controller:~$openstack service create --name glance --description "OpenStack Image" image
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Image                  |
| enabled     | True                             |
| id          | 24eba17c530946fea53413104b8d2035 |
| name        | glance                           |
| type        | image                            |
+-------------+----------------------------------+
adminrc@root@controller:~$ps -aux | grep -v "grep" | grep keystone
keystone 10154  0.0  0.2 176340  7976 ?        Sl   03:51   0:00 (wsgi:keystone-pu -k start
keystone 10155  0.0  3.0 367836 94348 ?        Sl   03:51   0:01 (wsgi:keystone-pu -k start
keystone 10156  0.0  2.1 336084 65084 ?        Sl   03:51   0:01 (wsgi:keystone-pu -k start
keystone 10157  0.0  0.2 176332  7976 ?        Sl   03:51   0:00 (wsgi:keystone-pu -k start
keystone 10158  0.0  0.2 176332  7976 ?        Sl   03:51   0:00 (wsgi:keystone-pu -k start
keystone 10159  0.0  3.1 368860 96008 ?        Sl   03:51   0:01 (wsgi:keystone-ad -k start
keystone 10160  0.0  3.0 368348 94628 ?        Sl   03:51   0:01 (wsgi:keystone-ad -k start
keystone 10161  0.0  2.2 353988 70496 ?        Sl   03:51   0:01 (wsgi:keystone-ad -k start
keystone 10162  0.0  3.1 368604 95668 ?        Sl   03:51   0:01 (wsgi:keystone-ad -k start
keystone 10163  0.0  3.1 368604 95732 ?        Sl   03:51   0:01 (wsgi:keystone-ad -k start
adminrc@root@controller:~$lsof -i:5000
COMMAND   PID     USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
apache2 10151     root    6u  IPv6  27660      0t0  TCP *:5000 (LISTEN)
apache2 10164 www-data    6u  IPv6  27660      0t0  TCP *:5000 (LISTEN)
apache2 10165 www-data    6u  IPv6  27660      0t0  TCP *:5000 (LISTEN)
adminrc@root@controller:~$lsof -i:35357
COMMAND   PID     USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
apache2 10151     root    8u  IPv6  27664      0t0  TCP *:35357 (LISTEN)
apache2 10164 www-data    8u  IPv6  27664      0t0  TCP *:35357 (LISTEN)
apache2 10165 www-data    8u  IPv6  27664      0t0  TCP *:35357 (LISTEN)
adminrc@root@controller:~$tail /var/log/keystone/keystone-wsgi-admin.log


adminrc@root@controller:~$openstack endpoint create --region RegionOne image internal http://controller:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 83d13b44fbae4abbb89b7f1a9f1519d6 |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 24eba17c530946fea53413104b8d2035 |
| service_name | glance                           |
| service_type | image                            |
| url          | http://controller:9292           |
+--------------+----------------------------------+
adminrc@root@controller:~$openstack endpoint create --region RegionOne image admin http://controller:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | c9708f196a6946f987652cb40b9a8aea |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 24eba17c530946fea53413104b8d2035 |
| service_name | glance                           |
| service_type | image                            |
| url          | http://controller:9292           |
+--------------+----------------------------------+


adminrc@root@controller:~$openstack user create --domain default --password glance glance
+-----------+----------------------------------+
| Field     | Value                            |
+-----------+----------------------------------+
| domain_id | 1495769d2bbb44d192eee4c9b2f91ca3 |
| enabled   | True                             |
| id        | b9c7a987bc494e72899d6ffa7c68c3d0 |
| name      | glance                           |
+-----------+----------------------------------+
adminrc@root@controller:~$openstack role add --project service --user glance admin
adminrc@root@controller:~$sudo -s
root@controller:~# apt-get install -y glance
root@controller:~# echo $?
0

Configure glance-api.conf

root@controller:~# cp /etc/glance/glance-api.conf{,.bak}
root@controller:~# vim /etc/glance/glance-api.conf
......
connection = mysql+pymysql://glance:123456@controller/glance
......
[keystone_authtoken]

auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = glance

[paste_deploy]
flavor = keystone

[glance_store]

stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/

Configure glance-registry.conf

root@controller:~# cp /etc/glance/glance-registry.conf{,.bak}
root@controller:~# vim /etc/glance/glance-registry.conf
.......
connection = mysql+pymysql://glance:123456@localhost/glance
.......
[keystone_authtoken]

auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = glance
........
[paste_deploy]

flavor = keystone

Populate the Image service database

root@controller:~# su -s /bin/sh -c "glance-manage db_sync" glance
............
2019-01-15 06:04:43.570 13286 INFO migrate.versioning.api [-] done

After configuration, restart the services

root@controller:~# service glance-registry restart
glance-registry stop/waiting
glance-registry start/running, process 13322
root@controller:~# service glance-api restart
glance-api stop/waiting
glance-api start/running, process 13351

Source the admin credentials to gain access to admin-only CLI commands

root@controller:~# source adminrc
adminrc@root@controller:~$wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
adminrc@root@controller:~$ls -al cirros-0.3.4-x86_64-disk.img
-rw-r--r-- 1 root root 13287936 May  8  2015 cirros-0.3.4-x86_64-disk.img
adminrc@root@controller:~$file cirros-0.3.4-x86_64-disk.img
cirros-0.3.4-x86_64-disk.img: QEMU QCOW Image (v2), 41126400 bytes
adminrc@root@controller:~$openstack image create "cirrors" --file cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare --public
+------------------+------------------------------------------------------+
| Field            | Value                                                |
+------------------+------------------------------------------------------+
| checksum         | ee1eca47dc88f4879d8a229cc70a07c6                     |
| container_format | bare                                                 |
| created_at       | 2019-01-14T22:55:08Z                                 |
| disk_format      | qcow2                                                |
| file             | /v2/images/39d73bcf-e60b-4caf-8469-cca17de00d7e/file |
| id               | 39d73bcf-e60b-4caf-8469-cca17de00d7e                 |
| min_disk         | 0                                                    |
| min_ram          | 0                                                    |
| name             | cirrors                                              |
| owner            | 29577090a0e8466ab49cc30a4305f5f8                     |
| protected        | False                                                |
| schema           | /v2/schemas/image                                    |
| size             | 13287936                                             |
| status           | active                                               |
| tags             |                                                      |
| updated_at       | 2019-01-14T22:55:08Z                                 |
| virtual_size     | None                                                 |
| visibility       | public                                               |
+------------------+------------------------------------------------------+

List the images

adminrc@root@controller:~$openstack image list
+--------------------------------------+---------+--------+
| ID                                   | Name    | Status |
+--------------------------------------+---------+--------+
| 39d73bcf-e60b-4caf-8469-cca17de00d7e | cirrors | active |
+--------------------------------------+---------+--------+

You can also look directly in Glance's images directory on the machine

adminrc@root@controller:~$ls /var/lib/glance/images/
39d73bcf-e60b-4caf-8469-cca17de00d7e
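To confirm the stored file is intact, its md5sum should match the `checksum` field Glance reported at upload time. A small helper for the comparison (the function name is made up for illustration):

```shell
# check_image_md5 FILE EXPECTED -- succeed if FILE's md5 matches EXPECTED
check_image_md5() {
    local file=$1 expected=$2 actual
    actual=$(md5sum "$file" | awk '{print $1}')   # md5sum prints "<hash>  <file>"
    [ "$actual" = "$expected" ]
}

# usage, with the ID and checksum from the create output above:
# check_image_md5 /var/lib/glance/images/39d73bcf-e60b-4caf-8469-cca17de00d7e \
#     ee1eca47dc88f4879d8a229cc70a07c6 && echo OK
```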

 

Problems encountered

Error message

adminrc@root@controller:~$openstack image create "cirrors" --file cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare --public
503 Service Unavailable: The server is currently unavailable. Please try again at a later time. (HTTP 503)
adminrc@root@controller:~$cd /var/log/glance/
adminrc@root@controller:/var/log/glance$ls
glance-api.log  glance-registry.log
adminrc@root@controller:/var/log/glance$tail glance-api.log
2019-01-15 06:06:06.887 13351 INFO glance.common.wsgi [-] Started child 13359
2019-01-15 06:06:06.889 13359 INFO eventlet.wsgi.server [-] (13359) wsgi starting up on http://0.0.0.0:9292
2019-01-15 06:11:59.019 13359 WARNING keystonemiddleware.auth_token [-] Identity response: {"error": {"message": "The request you have made requires authentication.", "code": 401, "title": "Unauthorized"}}
2019-01-15 06:11:59.071 13359 WARNING keystonemiddleware.auth_token [-] Identity response: {"error": {"message": "The request you have made requires authentication.", "code": 401, "title": "Unauthorized"}}
2019-01-15 06:11:59.071 13359 CRITICAL keystonemiddleware.auth_token [-] Unable to validate token: Identity server rejected authorization necessary to fetch token data
2019-01-15 06:11:59.078 13359 INFO eventlet.wsgi.server [-] 10.0.3.10 - - [15/Jan/2019 06:11:59] "GET /v2/schemas/image HTTP/1.1" 503 370 0.170589
2019-01-15 06:15:01.259 13359 WARNING keystonemiddleware.auth_token [-] Identity response: {"error": {"message": "The request you have made requires authentication.", "code": 401, "title": "Unauthorized"}}
2019-01-15 06:15:01.301 13359 WARNING keystonemiddleware.auth_token [-] Identity response: {"error": {"message": "The request you have made requires authentication.", "code": 401, "title": "Unauthorized"}}
2019-01-15 06:15:01.302 13359 CRITICAL keystonemiddleware.auth_token [-] Unable to validate token: Identity server rejected authorization necessary to fetch token data
2019-01-15 06:15:01.306 13359 INFO eventlet.wsgi.server [-] 10.0.3.10 - - [15/Jan/2019 06:15:01] "GET /v2/schemas/image HTTP/1.1" 503 370 0.089388
adminrc@root@controller:/var/log/glance$grep -rHn "ERROR"
adminrc@root@controller:/var/log/glance$grep -rHn "error"
glance-api.log:12:2019-01-15 06:11:59.019 13359 WARNING keystonemiddleware.auth_token [-] Identity response: {"error": {"message": "The request you have made requires authentication.", "code": 401, "title": "Unauthorized"}}
glance-api.log:13:2019-01-15 06:11:59.071 13359 WARNING keystonemiddleware.auth_token [-] Identity response: {"error": {"message": "The request you have made requires authentication.", "code": 401, "title": "Unauthorized"}}
glance-api.log:16:2019-01-15 06:15:01.259 13359 WARNING keystonemiddleware.auth_token [-] Identity response: {"error": {"message": "The request you have made requires authentication.", "code": 401, "title": "Unauthorized"}}
glance-api.log:17:2019-01-15 06:15:01.301 13359 WARNING keystonemiddleware.auth_token [-] Identity response: {"error": {"message": "The request you have made requires authentication.", "code": 401, "title": "Unauthorized"}}
adminrc@root@controller:~$openstack image create "cirrors" --file cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare --public
503 Service Unavailable: The server is currently unavailable. Please try again at a later time. (HTTP 503)
adminrc@root@controller:~$tail /var/log/keystone/keystone-wsgi-admin.log
2019-01-15 06:30:32.353 10159 INFO keystone.token.providers.fernet.utils [req-749b2de5-d2be-47e8-9263-083c54fe488d - - - - -] Loaded 2 encryption keys (max_active_keys=3) from: /etc/keystone/fernet-keys/
2019-01-15 06:30:32.358 10161 INFO keystone.common.wsgi [req-62e3bb30-ef7b-476a-8f49-dc062c1a9452 - - - - -] POST http://controller:35357/v3/auth/tokens
2019-01-15 06:30:32.552 10161 INFO keystone.token.providers.fernet.utils [req-62e3bb30-ef7b-476a-8f49-dc062c1a9452 - - - - -] Loaded 2 encryption keys (max_active_keys=3) from: /etc/keystone/fernet-keys/
2019-01-15 06:30:32.561 10163 INFO keystone.token.providers.fernet.utils [req-2540636c-0a56-4549-adbc-deeaf0063210 - - - - -] Loaded 2 encryption keys (max_active_keys=3) from: /etc/keystone/fernet-keys/
2019-01-15 06:30:32.682 10163 INFO keystone.common.wsgi [req-2540636c-0a56-4549-adbc-deeaf0063210 653177098fac40a28734093706299e66 29577090a0e8466ab49cc30a4305f5f8 - 1495769d2bbb44d192eee4c9b2f91ca3 1495769d2bbb44d192eee4c9b2f91ca3] GET http://controller:35357/v3/services/image
2019-01-15 06:30:32.686 10163 WARNING keystone.common.wsgi [req-2540636c-0a56-4549-adbc-deeaf0063210 653177098fac40a28734093706299e66 29577090a0e8466ab49cc30a4305f5f8 - 1495769d2bbb44d192eee4c9b2f91ca3 1495769d2bbb44d192eee4c9b2f91ca3] Could not find service: image
2019-01-15 06:30:32.691 10160 INFO keystone.token.providers.fernet.utils [req-c4a9af14-d206-4551-a693-23055fcb16e3 - - - - -] Loaded 2 encryption keys (max_active_keys=3) from: /etc/keystone/fernet-keys/
2019-01-15 06:30:32.807 10160 INFO keystone.common.wsgi [req-c4a9af14-d206-4551-a693-23055fcb16e3 653177098fac40a28734093706299e66 29577090a0e8466ab49cc30a4305f5f8 - 1495769d2bbb44d192eee4c9b2f91ca3 1495769d2bbb44d192eee4c9b2f91ca3] GET http://controller:35357/v3/services?name=image
2019-01-15 06:30:32.816 10162 INFO keystone.token.providers.fernet.utils [req-cc99a9ba-db21-4186-9c32-4eb39b931efa - - - - -] Loaded 2 encryption keys (max_active_keys=3) from: /etc/keystone/fernet-keys/
2019-01-15 06:30:32.939 10162 INFO keystone.common.wsgi [req-cc99a9ba-db21-4186-9c32-4eb39b931efa 653177098fac40a28734093706299e66 29577090a0e8466ab49cc30a4305f5f8 - 1495769d2bbb44d192eee4c9b2f91ca3 1495769d2bbb44d192eee4c9b2f91ca3] GET http://controller:35357/v3/services?type=image

Solution

In both glance-api.conf and glance-registry.conf, the [keystone_authtoken] section had
[keystone_authtoken]
username = glance
password = 123456
The password had been confused with the glance database password; it must be glance,
because the user was created above with: openstack user create --domain default --password glance glance
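A quick way to catch this mismatch earlier is to print the password actually set in the [keystone_authtoken] section and compare it with the one passed to `openstack user create`. A small helper (hypothetical name):

```shell
# authtoken_password FILE -- print the password configured in [keystone_authtoken]
authtoken_password() {
    # limit the search to the [keystone_authtoken] section, then pull out the value
    sed -n '/^\[keystone_authtoken\]/,/^\[/p' "$1" |
        awk -F' *= *' '$1 == "password" {print $2}'
}

# usage: after the fix, both files should print "glance"
# authtoken_password /etc/glance/glance-api.conf
# authtoken_password /etc/glance/glance-registry.conf
```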

Install Nova

MariaDB [(none)]> create database nova_api;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> create database nova;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> grant all privileges on nova_api.* to 'nova'@'localhost' identified by '123456';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> grant all privileges on nova_api.* to 'nova'@'%' identified by '123456';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> grant all privileges on nova.* to 'nova'@'%' identified by '123456';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> grant all privileges on nova.* to 'nova'@'localhost' identified by '123456';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> \q
Bye

Create the nova user

adminrc@root@controller:~$openstack user create --domain default --password nova nova
+-----------+----------------------------------+
| Field     | Value                            |
+-----------+----------------------------------+
| domain_id | 1495769d2bbb44d192eee4c9b2f91ca3 |
| enabled   | True                             |
| id        | e4fc73ea1f6d47269ae4ab95ff999326 |
| name      | nova                             |
+-----------+----------------------------------+
Add the admin role to the nova user
adminrc@root@controller:~$openstack role add --project service --user nova admin

Create the nova service entity

adminrc@root@controller:~$openstack service create --name nova  --description "OpenStack Compute" compute
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Compute                |
| enabled     | True                             |
| id          | 872de5b67b1547adb4826ca1f7ef96b3 |
| name        | nova                             |
| type        | compute                          |
+-------------+----------------------------------+

Create the Compute service API endpoints

adminrc@root@controller:~$openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1/%\(tenant_id\)s
+--------------+-------------------------------------------+
| Field        | Value                                     |
+--------------+-------------------------------------------+
| enabled      | True                                      |
| id           | 8e42256f67e446cc88568903286ed462          |
| interface    | public                                    |
| region       | RegionOne                                 |
| region_id    | RegionOne                                 |
| service_id   | 872de5b67b1547adb4826ca1f7ef96b3          |
| service_name | nova                                      |
| service_type | compute                                   |
| url          | http://controller:8774/v2.1/%(tenant_id)s |
+--------------+-------------------------------------------+

adminrc@root@controller:~$openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1/%\(tenant_id\)s
+--------------+-------------------------------------------+
| Field        | Value                                     |
+--------------+-------------------------------------------+
| enabled      | True                                      |
| id           | b07f3be5fff4444db57323bb04376d33          |
| interface    | internal                                  |
| region       | RegionOne                                 |
| region_id    | RegionOne                                 |
| service_id   | 872de5b67b1547adb4826ca1f7ef96b3          |
| service_name | nova                                      |
| service_type | compute                                   |
| url          | http://controller:8774/v2.1/%(tenant_id)s |
+--------------+-------------------------------------------+
adminrc@root@controller:~$openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1/%\(tenant_id\)s
+--------------+-------------------------------------------+
| Field        | Value                                     |
+--------------+-------------------------------------------+
| enabled      | True                                      |
| id           | 91dc56e437e640c397696318ee1dcc21          |
| interface    | admin                                     |
| region       | RegionOne                                 |
| region_id    | RegionOne                                 |
| service_id   | 872de5b67b1547adb4826ca1f7ef96b3          |
| service_name | nova                                      |
| service_type | compute                                   |
| url          | http://controller:8774/v2.1/%(tenant_id)s |
+--------------+-------------------------------------------+
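A note on the commands above: the parentheses in `%\(tenant_id\)s` are escaped only to get them past the shell; the value stored in Keystone is the literal `%(tenant_id)s`, which the client substitutes with the project ID at request time. Quoting the whole URL is equivalent:

```shell
# Both spellings produce the same literal URL; unescaped, unquoted
# parentheses would be a shell syntax error.
url1=http://controller:8774/v2.1/%\(tenant_id\)s
url2='http://controller:8774/v2.1/%(tenant_id)s'
[ "$url1" = "$url2" ] && echo same
```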

Install the Nova packages

adminrc@root@controller:~$apt-get install -y nova-api nova-conductor nova-consoleauth  nova-novncproxy nova-scheduler

Configure nova.conf

adminrc@root@controller:~$cp /etc/nova/nova.conf{,.bak}
adminrc@root@controller:~$vim /etc/nova/nova.conf
[DEFAULT]
........
rpc_backend=rabbit
auth_strategy=keystone
my_ip=10.0.3.10
use_neutron=True
firewall_driver=nova.virt.firewall.NoopFirewallDriver

[database]
connection=mysql+pymysql://nova:123456@controller/nova

[api_database]
connection=mysql+pymysql://nova:123456@controller/nova_api

[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = 123456

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova

[vnc]
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = 0.0.0.0

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

Sync the databases

adminrc@root@controller:~$su -s /bin/sh -c "nova-manage api_db sync" nova
Option "logdir" from group "DEFAULT" is deprecated. Use option "log-dir" from group "DEFAULT".
Option "verbose" from group "DEFAULT" is deprecated for removal.  Its value may be silently ignored in the future.
...........
2019-01-15 07:38:43.731 21492 INFO migrate.versioning.api [-] done
adminrc@root@controller:~$echo $?
0
adminrc@root@controller:~$su -s /bin/sh -c "nova-manage db sync" nova
.......
2019-01-15 07:40:19.955 22811 INFO migrate.versioning.api [-] done
adminrc@root@controller:~$echo $?
0

Restart the services

adminrc@root@controller:~$service nova-api restart
nova-api stop/waiting
nova-api start/running, process 23944
adminrc@root@controller:~$service nova-consoleauth restart
nova-consoleauth stop/waiting
nova-consoleauth start/running, process 24431
adminrc@root@controller:~$service nova-scheduler restart
nova-scheduler stop/waiting
nova-scheduler start/running, process 24670
adminrc@root@controller:~$service nova-conductor restart
nova-conductor stop/waiting
nova-conductor start/running, process 24877
adminrc@root@controller:~$service nova-novncproxy restart
nova-novncproxy stop/waiting
nova-novncproxy start/running, process 25010

Check whether the services have started

adminrc@root@controller:/var/log/nova$openstack compute service list
+----+------------------+------------+----------+---------+-------+----------------------------+
| Id | Binary           | Host       | Zone     | Status  | State | Updated At                 |
+----+------------------+------------+----------+---------+-------+----------------------------+
|  3 | nova-consoleauth | controller | internal | enabled | up    | 2019-01-14T23:44:50.000000 |
|  4 | nova-scheduler   | controller | internal | enabled | up    | 2019-01-14T23:44:46.000000 |
|  5 | nova-conductor   | controller | internal | enabled | up    | 2019-01-14T23:44:49.000000 |
+----+------------------+------------+----------+---------+-------+----------------------------+
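If you want to script this health check, the table can be parsed with awk; a sketch (hypothetical helper) that counts the rows whose State column reads `up`:

```shell
# count_up -- read `openstack compute service list` output on stdin and
# print how many services report state "up" (State is the 7th |-separated field)
count_up() {
    awk -F'|' '$7 ~ /^ *up *$/ {n++} END {print n + 0}'
}

# usage: openstack compute service list | count_up
# for the table above this prints 3
```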

Install nova-compute. Since this is a single-node deployment, nova-compute is also installed on the controller node.

adminrc@root@controller:~$apt-get install nova-compute

Reconfigure nova.conf

adminrc@root@controller:~$cp /etc/nova/nova.conf{,.back}
adminrc@root@controller:~$vim /etc/nova/nova.conf  # leave the other options unchanged
[vnc]
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://192.168.56.10:6080/vnc_auto.html

Determine whether the compute node supports hardware acceleration for virtual machines

adminrc@root@controller:~$egrep -c '(vmx|svm)' /proc/cpuinfo
0
# a result of 0 means hardware acceleration is not supported,
# so nova-compute.conf must be changed to use plain QEMU
adminrc@root@controller:~$cp /etc/nova/nova-compute.conf{,.bak}
adminrc@root@controller:~$vim /etc/nova/nova-compute.conf
[libvirt]
virt_type=qemu  # the original value was kvm

Restart the compute service

adminrc@root@controller:~$service nova-compute restart
nova-compute stop/waiting
nova-compute start/running, process 16696
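The decision above can be wrapped in a small function, with the cpuinfo path parameterised so it can be checked against any dump (the helper name is made up for illustration):

```shell
# pick_virt_type [CPUINFO] -- print the libvirt virt_type to use:
# "kvm" when the vmx (Intel VT-x) or svm (AMD-V) flag is present, else "qemu"
pick_virt_type() {
    local cpuinfo=${1:-/proc/cpuinfo}
    if [ "$(grep -Ec '(vmx|svm)' "$cpuinfo")" -eq 0 ]; then
        echo qemu   # no hardware acceleration: instances run under plain QEMU
    else
        echo kvm
    fi
}

# usage: pick_virt_type    # prints "qemu" inside this VirtualBox VM
```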
adminrc@root@controller:~$openstack compute service list
+----+------------------+------------+----------+---------+-------+----------------------------+
| Id | Binary           | Host       | Zone     | Status  | State | Updated At                 |
+----+------------------+------------+----------+---------+-------+----------------------------+
|  3 | nova-consoleauth | controller | internal | enabled | up    | 2019-01-15T00:11:51.000000 |
|  4 | nova-scheduler   | controller | internal | enabled | up    | 2019-01-15T00:11:57.000000 |
|  5 | nova-conductor   | controller | internal | enabled | up    | 2019-01-15T00:11:50.000000 |
|  6 | nova-compute     | controller | nova     | enabled | up    | 2019-01-15T00:11:54.000000 |
+----+------------------+------------+----------+---------+-------+----------------------------+
To check the status of the nova-api service:
adminrc@root@controller:~$service nova-api status
nova-api start/running, process 23944

Install the Neutron networking service

MariaDB [(none)]> create database neutron;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> grant all privileges on neutron.* to 'neutron'@'localhost' identified by '123456';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> grant all privileges on neutron.* to 'neutron'@'%' identified by '123456';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> \q

Create the neutron user

adminrc@root@controller:~$openstack user create --domain default --password neutron neutron
+-----------+----------------------------------+
| Field     | Value                            |
+-----------+----------------------------------+
| domain_id | 1495769d2bbb44d192eee4c9b2f91ca3 |
| enabled   | True                             |
| id        | 081dc309806c45198a3bd6c39bf9947f |
| name      | neutron                          |
+-----------+----------------------------------+
adminrc@root@controller:~$openstack role add --project service --user neutron admin
adminrc@root@controller:~$

Create the neutron service entity

adminrc@root@controller:~$openstack service create --name neutron --description "OpenStack Networking" network
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Networking             |
| enabled     | True                             |
| id          | c661b602f11d45cfb068027c77fd519e |
| name        | neutron                          |
| type        | network                          |
+-------------+----------------------------------+

Create the neutron service endpoints

adminrc@root@controller:~$openstack endpoint create --region RegionOne network public http://controller:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 0192ba47a7b348ec88bb5f71c82f8f4c |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | c661b602f11d45cfb068027c77fd519e |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+
adminrc@root@controller:~$openstack endpoint create --region RegionOne network internal http://controller:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | bdf4b9663ccb4ef695cde0638231943a |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | c661b602f11d45cfb068027c77fd519e |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+
adminrc@root@controller:~$openstack endpoint create --region RegionOne network admin http://controller:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | ffc7a793985e494fa839fd76ea5bdcef |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | c661b602f11d45cfb068027c77fd519e |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+

Configure the networking options. There are two choices:

1. Provider (public) networks

2. Self-service (private) networks

For provider networks,

first install the required components

adminrc@root@controller:~$apt-get install -y neutron-server neutron-plugin-ml2   neutron-linuxbridge-agent neutron-dhcp-agent   neutron-metadata-agent

 

adminrc@root@controller:~$cp /etc/neutron/neutron.conf{,.bak}
adminrc@root@controller:~$vim /etc/neutron/neutron.conf
# items that need to be changed
[database]
connection = mysql+pymysql://neutron:123456@controller/neutron

[DEFAULT]
rpc_backend = rabbit
core_plugin = ml2
service_plugins =
auth_strategy = keystone
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True


[oslo_messaging_rabbit]

rabbit_host = controller
rabbit_userid = openstack
rabbit_password = 123456

[keystone_authtoken]

auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron

[nova]

auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova

Configure the ML2 plug-in

adminrc@root@controller:~$cp /etc/neutron/plugins/ml2/ml2_conf.ini{,.bak}
adminrc@root@controller:~$vim /etc/neutron/plugins/ml2/ml2_conf.ini
# items that need to be changed
[ml2]

type_drivers = flat,vlan
tenant_network_types =
mechanism_drivers = linuxbridge
extension_drivers = port_security

[ml2_type_flat]

flat_networks = provider

[securitygroup]

enable_ipset = True

Configure linuxbridge_agent.ini

adminrc@root@controller:~$cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini{,.bak}
adminrc@root@controller:~$vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME  # replace PROVIDER_INTERFACE_NAME with the actual provider interface, e.g. eth1

[vxlan]
enable_vxlan = False

[securitygroup]

enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Configure dhcp_agent.ini

adminrc@root@controller:~$cp /etc/neutron/dhcp_agent.ini{,.bak}
adminrc@root@controller:~$vim /etc/neutron/dhcp_agent.ini
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True

Configure the metadata agent

adminrc@root@controller:~$cp /etc/neutron/metadata_agent.ini{,.bak}
adminrc@root@controller:~$vim /etc/neutron/metadata_agent.ini
[DEFAULT]

nova_metadata_ip = controller
metadata_proxy_shared_secret = METADATA_SECRET
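METADATA_SECRET is a placeholder: pick any random string and use the identical value both here and later in nova.conf's [neutron] section. One way to generate one (assuming openssl is available):

```shell
# generate a 32-hex-character shared secret for the metadata proxy
secret=$(openssl rand -hex 16)
echo "$secret"
# put this value in metadata_agent.ini AND in nova.conf's [neutron] section
```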

Configure the Compute service to use Networking

adminrc@root@controller:~$vim  /etc/nova/nova.conf
[neutron]   # append this section at the end
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron

service_metadata_proxy = True
metadata_proxy_shared_secret = METADATA_SECRET  # must match the value in metadata_agent.ini

Sync the database

su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

Restart the Compute API service and the Networking services

adminrc@root@controller:~$service nova-api restart
nova-api stop/waiting
nova-api start/running, process 554
adminrc@root@controller:~$service neutron-server restart
neutron-server stop/waiting
neutron-server start/running, process 837
adminrc@root@controller:~$service neutron-server restart
neutron-server stop/waiting
neutron-server start/running, process 997
adminrc@root@controller:~$service neutron-linuxbridge-agent restart
neutron-linuxbridge-agent stop/waiting
neutron-linuxbridge-agent start/running, process 1316
adminrc@root@controller:~$service neutron-dhcp-agent restart
neutron-dhcp-agent stop/waiting
neutron-dhcp-agent start/running, process 1599
adminrc@root@controller:~$service neutron-metadata-agent restart
neutron-metadata-agent stop/waiting
neutron-metadata-agent start/running, process 1825

Restart neutron-l3-agent

adminrc@root@controller:~$service neutron-l3-agent restart
neutron-l3-agent stop/waiting
neutron-l3-agent start/running, process 8271

 

Restart the compute-side services

adminrc@root@controller:~$service nova-compute restart
nova-compute stop/waiting
nova-compute start/running, process 13155
adminrc@root@controller:~$service neutron-linuxbridge-agent restart
neutron-linuxbridge-agent stop/waiting
neutron-linuxbridge-agent start/running, process 14295

Check whether any networks have been created

adminrc@root@controller:~$openstack network list

The output is empty because no networks have been created yet.

Verify that neutron-server started correctly

adminrc@root@controller:~$neutron ext-list
+---------------------------+-----------------------------------------------+
| alias                     | name                                          |
+---------------------------+-----------------------------------------------+
| default-subnetpools       | Default Subnetpools                           |
| availability_zone         | Availability Zone                             |
| network_availability_zone | Network Availability Zone                     |
| auto-allocated-topology   | Auto Allocated Topology Services              |
| binding                   | Port Binding                                  |
| agent                     | agent                                         |
| subnet_allocation         | Subnet Allocation                             |
| dhcp_agent_scheduler      | DHCP Agent Scheduler                          |
| tag                       | Tag support                                   |
| external-net              | Neutron external network                      |
| net-mtu                   | Network MTU                                   |
| network-ip-availability   | Network IP Availability                       |
| quotas                    | Quota management support                      |
| provider                  | Provider Network                              |
| multi-provider            | Multi Provider Network                        |
| address-scope             | Address scope                                 |
| timestamp_core            | Time Stamp Fields addition for core resources |
| extra_dhcp_opt            | Neutron Extra DHCP opts                       |
| security-group            | security-group                                |
| rbac-policies             | RBAC Policies                                 |
| standard-attr-description | standard-attr-description                     |
| port-security             | Port Security                                 |
| allowed-address-pairs     | Allowed Address Pairs                         |
+---------------------------+-----------------------------------------------+

Verify the Neutron agents

adminrc@root@controller:~$neutron agent-list
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host       | availability_zone | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
| 0cafd3ff-6da0-4194-a6dd-9a60136af93a | DHCP agent         | controller | nova              | :-)   | True           | neutron-dhcp-agent        |
| 53fce606-311d-4244-8af0-efd6f9087e34 | Open vSwitch agent | controller |                   | :-)   | True           | neutron-openvswitch-agent |
| b5dffa68-a505-448f-8fa6-7d8bb16eb07a | Linux bridge agent | controller |                   | :-)   | True           | neutron-linuxbridge-agent |
| dc161e12-8b23-4f49-8170-b7d68cfe2197 | Metadata agent     | controller |                   | :-)   | True           | neutron-metadata-agent    |
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+

Create an instance

First, create a virtual network.

Create a provider network

adminrc@root@controller:~$neutron net-create --shared --provider:physical_network provider --provider:network_type flat provider
Invalid input for operation: network_type value 'flat' not supported.
Neutron server returns request_ids: ['req-e9d3cb26-4156-4eb1-bc9e-9528dbbd1dc9']

Based on the error, check the ml2_conf.ini file:

[ml2]

type_drivers = flat,vlan  # confirm this line includes flat
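A quick way to confirm the fix before restarting — a sketch against a sample copy; on the controller, grep /etc/neutron/plugins/ml2/ml2_conf.ini itself:

```shell
# Sketch: check that 'flat' appears in the active type_drivers line.
# ML2 stands in for /etc/neutron/plugins/ml2/ml2_conf.ini.
ML2=$(mktemp)
printf '[ml2]\ntype_drivers = flat,vlan\n' > "$ML2"
if grep -Eq '^type_drivers *=.*flat' "$ML2"; then
    echo "flat type driver enabled"
else
    echo "flat type driver MISSING"
fi
```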

Restart the service and run the network creation again

adminrc@root@controller:~$service neutron-server restart
neutron-server stop/waiting
neutron-server start/running, process 28671
adminrc@root@controller:~$neutron net-create --shared --provider:physical_network provider --provider:network_type flat provider
Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| availability_zone_hints   |                                      |
| availability_zones        |                                      |
| created_at                | 2019-01-15T12:45:35                  |
| description               |                                      |
| id                        | ab73ff8f-2d19-4479-811c-85c068290eeb |
| ipv4_address_scope        |                                      |
| ipv6_address_scope        |                                      |
| mtu                       | 1500                                 |
| name                      | provider                             |
| port_security_enabled     | True                                 |
| provider:network_type     | flat                                 |
| provider:physical_network | provider                             |
| provider:segmentation_id  |                                      |
| router:external           | False                                |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tags                      |                                      |
| tenant_id                 | 29577090a0e8466ab49cc30a4305f5f8     |
| updated_at                | 2019-01-15T12:45:35                  |
+---------------------------+--------------------------------------+

Next, create a subnet

adminrc@root@controller:~$neutron subnet-create --name provider --allocation-pool start=10.0.3.50,end=10.0.3.253 --dns-nameserver 114.114.114.114 --gateway 10.0.3.1 provider 10.0.3.0/24
Created a new subnet:
+-------------------+---------------------------------------------+
| Field             | Value                                       |
+-------------------+---------------------------------------------+
| allocation_pools  | {"start": "10.0.3.50", "end": "10.0.3.253"} |
| cidr              | 10.0.3.0/24                                 |
| created_at        | 2019-01-15T12:56:21                         |
| description       |                                             |
| dns_nameservers   | 114.114.114.114                             |
| enable_dhcp       | True                                        |
| gateway_ip        | 10.0.3.1                                    |
| host_routes       |                                             |
| id                | 48faef6d-ee9d-4b46-a56d-3c196a766224        |
| ip_version        | 4                                           |
| ipv6_address_mode |                                             |
| ipv6_ra_mode      |                                             |
| name              | provider                                    |
| network_id        | ab73ff8f-2d19-4479-811c-85c068290eeb        |
| subnetpool_id     |                                             |
| tenant_id         | 29577090a0e8466ab49cc30a4305f5f8            |
| updated_at        | 2019-01-15T12:56:21                         |
+-------------------+---------------------------------------------+

Next, create a flavor

adminrc@root@controller:~$openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano
+----------------------------+---------+
| Field                      | Value   |
+----------------------------+---------+
| OS-FLV-DISABLED:disabled   | False   |
| OS-FLV-EXT-DATA:ephemeral  | 0       |
| disk                       | 1       |
| id                         | 0       |
| name                       | m1.nano |
| os-flavor-access:is_public | True    |
| ram                        | 64      |
| rxtx_factor                | 1.0     |
| swap                       |         |
| vcpus                      | 1       |
+----------------------------+---------+

Generate a key pair

adminrc@root@controller:~$pwd
/home/openstack
adminrc@root@controller:~$ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
8a:e5:a2:f3:f4:1e:93:1a:c1:8d:67:d1:fd:fa:4b:75 root@controller
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
|       . .       |
|      . . .      |
|   . o .   .     |
|    + = S   . . E|
|     B o   . . . |
|    = *   . .    |
|  .o = o   o     |
|  .oo.o     o.   |
+-----------------+
adminrc@root@controller:~$ls -al /root/.ssh/id_rsa.pub
-rw-r--r-- 1 root root 397 Jan 15 21:13 /root/.ssh/id_rsa.pub

Add the key pair

adminrc@root@controller:~$openstack keypair create --public-key /root/.ssh/id_rsa.pub rootkey
+-------------+-------------------------------------------------+
| Field       | Value                                           |
+-------------+-------------------------------------------------+
| fingerprint | 8a:e5:a2:f3:f4:1e:93:1a:c1:8d:67:d1:fd:fa:4b:75 |
| name        | rootkey                                         |
| user_id     | 653177098fac40a28734093706299e66                |
+-------------+-------------------------------------------------+

Verify the key pair

adminrc@root@controller:~$openstack keypair list
+---------+-------------------------------------------------+
| Name    | Fingerprint                                     |
+---------+-------------------------------------------------+
| rootkey | 8a:e5:a2:f3:f4:1e:93:1a:c1:8d:67:d1:fd:fa:4b:75 |
+---------+-------------------------------------------------+

Add security group rules

adminrc@root@controller:~$openstack security group rule create --proto icmp default
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| id                    | a4c8ad46-42eb-4397-b09f-af5dcfef2ad1 |
| ip_protocol           | icmp                                 |
| ip_range              | 0.0.0.0/0                            |
| parent_group_id       | 968f5f33-c569-46b4-9019-8a3f614ae670 |
| port_range            |                                      |
| remote_security_group |                                      |
+-----------------------+--------------------------------------+
adminrc@root@controller:~$openstack security group rule create --proto tcp --dst-port 22 default
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| id                    | 8ed34a22-9479-4074-8177-94ec284e4764 |
| ip_protocol           | tcp                                  |
| ip_range              | 0.0.0.0/0                            |
| parent_group_id       | 968f5f33-c569-46b4-9019-8a3f614ae670 |
| port_range            | 22:22                                |
| remote_security_group |                                      |
+-----------------------+--------------------------------------+

Create the instance

# List available flavors
adminrc@root@controller:~$openstack flavor list
+----+-----------+-------+------+-----------+-------+-----------+
| ID | Name      |   RAM | Disk | Ephemeral | VCPUs | Is Public |
+----+-----------+-------+------+-----------+-------+-----------+
| 0  | m1.nano   |    64 |    1 |         0 |     1 | True      |
| 1  | m1.tiny   |   512 |    1 |         0 |     1 | True      |
| 2  | m1.small  |  2048 |   20 |         0 |     1 | True      |
| 3  | m1.medium |  4096 |   40 |         0 |     2 | True      |
| 4  | m1.large  |  8192 |   80 |         0 |     4 | True      |
| 5  | m1.xlarge | 16384 |  160 |         0 |     8 | True      |
+----+-----------+-------+------+-----------+-------+-----------+
# List available images
adminrc@root@controller:~$openstack  image list
+--------------------------------------+---------+--------+
| ID                                   | Name    | Status |
+--------------------------------------+---------+--------+
| 39d73bcf-e60b-4caf-8469-cca17de00d7e | cirrors | active |
+--------------------------------------+---------+--------+
# List available networks
adminrc@root@controller:~$openstack network list
+--------------------------------------+----------+--------------------------------------+
| ID                                   | Name     | Subnets                              |
+--------------------------------------+----------+--------------------------------------+
| ab73ff8f-2d19-4479-811c-85c068290eeb | provider | 48faef6d-ee9d-4b46-a56d-3c196a766224 |
+--------------------------------------+----------+--------------------------------------+
# List available security groups
adminrc@root@controller:~$openstack security group list
+--------------------------------------+---------+------------------------+----------------------------------+
| ID                                   | Name    | Description            | Project                          |
+--------------------------------------+---------+------------------------+----------------------------------+
| 968f5f33-c569-46b4-9019-8a3f614ae670 | default | Default security group | 29577090a0e8466ab49cc30a4305f5f8 |
+--------------------------------------+---------+------------------------+----------------------------------+
# Create the instance
adminrc@root@controller:~$openstack server create --flavor m1.nano --image cirros --nic net-id=ab73ff8f-2d19-4479-811c-85c068290eeb --security-group default --key-name rootkey test-instance
No image with a name or ID of 'cirros' exists.
# Well, another problem.
# Listing the images again reveals the cause: I typed cirros, but the image is actually named cirrors.
adminrc@root@controller:~$openstack  image list
+--------------------------------------+---------+--------+
| ID                                   | Name    | Status |
+--------------------------------------+---------+--------+
| 39d73bcf-e60b-4caf-8469-cca17de00d7e | cirrors | active |
+--------------------------------------+---------+--------+
adminrc@root@controller:~$openstack server create --flavor m1.nano --image cirrors --nic net-id=ab73ff8f-2d19-4479-811c-85c068290eeb --security-group default --key-name rootkey test-instance
+--------------------------------------+------------------------------------------------+
| Field                                | Value                                          |
+--------------------------------------+------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                         |
| OS-EXT-AZ:availability_zone          |                                                |
| OS-EXT-SRV-ATTR:host                 | None                                           |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | None                                           |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000001                              |
| OS-EXT-STS:power_state               | 0                                              |
| OS-EXT-STS:task_state                | scheduling                                     |
| OS-EXT-STS:vm_state                  | building                                       |
| OS-SRV-USG:launched_at               | None                                           |
| OS-SRV-USG:terminated_at             | None                                           |
| accessIPv4                           |                                                |
| accessIPv6                           |                                                |
| addresses                            |                                                |
| adminPass                            | WeVy7yd6BXcc                                   |
| config_drive                         |                                                |
| created                              | 2019-01-15T13:35:19Z                           |
| flavor                               | m1.nano (0)                                    |
| hostId                               |                                                |
| id                                   | 9eb49f96-7d68-4628-bb37-7583e457edc6           |
| image                                | cirrors (39d73bcf-e60b-4caf-8469-cca17de00d7e) |
| key_name                             | rootkey                                        |
| name                                 | test-instance                                  |
| os-extended-volumes:volumes_attached | []                                             |
| progress                             | 0                                              |
| project_id                           | 29577090a0e8466ab49cc30a4305f5f8               |
| properties                           |                                                |
| security_groups                      | [{u'name': u'default'}]                        |
| status                               | BUILD                                          |
| updated                              | 2019-01-15T13:35:20Z                           |
| user_id                              | 653177098fac40a28734093706299e66               |
+--------------------------------------+------------------------------------------------+
The instance was created successfully.
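To avoid this kind of typo, resolve the image name to an ID first and fail fast when nothing matches. A hedged sketch — it parses a captured copy of the `openstack image list` table rather than calling the API, so the data below is just this post's output:

```shell
# Sketch: look up an image ID by exact name from an 'openstack image list' row.
list='| 39d73bcf-e60b-4caf-8469-cca17de00d7e | cirrors | active |'
lookup() {
    echo "$list" | awk -F'|' -v n="$1" \
        '{gsub(/ /,"",$3)} $3==n {gsub(/ /,"",$2); print $2}'
}
id=$(lookup cirrors)
[ -n "$id" ] && echo "found: $id" || echo "no image with that name"
```

A misspelled name (`lookup cirros`) returns nothing, so the script can stop before `openstack server create` ever runs.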

Check the instance

adminrc@root@controller:~$openstack server list
+--------------------------------------+---------------+--------+--------------------+
| ID                                   | Name          | Status | Networks           |
+--------------------------------------+---------------+--------+--------------------+
| 9eb49f96-7d68-4628-bb37-7583e457edc6 | test-instance | ACTIVE | provider=10.0.3.51 |
+--------------------------------------+---------------+--------+--------------------+
adminrc@root@controller:~$nova image-list
+--------------------------------------+---------+--------+--------+
| ID                                   | Name    | Status | Server |
+--------------------------------------+---------+--------+--------+
| 39d73bcf-e60b-4caf-8469-cca17de00d7e | cirrors | ACTIVE |        |
+--------------------------------------+---------+--------+--------+
adminrc@root@controller:~$glance image-list
+--------------------------------------+---------+
| ID                                   | Name    |
+--------------------------------------+---------+
| 39d73bcf-e60b-4caf-8469-cca17de00d7e | cirrors |
+--------------------------------------+---------+
adminrc@root@controller:~$nova list
+--------------------------------------+---------------+--------+------------+-------------+--------------------+
| ID                                   | Name          | Status | Task State | Power State | Networks           |
+--------------------------------------+---------------+--------+------------+-------------+--------------------+
| 9eb49f96-7d68-4628-bb37-7583e457edc6 | test-instance | ACTIVE | -          | Running     | provider=10.0.3.51 |
+--------------------------------------+---------------+--------+------------+-------------+--------------------+

The equivalent nova command for launching an instance

adminrc@root@controller:~$nova boot --flavor m1.nano --image cirrors --nic net-id=ab73ff8f-2d19-4479-811c-85c068290eeb --security-groups default --key-name rootkey test-instance

To debug, add the --debug flag

adminrc@root@controller:~$openstack --debug server create  --flavor m1.nano --image cirrors --nic net-id=ab73ff8f-2d19-4479-811c-85c068290eeb --security-group default --key-name rootkey test-instance

Access the instance with the virtual console

adminrc@root@controller:~$openstack console url show test-instance
+-------+------------------------------------------------------------------------------------+
| Field | Value                                                                              |
+-------+------------------------------------------------------------------------------------+
| type  | novnc                                                                              |
| url   | http://192.168.56.10:6080/vnc_auto.html?token=ce586e5f-ceb1-4f7d-b039-0e44ae273686 |
+-------+------------------------------------------------------------------------------------+

The login banner makes it clear:

Username: cirros

Password: cubswin:)

Use sudo to switch to the root user.

Next, take a look around.

Test network connectivity

 

Next, create a second instance

adminrc@root@controller:~$openstack server create --flavor m1.nano --image cirrors --nic net-id=ab73ff8f-2d19-4479-811c-85c068290eeb --security-group default --key-name rootkey test-instance
+--------------------------------------+------------------------------------------------+
| Field                                | Value                                          |
+--------------------------------------+------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                         |
| OS-EXT-AZ:availability_zone          |                                                |
| OS-EXT-SRV-ATTR:host                 | None                                           |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | None                                           |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000002                              |
| OS-EXT-STS:power_state               | 0                                              |
| OS-EXT-STS:task_state                | scheduling                                     |
| OS-EXT-STS:vm_state                  | building                                       |
| OS-SRV-USG:launched_at               | None                                           |
| OS-SRV-USG:terminated_at             | None                                           |
| accessIPv4                           |                                                |
| accessIPv6                           |                                                |
| addresses                            |                                                |
| adminPass                            | QrFxY7UnvuJV                                   |
| config_drive                         |                                                |
| created                              | 2019-01-15T14:05:15Z                           |
| flavor                               | m1.nano (0)                                    |
| hostId                               |                                                |
| id                                   | 203a1f48-1f98-44ca-a3fa-883a9cea514a           |
| image                                | cirrors (39d73bcf-e60b-4caf-8469-cca17de00d7e) |
| key_name                             | rootkey                                        |
| name                                 | test-instance                                  |
| os-extended-volumes:volumes_attached | []                                             |
| progress                             | 0                                              |
| project_id                           | 29577090a0e8466ab49cc30a4305f5f8               |
| properties                           |                                                |
| security_groups                      | [{u'name': u'default'}]                        |
| status                               | BUILD                                          |
| updated                              | 2019-01-15T14:05:15Z                           |
| user_id                              | 653177098fac40a28734093706299e66               |
+--------------------------------------+------------------------------------------------+
Check:
adminrc@root@controller:~$nova list
+--------------------------------------+---------------+--------+------------+-------------+--------------------+
| ID                                   | Name          | Status | Task State | Power State | Networks           |
+--------------------------------------+---------------+--------+------------+-------------+--------------------+
| 203a1f48-1f98-44ca-a3fa-883a9cea514a | test-instance | ACTIVE | -          | Running     | provider=10.0.3.52 |
| 9eb49f96-7d68-4628-bb37-7583e457edc6 | test-instance | ACTIVE | -          | Running     | provider=10.0.3.51 |
+--------------------------------------+---------------+--------+------------+-------------+--------------------+

Two instances have now been created, and both are in the Running state.

Let's demonstrate instance 2 from the command line.

adminrc@root@controller:~$ping -c 2 10.0.3.52
PING 10.0.3.52 (10.0.3.52) 56(84) bytes of data.
64 bytes from 10.0.3.52: icmp_seq=1 ttl=64 time=28.5 ms
64 bytes from 10.0.3.52: icmp_seq=2 ttl=64 time=0.477 ms

--- 10.0.3.52 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.477/14.505/28.534/14.029 ms
adminrc@root@controller:~$nova list
+--------------------------------------+---------------+--------+------------+-------------+--------------------+
| ID                                   | Name          | Status | Task State | Power State | Networks           |
+--------------------------------------+---------------+--------+------------+-------------+--------------------+
| 203a1f48-1f98-44ca-a3fa-883a9cea514a | test-instance | ACTIVE | -          | Running     | provider=10.0.3.52 |
| 9eb49f96-7d68-4628-bb37-7583e457edc6 | test-instance | ACTIVE | -          | Running     | provider=10.0.3.51 |
+--------------------------------------+---------------+--------+------------+-------------+--------------------+

Check it with openstack console url show

adminrc@root@controller:~$openstack console url show test-instance
More than one server exists with the name 'test-instance'. 
# Since two servers share this name, use the ID instead
adminrc@root@controller:~$openstack console url show 203a1f48-1f98-44ca-a3fa-883a9cea514a
+-------+------------------------------------------------------------------------------------+
| Field | Value                                                                              |
+-------+------------------------------------------------------------------------------------+
| type  | novnc                                                                              |
| url   | http://192.168.56.10:6080/vnc_auto.html?token=42c43635-884c-482e-ac08-d1e6c6d2789b |
+-------+------------------------------------------------------------------------------------+
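When names collide like this, scripting against IDs is safer. A sketch that counts matches before picking one — it parses a captured copy of the `nova list` output, so the IDs below are just this post's data:

```shell
# Sketch: refuse to proceed when more than one server shares a name.
servers='203a1f48-1f98-44ca-a3fa-883a9cea514a test-instance
9eb49f96-7d68-4628-bb37-7583e457edc6 test-instance'
matches=$(echo "$servers" | awk '$2=="test-instance" {print $1}')
count=$(echo "$matches" | grep -c .)
if [ "$count" -gt 1 ]; then
    echo "ambiguous name; candidate IDs:"
    echo "$matches"
fi
```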

 

# Note: for some reason key-based SSH does not work here. With the security group rules in place, ssh cirros@10.0.3.52 should log in directly, but it prompts for a password instead. This remains an open issue.

For now, the only known workaround is to log in with the image's default username and password.

Test from the command line

adminrc@root@controller:~$ssh cirros@10.0.3.52
cirros@10.0.3.52's password:    # type cubswin:)

$ ifconfig
eth0      Link encap:Ethernet  HWaddr FA:16:3E:07:21:DE
          inet addr:10.0.3.52  Bcast:10.0.3.255  Mask:255.255.255.0
          inet6 addr: fe80::f816:3eff:fe07:21de/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:103 errors:0 dropped:0 overruns:0 frame:0
          TX packets:176 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:17870 (17.4 KiB)  TX bytes:17279 (16.8 KiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

$ ping -c 2 10.0.3.1
PING 10.0.3.1 (10.0.3.1): 56 data bytes
64 bytes from 10.0.3.1: seq=0 ttl=255 time=45.026 ms
64 bytes from 10.0.3.1: seq=1 ttl=255 time=1.050 ms

--- 10.0.3.1 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 1.050/23.038/45.026 ms
$ ping -c 2 www.qq.com
PING www.qq.com (61.129.7.47): 56 data bytes
64 bytes from 61.129.7.47: seq=0 ttl=53 time=5.527 ms
64 bytes from 61.129.7.47: seq=1 ttl=53 time=5.363 ms

--- www.qq.com ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 5.363/5.445/5.527 ms

Test connectivity between the two instances

$ sudo -s
$ hostname
cirros
$ ping -c 2 10.0.3.51
PING 10.0.3.51 (10.0.3.51): 56 data bytes
64 bytes from 10.0.3.51: seq=0 ttl=64 time=28.903 ms
64 bytes from 10.0.3.51: seq=1 ttl=64 time=1.205 ms

--- 10.0.3.51 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 1.205/15.054/28.903 ms

For the self-service (private) network option

Install the components

root@controller:~# apt-get install -y neutron-server neutron-plugin-ml2 neutron-linuxbridge-agent neutron-l3-agent neutron-dhcp-agent neutron-metadata-agent

Since this step builds on the existing provider-network setup, several configuration files need to be changed.

Confirm the settings in neutron.conf

root@controller:~# ls /etc/neutron/neutron.*
neutron.conf      neutron.conf.bak
root@controller:~# vim default
root@controller:~# cat default
core_plugin = ml2   # note: each pattern line must start at column 0, with no blank lines
service_plugins = router
allow_overlapping_ips = True
rpc_backend = rabbit
auth_strategy = keystone
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True

root@controller:~# grep "`cat default`" /etc/neutron/neutron.conf
auth_strategy = keystone
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
rpc_backend = rabbit
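The ``grep "`cat default`"`` trick works because grep treats each newline-separated line of its pattern argument as a separate pattern; `grep -f default` is the cleaner equivalent. A self-contained sketch (temp files stand in for the pattern file and neutron.conf):

```shell
# Sketch: match several expected config lines at once with a pattern file.
pat=$(mktemp); conf=$(mktemp)
printf 'core_plugin = ml2\nservice_plugins = router\n' > "$pat"
printf 'verbose = True\ncore_plugin = ml2\nservice_plugins = router\n' > "$conf"
grep -f "$pat" "$conf"          # prints only the lines that match a pattern
n=$(grep -c -f "$pat" "$conf")
echo "matched $n lines"
```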


root@controller:~# grep "^connection" /etc/neutron/neutron.conf
connection = mysql+pymysql://neutron:123456@controller/neutron
root@controller:~# grep "core_plugin" /etc/neutron/neutron.conf
core_plugin = ml2
root@controller:~# grep "service_plugins" /etc/neutron/neutron.conf
service_plugins =
root@controller:~# sed -i "s/service_plugins\=/service_plugins\ =\ router/g" /etc/neutron/neutron.conf
root@controller:~# grep "service_plugins" /etc/neutron/neutron.conf
service_plugins = router
root@controller:~# grep "allow_overlapping_ips" /etc/neutron/neutron.conf
#allow_overlapping_ips = false
root@controller:~# sed -i "s/\#allow_overlapping_ips\ =\ false/allow_overlapping_ips\ =\ True/g" /etc/neutron/neutron.conf
root@controller:~# grep "allow_overlapping_ips" /etc/neutron/neutron.conf
allow_overlapping_ips = True
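The `sed -i` edits above follow one pattern: substitute the old line for the new one in place, then grep to confirm it took. A sketch on a scratch copy — safe to run anywhere; on the controller the target is /etc/neutron/neutron.conf:

```shell
# Sketch: uncomment-and-set two options, then verify the result.
F=$(mktemp)
printf '#allow_overlapping_ips = false\nservice_plugins =\n' > "$F"
sed -i 's/#allow_overlapping_ips = false/allow_overlapping_ips = True/' "$F"
sed -i 's/^service_plugins =$/service_plugins = router/' "$F"
grep -E '^(allow_overlapping_ips|service_plugins)' "$F"
```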
root@controller:~# grep "rpc_backend = rabbit" /etc/neutron/neutron.conf
rpc_backend = rabbit
root@controller:~# grep "rabbit_host = controller" /etc/neutron/neutron.conf
rabbit_host = controller
root@controller:~# grep "rabbit_userid = openstack" /etc/neutron/neutron.conf
rabbit_userid = openstack
root@controller:~# grep "rabbit_password = 123456" /etc/neutron/neutron.conf
rabbit_password = 123456
root@controller:~# cat keystone_authtoken
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
root@controller:~# grep "`cat keystone_authtoken`" /etc/neutron/neutron.conf
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service

root@controller:~# grep "`cat oslo_messaging_rabbit`" /etc/neutron/neutron.conf
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = 123456

root@controller:~# vim nova
root@controller:~# cat nova
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova

root@controller:~# grep "`cat nova`" /etc/neutron/neutron.conf
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova

Alternatively, all the settings can be checked in one pass:

root@controller:~# vim neutron
root@controller:~# cat neutron
^\[database\]
connection = mysql+pymysql://neutron:123456@controller/neutron
^\[DEFAULT\]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
rpc_backend = rabbit
auth_strategy = keystone
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
^\[oslo_messaging_rabbit\]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = 123456
^\[keystone_authtoken\]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
^\[nova\]
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova

root@controller:~# grep "`cat neutron`" /etc/neutron/neutron.conf
[DEFAULT]
auth_strategy = keystone
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
rpc_backend = rabbit
[database]
connection = mysql+pymysql://neutron:123456@controller/neutron
[keystone_authtoken]
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
[nova]
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
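The trick above is worth spelling out: the expected key/value lines are kept in a plain file and handed to grep via command substitution, so one command confirms many settings at once. A minimal standalone sketch of the same idea, using throwaway files under mktemp (file names and contents here are illustrative, not the real neutron.conf):

```shell
# Keep the expected key/value lines in a pattern file, then count how many
# of them appear verbatim in the config (-F fixed strings, -x whole-line,
# -f patterns from file, -c count matching lines).
workdir=$(mktemp -d)
cat > "$workdir/expected" <<'EOF'
service_plugins = router
allow_overlapping_ips = True
EOF
cat > "$workdir/neutron.conf" <<'EOF'
[DEFAULT]
service_plugins = router
allow_overlapping_ips = True
rpc_backend = rabbit
EOF
matched=$(grep -Fxcf "$workdir/expected" "$workdir/neutron.conf")
echo "$matched"   # 2: both expected lines are present
rm -rf "$workdir"
```

grep treats each line of a multi-line pattern as a separate alternative either way; adding -F and -x just forces literal, whole-line comparison so punctuation in values cannot be misread as regex syntax.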

 

Confirm ml2_conf.ini

root@controller:~# cat ml2
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security
flat_networks = provider
vni_ranges = 1:1000
enable_ipset = True
# Add the settings above to /etc/neutron/plugins/ml2/ml2_conf.ini (each under its proper section), or find each option one by one, uncomment it, and change it to the value shown
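The option-by-option edits can also be scripted with the same kind of sed one-liner used earlier for allow_overlapping_ips. A hedged sketch (the helper name set_opt and the sample file are made up; the helper is section-blind, so it is only safe when the key name is unique in the file):

```shell
# set_opt FILE KEY VALUE: rewrite a "key = ..." line, uncommenting it if it
# starts with "#". Does not know about [sections] -- use with unique keys only.
set_opt() {
    sed -i "s|^#\{0,1\}$2 *=.*|$2 = $3|" "$1"
}
conf=$(mktemp)
printf '%s\n' '[ml2]' '#type_drivers = local' '#enable_ipset = false' > "$conf"
set_opt "$conf" type_drivers 'flat,vlan,vxlan'
set_opt "$conf" enable_ipset True
line2=$(sed -n '2p' "$conf")   # type_drivers = flat,vlan,vxlan
line3=$(sed -n '3p' "$conf")   # enable_ipset = True
rm -f "$conf"
echo "$line2"
echo "$line3"
```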

Then configure linuxbridge_agent.ini

root@controller:~# cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini{,.public_net}
root@controller:~# vim linuxbridge
root@controller:~# cat linuxbridge
# Set the following options in linuxbridge_agent.ini as shown; add any options that are missing
[linux_bridge]
physical_interface_mappings = provider:eth0
[vxlan]
enable_vxlan = True
local_ip = 10.0.3.10
l2_population = True
[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Configure the layer-3 agent

root@controller:~# cat l3_agent.ini
[DEFAULT]
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
external_network_bridge =
root@controller:~# vim /etc/neutron/l3_agent.ini
# Set the corresponding options in /etc/neutron/l3_agent.ini to the values shown above

Configure the DHCP agent

root@controller:~# cp /etc/neutron/dhcp_agent.ini{,.back}
root@controller:~# vim /etc/neutron/dhcp_agent.ini
root@controller:~# cat dhcp_agent
[DEFAULT]
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True
# Fill in the options in /etc/neutron/dhcp_agent.ini with the values from the dhcp_agent file above

Configure the metadata agent

root@controller:~# cat metadata_agent
nova_metadata_ip = controller
metadata_proxy_shared_secret = METADATA_SECRET
root@controller:~# grep "`cat metadata_agent`" /etc/neutron/metadata_agent.ini
nova_metadata_ip = controller
metadata_proxy_shared_secret = METADATA_SECRET

Configure the Compute service to use the Networking service

root@controller:~# cp /etc/nova/nova.conf{,.public_net}
root@controller:~# vim nova
root@controller:~# cat nova
^\[neutron\]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
service_metadata_proxy = True
metadata_proxy_shared_secret = METADATA_SECRET
root@controller:~# grep "`cat nova`" /etc/nova/nova.conf
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
service_metadata_proxy = True
metadata_proxy_shared_secret = METADATA_SECRET

Finish the installation by syncing the database

root@controller:~# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
No handlers could be found for logger "oslo_config.cfg"
INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
  Running upgrade for neutron ...
INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
  OK
INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
  Running upgrade for neutron-fwaas ...
INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
  OK
root@controller:~# echo $?
0

Restart the services

root@controller:~# ls /etc/init.d/ | grep nova
nova-api
nova-compute
nova-conductor
nova-consoleauth
nova-novncproxy
nova-scheduler
root@controller:~# ls /etc/init.d/ | grep nova | xargs -i service {} restart
nova-api stop/waiting
nova-api start/running, process 29688
nova-compute stop/waiting
nova-compute start/running, process 29741
nova-conductor stop/waiting
nova-conductor start/running, process 29797
nova-consoleauth stop/waiting
nova-consoleauth start/running, process 29841
nova-novncproxy stop/waiting
nova-novncproxy start/running, process 29865
nova-scheduler stop/waiting
nova-scheduler start/running, process 29922
Restart the networking services
root@controller:~# ls /etc/init.d/ | grep neutron
neutron-dhcp-agent
neutron-l3-agent
neutron-linuxbridge-agent
neutron-linuxbridge-cleanup
neutron-metadata-agent
neutron-openvswitch-agent
neutron-ovs-cleanup
neutron-server
root@controller:~# ls /etc/init.d/ | grep neutron | xargs -i service {} restart
neutron-dhcp-agent stop/waiting
neutron-dhcp-agent start/running, process 31792
neutron-l3-agent stop/waiting
neutron-l3-agent start/running, process 31813
neutron-linuxbridge-agent stop/waiting
neutron-linuxbridge-agent start/running, process 31832
stop: Unknown instance:
start: Job failed to start
neutron-metadata-agent stop/waiting
neutron-metadata-agent start/running, process 31904
neutron-openvswitch-agent stop/waiting
neutron-openvswitch-agent start/running, process 31927
neutron-ovs-cleanup stop/waiting
neutron-ovs-cleanup start/running
neutron-server stop/waiting
neutron-server start/running, process 32097
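The `ls /etc/init.d/ | grep ... | xargs -i service {} restart` pipeline used for both restarts is generic: it just turns a name filter into one `service` command per match. The same shape against dummy names (fakesvc-* is invented so the pipeline can be run anywhere; a real run would invoke service instead of echo, and `-I{}` is the portable spelling of the deprecated `-i`):

```shell
# Filter a list of init-script names and expand each match into a restart command.
cmds=$(printf '%s\n' fakesvc-api fakesvc-scheduler other-daemon \
  | grep fakesvc \
  | xargs -I{} echo service {} restart)
echo "$cmds"
n=$(echo "$cmds" | wc -l)   # 2 matching services, 2 commands
```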

Verification

root@controller:~# source adminrc
adminrc@root@controller:~$neutron ext-list
+---------------------------+-----------------------------------------------+
| alias                     | name                                          |
+---------------------------+-----------------------------------------------+
| default-subnetpools       | Default Subnetpools                           |
| network-ip-availability   | Network IP Availability                       |
| network_availability_zone | Network Availability Zone                     |
| auto-allocated-topology   | Auto Allocated Topology Services              |
| ext-gw-mode               | Neutron L3 Configurable external gateway mode |
| binding                   | Port Binding                                  |
| agent                     | agent                                         |
| subnet_allocation         | Subnet Allocation                             |
| l3_agent_scheduler        | L3 Agent Scheduler                            |
| tag                       | Tag support                                   |
| external-net              | Neutron external network                      |
| net-mtu                   | Network MTU                                   |
| availability_zone         | Availability Zone                             |
| quotas                    | Quota management support                      |
| l3-ha                     | HA Router extension                           |
| provider                  | Provider Network                              |
| multi-provider            | Multi Provider Network                        |
| address-scope             | Address scope                                 |
| extraroute                | Neutron Extra Route                           |
| timestamp_core            | Time Stamp Fields addition for core resources |
| router                    | Neutron L3 Router                             |
| extra_dhcp_opt            | Neutron Extra DHCP opts                       |
| security-group            | security-group                                |
| dhcp_agent_scheduler      | DHCP Agent Scheduler                          |
| router_availability_zone  | Router Availability Zone                      |
| rbac-policies             | RBAC Policies                                 |
| standard-attr-description | standard-attr-description                     |
| port-security             | Port Security                                 |
| allowed-address-pairs     | Allowed Address Pairs                         |
| dvr                       | Distributed Virtual Router                    |
+---------------------------+-----------------------------------------------+
adminrc@root@controller:~$neutron agent-list
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host       | availability_zone | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
| 0cafd3ff-6da0-4194-a6dd-9a60136af93a | DHCP agent         | controller | nova              | :-)   | True           | neutron-dhcp-agent        |
| 53fce606-311d-4244-8af0-efd6f9087e34 | Open vSwitch agent | controller |                   | :-)   | True           | neutron-openvswitch-agent |
| 7afb1ed4-9542-4521-b1f8-4e0c6f06fe71 | L3 agent           | controller | nova              | :-)   | True           | neutron-l3-agent          |
| b5dffa68-a505-448f-8fa6-7d8bb16eb07a | Linux bridge agent | controller |                   | :-)   | True           | neutron-linuxbridge-agent |
| dc161e12-8b23-4f49-8170-b7d68cfe2197 | Metadata agent     | controller |                   | :-)   | True           | neutron-metadata-agent    |
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
adminrc@root@controller:~$

Now create the virtual networks. The provider network has to be created first, and the steps are the same as in the earlier public-network setup. Since no VM snapshot was restored, the provider network created during the public-network configuration still exists; for convenience, first delete that virtual network and the two instances created back then.

# Delete the instances
adminrc@root@controller:~$openstack server list
+--------------------------------------+---------------+--------+--------------------+
| ID                                   | Name          | Status | Networks           |
+--------------------------------------+---------------+--------+--------------------+
| 203a1f48-1f98-44ca-a3fa-883a9cea514a | test-instance | ACTIVE | provider=10.0.3.52 |
| 9eb49f96-7d68-4628-bb37-7583e457edc6 | test-instance | ACTIVE | provider=10.0.3.51 |
+--------------------------------------+---------------+--------+--------------------+

adminrc@root@controller:~$openstack server delete 203a1f48-1f98-44ca-a3fa-883a9cea514a
adminrc@root@controller:~$echo $?
0
adminrc@root@controller:~$openstack server delete 9eb49f96-7d68-4628-bb37-7583e457edc6
adminrc@root@controller:~$echo $?
0
# Delete the virtual network
adminrc@root@controller:~$neutron net-list
+--------------------------------------+----------+--------------------------------------------------+
| id                                   | name     | subnets                                          |
+--------------------------------------+----------+--------------------------------------------------+
| ab73ff8f-2d19-4479-811c-85c068290eeb | provider | 48faef6d-ee9d-4b46-a56d-3c196a766224 10.0.3.0/24 |
+--------------------------------------+----------+--------------------------------------------------+
adminrc@root@controller:~$neutron net-delete ab73ff8f-2d19-4479-811c-85c068290eeb
Deleted network: ab73ff8f-2d19-4479-811c-85c068290eeb
adminrc@root@controller:~$neutron net-list

adminrc@root@controller:~$neutron subnet-list

Create the provider network

adminrc@root@controller:~$neutron net-create --shared --provider:physical_network provider --provider:network_type flat provider
Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| availability_zone_hints   |                                      |
| availability_zones        |                                      |
| created_at                | 2019-01-16T00:52:17                  |
| description               |                                      |
| id                        | a600cdf0-352a-4c85-b90a-eba0ee4282fd |
| ipv4_address_scope        |                                      |
| ipv6_address_scope        |                                      |
| mtu                       | 1500                                 |
| name                      | provider                             |
| port_security_enabled     | True                                 |
| provider:network_type     | flat                                 |
| provider:physical_network | provider                             |
| provider:segmentation_id  |                                      |
| router:external           | False                                |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tags                      |                                      |
| tenant_id                 | 29577090a0e8466ab49cc30a4305f5f8     |
| updated_at                | 2019-01-16T00:52:17                  |
+---------------------------+--------------------------------------+
Create a subnet
adminrc@root@controller:~$neutron subnet-create --name provider --allocation-pool start=10.0.3.50,end=10.0.3.254 --dns-nameserver 114.114.114.114 --gateway 10.0.3.1 provider 10.0.3.0/24
Created a new subnet:
+-------------------+---------------------------------------------+
| Field             | Value                                       |
+-------------------+---------------------------------------------+
| allocation_pools  | {"start": "10.0.3.50", "end": "10.0.3.254"} |
| cidr              | 10.0.3.0/24                                 |
| created_at        | 2019-01-16T00:55:38                         |
| description       |                                             |
| dns_nameservers   | 114.114.114.114                             |
| enable_dhcp       | True                                        |
| gateway_ip        | 10.0.3.1                                    |
| host_routes       |                                             |
| id                | b19d9f26-e32e-4bb8-a53e-55eb1154cefe        |
| ip_version        | 4                                           |
| ipv6_address_mode |                                             |
| ipv6_ra_mode      |                                             |
| name              | provider                                    |
| network_id        | a600cdf0-352a-4c85-b90a-eba0ee4282fd        |
| subnetpool_id     |                                             |
| tenant_id         | 29577090a0e8466ab49cc30a4305f5f8            |
| updated_at        | 2019-01-16T00:55:38                         |
+-------------------+---------------------------------------------+

Next, create the self-service (private) network. A small error came up here:

adminrc@root@controller:~$source demorc
demorc@root@controller:~$neutron net-create selfservice
Unable to create the network. No tenant network is available for allocation.
Neutron server returns request_ids: ['req-c2deaa15-c2eb-48b7-9510-644b3ae4f686']
# Troubleshooting
demorc@root@controller:~$ neutron net-list
+--------------------------------------+----------+--------------------------------------------------+
| id                                   | name     | subnets                                          |
+--------------------------------------+----------+--------------------------------------------------+
| a600cdf0-352a-4c85-b90a-eba0ee4282fd | provider | b19d9f26-e32e-4bb8-a53e-55eb1154cefe 10.0.3.0/24 |
+--------------------------------------+----------+--------------------------------------------------+
demorc@root@controller:~$neutron subnet-list
+--------------------------------------+----------+-------------+---------------------------------------------+
| id                                   | name     | cidr        | allocation_pools                            |
+--------------------------------------+----------+-------------+---------------------------------------------+
| b19d9f26-e32e-4bb8-a53e-55eb1154cefe | provider | 10.0.3.0/24 | {"start": "10.0.3.50", "end": "10.0.3.254"} |
+--------------------------------------+----------+-------------+---------------------------------------------+
demorc@root@controller:~$tail  /var/log/neutron/neutron-server.log
2019-01-16 08:57:14.834 18459 ERROR neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/managers.py", line 209, in create_network_segments
2019-01-16 08:57:14.834 18459 ERROR neutron.api.v2.resource     segment = self._allocate_tenant_net_segment(session)
2019-01-16 08:57:14.834 18459 ERROR neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/managers.py", line 245, in _allocate_tenant_net_segment
2019-01-16 08:57:14.834 18459 ERROR neutron.api.v2.resource     raise exc.NoNetworkAvailable()
2019-01-16 08:57:14.834 18459 ERROR neutron.api.v2.resource NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.
2019-01-16 08:57:14.834 18459 ERROR neutron.api.v2.resource
2019-01-16 08:57:14.846 18459 INFO neutron.wsgi [req-c2deaa15-c2eb-48b7-9510-644b3ae4f686 c4de9fac882740838aa26e9119b30cb9 ffc560f6a2604c3896df922115c6fc2a - - -] 10.0.3.10 - - [16/Jan/2019 08:57:14] "POST /v2.0/networks.json HTTP/1.1" 503 384 0.565548
2019-01-16 09:00:32.517 18459 INFO neutron.wsgi [req-d15a0c85-1248-4744-9989-6580c476d12a c4de9fac882740838aa26e9119b30cb9 ffc560f6a2604c3896df922115c6fc2a - - -] 10.0.3.10 - - [16/Jan/2019 09:00:32] "GET /v2.0/networks.json HTTP/1.1" 200 752 0.559720
2019-01-16 09:00:32.636 18459 INFO neutron.wsgi [req-6d8fe235-340d-4fe5-897c-f8eee16e3b5e c4de9fac882740838aa26e9119b30cb9 ffc560f6a2604c3896df922115c6fc2a - - -] 10.0.3.10 - - [16/Jan/2019 09:00:32] "GET /v2.0/subnets.json?fields=id&fields=cidr&id=b19d9f26-e32e-4bb8-a53e-55eb1154cefe HTTP/1.1" 200 297 0.115075
2019-01-16 09:01:19.646 18459 INFO neutron.wsgi [req-891d5624-a86e-4374-a81d-641e5cfc0043 c4de9fac882740838aa26e9119b30cb9 ffc560f6a2604c3896df922115c6fc2a - - -] 10.0.3.10 - - [16/Jan/2019 09:01:19] "GET /v2.0/subnets.json HTTP/1.1" 200 776 0.436610
demorc@root@controller:~$
demorc@root@controller:~$vim /etc/neutron/plugins/ml2/ml2_conf.ini
# Make sure vni_ranges = 1:1000 sits under the [ml2_type_vxlan] section, not under some other section
[ml2_type_vxlan]

vni_ranges = 1:1000
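The pitfall here is that an option only takes effect under the right INI section, and plain grep cannot tell which section a matching line belongs to. A small awk sketch that reports the section a key lives under (the sample file is fabricated; against the real file you would point it at /etc/neutron/plugins/ml2/ml2_conf.ini):

```shell
# Track the current [section] while scanning, print it when the key is found.
conf=$(mktemp)
cat > "$conf" <<'EOF'
[ml2]
tenant_network_types = vxlan
[ml2_type_vxlan]
vni_ranges = 1:1000
EOF
section=$(awk -F'[][]' '/^\[/ {s=$2} /^vni_ranges/ {print s}' "$conf")
echo "$section"   # ml2_type_vxlan
rm -f "$conf"
```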

After restarting the nova and neutron services, try creating the network again

demorc@root@controller:~$grep -rHn "vni_ranges" /etc/neutron/
/etc/neutron/plugins/ml2/ml2_conf.ini:186:vni_ranges = 1:1000
/etc/neutron/plugins/ml2/ml2_conf.ini:206:#vni_ranges =
/etc/neutron/plugins/ml2/ml2_conf.ini.bak:164:#vni_ranges =
/etc/neutron/plugins/ml2/ml2_conf.ini.bak:206:#vni_ranges =
demorc@root@controller:~$vim /etc/neutron/plugins/ml2/ml2_conf.ini
demorc@root@controller:~$vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
demorc@root@controller:~$vim /etc/neutron/plugins/ml2/ml2_conf.ini
demorc@root@controller:~$ls /etc/init.d/ | grep nova | xargs -i service {} restart
nova-api stop/waiting
nova-api start/running, process 5094
nova-compute stop/waiting
nova-compute start/running, process 5151
nova-conductor stop/waiting
nova-conductor start/running, process 5213
nova-consoleauth stop/waiting
nova-consoleauth start/running, process 5263
nova-novncproxy stop/waiting
nova-novncproxy start/running, process 5282
nova-scheduler stop/waiting
nova-scheduler start/running, process 5325
demorc@root@controller:~$ls /etc/init.d/ | grep neutron | xargs -i service {} restart
neutron-dhcp-agent stop/waiting
neutron-dhcp-agent start/running, process 5655
neutron-l3-agent stop/waiting
neutron-l3-agent start/running, process 5697
neutron-linuxbridge-agent stop/waiting
neutron-linuxbridge-agent start/running, process 5719
stop: Unknown instance:
start: Job failed to start
neutron-metadata-agent stop/waiting
neutron-metadata-agent start/running, process 5782
neutron-openvswitch-agent stop/waiting
neutron-openvswitch-agent start/running, process 5814
neutron-ovs-cleanup stop/waiting
neutron-ovs-cleanup start/running
neutron-server stop/waiting
neutron-server start/running, process 5994
demorc@root@controller:~$neutron net-list
+--------------------------------------+----------+--------------------------------------------------+
| id                                   | name     | subnets                                          |
+--------------------------------------+----------+--------------------------------------------------+
| b7369bde-908a-4dc4-b4af-a4bc5e1a2b8e | provider | 68f14924-15c4-4b0d-bcfc-011fd5a6de12 10.0.3.0/24 |
+--------------------------------------+----------+--------------------------------------------------+
demorc@root@controller:~$neutron subnet-list
+--------------------------------------+----------+-------------+---------------------------------------------+
| id                                   | name     | cidr        | allocation_pools                            |
+--------------------------------------+----------+-------------+---------------------------------------------+
| 68f14924-15c4-4b0d-bcfc-011fd5a6de12 | provider | 10.0.3.0/24 | {"start": "10.0.3.50", "end": "10.0.3.254"} |
+--------------------------------------+----------+-------------+---------------------------------------------+
demorc@root@controller:~$neutron net-create selfservice
Created a new network:
+-------------------------+--------------------------------------+
| Field                   | Value                                |
+-------------------------+--------------------------------------+
| admin_state_up          | True                                 |
| availability_zone_hints |                                      |
| availability_zones      |                                      |
| created_at              | 2019-01-16T01:39:37                  |
| description             |                                      |
| id                      | 66eb76af-e111-4cae-adc6-2df95ad29faf |
| ipv4_address_scope      |                                      |
| ipv6_address_scope      |                                      |
| mtu                     | 1450                                 |
| name                    | selfservice                          |
| port_security_enabled   | True                                 |
| router:external         | False                                |
| shared                  | False                                |
| status                  | ACTIVE                               |
| subnets                 |                                      |
| tags                    |                                      |
| tenant_id               | ffc560f6a2604c3896df922115c6fc2a     |
| updated_at              | 2019-01-16T01:39:37                  |
+-------------------------+--------------------------------------+

Create a subnet

demorc@root@controller:~$neutron subnet-create --name selfservice  --dns-nameserver 114.114.114.114 --gateway 192.168.56.1  selfservice 192.168.56.0/24
Created a new subnet:
+-------------------+----------------------------------------------------+
| Field             | Value                                              |
+-------------------+----------------------------------------------------+
| allocation_pools  | {"start": "192.168.56.2", "end": "192.168.56.254"} |
| cidr              | 192.168.56.0/24                                    |
| created_at        | 2019-01-16T01:45:08                                |
| description       |                                                    |
| dns_nameservers   | 114.114.114.114                                    |
| enable_dhcp       | True                                               |
| gateway_ip        | 192.168.56.1                                       |
| host_routes       |                                                    |
| id                | 9c8f506c-46bd-44d8-a8a5-e160bf2ddf93               |
| ip_version        | 4                                                  |
| ipv6_address_mode |                                                    |
| ipv6_ra_mode      |                                                    |
| name              | selfservice                                        |
| network_id        | 66eb76af-e111-4cae-adc6-2df95ad29faf               |
| subnetpool_id     |                                                    |
| tenant_id         | ffc560f6a2604c3896df922115c6fc2a                   |
| updated_at        | 2019-01-16T01:45:08                                |
+-------------------+----------------------------------------------------+

Create a second subnet

demorc@root@controller:~$neutron subnet-create --name selfservice  --dns-nameserver 114.114.114.114 --gateway 172.16.1.1  selfservice 172.16.1.0/24

Created a new subnet:
+-------------------+------------------------------------------------+
| Field             | Value                                          |
+-------------------+------------------------------------------------+
| allocation_pools  | {"start": "172.16.1.2", "end": "172.16.1.254"} |
| cidr              | 172.16.1.0/24                                  |
| created_at        | 2019-01-16T01:48:32                            |
| description       |                                                |
| dns_nameservers   | 114.114.114.114                                |
| enable_dhcp       | True                                           |
| gateway_ip        | 172.16.1.1                                     |
| host_routes       |                                                |
| id                | ec079b98-a585-40c0-9b4c-340c943642eb           |
| ip_version        | 4                                              |
| ipv6_address_mode |                                                |
| ipv6_ra_mode      |                                                |
| name              | selfservice                                    |
| network_id        | 66eb76af-e111-4cae-adc6-2df95ad29faf           |
| subnetpool_id     |                                                |
| tenant_id         | ffc560f6a2604c3896df922115c6fc2a               |
| updated_at        | 2019-01-16T01:48:32                            |
+-------------------+------------------------------------------------+

Create a router

demorc@root@controller:~$source  adminrc
adminrc@root@controller:~$neutron net-update provider --router:external
Updated network: provider
adminrc@root@controller:~$source demorc
demorc@root@controller:~$neutron router-create router
Created a new router:
+-------------------------+--------------------------------------+
| Field                   | Value                                |
+-------------------------+--------------------------------------+
| admin_state_up          | True                                 |
| availability_zone_hints |                                      |
| availability_zones      |                                      |
| description             |                                      |
| external_gateway_info   |                                      |
| id                      | 8770421b-2f3b-4d33-9acf-562b36b5b31b |
| name                    | router                               |
| routes                  |                                      |
| status                  | ACTIVE                               |
| tenant_id               | ffc560f6a2604c3896df922115c6fc2a     |
+-------------------------+--------------------------------------+
demorc@root@controller:~$neutron router-list
+--------------------------------------+--------+-----------------------+
| id                                   | name   | external_gateway_info |
+--------------------------------------+--------+-----------------------+
| 8770421b-2f3b-4d33-9acf-562b36b5b31b | router | null                  |
+--------------------------------------+--------+-----------------------+

Add a private subnet interface on the router

demorc@root@controller:~$neutron router-interface-add router selfservice
Multiple subnet matches found for name 'selfservice', use an ID to be more specific.
demorc@root@controller:~$neutron subnet-list
+--------------------------------------+-------------+-----------------+----------------------------------------------------+
| id                                   | name        | cidr            | allocation_pools                                   |
+--------------------------------------+-------------+-----------------+----------------------------------------------------+
| 68f14924-15c4-4b0d-bcfc-011fd5a6de12 | provider    | 10.0.3.0/24     | {"start": "10.0.3.50", "end": "10.0.3.254"}        |
| 9c8f506c-46bd-44d8-a8a5-e160bf2ddf93 | selfservice | 192.168.56.0/24 | {"start": "192.168.56.2", "end": "192.168.56.254"} |
| ec079b98-a585-40c0-9b4c-340c943642eb | selfservice | 172.16.1.0/24   | {"start": "172.16.1.2", "end": "172.16.1.254"}     |
+--------------------------------------+-------------+-----------------+----------------------------------------------------+
demorc@root@controller:~$neutron router-interface-add router 9c8f506c-46bd-44d8-a8a5-e160bf2ddf93
Added interface 329ffea0-b8f2-4724-a6b7-19556a312b75 to router router.
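Since both subnets are named selfservice, the ID has to be picked out by CIDR. One way to script that selection from the subnet-list output (the rows below are pasted from this session; a live run would pipe `neutron subnet-list` through the same awk instead of a literal string):

```shell
# Sample rows: id, name, cidr (taken from the subnet-list above).
list='68f14924-15c4-4b0d-bcfc-011fd5a6de12 provider 10.0.3.0/24
9c8f506c-46bd-44d8-a8a5-e160bf2ddf93 selfservice 192.168.56.0/24
ec079b98-a585-40c0-9b4c-340c943642eb selfservice 172.16.1.0/24'
# Select the ID whose name AND cidr both match, sidestepping the name clash.
subnet_id=$(printf '%s\n' "$list" \
  | awk '$2 == "selfservice" && $3 == "192.168.56.0/24" {print $1}')
echo "$subnet_id"
# then: neutron router-interface-add router "$subnet_id"
```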

Add a gateway on the public (provider) network to the router

demorc@root@controller:~$neutron router-gateway-set router provider
Set gateway for router router

Verification

List the network namespaces

demorc@root@controller:~$source adminrc
adminrc@root@controller:~$ip netns
qrouter-8770421b-2f3b-4d33-9acf-562b36b5b31b
qdhcp-66eb76af-e111-4cae-adc6-2df95ad29faf
qdhcp-b7369bde-908a-4dc4-b4af-a4bc5e1a2b8e

List the ports on the router to determine the gateway IP address on the provider network

adminrc@root@controller:~$neutron router-port-list router
+--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                           |
+--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------+
| 329ffea0-b8f2-4724-a6b7-19556a312b75 |      | fa:16:3e:36:8e:3c | {"subnet_id": "9c8f506c-46bd-44d8-a8a5-e160bf2ddf93", "ip_address": "192.168.56.1"} |
| a0b37442-a41b-4526-b492-59f05637b371 |      | fa:16:3e:02:33:fd | {"subnet_id": "68f14924-15c4-4b0d-bcfc-011fd5a6de12", "ip_address": "10.0.3.51"}    |
+--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------+

Ping test

adminrc@root@controller:~$ping -c 2 192.168.56.1
PING 192.168.56.1 (192.168.56.1) 56(84) bytes of data.
64 bytes from 192.168.56.1: icmp_seq=1 ttl=64 time=0.221 ms
64 bytes from 192.168.56.1: icmp_seq=2 ttl=64 time=0.237 ms

--- 192.168.56.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.221/0.229/0.237/0.008 ms
# Note: two subnets were created above, 192.168.56.0/24 and 172.16.1.0/24. When adding the private subnet interface to the router, I used the 192.168.56.0/24 subnet, so only the 192.168.56.x gateway can be pinged here; the 172.16.1.x one cannot.

Create an instance

# The environment is still the one from the provider-network setup, so first delete the m1.nano flavor created earlier and recreate it (changing to another flavor might also work; I did not try)
adminrc@root@controller:~$openstack flavor list
+----+-----------+-------+------+-----------+-------+-----------+
| ID | Name      |   RAM | Disk | Ephemeral | VCPUs | Is Public |
+----+-----------+-------+------+-----------+-------+-----------+
| 0  | m1.nano   |    64 |    1 |         0 |     1 | True      |
| 1  | m1.tiny   |   512 |    1 |         0 |     1 | True      |
| 2  | m1.small  |  2048 |   20 |         0 |     1 | True      |
| 3  | m1.medium |  4096 |   40 |         0 |     2 | True      |
| 4  | m1.large  |  8192 |   80 |         0 |     4 | True      |
| 5  | m1.xlarge | 16384 |  160 |         0 |     8 | True      |
+----+-----------+-------+------+-----------+-------+-----------+
adminrc@root@controller:~$openstack flavor delete m1.nano
adminrc@root@controller:~$openstack flavor list
+----+-----------+-------+------+-----------+-------+-----------+
| ID | Name      |   RAM | Disk | Ephemeral | VCPUs | Is Public |
+----+-----------+-------+------+-----------+-------+-----------+
| 1  | m1.tiny   |   512 |    1 |         0 |     1 | True      |
| 2  | m1.small  |  2048 |   20 |         0 |     1 | True      |
| 3  | m1.medium |  4096 |   40 |         0 |     2 | True      |
| 4  | m1.large  |  8192 |   80 |         0 |     4 | True      |
| 5  | m1.xlarge | 16384 |  160 |         0 |     8 | True      |
+----+-----------+-------+------+-----------+-------+-----------+
adminrc@root@controller:~$openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano
+----------------------------+---------+
| Field                      | Value   |
+----------------------------+---------+
| OS-FLV-DISABLED:disabled   | False   |
| OS-FLV-EXT-DATA:ephemeral  | 0       |
| disk                       | 1       |
| id                         | 0       |
| name                       | m1.nano |
| os-flavor-access:is_public | True    |
| ram                        | 64      |
| rxtx_factor                | 1.0     |
| swap                       |         |
| vcpus                      | 1       |
+----------------------------+---------+
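The delete-and-recreate above can be made idempotent. A minimal sketch, assuming the `adminrc` environment is sourced; the `flavor_exists` helper is hypothetical, and the `echo` guard means the create command is only printed (drop it to execute):

```shell
# Hypothetical helper: recreate m1.nano only if it is not already defined.
# The leading `echo` is a safety guard; remove it to actually run the command.
flavor_exists() {
  openstack flavor list -f value -c Name 2>/dev/null | grep -qx "$1"
}
flavor_exists m1.nano || \
  echo openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano
```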

Generate an SSH key pair

adminrc@root@controller:~$ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
/root/.ssh/id_rsa already exists.
Overwrite (y/n)? y
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
95:be:58:f6:be:9b:66:9b:db:54:e1:ee:1a:fb:26:b1 root@controller
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
|           .     |
|          o    . |
|         o    . .|
|        S +    ..|
|         + o ... |
|        . . ..+. |
|           .oE+. |
|           oOB*o |
+-----------------+
adminrc@root@controller:~$source demorc
demorc@root@controller:~$openstack keypair create --public-key /root/.ssh/id_rsa.pub mykey
+-------------+-------------------------------------------------+
| Field       | Value                                           |
+-------------+-------------------------------------------------+
| fingerprint | 95:be:58:f6:be:9b:66:9b:db:54:e1:ee:1a:fb:26:b1 |
| name        | mykey                                           |
| user_id     | c4de9fac882740838aa26e9119b30cb9                |
+-------------+-------------------------------------------------+
demorc@root@controller:~$openstack keypair list
+-------+-------------------------------------------------+
| Name  | Fingerprint                                     |
+-------+-------------------------------------------------+
| mykey | 95:be:58:f6:be:9b:66:9b:db:54:e1:ee:1a:fb:26:b1 |
+-------+-------------------------------------------------+

Add security group rules

# Allow ICMP (ping)
demorc@root@controller:~$openstack security group rule create --proto icmp default
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| id                    | b76e25be-c17e-48b3-8bbd-8505c3637900 |
| ip_protocol           | icmp                                 |
| ip_range              | 0.0.0.0/0                            |
| parent_group_id       | 82cd1a2f-5eaa-4616-a6d4-480daf27cf3d |
| port_range            |                                      |
| remote_security_group |                                      |
+-----------------------+--------------------------------------+
# Allow SSH access
demorc@root@controller:~$openstack security group rule create --proto tcp --dst-port 22 default
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| id                    | 32096d51-9e2a-45f2-a65a-27ef3c1bb2b5 |
| ip_protocol           | tcp                                  |
| ip_range              | 0.0.0.0/0                            |
| parent_group_id       | 82cd1a2f-5eaa-4616-a6d4-480daf27cf3d |
| port_range            | 22:22                                |
| remote_security_group |                                      |
+-----------------------+--------------------------------------+

Launch the instance

demorc@root@controller:~$openstack flavor list
+----+-----------+-------+------+-----------+-------+-----------+
| ID | Name      |   RAM | Disk | Ephemeral | VCPUs | Is Public |
+----+-----------+-------+------+-----------+-------+-----------+
| 0  | m1.nano   |    64 |    1 |         0 |     1 | True      |
| 1  | m1.tiny   |   512 |    1 |         0 |     1 | True      |
| 2  | m1.small  |  2048 |   20 |         0 |     1 | True      |
| 3  | m1.medium |  4096 |   40 |         0 |     2 | True      |
| 4  | m1.large  |  8192 |   80 |         0 |     4 | True      |
| 5  | m1.xlarge | 16384 |  160 |         0 |     8 | True      |
+----+-----------+-------+------+-----------+-------+-----------+
demorc@root@controller:~$openstack image list
+--------------------------------------+---------+--------+
| ID                                   | Name    | Status |
+--------------------------------------+---------+--------+
| 39d73bcf-e60b-4caf-8469-cca17de00d7e | cirrors | active |
+--------------------------------------+---------+--------+
demorc@root@controller:~$openstack network list
+--------------------------------------+-------------+----------------------------------------------------------------------------+
| ID                                   | Name        | Subnets                                                                    |
+--------------------------------------+-------------+----------------------------------------------------------------------------+
| 66eb76af-e111-4cae-adc6-2df95ad29faf | selfservice | 9c8f506c-46bd-44d8-a8a5-e160bf2ddf93, ec079b98-a585-40c0-9b4c-340c943642eb |
| b7369bde-908a-4dc4-b4af-a4bc5e1a2b8e | provider    | 68f14924-15c4-4b0d-bcfc-011fd5a6de12                                       |
+--------------------------------------+-------------+----------------------------------------------------------------------------+
demorc@root@controller:~$openstack security group list
+--------------------------------------+---------+------------------------+----------------------------------+
| ID                                   | Name    | Description            | Project                          |
+--------------------------------------+---------+------------------------+----------------------------------+
| 82cd1a2f-5eaa-4616-a6d4-480daf27cf3d | default | Default security group | ffc560f6a2604c3896df922115c6fc2a |
+--------------------------------------+---------+------------------------+----------------------------------+
# Make sure each of the items above is available
# the flavor used is m1.nano
# net-id is the ID of the selfservice network
demorc@root@controller:~$openstack server create --flavor m1.nano --image cirrors --nic net-id=66eb76af-e111-4cae-adc6-2df95ad29faf --security-group default --key-name mykey selfservice-instance
+--------------------------------------+------------------------------------------------+
| Field                                | Value                                          |
+--------------------------------------+------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                         |
| OS-EXT-AZ:availability_zone          |                                                |
| OS-EXT-STS:power_state               | 0                                              |
| OS-EXT-STS:task_state                | scheduling                                     |
| OS-EXT-STS:vm_state                  | building                                       |
| OS-SRV-USG:launched_at               | None                                           |
| OS-SRV-USG:terminated_at             | None                                           |
| accessIPv4                           |                                                |
| accessIPv6                           |                                                |
| addresses                            |                                                |
| adminPass                            | uFD7TkvHjsax                                   |
| config_drive                         |                                                |
| created                              | 2019-01-16T02:25:45Z                           |
| flavor                               | m1.nano (0)                                    |
| hostId                               |                                                |
| id                                   | 4c954e71-8e73-49e1-a67f-20c007d582d3           |
| image                                | cirrors (39d73bcf-e60b-4caf-8469-cca17de00d7e) |
| key_name                             | mykey                                          |
| name                                 | selfservice-instance                           |
| os-extended-volumes:volumes_attached | []                                             |
| progress                             | 0                                              |
| project_id                           | ffc560f6a2604c3896df922115c6fc2a               |
| properties                           |                                                |
| security_groups                      | [{u'name': u'default'}]                        |
| status                               | BUILD                                          |
| updated                              | 2019-01-16T02:25:46Z                           |
| user_id                              | c4de9fac882740838aa26e9119b30cb9               |
+--------------------------------------+------------------------------------------------+

Check the instance status

demorc@root@controller:~$openstack server list
+--------------------------------------+----------------------+--------+--------------------------+
| ID                                   | Name                 | Status | Networks                 |
+--------------------------------------+----------------------+--------+--------------------------+
| 4c954e71-8e73-49e1-a67f-20c007d582d3 | selfservice-instance | ACTIVE | selfservice=192.168.56.3 |
+--------------------------------------+----------------------+--------+--------------------------+

Check with nova list

demorc@root@controller:~$nova list
+--------------------------------------+----------------------+--------+------------+-------------+--------------------------+
| ID                                   | Name                 | Status | Task State | Power State | Networks                 |
+--------------------------------------+----------------------+--------+------------+-------------+--------------------------+
| 4c954e71-8e73-49e1-a67f-20c007d582d3 | selfservice-instance | ACTIVE | -          | Running     | selfservice=192.168.56.3 |
+--------------------------------------+----------------------+--------+------------+-------------+--------------------------+

Stop, start, and delete the instance

demorc@root@controller:~$openstack server list
+--------------------------------------+----------------------+---------+--------------------------+
| ID                                   | Name                 | Status  | Networks                 |
+--------------------------------------+----------------------+---------+--------------------------+
| 4c954e71-8e73-49e1-a67f-20c007d582d3 | selfservice-instance | SHUTOFF | selfservice=192.168.56.3 |
+--------------------------------------+----------------------+---------+--------------------------+
demorc@root@controller:~$openstack server list
+--------------------------------------+----------------------+--------+--------------------------+
| ID                                   | Name                 | Status | Networks                 |
+--------------------------------------+----------------------+--------+--------------------------+
| 4c954e71-8e73-49e1-a67f-20c007d582d3 | selfservice-instance | ACTIVE | selfservice=192.168.56.3 |
+--------------------------------------+----------------------+--------+--------------------------+
demorc@root@controller:~$openstack server stop 4c954e71-8e73-49e1-a67f-20c007d582d3
demorc@root@controller:~$openstack server delete 4c954e71-8e73-49e1-a67f-20c007d582d3


Access the instance via the virtual console

demorc@root@controller:~$openstack console url show selfservice-instance
+-------+------------------------------------------------------------------------------------+
| Field | Value                                                                              |
+-------+------------------------------------------------------------------------------------+
| type  | novnc                                                                              |
| url   | http://192.168.56.10:6080/vnc_auto.html?token=82177d68-c9fb-4c3c-85d6-6d42db50c864 |
+-------+------------------------------------------------------------------------------------+

Paste the URL above directly into a browser to open the console.


Since this is a single-node install, pinging the instance requires entering the router's network namespace:

demorc@root@controller:~$ip netns
qrouter-8770421b-2f3b-4d33-9acf-562b36b5b31b  # copy this line
qdhcp-66eb76af-e111-4cae-adc6-2df95ad29faf
qdhcp-b7369bde-908a-4dc4-b4af-a4bc5e1a2b8e
demorc@root@controller:~$ip netns exec qrouter-8770421b-2f3b-4d33-9acf-562b36b5b31b ip a | grep "inet"
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
    inet 192.168.56.1/24 brd 192.168.56.255 scope global qr-329ffea0-b8
    inet6 fe80::f816:3eff:fe36:8e3c/64 scope link
    inet 10.0.3.51/24 brd 10.0.3.255 scope global qg-a0b37442-a4
    inet6 fe80::f816:3eff:fe02:33fd/64 scope link
demorc@root@controller:~$ip netns exec qrouter-8770421b-2f3b-4d33-9acf-562b36b5b31b ping 192.168.56.3
PING 192.168.56.3 (192.168.56.3) 56(84) bytes of data.
64 bytes from 192.168.56.3: icmp_seq=1 ttl=64 time=8.95 ms
64 bytes from 192.168.56.3: icmp_seq=2 ttl=64 time=0.610 ms
64 bytes from 192.168.56.3: icmp_seq=3 ttl=64 time=0.331 ms
64 bytes from 192.168.56.3: icmp_seq=4 ttl=64 time=0.344 ms
^C
--- 192.168.56.3 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3000ms
rtt min/avg/max/mdev = 0.331/2.560/8.955/3.693 ms
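Without a floating IP, an SSH connection to the instance likewise has to go through the router namespace. A sketch using the namespace ID and fixed IP from the output above; the `echo` guard only prints the command (drop it to execute):

```shell
# Namespace ID and instance fixed IP taken from the output above.
NS=qrouter-8770421b-2f3b-4d33-9acf-562b36b5b31b
FIXED_IP=192.168.56.3
echo "ip netns exec $NS ssh cirros@$FIXED_IP"
```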

Create a floating IP for remote access

demorc@root@controller:~$source adminrc
adminrc@root@controller:~$openstack ip floating create provider
+-------------+--------------------------------------+
| Field       | Value                                |
+-------------+--------------------------------------+
| fixed_ip    | None                                 |
| id          | 00315ef2-1684-42ae-825b-0f94ed098de8 |
| instance_id | None                                 |
| ip          | 10.0.3.52                            |
| pool        | provider                             |
+-------------+--------------------------------------+

Assign a floating IP to the instance

List floating IPs

adminrc@root@controller:~$openstack ip floating list
+--------------------------------------+---------------------+------------------+------+
| ID                                   | Floating IP Address | Fixed IP Address | Port |
+--------------------------------------+---------------------+------------------+------+
| 00315ef2-1684-42ae-825b-0f94ed098de8 | 10.0.3.52           | None             | None |
+--------------------------------------+---------------------+------------------+------+
Add the floating IP to the instance
adminrc@root@controller:~$openstack ip floating add 10.0.3.52  4c954e71-8e73-49e1-a67f-20c007d582d3
Unable to associate floating IP 10.0.3.52 to fixed IP 192.168.56.3 for instance 4c954e71-8e73-49e1-a67f-20c007d582d3. Error: Bad floatingip request: Port 454451d2-6c5d-411c-8ad0-d6f5908259a6 is associated with a different tenant than Floating IP 00315ef2-1684-42ae-825b-0f94ed098de8 and therefore cannot be bound..
Neutron server returns request_ids: ['req-58f751d8-ab56-41d3-bb99-de2307ed9c67'] (HTTP 400) (Request-ID: req-330493bd-f040-4b24-a08b-8384b162ea60)
# The error occurs because a floating IP created as the admin user (adminrc) cannot be bound to an instance owned by the demo user
# Fix: delete the floating IP and recreate it as the demo user (demorc)
adminrc@root@controller:~$ openstack ip floating list
+--------------------------------------+---------------------+------------------+------+
| ID                                   | Floating IP Address | Fixed IP Address | Port |
+--------------------------------------+---------------------+------------------+------+
| 00315ef2-1684-42ae-825b-0f94ed098de8 | 10.0.3.52           | None             | None |
+--------------------------------------+---------------------+------------------+------+
adminrc@root@controller:~$openstack ip floating delete 00315ef2-1684-42ae-825b-0f94ed098de8
adminrc@root@controller:~$openstack ip floating list

adminrc@root@controller:~$source demorc
demorc@root@controller:~$openstack ip floating create provider
+-------------+--------------------------------------+
| Field       | Value                                |
+-------------+--------------------------------------+
| fixed_ip    | None                                 |
| id          | 72d37905-4e1d-45a4-a010-a041968a0220 |
| instance_id | None                                 |
| ip          | 10.0.3.53                            |
| pool        | provider                             |
+-------------+--------------------------------------+
demorc@root@controller:~$openstack ip floating add 10.0.3.53 selfservice-instance
demorc@root@controller:~$openstack server list
+--------------------------------------+----------------------+--------+-------------------------------------+
| ID                                   | Name                 | Status | Networks                            |
+--------------------------------------+----------------------+--------+-------------------------------------+
| 4c954e71-8e73-49e1-a67f-20c007d582d3 | selfservice-instance | ACTIVE | selfservice=192.168.56.3, 10.0.3.53 |
+--------------------------------------+----------------------+--------+-------------------------------------+

Test the floating IP

demorc@root@controller:~$ping -c 2 10.0.3.53
PING 10.0.3.53 (10.0.3.53) 56(84) bytes of data.
64 bytes from 10.0.3.53: icmp_seq=1 ttl=63 time=3.40 ms
64 bytes from 10.0.3.53: icmp_seq=2 ttl=63 time=0.415 ms

--- 10.0.3.53 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.415/1.912/3.409/1.497 ms
demorc@root@controller:~$su -
root@controller:~# ssh cirros@10.0.3.53
The authenticity of host '10.0.3.53 (10.0.3.53)' can't be established.
RSA key fingerprint is e2:77:a9:e6:90:87:a9:db:14:cb:95:5c:81:9a:4e:c7.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.0.3.53' (RSA) to the list of known hosts.
$ ifconfig
eth0      Link encap:Ethernet  HWaddr FA:16:3E:30:6D:63
          inet addr:192.168.56.3  Bcast:192.168.56.255  Mask:255.255.255.0
          inet6 addr: fe80::f816:3eff:fe30:6d63/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
          RX packets:114 errors:0 dropped:0 overruns:0 frame:0
          TX packets:142 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:15412 (15.0 KiB)  TX bytes:15024 (14.6 KiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

$ ping -c 2 www.qq.com
PING www.qq.com (61.129.7.47): 56 data bytes
64 bytes from 61.129.7.47: seq=0 ttl=52 time=7.461 ms
64 bytes from 61.129.7.47: seq=1 ttl=52 time=6.463 ms

--- www.qq.com ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 6.463/6.962/7.461 ms
$ exit
Connection to 10.0.3.53 closed.

Why floating IPs matter: an instance on a private (self-service) network has no address on the public network. Binding a floating IP from the provider network gives the instance a public address, providing two-way connectivity with the public network (as the SSH session above shows).
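Under the hood, the floating IP is implemented as 1:1 NAT inside the router namespace. A sketch to inspect it, assuming the namespace ID listed earlier and the floating IP bound above; the `echo` guard only prints the command (drop it to execute):

```shell
# Router namespace from `ip netns`; 10.0.3.53 is the floating IP bound above.
NS=qrouter-8770421b-2f3b-4d33-9acf-562b36b5b31b
echo "ip netns exec $NS iptables -t nat -S | grep 10.0.3.53"
```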

demorc@root@controller:~$nova list
+--------------------------------------+----------------------+--------+------------+-------------+-------------------------------------+
| ID                                   | Name                 | Status | Task State | Power State | Networks                            |
+--------------------------------------+----------------------+--------+------------+-------------+-------------------------------------+
| 4c954e71-8e73-49e1-a67f-20c007d582d3 | selfservice-instance | ACTIVE | -          | Running     | selfservice=192.168.56.3, 10.0.3.53 |
+--------------------------------------+----------------------+--------+------------+-------------+-------------------------------------+
demorc@root@controller:~$ping -c 2 10.0.3.53
PING 10.0.3.53 (10.0.3.53) 56(84) bytes of data.
64 bytes from 10.0.3.53: icmp_seq=1 ttl=63 time=3.31 ms
64 bytes from 10.0.3.53: icmp_seq=2 ttl=63 time=0.550 ms

--- 10.0.3.53 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.550/1.934/3.319/1.385 ms
demorc@root@controller:~$ssh -i /root/.ssh/id_rsa cirros@10.0.3.53
$ ifconfig
eth0      Link encap:Ethernet  HWaddr FA:16:3E:30:6D:63
          inet addr:192.168.56.3  Bcast:192.168.56.255  Mask:255.255.255.0
          inet6 addr: fe80::f816:3eff:fe30:6d63/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
          RX packets:249 errors:0 dropped:0 overruns:0 frame:0
          TX packets:235 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:30532 (29.8 KiB)  TX bytes:27110 (26.4 KiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

$ exit
Connection to 10.0.3.53 closed.

Install the OpenStack dashboard

root@controller:~# apt-get install -y openstack-dashboard

Configure the dashboard

root@controller:~# cp /etc/openstack-dashboard/local_settings.py{,.bak}
root@controller:~# vim /etc/openstack-dashboard/local_settings.py
OPENSTACK_HOST = "controller"
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v2.0" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "_member_"

OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = '*'

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

CACHES = {
    'default': {
         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
         'LOCATION': '10.0.3.10:11211',
    }
}

OPENSTACK_API_VERSIONS = {
    "data-processing": 1.1,
    "identity": 3,
    "volume": 2,
    "compute": 2,
}

OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = 'default'

OPENSTACK_NEUTRON_NETWORK = {
    'enable_router': True,
    'enable_quotas': True,
    'enable_ipv6': True,
    'enable_distributed_router': False,
    'enable_ha_router': False,
    'enable_lb': True,
    'enable_firewall': True,
    'enable_vpn': True,
    'enable_fip_topology_check': True,

    'default_ipv4_subnet_pool_label': None,

    'default_ipv6_subnet_pool_label': None,
    'profile_support': None,
    'supported_provider_types': ['*'],
    'supported_vnic_types': ['*'],
}

TIME_ZONE = "Asia/Shanghai"

Reload apache2

root@controller:~# service apache2 reload
 * Reloading web server apache2                                *
root@controller:~# echo $?
0

Test in a browser

# If you forget the admin password, check this file
openstack@controller:~$ cat adminrc
unset OS_TOKEN
unset OS_URL
unset OS_IDENTITY_API_VERSION

export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
export PS1="adminrc@\u@\h:\w\$"

Verify the demo user

View the network topology as the demo user

View related information

View router information

View related information as admin

Install Cinder

First add a new disk to the virtual machine. The steps are not shown here; just accept the defaults all the way through.

Prepare the Cinder installation environment

root@controller:~# mysql -uroot -p123456
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 43
Server version: 5.5.61-MariaDB-1ubuntu0.14.04.1 (Ubuntu)

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> create database cinder;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> grant all privileges on cinder.* to 'cinder'@'localhost' identified by '123456';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> grant all privileges on cinder.* to 'cinder'@'%' identified by '123456';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> \q
Bye

Switch to the adminrc environment

# Create a cinder user
root@controller:~# source adminrc
adminrc@root@controller:~$openstack user create --domain default --password cinder cinder
+-----------+----------------------------------+
| Field     | Value                            |
+-----------+----------------------------------+
| domain_id | 1495769d2bbb44d192eee4c9b2f91ca3 |
| enabled   | True                             |
| id        | 74153e9abf694f2f9ecd2203b71e2529 |
| name      | cinder                           |
+-----------+----------------------------------+
Add the admin role to the cinder user
adminrc@root@controller:~$openstack role add --project service --user cinder admin
Create the cinder and cinderv2 service entities
adminrc@root@controller:~$openstack service create --name cinder --description "OpenStack Block Storage" volume
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Block Storage          |
| enabled     | True                             |
| id          | 3f13455162a145e28096ce110be1213e |
| name        | cinder                           |
| type        | volume                           |
+-------------+----------------------------------+
adminrc@root@controller:~$openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Block Storage          |
| enabled     | True                             |
| id          | 9fefead9767048e1b632bb7026c55380 |
| name        | cinderv2                         |
| type        | volumev2                         |
+-------------+----------------------------------+

Create the Block Storage service API endpoints

adminrc@root@controller:~$openstack endpoint create --region RegionOne volume public http://controller:8776/v1/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field        | Value                                   |
+--------------+-----------------------------------------+
| enabled      | True                                    |
| id           | d45e4cd8fb7945968d5e644a74dc62e3        |
| interface    | public                                  |
| region       | RegionOne                               |
| region_id    | RegionOne                               |
| service_id   | 3f13455162a145e28096ce110be1213e        |
| service_name | cinder                                  |
| service_type | volume                                  |
| url          | http://controller:8776/v1/%(tenant_id)s |
+--------------+-----------------------------------------+
adminrc@root@controller:~$openstack endpoint create --region RegionOne volume internal http://controller:8776/v1/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field        | Value                                   |
+--------------+-----------------------------------------+
| enabled      | True                                    |
| id           | fcf99a2a72c94d81b472f4c75ea952c8        |
| interface    | internal                                |
| region       | RegionOne                               |
| region_id    | RegionOne                               |
| service_id   | 3f13455162a145e28096ce110be1213e        |
| service_name | cinder                                  |
| service_type | volume                                  |
| url          | http://controller:8776/v1/%(tenant_id)s |
+--------------+-----------------------------------------+
adminrc@root@controller:~$openstack endpoint create --region RegionOne volume admin http://controller:8776/v1/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field        | Value                                   |
+--------------+-----------------------------------------+
| enabled      | True                                    |
| id           | e611a9caabf640dfbcd93b7b750180da        |
| interface    | admin                                   |
| region       | RegionOne                               |
| region_id    | RegionOne                               |
| service_id   | 3f13455162a145e28096ce110be1213e        |
| service_name | cinder                                  |
| service_type | volume                                  |
| url          | http://controller:8776/v1/%(tenant_id)s |
+--------------+-----------------------------------------+
adminrc@root@controller:~$openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field        | Value                                   |
+--------------+-----------------------------------------+
| enabled      | True                                    |
| id           | ecd1248c63844473aba74c6af3554a00        |
| interface    | admin                                   |
| region       | RegionOne                               |
| region_id    | RegionOne                               |
| service_id   | 9fefead9767048e1b632bb7026c55380        |
| service_name | cinderv2                                |
| service_type | volumev2                                |
| url          | http://controller:8776/v2/%(tenant_id)s |
+--------------+-----------------------------------------+
adminrc@root@controller:~$openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field        | Value                                   |
+--------------+-----------------------------------------+
| enabled      | True                                    |
| id           | 862a463ef202433e95e2e1c80030af59        |
| interface    | public                                  |
| region       | RegionOne                               |
| region_id    | RegionOne                               |
| service_id   | 9fefead9767048e1b632bb7026c55380        |
| service_name | cinderv2                                |
| service_type | volumev2                                |
| url          | http://controller:8776/v2/%(tenant_id)s |
+--------------+-----------------------------------------+
adminrc@root@controller:~$openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field        | Value                                   |
+--------------+-----------------------------------------+
| enabled      | True                                    |
| id           | 89fcc47679e94213a0ec2d8eabed95db        |
| interface    | internal                                |
| region       | RegionOne                               |
| region_id    | RegionOne                               |
| service_id   | 9fefead9767048e1b632bb7026c55380        |
| service_name | cinderv2                                |
| service_type | volumev2                                |
| url          | http://controller:8776/v2/%(tenant_id)s |
+--------------+-----------------------------------------+
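The six endpoint-create commands differ only in service name, API version, and interface. Assuming the admin credentials are already sourced, they can be generated in a loop — shown here as a dry-run sketch that only prints the commands (drop the leading `echo` to execute them for real):

```shell
# Dry-run helper: prints the six "openstack endpoint create" commands
# for the volume (v1) and volumev2 (v2) services on every interface.
print_endpoint_cmds() {
    for svc in volume:v1 volumev2:v2; do
        name=${svc%%:*}      # volume / volumev2
        ver=${svc##*:}       # v1 / v2
        for iface in public internal admin; do
            echo openstack endpoint create --region RegionOne \
                "$name" "$iface" "http://controller:8776/$ver/%(tenant_id)s"
        done
    done
}
print_endpoint_cmds
```

The `%(tenant_id)s` placeholder is passed through literally; on a real shell the parentheses must be escaped (`%\(tenant_id\)s`), exactly as in the transcript above.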

Install the components

adminrc@root@controller:~$apt-get install -y cinder-api cinder-scheduler

Configure Cinder

adminrc@root@controller:~$cp /etc/cinder/cinder.conf{,.bak}
adminrc@root@controller:~$vim /etc/cinder/cinder.conf
[DEFAULT]
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_confg = /etc/cinder/api-paste.ini
iscsi_helper = tgtadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
state_path = /var/lib/cinder
lock_path = /var/lock/cinder
volumes_dir = /var/lib/cinder/volumes
rpc_backend = rabbit
my_ip = 10.0.3.10

[database]

connection = mysql+pymysql://cinder:123456@controller/cinder

[keystone_authtoken]

auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = cinder

[oslo_messaging_rabbit]

rabbit_host = controller
rabbit_userid = openstack
rabbit_password = 123456
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
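Hand-edited ini files like this one easily accumulate duplicate keys, which can mask a setting you think you changed. A small awk check — a sketch, assuming the usual `[section]` / `key = value` layout — flags any key defined more than once inside the same section:

```shell
# Print "section key count" for every key that appears more than once
# within the same section of an ini-style config file.
find_dup_keys() {
    awk -F'[= ]' '
        /^\[/ { sec = $0; next }       # remember the current [section]
        /^[a-zA-Z_]/ { seen[sec SUBSEP $1]++ }
        END {
            for (k in seen)
                if (seen[k] > 1) {
                    split(k, p, SUBSEP)
                    print p[1], p[2], seen[k]
                }
        }' "$1"
}
```

Run it as `find_dup_keys /etc/cinder/cinder.conf` before syncing the database; an empty output means every key is set exactly once per section.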

After verifying the configuration, populate the Block Storage database

adminrc@root@controller:~$su -s /bin/bash -c "cinder-manage db sync" cinder
Option "verbose" from group "DEFAULT" is deprecated for removal.  Its value may be silently ignored in the future.
2019-01-17 10:42:23.140 10824 WARNING py.warnings [-] /usr/lib/python2.7/dist-packages/oslo_db/sqlalchemy/enginefacade.py:241: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
  exception.NotSupportedWarning

2019-01-17 10:42:23.203 10824 INFO migrate.versioning.api [-] 0 -> 1...
.........
2019-01-17 10:42:25.097 10824 INFO migrate.versioning.api [-] done

Configure Compute to use Block Storage

adminrc@root@controller:~$cp /etc/nova/nova.conf{,.private}
adminrc@root@controller:~$vim /etc/nova/nova.conf
# append at the end of the file
[cinder]
os_region_name = RegionOne
# after saving and exiting, restart the nova-api and cinder services
adminrc@root@controller:~$service nova-api restart
nova-api stop/waiting
nova-api start/running, process 11615
adminrc@root@controller:~$service cinder-<Tab><Tab>
cinder-api        cinder-scheduler
adminrc@root@controller:~$ls /etc/init.d/ | grep cinder
cinder-api
cinder-scheduler
adminrc@root@controller:~$ls /etc/init.d/ | grep cinder | xargs -i service {} restart
cinder-api stop/waiting
cinder-api start/running, process 11773
cinder-scheduler stop/waiting
cinder-scheduler start/running, process 11812

Install LVM2

adminrc@root@controller:~$apt-get install -y lvm2

Create the LVM physical volume and volume group

adminrc@root@controller:~$pvcreate /dev/sdb
  Physical volume "/dev/sdb" successfully created
adminrc@root@controller:~$vgcreate cinder-volumes /dev/sdb
  Volume group "cinder-volumes" successfully created

Configure the LVM device filter

adminrc@root@controller:~$cp /etc/lvm/lvm.conf{,.bak}
adminrc@root@controller:~$vim /etc/lvm/lvm.conf

filter = [ "a/sdb/", "r/.*/" ]  # replace the original filter value with this line
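The filter is evaluated left to right with first-match-wins semantics: `a/sdb/` accepts any device path matching `sdb`, and `r/.*/` rejects everything else, so LVM only scans the Cinder disk. A tiny shell sketch of that logic (`lvm_filter` is a hypothetical illustration, not part of LVM):

```shell
# Emulate LVM's first-match-wins device filter:
# accept paths matching "sdb", reject everything else.
lvm_filter() {
    case "$1" in
        *sdb*) echo accept ;;   # a/sdb/
        *)     echo reject ;;   # r/.*/
    esac
}
```

With this filter, `/dev/sdb` stays visible to LVM while `/dev/sda` and the LVM-backed volumes Cinder creates are skipped during device scans.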

Install the cinder-volume package

adminrc@root@controller:~$apt-get install cinder-volume

Configure cinder.conf

adminrc@root@controller:~$cat /etc/cinder/cinder.conf
[DEFAULT]
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_confg = /etc/cinder/api-paste.ini
iscsi_helper = tgtadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
state_path = /var/lib/cinder
lock_path = /var/lock/cinder
volumes_dir = /var/lib/cinder/volumes
rpc_backend = rabbit
my_ip = 10.0.3.10
enabled_backends = lvm
glance_api_servers = http://controller:9292

[database]

connection = mysql+pymysql://cinder:123456@controller/cinder

[keystone_authtoken]

auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = cinder

[oslo_messaging_rabbit]

rabbit_host = controller
rabbit_userid = openstack
rabbit_password = 123456
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = tgtadm


Restart the services

adminrc@root@controller:~$service tgt restart
tgt stop/waiting
tgt start/running, process 24646
adminrc@root@controller:~$service cinder-volume restart
cinder-volume stop/waiting
cinder-volume start/running, process 24696

Verify the Cinder services

adminrc@root@controller:~$cinder service-list
+------------------+----------------+------+---------+-------+----------------------------+-----------------+
|      Binary      |      Host      | Zone |  Status | State |         Updated_at         | Disabled Reason |
+------------------+----------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler |   controller   | nova | enabled |   up  | 2019-01-17T03:20:00.000000 |        -        |
|  cinder-volume   |   controller   | nova | enabled |  down | 2019-01-17T03:18:52.000000 |        -        |
|  cinder-volume   | controller@lvm | nova | enabled |   up  | 2019-01-17T03:20:01.000000 |        -        |
+------------------+----------------+------+---------+-------+----------------------------+-----------------+

# One cinder-volume row reports down: it is the stale service record registered
# under host "controller" before enabled_backends = lvm was set. The active
# backend now reports as controller@lvm, so the old row can be ignored.
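When troubleshooting, it helps to pull out just the `down` rows from the `cinder service-list` table. A small awk filter over the table text (a sketch; in practice pipe the live command into it as `cinder service-list | down_services`):

```shell
# Filter "cinder service-list" table output, printing "binary host"
# for every row whose State column is "down".
down_services() {
    awk -F'|' 'NF > 5 {
        gsub(/ /, "", $6)                        # State column
        if ($6 == "down") {
            gsub(/ /, "", $2); gsub(/ /, "", $3) # Binary, Host
            print $2, $3
        }
    }'
}
```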

Switch to the demo user and create a volume

adminrc@root@controller:~$source demorc
demorc@root@controller:~$openstack volume create --size 1 volume1
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | nova                                 |
| bootable            | false                                |
| consistencygroup_id | None                                 |
| created_at          | 2019-01-17T04:07:56.366573           |
| description         | None                                 |
| encrypted           | False                                |
| id                  | 240ee7be-49bb-48bc-8bb3-1c44196b5ad9 |
| multiattach         | False                                |
| name                | volume1                              |
| properties          |                                      |
| replication_status  | disabled                             |
| size                | 1                                    |
| snapshot_id         | None                                 |
| source_volid        | None                                 |
| status              | creating                             |
| type                | None                                 |
| updated_at          | None                                 |
| user_id             | c4de9fac882740838aa26e9119b30cb9     |
+---------------------+--------------------------------------+
demorc@root@controller:~$openstack volume list
+--------------------------------------+--------------+-----------+------+-------------+
| ID                                   | Display Name | Status    | Size | Attached to |
+--------------------------------------+--------------+-----------+------+-------------+
| 240ee7be-49bb-48bc-8bb3-1c44196b5ad9 | volume1      | available |    1 |             |
+--------------------------------------+--------------+-----------+------+-------------+

Attach the volume to an instance

demorc@root@controller:~$nova list
+--------------------------------------+----------------------+---------+------------+-------------+-------------------------------------+
| ID                                   | Name                 | Status  | Task State | Power State | Networks                            |
+--------------------------------------+----------------------+---------+------------+-------------+-------------------------------------+
| 4c954e71-8e73-49e1-a67f-20c007d582d3 | selfservice-instance | SHUTOFF | -          | Shutdown    | selfservice=192.168.56.3, 10.0.3.53 |
+--------------------------------------+----------------------+---------+------------+-------------+-------------------------------------+
demorc@root@controller:~$nova start 4c954e71-8e73-49e1-a67f-20c007d582d3
Request to start server 4c954e71-8e73-49e1-a67f-20c007d582d3 has been accepted.
demorc@root@controller:~$nova list
+--------------------------------------+----------------------+--------+------------+-------------+-------------------------------------+
| ID                                   | Name                 | Status | Task State | Power State | Networks                            |
+--------------------------------------+----------------------+--------+------------+-------------+-------------------------------------+
| 4c954e71-8e73-49e1-a67f-20c007d582d3 | selfservice-instance | ACTIVE | -          | Running     | selfservice=192.168.56.3, 10.0.3.53 |
+--------------------------------------+----------------------+--------+------------+-------------+-------------------------------------+
demorc@root@controller:~$ping -c 2 10.0.3.53
PING 10.0.3.53 (10.0.3.53) 56(84) bytes of data.
64 bytes from 10.0.3.53: icmp_seq=1 ttl=63 time=9.45 ms
64 bytes from 10.0.3.53: icmp_seq=2 ttl=63 time=0.548 ms
demorc@openstack@controller:~$nova list
+--------------------------------------+----------------------+--------+------------+-------------+-------------------------------------+
| ID                                   | Name                 | Status | Task State | Power State | Networks                            |
+--------------------------------------+----------------------+--------+------------+-------------+-------------------------------------+
| 4c954e71-8e73-49e1-a67f-20c007d582d3 | selfservice-instance | ACTIVE | -          | Running     | selfservice=192.168.56.3, 10.0.3.53 |
+--------------------------------------+----------------------+--------+------------+-------------+-------------------------------------+
demorc@openstack@controller:~$openstack volume list
+--------------------------------------+--------------+-----------+------+-------------+
| ID                                   | Display Name | Status    | Size | Attached to |
+--------------------------------------+--------------+-----------+------+-------------+
| 240ee7be-49bb-48bc-8bb3-1c44196b5ad9 | volume1      | available |    1 |             |
+--------------------------------------+--------------+-----------+------+-------------+
# copy the instance ID and volume1's ID
demorc@root@controller:~$openstack server add volume 4c954e71-8e73-49e1-a67f-20c007d582d3 240ee7be-49bb-48bc-8bb3-1c44196b5ad9
# check volume1 again: its status is now in-use
demorc@root@controller:~$openstack volume list
+--------------------------------------+--------------+--------+------+-----------------------------------------------+
| ID                                   | Display Name | Status | Size | Attached to                                   |
+--------------------------------------+--------------+--------+------+-----------------------------------------------+
| 240ee7be-49bb-48bc-8bb3-1c44196b5ad9 | volume1      | in-use |    1 | Attached to selfservice-instance on /dev/vdb  |
+--------------------------------------+--------------+--------+------+-----------------------------------------------+
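Attachment is asynchronous: the volume passes through `attaching` before settling at `in-use`, so scripts should poll rather than assume. A generic polling sketch (`wait_for_status` is a hypothetical helper; the status command would be something like `openstack volume show -f value -c status 240ee7be-49bb-48bc-8bb3-1c44196b5ad9`):

```shell
# Poll a status-printing command until it reports the wanted value or
# the attempt limit is hit.
# Usage: wait_for_status <want> <tries> <delay-seconds> <cmd...>
wait_for_status() {
    want=$1; tries=$2; delay=$3; shift 3
    i=0
    while [ "$i" -lt "$tries" ]; do
        if [ "$("$@")" = "$want" ]; then
            return 0
        fi
        i=$((i + 1))
        sleep "$delay"
    done
    return 1
}
```

Returns 0 as soon as the status matches, 1 if it never does within the attempt limit.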

Format and mount the new disk inside the instance

demorc@root@controller:~$ssh cirros@10.0.3.53
$ sudo -s
$ fdisk -l

Disk /dev/vda: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders, total 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *       16065     2088449     1036192+  83  Linux

Disk /dev/vdb: 1073 MB, 1073741824 bytes
16 heads, 63 sectors/track, 2080 cylinders, total 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/vdb doesn't contain a valid partition table
$ mkfs.ext4 /dev/vdb
mke2fs 1.42.2 (27-Mar-2012)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
65536 inodes, 262144 blocks
13107 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=268435456
8 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376

Allocating group tables: done
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done
$ mount /dev/vdb /mnt
$ ls /mnt/
lost+found
$ touch /mnt/test
$ ls /mnt/
lost+found  test
$ exit
$ exit
Connection to 10.0.3.53 closed.
demorc@root@controller:~$exit
exit
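In the session above, `fdisk -l` is what identifies `/dev/vdb` as the new, unpartitioned disk. That check can be scripted — a sketch that scans `fdisk -l` output on stdin for the "doesn't contain a valid partition table" lines (run it as `fdisk -l | unpartitioned_disks` on the instance):

```shell
# List devices that fdisk reports as having no partition table.
unpartitioned_disks() {
    awk "/doesn't contain a valid partition table/ { print \$2 }"
}
```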

(For learning purposes only. If any content infringes your rights, please leave a comment and I will remove it immediately.)

posted on 2019-01-17 12:39 by Lucky_7