OpenStack Mitaka HA Deployment Plan

               OpenStack Mitaka HA Deployment

Environment notes: this environment does not use the neutron L3 agent. Instead, the compute nodes bridge directly to the external network, giving a distributed network layout. No DHCP service is deployed, and north-south traffic uses VLAN provider networks.

Document notes: this document is for reference only; use it with care. It is an original document; please include a link to the original when reposting.

1.      Basic Environment

1) OpenStack deployment layout

 

OpenStack networks: management network 192.168.5.0/24; external network (10 GbE).

Hostname      Purpose                                                                                        Host type    Management IP    External IP (10 GbE)
controller01  Load-balancer node 1, control node 1, database node 1, message node 1, network node 1          Physical     192.168.5.10     x.x.x.x
controller02  Load-balancer node 2, control node 2, database node 2, message node 2, network node 2          Physical     192.168.5.11
controller03  Load-balancer node 3, control node 3, database node 3, message node 3, network node 3          Physical     192.168.5.12
controller    Cluster VIP                                                                                    Virtual IP   192.168.5.13
nova01        Compute node                                                                                   Physical     192.168.5.14
nova02        Compute node                                                                                   Physical     192.168.5.15

2) Server planning overview

a)   Three servers serve as the control nodes, running the database, RabbitMQ, the control services, the load-balancing service and the network services.

b)   HAProxy at the front end acts as the load-balancing proxy.

c)   Pacemaker is used as the cluster manager.

d)   HAProxy runs in active/passive (A/P) mode.

e)   There are two compute nodes.

f)   controller02 acts as the primary database node.

3) Software versions

Item               Version
Operating system   CentOS Linux 7.2
Cloud platform     OpenStack Mitaka

4) System access paths

Item                   Access URL
Cloud platform URL
yum repository         http://192.168.5.10
NTP server             192.168.5.10

5) System users

Username   Password   Role
root       密码       System administrator
cloud      密码       Regular system user
admin      密码       Cloud platform administrator

(In this document, 密码 is used as a placeholder for the actual passwords.)

2.      System Installation

2.1.  OS version

CentOS Linux 7.2

2.2.  Disk partitioning

Partition   Size
/boot       500M
/           542G
swap        16G

 

2.3.  Host name resolution (/etc/hosts)

192.168.5.10  controller01

192.168.5.11  controller02

192.168.5.12  controller03

192.168.5.14  nova01

192.168.5.15  nova02

192.168.5.13  controller

 

3.      NTP Service Configuration

The NTP server runs on controller01 and uses the local clock as its time source; all other servers synchronize against it.

3.1.  NTP server configuration

1)   Install the NTP package

yum install ntp

 

2)   Edit the configuration file /etc/ntp.conf

 

restrict default nomodify notrap noquery

restrict 127.0.0.1

restrict 192.168.5.0 mask 255.255.255.0 nomodify

server  127.127.1.0

fudge   127.127.1.0 stratum 10

 

3)   Start the NTP service and enable it at boot

 systemctl enable ntpd.service
 systemctl start ntpd.service

 

3.2.  NTP client configuration

1)   Synchronize the system clock once

ntpdate 192.168.5.10

 

2)   On every node, use a crontab entry to keep the system clock synchronized

Edit the cron table with crontab -e and add the following entries:

*/1 * * * * /usr/sbin/ntpdate 192.168.5.10 > /dev/null 2>&1 &

*/1 * * * * /usr/sbin/hwclock  -w  > /dev/null 2>&1 &

 

4.      System Parameter Tuning

4.1.  ulimit tuning: raise the open-file and process/thread limits

cat /etc/security/limits.d/20-nproc.conf

*       soft    nproc     65536

*       hard    nproc    65536

*       soft    nofile    65536

*       hard    nofile   65536

*              soft        stack       65536

*              hard        stack       65536

root       soft    nproc     unlimited

root       hard    nproc     unlimited

 

5.      Load Balancer Configuration (Pacemaker + HAProxy)

5.1.  Configure HAProxy

1) Install HAProxy

yum install haproxy

HAProxy logs to syslog by default; check the messages with tail -f /var/log/messages.

2) Configuration file

Note: the listen (bind) addresses must use the VIP address.

vim /etc/haproxy/haproxy.cfg

Note: choose the balance algorithm to suit your environment; source and roundrobin are the commonly recommended options. This example uses source.

5.2.  Configure Pacemaker

yum install -y pcs pacemaker corosync fence-agents-all resource-agents

systemctl enable pcsd
systemctl start pcsd

# Set the hacluster password on every node
echo 密码 | passwd --stdin hacluster

pcs cluster auth controller01 controller02 controller03 -u hacluster -p 密码 --force

pcs cluster setup --force --name openstack-cluster controller01 controller02 controller03

pcs cluster start --all

# Verify that all nodes have joined the corosync membership
corosync-cmapctl | grep runtime.totem.pg.mrp.srp.members

pcs property set stonith-enabled=false

pcs property set no-quorum-policy=ignore

# The VIP must match the address planned in section 1 (192.168.5.13)
pcs resource create vip ocf:heartbeat:IPaddr2 ip=192.168.5.13 cidr_netmask=24 op monitor interval=30s
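HAProxy itself is not yet cluster-managed at this point. A minimal sketch of adding it as a Pacemaker clone tied to the VIP (the resource name lb-haproxy is illustrative, not from the original document):

# Run HAProxy on every controller and keep the VIP with a working HAProxy instance
pcs resource create lb-haproxy systemd:haproxy --clone interleave=true
pcs constraint order start vip then lb-haproxy-clone
pcs constraint colocation add vip with lb-haproxy-clone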

 

 

1) The HAProxy configuration file

vim /etc/haproxy/haproxy.cfg

global
    # Log to the local2 syslog facility; route "local2.*  /var/log/haproxy.log" in rsyslog
    log                            127.0.0.1 local2

 

    chroot      /var/lib/haproxy

    pidfile     /var/run/haproxy.pid

    maxconn     4000

    user        haproxy

    group       haproxy

    daemon

stats socket /var/lib/haproxy/stats

 

defaults

    log  global

    maxconn  4000

    option  redispatch

    retries  3

    timeout  http-request 10s

    timeout  queue 1m

    timeout  connect 10s

    timeout  client 1m

    timeout  server 1m

    timeout  check 10s

 listen stats 0.0.0.0:9000

  # haproxy status

   mode http

   stats enable

   stats uri /haproxy_stats

   stats realm Haproxy\ Statistics

   stats auth haproxy:haproxy

   stats admin if TRUE

 

 listen dashboard_cluster

  bind 192.168.5.13:80

  balance  source

  option  tcpka

  option  httpchk

  option  tcplog

  server controller01 192.168.5.10:80 check inter 2000 rise 2 fall 5

  server controller02 192.168.5.11:80 check inter 2000 rise 2 fall 5

  server controller03 192.168.5.12:80 check inter 2000 rise 2 fall 5

 

 listen galera_cluster

  bind 192.168.5.13:3306

  mode tcp

  option  httpchk

server controller01 192.168.5.10:3306 check port 9200 backup inter 2000 rise 2 fall 5

server controller02 192.168.5.11:3306 check port 9200 inter 2000 rise 2 fall 5

server controller03 192.168.5.12:3306 check port 9200 backup inter 2000 rise 2 fall 5

 

 listen glance_api_cluster

  bind 192.168.5.13:9292

  balance  source

  option  tcpka

  option  httpchk

  option  tcplog

  server controller01 192.168.5.10:9292 check inter 2000 rise 2 fall 5

  server controller02 192.168.5.11:9292 check inter 2000 rise 2 fall 5

  server controller03 192.168.5.12:9292 check inter 2000 rise 2 fall 5

 

 listen glance_registry_cluster

  bind 192.168.5.13:9191

  balance  source

  option  tcpka

  option  tcplog

  server controller01 192.168.5.10:9191 check inter 2000 rise 2 fall 5

  server controller02 192.168.5.11:9191 check inter 2000 rise 2 fall 5

  server controller03 192.168.5.12:9191 check inter 2000 rise 2 fall 5

 

 listen keystone_admin_cluster

  bind 192.168.5.13:35357

  balance  source

  option  tcpka

  option  httpchk

  option  tcplog

  server controller01 192.168.5.10:35357 check inter 2000 rise 2 fall 5

  server controller02 192.168.5.11:35357 check inter 2000 rise 2 fall 5

  server controller03 192.168.5.12:35357 check inter 2000 rise 2 fall 5

 

 listen keystone_public_internal_cluster

  bind 192.168.5.13:5000

  balance  source

  option  tcpka

  option  httpchk

  option  tcplog

  server controller01 192.168.5.10:5000 check inter 2000 rise 2 fall 5

  server controller02 192.168.5.11:5000 check inter 2000 rise 2 fall 5

  server controller03 192.168.5.12:5000 check inter 2000 rise 2 fall 5

 

 listen nova_compute_api_cluster

  bind 192.168.5.13:8774

  balance  source

  option  tcpka

  option  httpchk

  option  tcplog

  server controller01 192.168.5.10:8774 check inter 2000 rise 2 fall 5

  server controller02 192.168.5.11:8774 check inter 2000 rise 2 fall 5

  server controller03 192.168.5.12:8774 check inter 2000 rise 2 fall 5

 

 listen nova_metadata_api_cluster

  bind 192.168.5.13:8775

  balance  source

  option  tcpka

  option  tcplog

  server controller01 192.168.5.10:8775 check inter 2000 rise 2 fall 5

  server controller02 192.168.5.11:8775 check inter 2000 rise 2 fall 5

  server controller03 192.168.5.12:8775 check inter 2000 rise 2 fall 5

 

 listen nova_vncproxy_cluster

  bind 192.168.5.13:6080

  balance  source

  option  tcpka

  option  tcplog

  server controller01 192.168.5.10:6080 check inter 2000 rise 2 fall 5

  server controller02 192.168.5.11:6080 check inter 2000 rise 2 fall 5

  server controller03 192.168.5.12:6080 check inter 2000 rise 2 fall 5

 

 listen neutron_api_cluster

  bind 192.168.5.13:9696

  balance  source

  option  tcpka

  option  httpchk

  option  tcplog

  server controller01 192.168.5.10:9696 check inter 2000 rise 2 fall 5

  server controller02 192.168.5.11:9696 check inter 2000 rise 2 fall 5

  server controller03 192.168.5.12:9696 check inter 2000 rise 2 fall 5
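Two easy-to-miss follow-ups: routing the local2 facility to a log file and validating the configuration. A minimal sketch, assuming rsyslog is the local syslog daemon (the drop-in file name is illustrative):

# /etc/rsyslog.d/haproxy.conf
$ModLoad imudp
$UDPServerRun 514
local2.*    /var/log/haproxy.log

# Apply the logging change and check the HAProxy configuration syntax
systemctl restart rsyslog
haproxy -c -f /etc/haproxy/haproxy.cfg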

 


6.      Database Configuration

We build the database cluster from controller01, controller02 and controller03, with controller02 acting as the primary node.

6.1.  Install the database packages

Install the Galera and MariaDB packages on all three machines.

yum install -y mariadb mariadb-server-galera mariadb-galera-common galera rsync

 

6.2.  Initialize the database on the primary node controller02

# Set the initial root password to 密码

systemctl start mariadb.service

mysql_secure_installation

 

This configuration is required on every database server.

6.3.  Configure the Galera cluster

1) Stop the database on the primary node controller02

systemctl stop mariadb.service

 

2) Edit /etc/my.cnf.d/openstack.cnf and add the following (the values shown are for controller01)

vim /etc/my.cnf.d/openstack.cnf

[mysqld]

datadir=/var/lib/mysql

socket=/var/lib/mysql/mysql.sock

user=mysql

bind-address = 192.168.5.10

skip-name-resolve

default-storage-engine = innodb

max_connections = 4096

binlog_format=ROW

innodb_autoinc_lock_mode=2

innodb_flush_log_at_trx_commit=2

innodb_buffer_pool_size = 256M

 

wsrep_on=ON

wsrep_provider=/usr/lib64/galera/libgalera_smm.so

wsrep_provider_options="pc.recovery=TRUE;gcache.size=300M"

wsrep_cluster_name="trendystack_cluster"

wsrep_cluster_address="gcomm://controller01,controller02,controller03"

wsrep_node_name=controller01

wsrep_node_address=192.168.5.10

wsrep_sst_method=rsync

wsrep_sst_auth=root:密码

Copy the configuration file to the other control nodes and adjust the node-specific values (see the sketch below).
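A minimal sketch of the per-node values to change after copying (using the management addressing from section 1):

# controller02
bind-address       = 192.168.5.11
wsrep_node_name    = controller02
wsrep_node_address = 192.168.5.11

# controller03
bind-address       = 192.168.5.12
wsrep_node_name    = controller03
wsrep_node_address = 192.168.5.12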

 

3) Bootstrap the cluster from the primary node controller02

galera_new_cluster

 

4) Check that the start succeeded

tail -f /var/log/mariadb/mariadb.log

150701 19:54:17 [Note] WSREP: wsrep_load(): loading provider library 'none'

150701 19:54:17 [Note] /usr/libexec/mysqld: ready for connections.

Version: '5.5.40-MariaDB-wsrep'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MariaDB Server, wsrep_25.11.r4026

# The "ready for connections" line confirms a successful start.

5) Start the other nodes

systemctl start mariadb.service
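To confirm that all three nodes have joined, check the cluster size from any node (a quick verification using the root password set earlier):

mysql -uroot -p密码 -e "SHOW STATUS LIKE 'wsrep_cluster_size';"
# The expected value is 3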

 

6) Day-to-day operation

    After the initial bootstrap, start MariaDB on every server in the normal way:

systemctl start mariadb.service

 

 

6.4.  Configure the HAProxy health check

# This must be configured on every database server

1) Install the xinetd service

yum install xinetd -y

 

2) Create the clustercheck database user

mysql -uroot -p密码

GRANT PROCESS ON *.* TO 'clustercheck'@'localhost' IDENTIFIED BY '密码';

FLUSH PRIVILEGES;

 

3) Create the clustercheck configuration file

vim /etc/sysconfig/clustercheck

MYSQL_USERNAME="clustercheck"

MYSQL_PASSWORD='密码'

MYSQL_HOST="localhost"

MYSQL_PORT="3306"


4) Create the HAProxy monitoring service (xinetd)

vim /etc/xinetd.d/galera-monitor

service mysqlchk

{

   port = 9200

   disable = no

   socket_type = stream

   protocol = tcp

   wait = no   

   user = root

   group = root

   groups = yes

   server = /usr/bin/clustercheck

   type = UNLISTED

   per_source = UNLIMITED

   log_on_success =

   log_on_failure = HOST

   flags = REUSE

}

 

# Register the service port

vim /etc/services

mysqlchk    9200/tcp    # MySQL check

 

 

5) Start xinetd

 systemctl daemon-reload

 systemctl enable xinetd

 systemctl start xinetd
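A quick way to verify the health check from any node (the exact response body depends on the clustercheck script shipped with your packages):

curl -i http://192.168.5.10:9200
# An HTTP 200 response means the local Galera node is reported as synced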

 

7.      Memcached Installation and Configuration

# Install on every control node

7.1.  Install the packages

yum install memcached python-memcached -y

 

7.2.  Edit the configuration file

# Configuration on controller01

cat /etc/sysconfig/memcached

PORT="11211"

USER="memcached"

MAXCONN="1024"

CACHESIZE="64"

OPTIONS="-l 192.168.5.10,::1"

 

# Configuration on controller02

cat /etc/sysconfig/memcached

PORT="11211"

USER="memcached"

MAXCONN="1024"

CACHESIZE="64"

OPTIONS="-l 192.168.5.11,::1"

 

# Configuration on controller03

cat /etc/sysconfig/memcached

PORT="11211"

USER="memcached"

MAXCONN="1024"

CACHESIZE="64"

OPTIONS="-l 192.168.5.12,::1"

 

7.3.  Start the service

systemctl enable memcached.service

systemctl start memcached.service
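A quick connectivity check, assuming memcached-tool is available (it is shipped with the memcached package on CentOS; use the local listen address on each node):

memcached-tool 192.168.5.10:11211 stats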

 

 

8.      RabbitMQ Installation and Configuration

8.1.  Install the packages

# Install on controller01, controller02 and controller03.

yum install rabbitmq-server -y

 

Raise the connection (file-descriptor) limits:

[root@controller01 ~]# cat /etc/security/limits.d/20-nproc.conf

*       soft    nproc     65536

*       hard    nproc    65536

*       soft    nofile    65536

*       hard    nofile   65536

*              soft        stack       65536

*              hard        stack       65536

root       soft    nproc     unlimited

root       hard    nproc     unlimited

 

 

[root@controller01 ~]#ulimit -n 65535

[root@controller01 ~]#cat /usr/lib/systemd/system/rabbitmq-server.service

[Service]

LimitNOFILE=65535  # add this parameter under [Service] in the unit file

 

[root@controller01 ~]#systemctl daemon-reload

[root@controller01 ~]#systemctl restart rabbitmq-server.service

[root@controller01 ~]#rabbitmqctl status

{file_descriptors,[{total_limit,10140},

                    {total_used,2135},

                    {sockets_limit,9124},

                    {sockets_used,2133}]}

 

8.2.  Configure RabbitMQ

1)  On controller01, enable and start RabbitMQ:

systemctl enable rabbitmq-server.service

systemctl start rabbitmq-server

 

2)  Copy the Erlang cookie from controller01 to the other nodes

systemctl stop rabbitmq-server

scp /var/lib/rabbitmq/.erlang.cookie controller02:/var/lib/rabbitmq/.erlang.cookie

scp /var/lib/rabbitmq/.erlang.cookie  controller03:/var/lib/rabbitmq/.erlang.cookie

 

3)  On each target node, set the owner, group and permissions of the .erlang.cookie file

chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie

chmod 400 /var/lib/rabbitmq/.erlang.cookie

 

4)  Raise the file-descriptor limit in the unit file (as on controller01)

cat /usr/lib/systemd/system/rabbitmq-server.service

[Service]

LimitNOFILE=65535

 

5)  Enable rabbitmq-server at boot and start it on the other nodes

systemctl enable rabbitmq-server

systemctl start rabbitmq-server

 

6)  Confirm that rabbitmq-server is running correctly on each node

rabbitmqctl cluster_status

7)  On every node except the first (controller01), run the following to join the cluster

rabbitmqctl stop_app

rabbitmqctl join_cluster --ram rabbit@controller01

rabbitmqctl start_app

 

8)  Confirm the cluster status

rabbitmqctl cluster_status

 

9)  To mirror all queues (except automatically named ones) across all running nodes, set the ha-mode policy on any one node:

rabbitmqctl set_policy ha-all '^(?!amq\.).*' '{"ha-mode": "all"}'

 

8.3.  Create and authorize the RabbitMQ user

Run the following on any one node to create the openstack user and set its permissions:

rabbitmqctl add_user openstack '密码'
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
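A quick check that the user, permissions and mirroring policy are in place (standard rabbitmqctl listing commands):

rabbitmqctl list_users
rabbitmqctl list_permissions
rabbitmqctl list_policies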

 

9.      Identity Service (Keystone) Configuration

Note: because HAProxy fronts all services, the API endpoints use the VIP address (the controller host name).

9.1.  Configure the database

1)  Log in to the database as root

mysql -u root -p密码

 

2)  Create the keystone database

CREATE DATABASE keystone;

 

3)  Grant database privileges

GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '密码';

GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '密码';

 

9.2.  Configure the identity service components

Perform the following on controller01, controller02 and controller03.

1)  Generate an admin token

 

openssl rand -hex 10

 

2)  Install the packages

yum install openstack-keystone httpd mod_wsgi python-openstackclient memcached python-memcached -y

 

3)  Edit /etc/keystone/keystone.conf

openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token 5b2af9b84ad4d2a693ad

openstack-config --set /etc/keystone/keystone.conf database connection mysql+pymysql://keystone:密码@controller/keystone

#openstack-config --set /etc/keystone/keystone.conf token provider uuid

openstack-config --set /etc/keystone/keystone.conf oslo_messaging_rabbit rabbit_hosts controller01:5672,controller02:5672,controller03:5672

openstack-config --set /etc/keystone/keystone.conf oslo_messaging_rabbit rabbit_ha_queues true

openstack-config --set /etc/keystone/keystone.conf oslo_messaging_rabbit rabbit_retry_interval 1

openstack-config --set /etc/keystone/keystone.conf oslo_messaging_rabbit rabbit_retry_backoff 2

openstack-config --set /etc/keystone/keystone.conf oslo_messaging_rabbit rabbit_max_retries 0

openstack-config --set /etc/keystone/keystone.conf oslo_messaging_rabbit rabbit_durable_queues true

openstack-config --set /etc/keystone/keystone.conf oslo_messaging_rabbit rabbit_userid openstack

openstack-config --set /etc/keystone/keystone.conf oslo_messaging_rabbit rabbit_password 密码

 

# Copy the configuration file to the other control nodes

scp /etc/keystone/keystone.conf controller02:/etc/keystone/keystone.conf

scp /etc/keystone/keystone.conf controller03:/etc/keystone/keystone.conf

4)  Populate the identity service database

su -s /bin/sh -c "keystone-manage db_sync" keystone

 

9.3.  Configure the Apache HTTP service

Perform the following on controller01, controller02 and controller03.

1)  Edit /etc/httpd/conf/httpd.conf so that ServerName and Listen point to the local node (shown for controller01):

vim  /etc/httpd/conf/httpd.conf

ServerName controller01

Listen controller01:80

 

2)  Create /etc/httpd/conf.d/wsgi-keystone.conf. The Listen and VirtualHost addresses below are controller01's; on controller02 and controller03 use the local node's management IP:

Listen 192.168.5.10:5000

Listen 192.168.5.10:35357

<VirtualHost 192.168.5.10:5000>

    WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}

    WSGIProcessGroup keystone-public

    WSGIScriptAlias / /usr/bin/keystone-wsgi-public

    WSGIApplicationGroup %{GLOBAL}

    WSGIPassAuthorization On

    ErrorLogFormat "%{cu}t %M"

    ErrorLog /var/log/httpd/keystone-error.log

    CustomLog /var/log/httpd/keystone-access.log combined

    <Directory /usr/bin>

        Require all granted

    </Directory>

</VirtualHost>

<VirtualHost 192.168.5.10:35357>

    WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}

    WSGIProcessGroup keystone-admin

    WSGIScriptAlias / /usr/bin/keystone-wsgi-admin

    WSGIApplicationGroup %{GLOBAL}

    WSGIPassAuthorization On

    ErrorLogFormat "%{cu}t %M"

    ErrorLog /var/log/httpd/keystone-error.log

    CustomLog /var/log/httpd/keystone-access.log combined

    <Directory /usr/bin>

        Require all granted

    </Directory>

</VirtualHost>

 

9.4.  Create the service entity and API endpoints

9.4.1. Preparation

1)  Set the authentication token

export OS_TOKEN=5b2af9b84ad4d2a693ad

 

# This must match the admin_token value set in keystone.conf

2)  Set the endpoint URL

export OS_URL=http://controller:35357/v3

 

3)  Set the identity API version

export OS_IDENTITY_API_VERSION=3

 

9.4.2. Create the service entity and API endpoints

1)  Create the service entity and the API endpoints

openstack service create --name keystone --description "OpenStack Identity" identity
openstack endpoint create --region RegionOne identity public http://controller:5000/v3
openstack endpoint create --region RegionOne identity internal http://controller:5000/v3

  openstack endpoint create --region RegionOne identity admin http://controller:35357/v3

 

9.4.3. Create projects, users and roles

9.4.3.1. The admin project

1)  Create the default domain, the admin project, the admin user and the admin role

openstack domain create --description "Default Domain" default

openstack project create --domain default --description "Admin Project" admin

  openstack user create --domain default --password-prompt admin

  openstack role create admin

  openstack role add --project admin --user admin admin

 

9.4.3.2. The service project

openstack project create --domain default --description "Service Project" service

 

9.4.3.3. The demo project

1)  Create the demo project

openstack project create --domain default --description "Demo Project" demo

 

2)  Create the demo user (password: 密码)

openstack user create --domain default --password-prompt demo

 

3)  Create the user role

openstack role create user

 

4)  Add the user role to the demo project and demo user

openstack role add --project demo --user demo user

 

 

9.4.3.4. Create the cluster resource (Keystone runs under httpd)

pcs resource create  openstack-keystone systemd:httpd --clone interleave=true

 

 

9.5.  Verification

Perform the following on all three control nodes.

1) For security, disable the temporary token authentication mechanism: edit /etc/keystone/keystone-paste.ini and remove admin_token_auth from the [pipeline:public_api], [pipeline:admin_api] and [pipeline:api_v3] sections.
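A minimal one-liner that performs the same edit (verify the result afterwards):

sed -i 's/ admin_token_auth//g' /etc/keystone/keystone-paste.ini
grep admin_token_auth /etc/keystone/keystone-paste.ini   # should print nothing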

2) Unset the temporary OS_TOKEN and OS_URL environment variables

unset OS_TOKEN OS_URL

3) Request an authentication token as the admin user

openstack --os-auth-url http://controller:35357/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name admin --os-username admin token issue

 

4) Request an authentication token as the demo user

openstack --os-auth-url http://controller:5000/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name demo --os-username demo token issue

 

 

9.6.  Create OpenStack client environment scripts

9.6.1. Create admin-openrc.sh with the following content:

export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=密码
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
 

9.6.2. Create demo-openrc.sh with the following content:

export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=密码
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

 

9.7.  Load the environment script

1)  Source admin-openrc.sh to load the admin credentials

source admin-openrc.sh

 

2)  Request a token

openstack token issue

 

10. Image Service (Glance) Configuration

10.1.     Configure the database

1)  Log in to the database as root

mysql -u root -p密码

 

2)  Create the glance database

CREATE DATABASE glance;

 

3)  Grant database privileges

GRANT ALL PRIVILEGES ON glance.* TO glance@'localhost' IDENTIFIED BY '密码';

GRANT ALL PRIVILEGES ON glance.* TO glance@'%' IDENTIFIED BY '密码';

 

10.2.     Source the admin credentials file

source admin-openrc.sh

 

10.3.     Create the service credentials

1)  Create the glance user (password: 密码)

openstack user create --domain default --password-prompt glance

 

2)  Add the admin role to the glance user in the service project

openstack role add --project service --user glance admin

 

3)  Create the glance service entity

openstack service create --name glance --description "OpenStack Image service" image

 

10.4.     Create the image service API endpoints

openstack endpoint create --region RegionOne image public http://controller:9292

openstack endpoint create --region RegionOne image internal http://controller:9292

openstack endpoint create --region RegionOne image admin http://controller:9292

 

 

10.5.     Install the image service components

Perform the following on controller01, controller02 and controller03.

10.5.1. Install the packages

yum install openstack-glance python-glance python-glanceclient -y

 

10.5.2. Edit /etc/glance/glance-api.conf (bind_host must be the local node on each controller)

 

openstack-config --set /etc/glance/glance-api.conf database connection mysql+pymysql://glance:密码@controller/glance

 

openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_uri http://controller:5000

openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_url http://controller:35357

openstack-config --set /etc/glance/glance-api.conf keystone_authtoken memcached_servers controller01:11211,controller02:11211,controller03:11211

openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_type password

openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_domain_name default

openstack-config --set /etc/glance/glance-api.conf keystone_authtoken user_domain_name default

openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_name service

openstack-config --set /etc/glance/glance-api.conf keystone_authtoken username glance

openstack-config --set /etc/glance/glance-api.conf keystone_authtoken password 密码

 

openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone

 

openstack-config --set /etc/glance/glance-api.conf glance_store stores file,http

openstack-config --set /etc/glance/glance-api.conf glance_store default_store file

openstack-config --set /etc/glance/glance-api.conf glance_store filesystem_store_datadir /var/lib/glance/images/

 

openstack-config --set /etc/glance/glance-api.conf DEFAULT registry_host controller

openstack-config --set /etc/glance/glance-api.conf DEFAULT bind_host controller01

openstack-config --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_hosts controller01:5672,controller02:5672,controller03:5672

openstack-config --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_ha_queues true

openstack-config --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_retry_interval 1

openstack-config --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_retry_backoff 2

openstack-config --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_max_retries 0

openstack-config --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_durable_queues true

openstack-config --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_userid openstack

openstack-config --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_password 密码

 

 

 

10.5.3. Edit /etc/glance/glance-registry.conf (bind_host must be the local node on each controller)

openstack-config --set /etc/glance/glance-registry.conf database connection mysql+pymysql://glance:密码@controller/glance

 

openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_uri http://controller:5000

openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_url http://controller:35357

openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken memcached_servers controller01:11211,controller02:11211,controller03:11211

openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_type password

openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_domain_name default

openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken user_domain_name default

openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_name service

openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken username glance

openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken password 密码

openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone

openstack-config --set /etc/glance/glance-registry.conf DEFAULT registry_host controller

openstack-config --set /etc/glance/glance-registry.conf DEFAULT bind_host controller01

openstack-config --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_hosts controller01:5672,controller02:5672,controller03:5672

openstack-config --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_ha_queues true

openstack-config --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_retry_interval 1

openstack-config --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_retry_backoff 2

openstack-config --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_max_retries 0

openstack-config --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_durable_queues true

openstack-config --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_userid openstack

openstack-config --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_password 密码

 

10.5.4. Sync the image service database (run on any one node)

su -s /bin/sh -c "glance-manage db_sync" glance

 

 

10.5.5. Create the cluster resources

pcs resource create openstack-glance-registry systemd:openstack-glance-registry --clone interleave=true

pcs resource create openstack-glance-api systemd:openstack-glance-api --clone interleave=true

pcs constraint order start openstack-keystone-clone then openstack-glance-registry-clone

pcs constraint order start openstack-glance-registry-clone then openstack-glance-api-clone

pcs constraint colocation add openstack-glance-api-clone with openstack-glance-registry-clone

 

 

10.6.     Verification

1)   Run the following on every controller node

echo "export OS_IMAGE_API_VERSION=2" | tee -a admin-openrc.sh demo-openrc.sh

 

2)   Source the admin credentials file

source admin-openrc.sh

 

3)   Create a temporary directory

mkdir /tmp/images

 

4)   Download a test image

wget -P /tmp/images http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img

 

5)   Upload the image to Glance

glance image-create --name "cirros-0.3.4-x86_64" --file /tmp/images/cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare --visibility public --progress

 

6)   Confirm the uploaded image

glance image-list

 

7)   Remove the temporary directory

rm -r /tmp/images

 

11. Compute Service (Nova) Configuration

11.1.     Configure the database

1)  Log in to the database as root

mysql -hcontroller -uroot -p密码

2)  Create the nova databases and grant privileges

CREATE DATABASE nova;

CREATE DATABASE nova_api;

GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY '密码';

GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY '密码';

GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '密码';

GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '密码';

 

11.2.     Source the admin credentials file

source admin-openrc.sh

 

11.3.     Create the service credentials

openstack user create --domain default --password-prompt nova

openstack role add --project service --user nova admin

openstack service create --name nova --description "OpenStack Compute" compute

 

11.4.     Create the compute service API endpoints

openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1/%\(tenant_id\)s

openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1/%\(tenant_id\)s

openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1/%\(tenant_id\)s

 

11.5.     Install the compute control components

Perform the following on controller01, controller02 and controller03.

11.5.1. Install the packages

yum install openstack-nova-api openstack-nova-cert openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler

 

11.5.2. Edit /etc/nova/nova.conf (the IP addresses shown are controller01's; see the per-node note after this block)

openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata

openstack-config --set /etc/nova/nova.conf DEFAULT memcached_servers controller01:11211,controller02:11211,controller03:11211

# Set the scheduler filters under [DEFAULT] in /etc/nova/nova.conf:
scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,PciPassthroughFilter

scheduler_available_filters = nova.scheduler.filters.all_filters

 

openstack-config --set /etc/nova/nova.conf api_database connection mysql+pymysql://nova:密码@controller/nova_api

openstack-config --set /etc/nova/nova.conf database connection mysql+pymysql://nova:密码@controller/nova

 

openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend rabbit

openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_hosts controller01:5672,controller02:5672,controller03:5672

openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_ha_queues true

openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_retry_interval 1

openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_retry_backoff 2

openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_max_retries 0

openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_durable_queues true

openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_userid openstack

openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_password 密码

 

openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone

openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller:5000

openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://controller:35357

openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers controller01:11211,controller02:11211,controller03:11211

openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password

openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name default

openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name default

openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service

openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova

openstack-config --set /etc/nova/nova.conf keystone_authtoken password 密码

 

openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.5.10

openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron True

openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver

openstack-config --set /etc/nova/nova.conf vnc vncserver_listen 192.168.5.10

openstack-config --set /etc/nova/nova.conf vnc vncserver_proxyclient_address 192.168.5.10

openstack-config --set /etc/nova/nova.conf vnc novncproxy_host 192.168.5.10

openstack-config --set /etc/nova/nova.conf glance api_servers http://controller:9292

openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp

openstack-config --set /etc/nova/nova.conf DEFAULT osapi_compute_listen 192.168.5.10

openstack-config --set /etc/nova/nova.conf DEFAULT metadata_listen 192.168.5.10

openstack-config --set /etc/nova/nova.conf neutron url http://controller:9696

openstack-config --set /etc/nova/nova.conf neutron auth_url http://controller:35357

openstack-config --set /etc/nova/nova.conf neutron auth_type password

openstack-config --set /etc/nova/nova.conf neutron project_domain_name default

openstack-config --set /etc/nova/nova.conf neutron user_domain_name default

openstack-config --set /etc/nova/nova.conf neutron region_name RegionOne

openstack-config --set /etc/nova/nova.conf neutron project_name service

openstack-config --set /etc/nova/nova.conf neutron username neutron

openstack-config --set /etc/nova/nova.conf neutron password 密码

 

openstack-config --set /etc/nova/nova.conf neutron service_metadata_proxy True

openstack-config --set /etc/nova/nova.conf neutron metadata_proxy_shared_secret 密码
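The listen and VNC addresses above are controller01's. A minimal sketch of the values to change after copying the file to the other controllers (controller02 shown; use 192.168.5.12 on controller03):

openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.5.11
openstack-config --set /etc/nova/nova.conf DEFAULT osapi_compute_listen 192.168.5.11
openstack-config --set /etc/nova/nova.conf DEFAULT metadata_listen 192.168.5.11
openstack-config --set /etc/nova/nova.conf vnc vncserver_listen 192.168.5.11
openstack-config --set /etc/nova/nova.conf vnc vncserver_proxyclient_address 192.168.5.11
openstack-config --set /etc/nova/nova.conf vnc novncproxy_host 192.168.5.11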

 

11.5.3. Sync the databases (run on any one controller)

su -s /bin/sh -c "nova-manage api_db sync" nova

su -s /bin/sh -c "nova-manage db sync" nova

 

11.5.4.  Create the cluster resources

pcs resource create openstack-nova-consoleauth systemd:openstack-nova-consoleauth --clone interleave=true

pcs resource create openstack-nova-novncproxy systemd:openstack-nova-novncproxy --clone interleave=true

pcs resource create openstack-nova-api systemd:openstack-nova-api --clone interleave=true

pcs resource create openstack-nova-scheduler systemd:openstack-nova-scheduler --clone interleave=true

pcs resource create openstack-nova-conductor systemd:openstack-nova-conductor --clone interleave=true

pcs constraint order start openstack-keystone-clone then openstack-nova-consoleauth-clone

pcs constraint order start openstack-nova-consoleauth-clone then openstack-nova-novncproxy-clone

pcs constraint colocation add openstack-nova-novncproxy-clone with openstack-nova-consoleauth-clone

pcs constraint order start openstack-nova-novncproxy-clone then openstack-nova-api-clone

pcs constraint colocation add openstack-nova-api-clone with openstack-nova-novncproxy-clone

pcs constraint order start openstack-nova-api-clone then openstack-nova-scheduler-clone

pcs constraint colocation add openstack-nova-scheduler-clone with openstack-nova-api-clone

pcs constraint order start openstack-nova-scheduler-clone then openstack-nova-conductor-clone

pcs constraint colocation add openstack-nova-conductor-clone with openstack-nova-scheduler-clone

 

11.6.     Install the compute nodes

Perform the following on every compute (nova) node.

11.6.1. Install the packages

yum install openstack-nova-compute -y

 

11.6.2. Edit /etc/nova/nova.conf

Note: set my_ip and vncserver_proxyclient_address to the local node's management IP (192.168.5.14, i.e. nova01, is shown below).

openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.5.14

openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron True

openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver

openstack-config --set /etc/nova/nova.conf DEFAULT memcached_servers controller01:11211,controller02:11211,controller03:11211

 

openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend rabbit

openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_hosts controller01:5672,controller02:5672,controller03:5672

openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_ha_queues true

openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_retry_interval 1

openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_retry_backoff 2

openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_max_retries 0

openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_durable_queues true

openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_userid openstack

openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_password 密码

 

openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone

openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller:5000

openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://controller:35357

openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers controller01:11211,controller02:11211,controller03:11211

openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password

openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name default

openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name default

openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service

openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova

openstack-config --set /etc/nova/nova.conf keystone_authtoken password 密码

 

openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp

 

openstack-config --set /etc/nova/nova.conf vnc enabled True

openstack-config --set /etc/nova/nova.conf vnc vncserver_listen 0.0.0.0

openstack-config --set /etc/nova/nova.conf vnc vncserver_proxyclient_address 192.168.5.14

openstack-config --set /etc/nova/nova.conf vnc novncproxy_base_url http://192.168.5.13:6080/vnc_auto.html

 

openstack-config --set /etc/nova/nova.conf glance api_servers http://controller:9292

openstack-config --set /etc/nova/nova.conf libvirt virt_type  $(count=$(egrep -c '(vmx|svm)' /proc/cpuinfo); if [ $count -eq 0 ];then   echo "qemu"; else   echo "kvm"; fi)

 

 

11.6.3. Start the services

systemctl enable libvirtd.service openstack-nova-compute.service

systemctl start libvirtd.service openstack-nova-compute.service
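From a controller, with the admin credentials loaded, the new compute services should now be listed (a quick check):

source admin-openrc.sh
openstack compute service list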

 

 

12. Dashboard (Horizon) Configuration

Perform the following on controller01, controller02 and controller03.

12.1.     Install the packages

yum install openstack-dashboard

 

12.2.     Edit /etc/openstack-dashboard/local_settings and set the following

vim /etc/openstack-dashboard/local_settings

OPENSTACK_HOST = "192.168.5.10"

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

ALLOWED_HOSTS = ['*', ]

OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = 'default'

 

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

 

CACHES = {

    'default': {

         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',

         'LOCATION': ['controller01:11211', 'controller02:11211', 'controller03:11211'],

    }

}

TIME_ZONE = "Asia/Shanghai"

 

OPENSTACK_API_VERSIONS = {

    "identity": 3,

    "volume": 2,

    "compute": 2,

}
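Horizon runs under httpd, which Pacemaker already manages as the openstack-keystone clone, so after editing local_settings restart it through the cluster rather than with systemctl (a sketch using standard pcs commands):

pcs resource restart openstack-keystone-clone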

 

 

13. Networking Service (Neutron) Configuration

### The following only needs to be run on any one controller node ###

13.1.     Network service configuration on the control nodes

13.1.1. Create the database

CREATE DATABASE neutron;

GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '密码';

GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '密码';

 

13.1.2. Source the admin credentials file

source admin-openrc.sh

13.1.3. Create the service credentials and API endpoints

openstack user create --domain default --password-prompt neutron

openstack role add --project service --user neutron admin

openstack service create --name neutron --description "OpenStack Networking" network

openstack endpoint create --region RegionOne network public http://controller:9696

openstack endpoint create --region RegionOne network internal http://controller:9696

openstack endpoint create --region RegionOne network admin http://controller:9696

 

13.1.4. Install the network control components

Perform the following on controller01, controller02 and controller03.

yum install openstack-neutron openstack-neutron-ml2 ebtables openstack-neutron-openvswitch.noarch

 

13.1.5. Configure the network control components

Perform the following on controller01, controller02 and controller03.

1)   Edit /etc/neutron/neutron.conf (bind_host is the local node's management IP on each controller; 192.168.5.10 is shown for controller01)

 
openstack-config --set /etc/neutron/neutron.conf DEFAULT bind_host 192.168.5.10

openstack-config --set /etc/neutron/neutron.conf database connection mysql+pymysql://neutron:密码@controller/neutron

openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2

openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins

 

openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend rabbit

openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_hosts controller01:5672,controller02:5672,controller03:5672

openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_ha_queues true

openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_retry_interval 1

openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_retry_backoff 2

openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_max_retries 0

openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_durable_queues true

openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_userid openstack

openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_password 密码

 

openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone

openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller:5000

openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://controller:35357

openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers controller01:11211,controller02:11211,controller03:11211

openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password

openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default

openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default

openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service

openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron

openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password 密码

 

openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_status_changes True

openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_data_changes True

openstack-config --set /etc/neutron/neutron.conf nova auth_url http://controller:35357

openstack-config --set /etc/neutron/neutron.conf nova auth_type password

openstack-config --set /etc/neutron/neutron.conf nova project_domain_name default

openstack-config --set /etc/neutron/neutron.conf nova user_domain_name default

openstack-config --set /etc/neutron/neutron.conf nova region_name RegionOne

openstack-config --set /etc/neutron/neutron.conf nova project_name service

openstack-config --set /etc/neutron/neutron.conf nova username nova

openstack-config --set /etc/neutron/neutron.conf nova password 密码

openstack-config --set /etc/neutron/neutron.conf DEFAULT dhcp_agents_per_network 3

 

openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp

 

 

2)   Configure the ML2 plug-in: edit /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]

type_drivers = flat,vlan

tenant_network_types =

mechanism_drivers = openvswitch,l2population

extension_drivers = port_security

[ml2_type_flat]

flat_networks = external

[ml2_type_gre]

[ml2_type_vlan]

network_vlan_ranges = upstream:1:4090,downstream:1:4090

[ml2_type_vxlan]

[securitygroup]

enable_security_group = True

firewall_driver = iptables_hybrid

 

 

3)   Configure Open vSwitch: edit /etc/neutron/plugins/ml2/openvswitch_agent.ini

[agent]

l2_population = True

[securitygroup]

firewall_driver = iptables_hybrid

enable_security_group = True

enable_ipset = true

 

 

4) Configure the /etc/neutron/metadata_agent.ini file

[DEFAULT]

nova_metadata_ip = controller

metadata_proxy_shared_secret = 密码

 

 

5) Link the ML2 configuration file

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

 

13.1.6. Initialize the database

su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

 

13.1.7. Create the cluster resources

pcs resource create neutron-server systemd:neutron-server op start timeout=90 --clone interleave=true

pcs constraint order start openstack-keystone-clone then neutron-server-clone

 

pcs resource create neutron-scale ocf:neutron:NeutronScale --clone globally-unique=true clone-max=3 interleave=true

pcs constraint order start neutron-server-clone then neutron-scale-clone

 

pcs resource create neutron-ovs-cleanup ocf:neutron:OVSCleanup --clone interleave=true

pcs resource create neutron-netns-cleanup ocf:neutron:NetnsCleanup --clone interleave=true

pcs resource create neutron-openvswitch-agent systemd:neutron-openvswitch-agent --clone interleave=true

pcs resource create neutron-metadata-agent systemd:neutron-metadata-agent  --clone interleave=true

 

pcs constraint order start neutron-scale-clone then neutron-ovs-cleanup-clone

pcs constraint colocation add neutron-ovs-cleanup-clone with neutron-scale-clone

pcs constraint order start neutron-ovs-cleanup-clone then neutron-netns-cleanup-clone

pcs constraint colocation add neutron-netns-cleanup-clone with neutron-ovs-cleanup-clone

pcs constraint order start neutron-netns-cleanup-clone then neutron-openvswitch-agent-clone

pcs constraint colocation add neutron-openvswitch-agent-clone with neutron-netns-cleanup-clone

pcs constraint order start neutron-openvswitch-agent-clone then neutron-metadata-agent-clone

pcs constraint colocation add neutron-metadata-agent-clone with neutron-openvswitch-agent-clone

 

 

13.1.8. Kernel parameter configuration

1) Edit /etc/sysctl.conf and add the following

  

net.ipv4.ip_forward=1

net.ipv4.conf.all.rp_filter=0

net.ipv4.conf.default.rp_filter=0

net.ipv4.ip_nonlocal_bind = 1

net.ipv6.conf.all.disable_ipv6 = 1

 

2) Apply the changes

sysctl -p

 

13.2.     Network service configuration on the compute nodes

Perform the following on all compute nodes.

13.2.1. Edit /etc/sysctl.conf

1) vim /etc/sysctl.conf

net.ipv4.conf.all.rp_filter=0

net.ipv4.conf.default.rp_filter=0

net.bridge.bridge-nf-call-iptables=1

net.bridge.bridge-nf-call-ip6tables=1

 

2) Apply the changes

sysctl -p

 

13.2.2. Install the network components

yum install openstack-neutron-openvswitch -y

 

13.2.3. Configure the network components

Set the following options in /etc/neutron/neutron.conf:

openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend rabbit

openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_hosts controller01:5672,controller02:5672,controller03:5672

openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_ha_queues true

openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_retry_interval 1

openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_retry_backoff 2

openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_max_retries 0

openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_durable_queues true

openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_userid openstack

openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_password 密码

 

openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone

openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller:5000

openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://controller:35357

openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers controller01:11211,controller02:11211,controller03:11211

openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password

openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default

openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default

openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service

openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron

openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password 密码

 

openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp

 

 

13.2.4. Configure the Open vSwitch agent (ML2)

Set the following options in /etc/neutron/plugins/ml2/openvswitch_agent.ini:

openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini securitygroup enable_security_group True

openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini securitygroup enable_ipset True

openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini securitygroup firewall_driver iptables_hybrid

openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini ovs bridge_mappings upstream:br-upstream,downstream:br-downstream

openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini agent l2_population True

 

13.2.5. Create the OVS bridges

ovs-vsctl add-br br-upstream

ovs-vsctl add-br br-downstream

vim /etc/sysconfig/network-scripts/ifcfg-ens9f0

TYPE=OVSPort

DEVICETYPE=ovs

OVS_BRIDGE=br-upstream

NAME=ens9f0

DEVICE=ens9f0

ONBOOT=yes

 

vim /etc/sysconfig/network-scripts/ifcfg-ens9f1

TYPE=OVSPort

DEVICETYPE=ovs

OVS_BRIDGE=br-downstream

NAME=ens9f1

DEVICE=ens9f1

ONBOOT=yes
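For the OVSPort files above to work with the network scripts, each bridge also needs its own ifcfg file. A minimal sketch (no IP address is assigned to these provider bridges):

# /etc/sysconfig/network-scripts/ifcfg-br-upstream
DEVICE=br-upstream
DEVICETYPE=ovs
TYPE=OVSBridge
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-br-downstream
DEVICE=br-downstream
DEVICETYPE=ovs
TYPE=OVSBridge
ONBOOT=yes
BOOTPROTO=none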

 

 

 

13.2.6. Start the services

 

systemctl restart openstack-nova-compute.service

systemctl enable neutron-openvswitch-agent.service

systemctl start neutron-openvswitch-agent.service
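From a controller, confirm that the agents on the compute nodes have registered (the neutron CLI is still available in Mitaka):

source admin-openrc.sh
neutron agent-list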

 

 

14. Glance Shared Storage (NFS)

  1) Install the NFS server on controller02

      yum install nfs-utils rpcbind -y

    2) Create the shared directory

       mkdir /opt/glance/images/ -p

    3) Configure the NFS server

       vim /etc/exports

              /opt/glance/images/ 192.168.5.0/24(rw,no_root_squash,no_all_squash,sync)

      exportfs -r

    4) Start the NFS server

       systemctl enable rpcbind.service

    systemctl start rpcbind.service

    systemctl enable nfs-server.service

    systemctl start nfs-server.service

    5) Mount the NFS share on the control nodes

       mount -t nfs 192.168.5.11:/opt/glance/images/ /var/lib/glance/images/

    chown -R glance:glance /var/lib/glance/images/
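To make the mount persistent across reboots, an /etc/fstab entry can be added on each control node (a sketch; _netdev defers mounting until the network is up):

192.168.5.11:/opt/glance/images/   /var/lib/glance/images   nfs   defaults,_netdev   0 0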

 

 

 

 

 

 

 

 

 

     

 
