
Openstack Component Deployment — Nova: Install and Configure a Compute Node

2016-06-30 01:13  云物互联

Contents

Previous Posts

Openstack Component Deployment — Overview and Environment Preparation
Openstack Component Deployment — Environment of Controller Node
Openstack Component Deployment — Keystone Features and the Authentication Flow
Openstack Component Deployment — Keystone Install & Create service entity and API endpoints
Openstack Component Deployment — Keystone (domain, projects, users, and roles)
Openstack Component Internals — Keystone Authentication
Openstack Component Deployment — Glance Install
Openstack Component Internals — Glance Architecture (V1/V2)
Openstack Component Deployment — Nova overview
Openstack Component Deployment — Nova: Install and Configure the Controller Node

Prerequisites

Starting with this post, the Openstack component deployment enters the multi-node stage, so let's first revisit the network topology we drew up at the beginning.

[Figure: Network topology]

IP Address Config:
[Figure: IP address configuration table]

For a multi-node deployment, first make sure the nodes can communicate with each other and resolve one another's hostnames, and, as recommended, go back through Openstack Component Deployment — Overview and Environment Preparation to initialize the deployment environment on each node.

Step 1. Disable the firewall services

systemctl mask iptables.service
systemctl mask ip6tables.service
systemctl mask ebtables.service
systemctl mask firewalld.service 

Step 2. Set the hostname

hostnamectl set-hostname compute1.jmilk.com

Step 3. Disable SELinux
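A minimal sketch of the usual CentOS 7 procedure for this step (switching SELinux to permissive both immediately and across reboots):

# Take effect immediately for the current boot
setenforce 0
# Persist the setting across reboots
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config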

Step 4. Set a static IP according to the IP Address Config table

nmcli connection modify eth0 ipv4.addresses 192.168.1.10/24 ipv4.gateway 192.168.1.1 ipv4.dns 192.168.1.5 ipv4.method manual
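To apply and verify the new settings (assuming the connection is named eth0, as above):

nmcli connection up eth0
ip addr show eth0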

Note: when we need to reach the Internet to download RDO packages, we have to point DNS at an external DNS server. Example:
vim /etc/resolv.conf

search jmilk.com
nameserver 202.106.195.68
nameserver 202.106.46.151

Step 5. Install the OpenStack prerequisite packages

#1. Install the yum-plugin-priorities package to keep packages from high-priority repositories from being overridden by low-priority ones
yum install yum-plugin-priorities 

#2. Install the EPEL repository, a high-quality add-on package source for the RHEL family; the release version in the URL may change over time
yum install http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-7.noarch.rpm 

#3. Install the extras repository and the RDO repository
yum install centos-release-openstack-mitaka
yum install https://rdoproject.org/repos/rdo-release.rpm

#4. Update the system
yum update -y

#5. Reboot the system
reboot

#6. Install openstack-selinux to manage SELinux policies for OpenStack automatically
yum install openstack-selinux

#7. Install the OpenStack client
yum install python-openstackclient -y

Step 6. Configure a DNS service, or edit the hosts file to add name-resolution records for every node in the network topology.
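For example, based on the IP Address Config above, /etc/hosts on every node would contain entries along these lines (the short-name aliases are an assumption):

192.168.1.5    controller.jmilk.com controller
192.168.1.10   compute1.jmilk.com compute1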

Step 7. Configure the NTP client for time synchronization
vim /etc/chrony.conf

# Comment out all other lines beginning with server, then add the line below to use the NTP server on the Controller Node
server controller.jmilk.com iburst

Enable and start the NTP service:

systemctl enable chronyd.service
systemctl start chronyd.service
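To verify that the node is syncing against the controller (chronyc ships with the chrony package):

chronyc sources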

Install and configure a compute node

From the official guide: This section describes how to install and configure the Compute service on a compute node. The service supports several hypervisors to deploy instances or VMs. For simplicity, this configuration uses the QEMU hypervisor with the KVM extension on compute nodes that support hardware acceleration for virtual machines. On legacy hardware, this configuration uses the generic QEMU hypervisor. You can follow these instructions with minor modifications to horizontally scale your environment with additional compute nodes.

Note:This section assumes that you are following the instructions in this guide step-by-step to configure the first compute node. If you want to configure additional compute nodes, prepare them in a similar fashion to the first compute node in the example architectures section. Each additional compute node requires a unique IP address.

Install the packages


yum install openstack-nova-compute

Edit the /etc/nova/nova.conf file

In the [DEFAULT] and [oslo_messaging_rabbit] sections, configure RabbitMQ message queue access:
vim /etc/nova/nova.conf

[DEFAULT]
rpc_backend = rabbit

[oslo_messaging_rabbit]
rabbit_host = controller.jmilk.com
rabbit_userid = openstack
rabbit_password = fanguiju
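Since an unreachable message queue is the most common failure later on (see the troubleshooting section below), a quick reachability check from the compute node doesn't hurt (assuming the nmap-ncat package, which provides nc, is installed):

nc -zv controller.jmilk.com 5672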

In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:

[DEFAULT]
auth_strategy = keystone

[keystone_authtoken]
auth_uri = http://controller.jmilk.com:5000
auth_url = http://controller.jmilk.com:35357
memcached_servers = controller.jmilk.com:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = fanguiju

Note: Comment out or remove any other options in the [keystone_authtoken] section.

In the [DEFAULT] section, configure the my_ip option:

[DEFAULT]
my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS

Note:Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network interface on your compute node.
Example:

[DEFAULT]
my_ip = 192.168.1.10

In the [DEFAULT] section, enable support for the Networking service:

[DEFAULT]
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver

Note:By default, Compute uses an internal firewall service. Since Networking includes a firewall service, you must disable the Compute firewall service by using the nova.virt.firewall.NoopFirewallDriver firewall driver.

In the [vnc] section, enable and configure remote console access:

[vnc]
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://controller.jmilk.com:6080/vnc_auto.html

The server component listens on all IP addresses and the proxy component only listens on the management interface IP address of the compute node. The base URL indicates the location where you can use a web browser to access remote consoles of instances on this compute node.
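As a usage sketch: once an instance is running on this node, its console URL can be fetched on the controller with the OpenStack client installed earlier (the instance name demo-instance1 is hypothetical):

openstack console url show demo-instance1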

Note:If the web browser to access remote consoles resides on a host that cannot resolve the controller hostname, you must replace controller with the management interface IP address of the controller node.

In the [glance] section, configure the location of the Image service API:

[glance]
api_servers = http://controller.jmilk.com:9292

In the [oslo_concurrency] section, configure the lock path:

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

Finalize installation

Determine whether your compute node supports hardware acceleration for virtual machines:

egrep -c '(vmx|svm)' /proc/cpuinfo

If this command returns a value of one or greater, your compute node supports hardware acceleration which typically requires no additional configuration.

If this command returns a value of zero, your compute node does not support hardware acceleration and you must configure libvirt to use QEMU instead of KVM.
Example:

[root@compute1 ~]# egrep -c '(vmx|svm)' /proc/cpuinfo
0
  • Edit the [libvirt] section in the /etc/nova/nova.conf file as follows to enable QEMU virtualization:
    vim /etc/nova/nova.conf
[libvirt]
virt_type = qemu

Start the Compute service including its dependencies and configure them to start automatically when the system boots:

systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service
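To confirm the compute node registered itself, the verification step from the Mitaka guide can be run on the Controller Node (with admin credentials loaded); nova-compute should be listed with state "up":

openstack compute service list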

ERROR: starting openstack-nova-compute.service failed
Troubleshooting

  • Check the log: vim /var/log/nova/nova-compute.log
2016-06-30 10:56:01.802 2846 ERROR oslo.messaging._drivers.impl_rabbit [req-b850e200-ae71-47e0-97e0-e48810633ccd - - - - -] AMQP server on controller.jmilk.com:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 32 seconds.

The log shows that AMQP cannot connect on port 5672; the message-queue solution we use is RabbitMQ.

  • On the Controller Node, check the listening ports. If port 5672 is not open, check whether rabbitmq-server.service is running properly.
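A minimal check for both (standard CentOS 7 tools):

netstat -uptan | grep 5672
systemctl status rabbitmq-server.service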

  • On the Controller Node, check RabbitMQ's running status

[root@controller ~]# rabbitmqctl status
Status of node rabbit@controller ...
Error: unable to connect to node rabbit@controller: nodedown

DIAGNOSTICS:
===========

attempted to contact: [rabbit@controller]

rabbit@controller:
  * connected to epmd (port 4369) on controller
  * epmd reports: node 'rabbit' not running at all
                  no other nodes on controller
  * suggestion: start the node

current node details:
- node name: 'rabbitmq-cli-25@controller'
- home dir: /var/lib/rabbitmq
- cookie hash: +lRccUmBW2uDMa6+zRfabA==

The output above indicates that rabbitmq-server.service hit a nodedown error.

  • Edit the /etc/hosts file (on both the Controller and Compute1 nodes).
    Every node in the Openstack cluster needs to reach the others, so each node's hosts file should contain resolution records for all nodes in the cluster. In addition, rabbitmq-server.service uses the hostname controller to reach the Controller Node, so an extra record for the short name controller is required.
    vim /etc/hosts
# Add the resolution record: (Controller Node IP Address) 192.168.1.5 ==> controller
192.168.1.5 controller.jmilk.com controller
  • On the Controller Node, restart rabbitmq-server.service and check whether the port is now open.
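Restart the service first; the port check with netstat follows:

systemctl restart rabbitmq-server.service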
[root@controller ~]# netstat -uptan | grep beam
tcp        0      0 0.0.0.0:25672           0.0.0.0:*               LISTEN      4382/beam.smp       
tcp        0      0 127.0.0.1:40823         127.0.0.1:4369          ESTABLISHED 4382/beam.smp       
tcp6       0      0 :::5672                 :::*                    LISTEN      4382/beam.smp       
tcp6       0      0 127.0.0.1:5672          127.0.0.1:59724         ESTABLISHED 4382/beam.smp       
tcp6       0      0 127.0.0.1:5672          127.0.0.1:59723         ESTABLISHED 4382/beam.smp       
tcp6       0      0 127.0.0.1:5672          127.0.0.1:59722         ESTABLISHED 4382/beam.smp       
tcp6       0      0 192.168.1.5:5672        192.168.1.10:44852      ESTABLISHED 4382/beam.smp       
tcp6       0      0 127.0.0.1:5672          127.0.0.1:59725         ESTABLISHED 4382/beam.smp       
tcp6       0      0 127.0.0.1:5672          127.0.0.1:59726         ESTABLISHED 4382/beam.smp       
tcp6       0      0 192.168.1.5:5672        192.168.1.10:44858      ESTABLISHED 4382/beam.smp       
tcp6       0      0 192.168.1.5:5672        192.168.1.10:44854      ESTABLISHED 4382/beam.smp       
tcp6       0      0 127.0.0.1:5672          127.0.0.1:59721         ESTABLISHED 4382/beam.smp       
tcp6       0      0 192.168.1.5:5672        192.168.1.10:44860      ESTABLISHED 4382/beam.smp   

Port 5672 is now open.

  • On the Compute Node, restart openstack-nova-compute.service.
    If openstack-nova-compute.service still fails to start and reports the same error, run the following on the Controller Node to open port 5672 through the firewall.
systemctl restart iptables.service
iptables -I INPUT -p tcp --dport 5672 -j ACCEPT
service iptables save
systemctl restart iptables.service
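One caveat, assuming the Controller Node was initialized the same way as Step 1 above: a masked iptables.service must be unmasked before it can be restarted.

systemctl unmask iptables.service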