Deploying OpenStack Havana on Ubuntu 12.04 Server [OVS+GRE] (Part 3): Installing the Compute Node

 

Series: Deploying OpenStack Havana on Ubuntu 12.04 Server [OVS+GRE]

 

Compute node:

     

1. Prepare the node

  • After installing Ubuntu 12.04 Server 64-bit, switch to root for the installation:
sudo su - 
  • Add the Havana repository:
#apt-get install python-software-properties
#add-apt-repository cloud-archive:havana
  • Upgrade the system:
apt-get update
apt-get upgrade
apt-get dist-upgrade
  • Install the NTP service:
apt-get install ntp
  • Configure NTP to sync time from the controller node:
 
sed -i 's/server 0.ubuntu.pool.ntp.org/#server 0.ubuntu.pool.ntp.org/g' /etc/ntp.conf
sed -i 's/server 1.ubuntu.pool.ntp.org/#server 1.ubuntu.pool.ntp.org/g' /etc/ntp.conf
sed -i 's/server 2.ubuntu.pool.ntp.org/#server 2.ubuntu.pool.ntp.org/g' /etc/ntp.conf
sed -i 's/server 3.ubuntu.pool.ntp.org/#server 3.ubuntu.pool.ntp.org/g' /etc/ntp.conf

#Point this compute node at your controller node
sed -i 's/server ntp.ubuntu.com/server 10.10.10.2/g' /etc/ntp.conf

service ntp restart
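The sed edits above can be dry-run before touching the real /etc/ntp.conf. A minimal sketch against a scratch copy (the /tmp path is only for illustration; 10.10.10.2 is the controller from this series, and the four pool substitutions are collapsed into one expression):

```shell
# Scratch file with the stock Ubuntu NTP servers
cat > /tmp/ntp.conf.test <<'EOF'
server 0.ubuntu.pool.ntp.org
server 1.ubuntu.pool.ntp.org
server 2.ubuntu.pool.ntp.org
server 3.ubuntu.pool.ntp.org
server ntp.ubuntu.com
EOF

# Same substitutions as above: comment out the pool servers,
# then point the node at the controller
sed -i 's/server [0-3]\.ubuntu\.pool\.ntp\.org/#&/' /tmp/ntp.conf.test
sed -i 's/server ntp\.ubuntu\.com/server 10.10.10.2/' /tmp/ntp.conf.test

# Only the controller should survive as an active server line
grep '^server' /tmp/ntp.conf.test   # prints: server 10.10.10.2
```

On the node itself, `ntpq -p` after the restart should list 10.10.10.2 as the only peer.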
 

 

2. Configure the network

  • Configure /etc/network/interfaces as follows:

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet static
address 10.10.10.4
netmask 255.255.255.0

auto eth0:1
iface eth0:1 inet static
address 10.20.20.4
netmask 255.255.255.0

 

#Packages are still downloaded via apt-get, so this interface needs an IP with outbound Internet access; once installation and testing are done, it can be brought down.

auto eth0:2     
iface eth0:2 inet static
address 192.168.122.4
netmask 255.255.255.0
gateway 192.168.122.1
dns-nameservers 192.168.122.1

  • Enable IP forwarding:
sed -i 's/#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/' /etc/sysctl.conf
sysctl -p
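As a sanity check, the same substitution can be tried on a scratch copy first (the /tmp path is only illustrative):

```shell
# The stock Ubuntu sysctl.conf ships the forwarding knob commented out
echo '#net.ipv4.ip_forward=1' > /tmp/sysctl.conf.test

# Same substitution as above: strip the leading '#'
sed -i 's/#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/' /tmp/sysctl.conf.test

cat /tmp/sysctl.conf.test   # prints: net.ipv4.ip_forward=1
```

On the live node, `sysctl net.ipv4.ip_forward` should report 1 after `sysctl -p`.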

3. KVM

  • Make sure virtualization is enabled for your hardware. If it is not supported, that is fine: this setup runs inside KVM virtual machines anyway, so we just set the hypervisor to QEMU later:
apt-get install cpu-checker
kvm-ok
  • Install KVM and configure it:
apt-get install -y kvm libvirt-bin pm-utils
  • Enable the cgroup_device_acl array in the /etc/libvirt/qemu.conf config file:
cgroup_device_acl = [
"/dev/null", "/dev/full", "/dev/zero",
"/dev/random", "/dev/urandom",
"/dev/ptmx", "/dev/kvm", "/dev/kqemu",
"/dev/rtc", "/dev/hpet","/dev/net/tun"
]
  • Delete the default virtual bridge:
virsh net-destroy default
virsh net-undefine default
  • Update the /etc/libvirt/libvirtd.conf config file:
listen_tls = 0
listen_tcp = 1
auth_tcp = "none"
  • Edit the libvirtd_opts variable in the /etc/init/libvirt-bin.conf config file:
env libvirtd_opts="-d -l"
  • Edit /etc/default/libvirt-bin; create the file if it does not exist:
libvirtd_opts="-d -l"
  • Restart the libvirt service so the changes take effect:
service libvirt-bin restart
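To confirm the three libvirtd.conf settings really landed, grep for them as exact lines. Sketched here against a scratch copy (on the node, run the same greps against /etc/libvirt/libvirtd.conf, and check that the daemon is listening on its TCP port, 16509 by default):

```shell
# Scratch copy holding the settings from this section
cat > /tmp/libvirtd.conf.test <<'EOF'
listen_tls = 0
listen_tcp = 1
auth_tcp = "none"
EOF

# Each option must be present as an uncommented line
for opt in 'listen_tls = 0' 'listen_tcp = 1' 'auth_tcp = "none"'; do
    grep -qx "$opt" /tmp/libvirtd.conf.test && echo "ok: $opt"
done
```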

 

4. OpenVSwitch

  • Install the Open vSwitch packages:
apt-get install openvswitch-switch openvswitch-datapath-dkms openvswitch-datapath-source
module-assistant auto-install openvswitch-datapath
service openvswitch-switch restart
  • Create the integration bridge:
ovs-vsctl add-br br-int

 

5. Neutron

  • Install the Neutron Open vSwitch agent:
apt-get install neutron-plugin-openvswitch-agent
  • Edit the OVS config file /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini:
 
[OVS]
tenant_network_type = gre
tunnel_id_ranges = 1:1000
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = 10.20.20.4
enable_tunneling = True
    
#Firewall driver for the neutron security group function
[SECURITYGROUP]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
 
  • Edit /etc/neutron/neutron.conf:
 
rabbit_host = 10.10.10.2

[keystone_authtoken]
auth_host = 10.10.10.2
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = neutron
admin_password = admin
signing_dir = /var/lib/neutron/keystone-signing

[database]
connection = mysql://neutronUser:neutronPass@10.10.10.2/neutron
 
  • Restart the agent:
service neutron-plugin-openvswitch-agent restart

 

6. Nova

  • Install the Nova compute packages:
apt-get install nova-compute-kvm python-guestfs
  • Note: if your host machine does not support KVM virtualization, replace nova-compute-kvm with nova-compute-qemu,
  • and also set libvirt_type=qemu in the /etc/nova/nova-compute.conf config file.
  • Update the auth settings in the /etc/nova/api-paste.ini config file:
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = 10.10.10.2
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = admin
signing_dirname = /tmp/keystone-signing-nova
# Workaround for https://bugs.launchpad.net/nova/+bug/1154809
auth_version = v2.0
  • Edit /etc/nova/nova.conf:

[DEFAULT]
# This file is configuration of nova
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/run/lock/nova
verbose=True
api_paste_config=/etc/nova/api-paste.ini
compute_scheduler_driver=nova.scheduler.simple.SimpleScheduler
nova_url=http://10.10.10.2:8774/v1.1/
sql_connection=mysql://novaUser:novaPass@10.10.10.2/nova
root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf

#Availability_zone
#default_availability_zone=fbw

# Auth
use_deprecated_auth=false
auth_strategy=keystone

# Rabbit MQ
my_ip=10.10.10.4
rabbit_host=10.10.10.2
rpc_backend = nova.rpc.impl_kombu

 

# Imaging service
glance_host=10.10.10.2
glance_api_servers=10.10.10.2:9292
image_service=nova.image.glance.GlanceImageService

# Vnc configuration
novnc_enabled=true
novncproxy_base_url=http://192.168.122.2:6080/vnc_auto.html
novncproxy_port=6080
vncserver_proxyclient_address=10.10.10.4 # different on every node; not the same as the controller
vncserver_listen=0.0.0.0

 

# Network settings
network_api_class=nova.network.neutronv2.api.API
neutron_url=http://10.10.10.2:9696
neutron_auth_strategy=keystone
neutron_admin_tenant_name=service
neutron_admin_username=neutron
neutron_admin_password=admin
neutron_admin_auth_url=http://10.10.10.2:35357/v2.0
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver

#If you want Neutron + Nova Security groups
firewall_driver=nova.virt.firewall.NoopFirewallDriver
security_group_api=neutron

#If you want Nova Security groups only, comment the two lines above and uncomment line -1-.
#-1-firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver

#Metadata
service_neutron_metadata_proxy = true
neutron_metadata_proxy_shared_secret = helloOpenStack

# Compute #
compute_driver=libvirt.LibvirtDriver

# Cinder #
volume_api_class=nova.volume.cinder.API
osapi_volume_listen_port=5900

 

# Ceilometer #
instance_usage_audit=True
instance_usage_audit_period=hour
notify_on_state_change=vm_and_task_state
notification_driver=nova.openstack.common.notifier.rpc_notifier
notification_driver=ceilometer.compute.nova_notifier

  • Restart the nova-* services:
cd /etc/init.d/; for i in $( ls nova-* ); do service $i restart; done;cd $OLDPWD
  • Check that all Nova services started correctly:
nova-manage service list
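A healthy service shows ':-)' in the State column, while 'XXX' means it is down (or its clock drifts too far from the controller's, which is why the NTP step matters). A hypothetical sketch that fails loudly when any service is dead; it parses canned output here, but on the node you would pipe in `nova-manage service list` instead (hostnames are illustrative):

```shell
# Canned output in the nova-manage service list format
sample_output='Binary           Host      Zone  Status   State  Updated_At
nova-compute     compute1  nova  enabled  :-)    2014-04-11 02:54:01
nova-conductor   control   nova  enabled  :-)    2014-04-11 02:54:03'

# Count dead services; grep -c exits non-zero on zero matches, hence the || true
dead=$(printf '%s\n' "$sample_output" | grep -c 'XXX' || true)

if [ "$dead" -eq 0 ]; then
    echo "all services alive"
else
    echo "$dead service(s) down"
fi
```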

 

7. Install the compute agent for the metering service

  • Install the metering agent:
apt-get install ceilometer-agent-compute
  • The required /etc/nova/nova.conf settings were already made above:
...
[DEFAULT]
...
instance_usage_audit=True
instance_usage_audit_period=hour
notify_on_state_change=vm_and_task_state
notification_driver=nova.openstack.common.notifier.rpc_notifier
notification_driver=ceilometer.compute.nova_notifier
  • Restart the agent:
service ceilometer-agent-compute restart

 

       At this point the basic OpenStack services are all configured. Next you can create a basic virtual network, boot virtual machines, attach them to that network, and assign floating IPs reachable from the external network.

       As for creating the internal network, external network, and router with neutron, I recommend doing it straight from the dashboard UI. You can also follow Awy's blog post and create them with the neutron CLI, or consult the official documentation, so I will not repeat that here.

       To let the external network reach your virtual machines, remember to edit the security group rules and add rules allowing ICMP and SSH (TCP port 22).

       I hope these posts help you deploy a basic OpenStack environment. They also serve as a record for myself, so that I do not forget everything after a while and, when something breaks, have no idea which part was misconfigured.

 

 

        

posted @ 2014-04-11 10:54  登高行远