A summary of problems encountered in OpenStack production operations


1. Cold migration and flavor resize

# 1. Set up passwordless SSH trust for the nova user across compute nodes
usermod  -s /bin/bash nova
echo "NOVA_PASS"|passwd --stdin nova
su - nova
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
ssh-copy-id  nova@compute01
ssh-copy-id  nova@compute02
# 2. Allow resize and migration to the same host
openstack-config --set /etc/nova/nova.conf DEFAULT allow_resize_to_same_host true
openstack-config --set /etc/nova/nova.conf DEFAULT allow_migrate_to_same_host true
openstack-config --set /etc/nova/nova.conf DEFAULT resize_confirm_window 1
openstack-config --set /etc/nova/nova.conf DEFAULT scheduler_default_filters \
RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
# 3. Restart the compute service on each compute node
systemctl restart openstack-nova-compute.service
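With the trust and same-host flags in place, a cold migration or resize is driven from the CLI and then confirmed within resize_confirm_window. A sketch only; "myvm" and "m1.large" are placeholder names, and on newer clients the confirm step is spelled `openstack server resize confirm`:

```shell
# Cold-migrate an instance (the scheduler picks the destination host)
openstack server migrate myvm
openstack server resize --confirm myvm     # cold migration also ends in VERIFY_RESIZE

# Resize to a new flavor, then confirm the same way
openstack server resize --flavor m1.large myvm
openstack server resize --confirm myvm
```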

2. Restore instance state after a compute host reboots

openstack-config --set /etc/nova/nova.conf DEFAULT resume_guests_state_on_host_boot true
systemctl restart openstack-nova-compute.service

3. Build of instance aborted: Volume did not finish being created even after we waited 191 seconds or 61 attempts. And its status is downloading.

# Fix: nova.conf has a parameter controlling block-device allocation retries, block_device_allocate_retries.
# Its default of 60 matches the "61 attempts" in the error message (retries + 1). Raising it, e.g. to 180,
# keeps Nova waiting long enough for the volume to finish creating instead of timing out.
# Restart the Nova services afterwards for the change to take effect.
openstack-config --set /etc/nova/nova.conf DEFAULT block_device_allocate_retries 180
systemctl restart openstack-nova-compute.service
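As a sanity check on the new value: the total wait is roughly (retries + 1) attempts times block_device_allocate_retries_interval, which defaults to 3 seconds; that is where the original "61 attempts" over ~191 seconds came from. Illustrative arithmetic only:

```shell
retries=180   # block_device_allocate_retries as set above
interval=3    # assumes the default block_device_allocate_retries_interval
attempts=$((retries + 1))
max_wait=$((attempts * interval))
echo "up to $attempts attempts, ~$max_wait seconds"
```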

4. Nova scheduler: Host has more disk space than database expected.

# Fix: capacity shortfall; configure resource overcommit, i.e. adjust the
# cpu_allocation_ratio, ram_allocation_ratio and disk_allocation_ratio parameters.
openstack-config --set /etc/nova/nova.conf DEFAULT cpu_allocation_ratio 4
openstack-config --set /etc/nova/nova.conf DEFAULT ram_allocation_ratio 1.5
openstack-config --set /etc/nova/nova.conf DEFAULT disk_allocation_ratio 2
openstack-config --set /etc/nova/nova.conf DEFAULT reserved_host_memory_mb 2048
openstack-config --set /etc/nova/nova.conf DEFAULT reserved_host_disk_mb 20480
systemctl restart openstack-nova-compute.service
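For a sense of what those ratios buy, schedulable capacity is roughly the physical resources times the ratio, minus reservations. A sketch with assumed host sizes (32 cores and 128 GB RAM are made-up values, the ratios are the ones set above):

```shell
cores=32; host_ram_mb=131072   # assumed host hardware
cpu_ratio=4                    # cpu_allocation_ratio from above
ram_ratio_tenths=15            # ram_allocation_ratio 1.5, kept in integer math
reserved_mb=2048               # reserved_host_memory_mb from above

vcpus=$((cores * cpu_ratio))
ram_mb=$(( (host_ram_mb - reserved_mb) * ram_ratio_tenths / 10 ))
echo "schedulable vCPUs: $vcpus"
echo "schedulable RAM:   ${ram_mb} MB"
```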

# Hyper-threading queries, for reference:
# 1. Number of logical CPUs:
grep -c processor /proc/cpuinfo
# 2. Number of physical CPU sockets:
grep 'physical id' /proc/cpuinfo |sort -u|wc -l
# 3. siblings = logical CPUs per physical CPU
grep 'siblings' /proc/cpuinfo
# 4. cpu cores = physical cores per physical CPU
grep 'cpu cores' /proc/cpuinfo
# If siblings equals cpu cores, hyper-threading is unsupported or disabled.
# If siblings is twice cpu cores, hyper-threading is supported and enabled.
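The siblings/cores comparison can be scripted. To keep the example runnable anywhere, it parses a canned cpuinfo fragment; on a real compute node, read /proc/cpuinfo instead:

```shell
# Canned stand-in for /proc/cpuinfo (siblings=8, cpu cores=4 => HT enabled)
cpuinfo='physical id : 0
siblings : 8
cpu cores : 4'
siblings=$(printf '%s\n' "$cpuinfo" | awk -F': ' '/^siblings/ {print $2; exit}')
cores=$(printf '%s\n' "$cpuinfo" | awk -F': ' '/^cpu cores/ {print $2; exit}')
if [ "$siblings" -eq $((cores * 2)) ]; then ht=enabled; else ht=disabled; fi
echo "hyper-threading: $ht"
```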

5. Failed to allocate the network(s), not rescheduling.

# Fix: caused by a Neutron VIF-plugging timeout. The settings below stop Nova treating it
# as fatal (note this also masks genuine plugging failures).
openstack-config --set /etc/nova/nova.conf DEFAULT vif_plugging_is_fatal false
openstack-config --set /etc/nova/nova.conf DEFAULT vif_plugging_timeout 0
systemctl restart openstack-nova-compute.service

6. AMQPLAIN login refused: user 'openstack' - invalid credentials.

# Fix: usually a wrong RabbitMQ username/password. Check the cell_mappings table in the
# nova_api database for the transport URL (and thus the credentials) that Nova actually uses.
mysql -uroot -p
MariaDB [(none)]> use nova_api;
MariaDB [nova_api]> select transport_url from cell_mappings where name="cell1";
MariaDB [nova_api]> \q

7. Whitelisting a VM's bridged NIC MAC address (same principle as allowing a Keepalived VIP)

neutron port-list
neutron port-show b9d47bd7-04e7-4bba-8c6a-7bcae212407f
neutron port-update b9d47bd7-04e7-4bba-8c6a-7bcae212407f --allowed-address-pairs ip_address=172.18.1.0/24,mac_address=fa:16:3e:aa:15:a0
neutron port-update --no-allowed-address-pairs b9d47bd7-04e7-4bba-8c6a-7bcae212407f
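The neutron CLI used above is deprecated; on recent clients the same allowed-address-pairs update can be expressed with `openstack port set` (the port ID is the one shown above):

```shell
openstack port set b9d47bd7-04e7-4bba-8c6a-7bcae212407f \
  --allowed-address ip-address=172.18.1.0/24,mac-address=fa:16:3e:aa:15:a0
# Clear all allowed address pairs again:
openstack port set --no-allowed-address b9d47bd7-04e7-4bba-8c6a-7bcae212407f
```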

8. UnicodeEncodeError: 'ascii' codec can't encode characters in position 257-260: ordinal not in range(128)

# Fix: live migration fails and the compute node's nova-compute log shows the UnicodeEncodeError
# above. The cause is that Python 2.7's default encoding is ASCII rather than utf-8; forcing
# the default encoding works around it.
cat >/usr/lib/python2.7/site-packages/sitecustomize.py<<EOF
import sys
reload(sys)
sys.setdefaultencoding('utf8')
EOF
systemctl restart openstack-nova-compute.service

9. Repairing instances stuck in a wrong state

openstack server list --all-projects --host compute01 |awk '{print $2}' |grep -Ev '^$|ID' |xargs -n1 nova reset-state --active
openstack server list --all-projects --host compute01 |awk '{print $2}' |grep -Ev '^$|ID' |xargs -n1 openstack server stop
openstack server list --all-projects --host compute01 |awk '{print $2}' |grep -Ev '^$|ID' |xargs -n1 openstack server start
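The awk/grep pipeline above just pulls the ID column out of the table output; it can be sanity-checked offline against a canned sample of `openstack server list` output:

```shell
# Canned sample of the table format emitted by "openstack server list"
sample='+--------------------------------------+------+
| ID                                   | Name |
+--------------------------------------+------+
| 4f1b3c1e-0000-4000-8000-000000000001 | vm1  |
| 4f1b3c1e-0000-4000-8000-000000000002 | vm2  |
+--------------------------------------+------+'
# Border rows yield an empty second field and the header yields "ID"; both are filtered out
ids=$(printf '%s\n' "$sample" | awk '{print $2}' | grep -Ev '^$|ID')
echo "$ids"
```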

10. Ceph exclusive-lock feature preventing instances from starting

# Feature names are passed to rbd space-separated; disabling them releases the exclusive lock
for i in $(rbd ls -p volumes); do rbd feature disable volumes/$i exclusive-lock object-map fast-diff deep-flatten; done
for i in $(rbd ls -p vms); do rbd feature disable vms/$i exclusive-lock object-map fast-diff deep-flatten; done
for i in $(rbd ls -p volumesSSD); do rbd feature disable volumesSSD/$i exclusive-lock object-map fast-diff deep-flatten; done
openstack server list --all-projects --host compute01 |awk '{print $2}' |grep -Ev '^$|ID' |xargs -n1 openstack server reboot --hard


11. Resetting RabbitMQ / RabbitMQ fails to start

# On controller01, controller02 and controller03: clear the state and restart the service
systemctl stop rabbitmq-server.service
rm -rf /var/lib/rabbitmq/mnesia/*
systemctl restart rabbitmq-server.service
# On controller02 and controller03: join the cluster
rabbitmqctl stop_app
rabbitmqctl join_cluster --ram rabbit@controller01
rabbitmqctl start_app
# On controller01: enable the web management plugin, create the user, grant permissions, check cluster status
rabbitmq-plugins enable rabbitmq_management
rabbitmqctl add_user openstack RABBIT_PASS
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
rabbitmqctl cluster_status
rabbitmqctl list_queues

12. Deleting instances stuck in BUILD state from the database

# Root cause: stale rows left in the database when message-queue traffic fails during cell
# mapping or scheduling. Replace $UUID with the instance's UUID before running.
DELETE FROM nova.instance_extra WHERE instance_extra.instance_uuid = '$UUID';
DELETE FROM nova.instance_faults WHERE instance_faults.instance_uuid = '$UUID';
DELETE FROM nova.instance_id_mappings WHERE instance_id_mappings.uuid = '$UUID';
DELETE FROM nova.instance_info_caches WHERE instance_info_caches.instance_uuid = '$UUID';
DELETE FROM nova.instance_system_metadata WHERE instance_system_metadata.instance_uuid = '$UUID';
DELETE FROM nova.security_group_instance_association WHERE security_group_instance_association.instance_uuid = '$UUID';
DELETE FROM nova.block_device_mapping WHERE block_device_mapping.instance_uuid = '$UUID';
DELETE FROM nova.fixed_ips WHERE fixed_ips.instance_uuid = '$UUID';
DELETE FROM nova.instance_actions_events WHERE instance_actions_events.action_id in (SELECT id from nova.instance_actions where instance_actions.instance_uuid = '$UUID');
DELETE FROM nova.instance_actions WHERE instance_actions.instance_uuid = '$UUID';
DELETE FROM nova.virtual_interfaces WHERE virtual_interfaces.instance_uuid = '$UUID';
DELETE FROM nova.instances WHERE instances.uuid = '$UUID';
# DELETE FROM nova_api.build_requests WHERE request_spec_id = '$UUID';
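`$UUID` above is a placeholder. The statements can be generated for a concrete instance with a small helper and then piped into mysql. `nova_purge_sql` is a hypothetical name; it only prints SQL for the subset of the tables above that key directly on instance_uuid (the nested instance_actions_events cleanup still needs to run separately), and the output should be reviewed before executing it against a production database:

```shell
# Hypothetical helper: print (not execute) cleanup SQL for one instance UUID
nova_purge_sql() {
  uuid="$1"
  for t in instance_extra instance_faults instance_info_caches \
           instance_system_metadata block_device_mapping virtual_interfaces; do
    echo "DELETE FROM nova.$t WHERE instance_uuid = '$uuid';"
  done
  echo "DELETE FROM nova.instances WHERE uuid = '$uuid';"
}
nova_purge_sql 4f1b3c1e-0000-4000-8000-000000000001
# Review the output, then apply with: nova_purge_sql <uuid> | mysql -uroot -p
```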

13. Growing a partition with growpart and extending the logical volume

# If the physical disk backing a logical volume is partitioned and the disk is grown in place,
# there are two ways to extend the LV: create a new partition and add it to the volume group,
# or grow the last partition in place and then extend the LV. The second approach:

curl -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
yum install cloud-utils-growpart xfsprogs -y

# Grow a partition in place: "2" selects the second partition of sda
growpart /dev/sda 2
partprobe

pvresize /dev/sda2
lvextend -l +100%FREE /dev/centos/root
xfs_growfs /dev/centos/root
# resize2fs /dev/centos/root   # ext4 equivalent of xfs_growfs

14. Extending swap and mounting a shared folder

mkswap /dev/sdb
swapon /dev/sdb
echo "UUID=1ad7660d-d6a4-4d73-95ac-c8ea359e7988 swap swap defaults 0 0" >>/etc/fstab  # use the UUID reported by blkid /dev/sdb

# vmware-hgfsclient lists the folders shared with the guest
# vmhgfs-fuse mounts a shared folder
vmhgfs-fuse .host:/OpenStack /mnt -o subtype=vmhgfs-fuse,allow_other
echo ".host:/OpenStack /mnt/hgfs fuse.vmhgfs-fuse allow_other,defaults 0 0" >>/etc/fstab
posted @ 2022-06-07 19:38  wanghongwei-dev