OpenStack: common errors and solutions
Problem 1:
libvirtd error: Unable to read from monitor: Connection reset by peer
[root@compute4 ~]# systemctl status libvirtd
● libvirtd.service - Virtualization daemon
Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2018-03-13 17:01:36 CST; 53min ago
Docs: man:libvirtd(8)
http://libvirt.org
Main PID: 4486 (libvirtd)
Tasks: 19 (limit: 7372)
CGroup: /system.slice/libvirtd.service
├─1760 /sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/
├─1761 /sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/
└─4486 /usr/sbin/libvirtd
Mar 13 17:01:36 compute4 systemd[1]: Starting Virtualization daemon...
Mar 13 17:01:36 compute4 systemd[1]: Started Virtualization daemon.
Mar 13 17:01:36 compute4 dnsmasq[1760]: read /etc/hosts - 9 addresses
Mar 13 17:01:36 compute4 dnsmasq[1760]: read /var/lib/libvirt/dnsmasq/default.addnhosts - 0 addresses
Mar 13 17:01:36 compute4 dnsmasq-dhcp[1760]: read /var/lib/libvirt/dnsmasq/default.hostsfile
Mar 13 17:03:46 compute4 libvirtd[4486]: libvirt version: 2.2.1, package: 3.fc25 (Fedora Project, 2017-08-04-20:47:2
Mar 13 17:03:46 compute4 libvirtd[4486]: hostname: compute4
Mar 13 17:03:46 compute4 libvirtd[4486]: Unable to read from monitor: Connection reset by peer
Mar 13 17:03:46 compute4 libvirtd[4486]: internal error: qemu unexpectedly closed the monitor: warning: host doesn't
warning: host doesn't support requested feature: CPUID.01H:EDX.acpi [bit 22
warning: host doesn't support requested feature: CPUID.01H:EDX.ht [bit 28]
warning: host doesn't support requested feature: CPUID.01H:EDX.tm [bit 29]
warning: host doesn't support requested feature: CPUID.01H:EDX.pbe [bit 31]
warning: host doesn't support requested feature: CPUID.01H:ECX.dtes64 [bit
warning: host doesn't support requested feature: CPUID.01H:ECX.monitor [bit
warning: host doesn't support requested feature: CPUID.01H:ECX.ds_cpl [bit
warning: host doesn't support requested feature: CPUID.01H:ECX.vmx [bit 5]
#Solution:
When the KVM host hits this problem, remove the stale managed-save state: in a terminal, run virsh managedsave-remove NameOfDomain
or simply delete all the affected virtual machines and recreate them.
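A minimal sketch of that cleanup, assuming the affected domain is named instance-00000001 (take the real name from virsh list --all):
#flag any domains that carry a managed-save image
virsh list --all --managed-save
#discard the stale saved state, then boot the domain fresh
virsh managedsave-remove instance-00000001
virsh start instance-00000001
If the CPUID warnings persist, the host may simply lack the CPU features nova requests for its guests; comparing the configured CPU model in /etc/nova/nova.conf ([libvirt] cpu_mode / cpu_model) against the output of virsh capabilities is a sensible next step.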
Problem 2: uploading an image fails with a missing auth URL
[root@controller1 ~]# openstack image create "Fedora-25" --file fedora25.qcow2 --disk-format qcow2 --container-format bare --public
Missing value auth-url required for auth plugin password
#Solution: the client has no credentials loaded; source the admin rc file first, then retry
[root@controller1 ~]# source admin.rc
[root@controller1 ~]# openstack image create "Fedora-25" --file fedora25.qcow2 --disk-format qcow2 --container-format bare --public
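For reference, a typical admin rc file for a deployment like this looks like the sketch below; the URL, password, and domain names are placeholders, not values taken from this cluster:
export OS_AUTH_URL=http://controller1:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS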
Problem 3: uploading an image returns HTTP 500
[root@controller1 ~]# openstack image create "Fedora-25" --file fedora25.qcow2 --disk-format qcow2 --container-format bare --public
500 Internal Server Error
The server has either erred or is incapable of performing the requested operation.
(HTTP 500)
#Solution: restart the web server that fronts the API
systemctl restart httpd
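Restarting httpd helps when the Glance API is served through it, but the underlying exception is worth finding first; a quick look, assuming the stock log location:
#the real error behind the 500 usually appears here
tail -n 50 /var/log/glance/api.log
#confirm the API answers after the restart
openstack image list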
Problem 4: instance creation fails because the requested memory is too large for the hosts.
[root@controller1 log]# tail -100 /var/log/nova/nova-conductor.log
2018-04-17 19:22:19.502 28123 INFO oslo_service.service [req-6cd4f485-1ada-4be4-88a3-1f2072f6152d - - - - -] Child 28277 killed by signal 15
2018-04-17 19:22:19.506 28123 INFO oslo_service.service [req-6cd4f485-1ada-4be4-88a3-1f2072f6152d - - - - -] Child 28276 killed by signal 15
...(the same "Child ... killed by signal 15" line repeats for the remaining workers)...
2018-04-17 19:22:23.529 22265 WARNING oslo_reports.guru_meditation_report [-] Guru meditation now registers SIGUSR1 and SIGUSR2 by default for backward compatibility. SIGUSR1 will no longer be registered in a future release, so please use SIGUSR2 to generate reports.
2018-04-17 19:22:23.551 22265 INFO oslo_service.service [req-2901d537-550d-4ff0-aece-abe27216dd96 - - - - -] Starting 24 workers
2018-04-17 19:22:23.561 22353 INFO nova.service [-] Starting conductor node (version 14.0.10-1.el7)
2018-04-17 19:22:23.561 22354 INFO nova.service [-] Starting conductor node (version 14.0.10-1.el7)
...(the same "Starting conductor node" line repeats for each of the 24 workers)...
2018-04-17 19:22:23.643 22265 WARNING oslo_config.cfg [req-2901d537-550d-4ff0-aece-abe27216dd96 - - - - -] Option "rpc_backend" from group "DEFAULT" is deprecated for removal. Its value may be silently ignored in the future.
2018-04-17 19:22:23.654 22265 WARNING oslo_config.cfg [req-2901d537-550d-4ff0-aece-abe27216dd96 - - - - -] Option "os_region_name" from group "barbican" is deprecated for removal. Its value may be silently ignored in the future.
2018-04-17 19:22:23.657 22265 WARNING oslo_config.cfg [req-2901d537-550d-4ff0-aece-abe27216dd96 - - - - -] Option "rabbit_max_retries" from group "oslo_messaging_rabbit" is deprecated for removal. Its value may be silently ignored in the future.
2018-04-17 19:22:23.664 22265 WARNING oslo_config.cfg [req-2901d537-550d-4ff0-aece-abe27216dd96 - - - - -] Option "vnc_enabled" from group "DEFAULT" is deprecated. Use option "enabled" from group "vnc".
2018-04-17 19:22:24.083 22357 WARNING oslo_config.cfg [req-386a3c69-7f54-4d2d-8b2f-0d9523af064a - - - - -] Option "rabbit_max_retries" from group "oslo_messaging_rabbit" is deprecated for removal. Its value may be silently ignored in the future.
...(the same deprecation warning repeats once per worker)...
2018-04-17 19:34:17.305 22359 WARNING nova.scheduler.utils [req-9e1cc224-477b-4538-b5e0-1fa5f31fbd8e d0a1f26e0cf84326a11092335859ea28 b74f6db7cb2d413789a017596b9659d2 - - -] Failed to compute_task_build_instances: No valid host was found. There are not enough hosts available.
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 199, in inner
return func(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/nova/scheduler/manager.py", line 104, in select_destinations
dests = self.driver.select_destinations(ctxt, spec_obj)
File "/usr/lib/python2.7/site-packages/nova/scheduler/filter_scheduler.py", line 74, in select_destinations
raise exception.NoValidHost(reason=reason)
NoValidHost: No valid host was found. There are not enough hosts available.
2018-04-17 19:34:17.306 22359 WARNING nova.scheduler.utils [req-9e1cc224-477b-4538-b5e0-1fa5f31fbd8e d0a1f26e0cf84326a11092335859ea28 b74f6db7cb2d413789a017596b9659d2 - - -] [instance: d84040c2-5f17-4cdf-a4df-ab069cfd50c1] Setting instance to ERROR state.
2018-04-17 19:54:14.966 22355 WARNING nova.scheduler.utils [req-9bad27c2-0875-42ac-90d9-011c950b5357 d0a1f26e0cf84326a11092335859ea28 b74f6db7cb2d413789a017596b9659d2 - - -] Failed to compute_task_build_instances: No valid host was found. There are not enough hosts available.
...(traceback identical to the one above)...
2018-04-17 19:54:14.967 22355 WARNING nova.scheduler.utils [req-9bad27c2-0875-42ac-90d9-011c950b5357 d0a1f26e0cf84326a11092335859ea28 b74f6db7cb2d413789a017596b9659d2 - - -] [instance: 7c7be8b8-e3c1-4b04-ba5e-8921c1991c69] Setting instance to ERROR state.
Solution:
When creating an instance, the memory requested must not exceed the physical memory available on the compute hosts.
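A quick way to compare the request against capacity (the flavor name m1.large is only an example):
#RAM the flavor asks for, in MB
openstack flavor show m1.large -c ram
#free RAM the hypervisors report, in MB
openstack hypervisor stats show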
Problem 5: creating a volume fails; the volume status shows Error.
First check the service status and read the error messages:
[root@controller1 nova]# systemctl status openstack-cinder-volume.service
● openstack-cinder-volume.service - Cluster Controlled openstack-cinder-volume
Loaded: loaded (/usr/lib/systemd/system/openstack-cinder-volume.service; enabled; vendor preset: disabled)
Drop-In: /run/systemd/system/openstack-cinder-volume.service.d
└─50-pacemaker.conf
Active: active (running) since Thu 2018-05-31 11:35:20 CST; 2h 39min ago
Main PID: 28849 (cinder-volume)
Tasks: 3 (limit: 7372)
CGroup: /system.slice/openstack-cinder-volume.service
├─28849 /usr/bin/python2 /usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --config-
├─28862 /usr/bin/python2 /usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --config-
└─28877 /usr/bin/python2 /usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --config-
May 31 14:13:48 controller1 cinder-volume[28849]: 2018-05-31 14:13:48.430 28862 ERROR cinder.service [-] Manager for
May 31 14:13:58 controller1 cinder-volume[28849]: 2018-05-31 14:13:58.125 28862 WARNING cinder.volume.manager [req-06
May 31 14:13:58 controller1 cinder-volume[28849]: 2018-05-31 14:13:58.318 28877 ERROR cinder.service [-] Manager for
May 31 14:13:58 controller1 cinder-volume[28849]: 2018-05-31 14:13:58.432 28862 ERROR cinder.service [-] Manager for
May 31 14:14:08 controller1 cinder-volume[28849]: 2018-05-31 14:14:08.320 28877 ERROR cinder.service [-] Manager for
May 31 14:14:08 controller1 cinder-volume[28849]: 2018-05-31 14:14:08.434 28862 ERROR cinder.service [-] Manager for
May 31 14:14:18 controller1 cinder-volume[28849]: 2018-05-31 14:14:18.331 28877 ERROR cinder.service [-] Manager for
May 31 14:14:18 controller1 cinder-volume[28849]: 2018-05-31 14:14:18.445 28862 ERROR cinder.service [-] Manager for
May 31 14:14:28 controller1 cinder-volume[28849]: 2018-05-31 14:14:28.338 28877 ERROR cinder.service [-] Manager for
May 31 14:14:28 controller1 cinder-volume[28849]: 2018-05-31 14:14:28.451 28862 ERROR cinder.service [-] Manager for
Solution:
#restart the cinder services
[root@controller1 nova]# systemctl restart openstack-cinder-api.service openstack-cinder-scheduler.service openstack-cinder-volume.service
#check the service status
[root@controller1 nova]# systemctl status openstack-cinder-api.service openstack-cinder-scheduler.service openstack-cinder-volume.service
#inspect the logs
[root@controller1 cinder]# cd /var/log/cinder
[root@controller1 cinder]# ll
total 12280
-rw-r--r-- 1 cinder cinder 9932654 May 31 14:24 api.log
-rw-r--r-- 1 cinder cinder 4828 May 30 22:25 cinder-manage.log
-rw-r--r-- 1 cinder cinder 29358 May 31 14:15 scheduler.log
-rw-r--r-- 1 cinder cinder 2371163 May 31 14:16 volume.log
[root@controller1 ~]# tail -f /var/log/cinder/volume.log
2018-05-31 14:32:52.736 39856 INFO cinder.volume.manager [req-c2042c1a-d8e1-4e28-8e0f-32ce1350cf65 - - - - -] Initializing RPC dependent components of volume driver RBDDriver (1.2.0)
2018-05-31 14:32:52.738 39887 INFO cinder.volume.manager [req-a0f0c986-7615-429c-9113-2ea17b616c73 - - - - -] Driver initialization completed successfully.
2018-05-31 14:32:52.753 39856 INFO cinder.volume.manager [req-c2042c1a-d8e1-4e28-8e0f-32ce1350cf65 - - - - -] Driver post RPC initialization completed successfully.
2018-05-31 14:32:52.786 39887 INFO cinder.volume.manager [req-a0f0c986-7615-429c-9113-2ea17b616c73 - - - - -] Initializing RPC dependent components of volume driver RBDDriver (1.2.0)
2018-05-31 14:32:52.802 39887 INFO cinder.volume.manager [req-a0f0c986-7615-429c-9113-2ea17b616c73 - - - - -] Driver post RPC initialization completed successfully.
2018-05-31 14:33:50.612 39856 INFO cinder.volume.drivers.rbd [req-22a6b317-76a1-420e-8f08-204c78052431 b1594d3a91544f638badc13984c73547 9c015b6512db422bacdb6f04ecd95c09 - default default] volume volume-5bdac639-6abd-439f-8990-53837bc08433 no longer exists in backend
2018-05-31 14:33:50.639 39856 WARNING cinder.quota [req-22a6b317-76a1-420e-8f08-204c78052431 b1594d3a91544f638badc13984c73547 9c015b6512db422bacdb6f04ecd95c09 - default default] Deprecated: Default quota for resource: volumes_ceph is set by the default quota flag: quota_volumes_ceph, it is now deprecated. Please use the default quota class for default quota.
2018-05-31 14:33:50.640 39856 WARNING cinder.quota [req-22a6b317-76a1-420e-8f08-204c78052431 b1594d3a91544f638badc13984c73547 9c015b6512db422bacdb6f04ecd95c09 - default default] Deprecated: Default quota for resource: gigabytes_ceph is set by the default quota flag: quota_gigabytes_ceph, it is now deprecated. Please use the default quota class for default quota.
2018-05-31 14:33:50.709 39856 INFO cinder.volume.manager [req-22a6b317-76a1-420e-8f08-204c78052431 b1594d3a91544f638badc13984c73547 9c015b6512db422bacdb6f04ecd95c09 - default default] Deleted volume successfully.
2018-05-31 14:34:07.533 39856 INFO cinder.volume.flows.manager.create_volume [req-9bc87e39-a993-4ea7-995f-13b26caca836 b1594d3a91544f638badc13984c73547 9c015b6512db422bacdb6f04ecd95c09 - default default] Volume 9ca14279-b82e-46f0-a082-b4e27914cba6: being created as image with specification: {'status': u'creating', 'image_location': (None, None), 'volume_size': 20, 'volume_name': 'volume-9ca14279-b82e-46f0-a082-b4e27914cba6', 'image_id': '2bbe4c10-2acc-40c4-98f4-447e62b8620b', 'image_service': <cinder.image.glance.GlanceImageService object at 0x7f2a56befa90>, 'image_meta': {'status': u'active', 'name': u'win7_template', 'deleted': False, 'container_format': u'bare', 'created_at': datetime.datetime(2018, 5, 31, 4, 26, 39, tzinfo=<iso8601.Utc>), 'disk_format': u'qcow2', 'updated_at': datetime.datetime(2018, 5, 31, 4, 35, 42, tzinfo=<iso8601.Utc>), 'id': u'2bbe4c10-2acc-40c4-98f4-447e62b8620b', 'owner': u'9c015b6512db422bacdb6f04ecd95c09', 'protected': False, 'min_ram': 0, 'checksum': u'5646732b5de7057a0c412acbd08fa974', 'min_disk': 0, 'is_public': True, 'deleted_at': None, 'properties': {}, 'size': 18051301376}}
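After the restart the RBD driver initializes cleanly and volume operations succeed again. As a final check, the cinder-volume entry for the backend should report state "up":
#the cinder-volume row should show state "up"
openstack volume service list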
Problem 6: the Ceph storage connection is not configured
2018-05-31 11:27:46.497 22571 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/storage/rbd_utils.py", line 367, in get_pool_info
2018-05-31 11:27:46.497 22571 ERROR nova.compute.manager with RADOSClient(self) as client:
2018-05-31 11:27:46.497 22571 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/storage/rbd_utils.py", line 105, in __init__
2018-05-31 11:27:46.497 22571 ERROR nova.compute.manager self.cluster, self.ioctx = driver._connect_to_rados(pool)
2018-05-31 11:27:46.497 22571 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/storage/rbd_utils.py", line 136, in _connect_to_rados
2018-05-31 11:27:46.497 22571 ERROR nova.compute.manager client.connect()
2018-05-31 11:27:46.497 22571 ERROR nova.compute.manager File "rados.pyx", line 785, in rados.Rados.connect (/builddir/build/BUILD/ceph-10.2.4/src/build/rados.c:10704)
2018-05-31 11:27:46.497 22571 ERROR nova.compute.manager Error: error connecting to the cluster: error code 22
2018-05-31 11:27:46.497 22571 ERROR nova.compute.manager
2018-05-31 11:28:46.439 22571 ERROR nova.compute.manager [req-aa6e3580-0491-4435-80b2-18c3181cb064 - - - - -] No compute node record for host controller1
2018-05-31 11:28:46.443 22571 INFO nova.compute.resource_tracker [req-aa6e3580-0491-4435-80b2-18c3181cb064 - - - - -] Auditing locally available compute resources for node controller1
2018-05-31 11:28:46.453 22571 ERROR nova.compute.manager [req-aa6e3580-0491-4435-80b2-18c3181cb064 - - - - -] Error updating resources for node controller1.
2018-05-31 11:28:46.453 22571 ERROR nova.compute.manager Traceback (most recent call last):
2018-05-31 11:28:46.453 22571 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 6460, in update_available_resource_for_node
2018-05-31 11:28:46.453 22571 ERROR nova.compute.manager rt.update_available_resource(context)
2018-05-31 11:28:46.453 22571 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 511, in update_available_resource
2018-05-31 11:28:46.453 22571 ERROR nova.compute.manager resources = self.driver.get_available_resource(self.nodename)
2018-05-31 11:28:46.453 22571 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5541, in get_available_resource
2018-05-31 11:28:46.453 22571 ERROR nova.compute.manager disk_info_dict = self._get_local_gb_info()
2018-05-31 11:28:46.453 22571 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5177, in _get_local_gb_info
2018-05-31 11:28:46.453 22571 ERROR nova.compute.manager info = LibvirtDriver._get_rbd_driver().get_pool_info()
2018-05-31 11:28:46.453 22571 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/storage/rbd_utils.py", line 367, in get_pool_info
2018-05-31 11:28:46.453 22571 ERROR nova.compute.manager with RADOSClient(self) as client:
2018-05-31 11:28:46.453 22571 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/storage/rbd_utils.py", line 105, in __init__
2018-05-31 11:28:46.453 22571 ERROR nova.compute.manager self.cluster, self.ioctx = driver._connect_to_rados(pool)
2018-05-31 11:28:46.453 22571 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/storage/rbd_utils.py", line 136, in _connect_to_rados
2018-05-31 11:28:46.453 22571 ERROR nova.compute.manager client.connect()
2018-05-31 11:28:46.453 22571 ERROR nova.compute.manager File "rados.pyx", line 785, in rados.Rados.connect (/builddir/build/BUILD/ceph-10.2.4/src/build/rados.c:10704)
2018-05-31 11:28:46.453 22571 ERROR nova.compute.manager Error: error connecting to the cluster: error code 22
2018-05-31 11:28:46.453 22571 ERROR nova.compute.manager
2018-05-31 11:29:39.383 22571 INFO nova.compute.manager [req-aa6e3580-0491-4435-80b2-18c3181cb064 - - - - -] Updating bandwidth usage cache
2018-05-31 11:29:39.469 22571 INFO nova.compute.manager [req-aa6e3580-0491-4435-80b2-18c3181cb064 - - - - -] Bandwidth usage not supported by hypervisor.
...(the identical "No compute node record for host controller1" audit error and RADOS connect traceback repeat every minute, from 11:29:46 through 11:36:50)...
#Solution:
Run connect_ceph.sls, the deployment's Ceph connection configuration script.
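Error code 22 is EINVAL, which from a rados connect() almost always means a bad or missing client-side Ceph configuration. If the script does not resolve it, the checks below are a sketch of where to look (the client name cinder and the nova option names reflect the usual Ceph-backed setup, not values confirmed from this deployment):
#is there a usable ceph.conf and keyring on this node?
cat /etc/ceph/ceph.conf
ls -l /etc/ceph/ceph.client.cinder.keyring
#can that client actually reach the cluster?
ceph -s --id cinder
#do nova's rbd settings name the same user and secret?
grep -E 'rbd_user|rbd_secret_uuid|images_rbd_pool' /etc/nova/nova.conf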
#remove the packages
dnf remove qemu-kvm -y
dnf remove openstack-nova-compute --allowerasing -y
#reinstall them
dnf install qemu-kvm -y
dnf install openstack-nova-compute --allowerasing -y
#restart the services
systemctl restart libvirtd openstack-nova-compute neutron-linuxbridge-agent httpd
#verify cluster health: crm_mon, rabbitmqctl cluster_status, ceph osd tree
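Spelled out, that verification step looks like this (crm_mon -1 prints the cluster status once and exits):
crm_mon -1                       #pacemaker resources all started?
rabbitmqctl cluster_status       #all rabbitmq nodes joined?
ceph osd tree                    #all OSDs up and in?
openstack compute service list   #nova-compute back up on the compute nodes?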
