nova analysis (3): nova-api

Posted on 2014-07-30 by feisky

nova-api is the service through which Nova exposes its RESTful API; Horizon, novaclient and other clients all talk to Nova through this API.

Nova actually exposes several API services, including the following:

nova-api
nova-api-ec2
nova-api-metadata
nova-api-os-compute
Among them, nova-api is the one that can start the other three services. Let's look at them one by one.
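
For reference, these four commands are just console_scripts entry points in nova's setup.cfg; the snippet below is a sketch based on the source tree of roughly that era, so the exact list in your version may differ:

    [entry_points]
    console_scripts =
        nova-api = nova.cmd.api:main
        nova-api-ec2 = nova.cmd.api_ec2:main
        nova-api-metadata = nova.cmd.api_metadata:main
        nova-api-os-compute = nova.cmd.api_os_compute:main

nova-api itself iterates over CONF.enabled_apis (ec2, osapi_compute and metadata by default at the time) and starts one WSGI service per entry, which is exactly what the code in the next section does.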

nova-api

The entry point is nova.cmd.api:main, and the RESTful API is implemented mainly on top of frameworks such as WSGI, PasteDeploy, WebOb and Routes:

    launcher = service.process_launcher()
    for api in CONF.enabled_apis:
        should_use_ssl = api in CONF.enabled_ssl_apis
        if api == 'ec2':
            server = service.WSGIService(api, use_ssl=should_use_ssl,
                                         max_url_len=16384)
        else:
            server = service.WSGIService(api, use_ssl=should_use_ssl)
        launcher.launch_service(server, workers=server.workers or 1)
    launcher.wait()

For each API in enabled_apis, a WSGIService is created and then launch_service is called:

    def _start_child(self, wrap):
        if len(wrap.forktimes) > wrap.workers:
            # Limit ourselves to one process a second (over the period of
            # number of workers * 1 second). This will allow workers to
            # start up quickly but ensure we don't fork off children that
            # die instantly too quickly.
            if time.time() - wrap.forktimes[0] < wrap.workers:
                LOG.info(_LI('Forking too fast, sleeping'))
                time.sleep(1)

            wrap.forktimes.pop(0)

        wrap.forktimes.append(time.time())

        pid = os.fork()
        if pid == 0:
            # create a green thread to run child
            # in the end this calls the wrapped service's start/wait methods
            launcher = self._child_process(wrap.service)
            while True:
                self._child_process_handle_signal()
                status, signo = self._child_wait_for_exit_or_signal(launcher)
                if not _is_sighup_and_daemon(signo):
                    break
                launcher.restart()

            os._exit(status)

        LOG.info(_LI('Started child %d'), pid)

        wrap.children.add(pid)
        self.children[pid] = wrap

        return pid

    def launch_service(self, service, workers=1):
        wrap = ServiceWrapper(service, workers)

        LOG.info(_LI('Starting %d workers'), wrap.workers)
        while self.running and len(wrap.children) < wrap.workers:
            self._start_child(wrap)

Since every service is a service.WSGIService, how is a WSGIService initialized?

class WSGIService(object):
    """Provides ability to launch API from a 'paste' configuration."""

    def __init__(self, name, loader=None, use_ssl=False, max_url_len=None):
        """Initialize, but do not start the WSGI server.

        :param name: The name of the WSGI server given to the loader.
        :param loader: Loads the WSGI application using the given name.
        :returns: None

        """
        self.name = name
        # Look up a manager for self.name in the configuration; if one is configured, instantiate it
        self.manager = self._get_manager()
        self.loader = loader or wsgi.Loader()
        # Load the app via paste.deploy; the corresponding config file is api-paste.ini
        self.app = self.loader.load_app(name)
        self.host = getattr(CONF, '%s_listen' % name, "0.0.0.0")
        self.port = getattr(CONF, '%s_listen_port' % name, 0)
        self.workers = (getattr(CONF, '%s_workers' % name, None) or
                        processutils.get_worker_count())
        if self.workers and self.workers < 1:
            worker_name = '%s_workers' % name
            msg = (_("%(worker_name)s value of %(workers)s is invalid, "
                     "must be greater than 0") %
                   {'worker_name': worker_name,
                    'workers': str(self.workers)})
            raise exception.InvalidInput(msg)
        self.use_ssl = use_ssl
        self.server = wsgi.Server(name,
                                  self.app,
                                  host=self.host,
                                  port=self.port,
                                  use_ssl=self.use_ssl,
                                  max_url_len=max_url_len)
        # Pull back actual port used
        self.port = self.server.port
        self.backdoor_port = None

In other words, after the app is loaded from api-paste.ini, a wsgi.Server is created to serve it.
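
Under the hood, wsgi.Loader.load_app is a thin wrapper around PasteDeploy's loadapp function; a minimal sketch of the equivalent call is shown below (the config file path is the usual packaged location and only an assumption here):

    from paste import deploy

    # Load the app named 'osapi_compute' from api-paste.ini (path is illustrative)
    app = deploy.loadapp('config:/etc/nova/api-paste.ini', name='osapi_compute')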

Paste Deployment is a system for discovering and configuring WSGI applications and servers. For a WSGI application, the user is given a single function (loadapp) that loads the application from a configuration file or a Python egg. Because loadapp provides a single, simple entry point, the application does not need to expose its internal implementation details. First, a few basic concepts (a minimal Python sketch of these factory conventions follows the list below):

  • application: a callable conforming to the WSGI spec; it accepts (environ, start_response), calls start_response to return the status and headers, and its return value is the response body.

  • filter: a callable similar to a Python decorator; it takes an application object as its argument and returns a wrapped application.

  • app_factory: a callable that accepts (global_conf, **local_conf) and returns an application object.

  • composite_factory: a callable that accepts (loader, global_conf, **local_conf); the loader provides methods such as get_app, for fetching a WSGI app, and get_filter, for loading a filter; it returns an application object.

  • filter_factory: a callable that accepts (global_conf, **local_conf) and returns a filter object.
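
To make these roles concrete, here is a minimal, self-contained sketch (not nova code; all names are made up for illustration) of an application, an app_factory and a filter_factory written the way PasteDeploy expects to call them:

    def hello_app(environ, start_response):
        # application: takes (environ, start_response) and returns an iterable body
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [b'Hello from a WSGI app\n']

    def app_factory(global_conf, **local_conf):
        # app_factory: (global_conf, **local_conf) -> application
        return hello_app

    def filter_factory(global_conf, **local_conf):
        # filter_factory: (global_conf, **local_conf) -> filter
        def _filter(app):
            # filter: takes an application and returns a wrapped application
            def wrapped(environ, start_response):
                environ['HTTP_X_FILTERED'] = 'yes'  # e.g. annotate the request
                return app(environ, start_response)
            return wrapped
        return _filter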

Take nova-api-os-compute as an example and see what its part of the configuration file looks like:

# This is the entry point of the whole API: requests for the different versions are forwarded to the corresponding composite/app below
[composite:osapi_compute]
use = call:nova.api.openstack.urlmap:urlmap_factory
/: oscomputeversions
/v1.1: openstack_compute_api_v2
/v2: openstack_compute_api_v2
/v3: openstack_compute_api_v3

# For example, the v3 API is handled here. With the keystone pipeline, a request passes through the
# request_id, faultwrap, sizelimit, authtoken and keystonecontext filters before finally being handed to the osapi_compute_app_v3 app
# Note the order of these filters: the first n-1 entries are reversed and then applied to the app one by one
[composite:openstack_compute_api_v3]
use = call:nova.api.auth:pipeline_factory_v3
noauth = request_id faultwrap sizelimit noauth_v3 osapi_compute_app_v3
keystone = request_id faultwrap sizelimit authtoken keystonecontext osapi_compute_app_v3

# Each [filter:...] section defines which class implements that filter
[filter:request_id]
paste.filter_factory = nova.openstack.common.middleware.request_id:RequestIdMiddleware.factory

[filter:compute_req_id]
paste.filter_factory = nova.api.compute_req_id:ComputeReqIdMiddleware.factory

[filter:faultwrap]
paste.filter_factory = nova.api.openstack:FaultWrapper.factory

[filter:noauth]
paste.filter_factory = nova.api.openstack.auth:NoAuthMiddleware.factory

[filter:noauth_v3]
paste.filter_factory = nova.api.openstack.auth:NoAuthMiddlewareV3.factory

[filter:ratelimit]
paste.filter_factory = nova.api.openstack.compute.limits:RateLimitingMiddleware.factory

[filter:sizelimit]
paste.filter_factory = nova.api.sizelimit:RequestBodySizeLimiter.factory

# This is the app that handles v3, i.e. APIRouterV3.factory
[app:osapi_compute_app_v3]
paste.app_factory = nova.api.openstack.compute:APIRouterV3.factory
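
The pipeline composition described in the comments above (load the app first, then wrap it with the filters in reverse order) boils down to something like the following simplified sketch; this is not the actual nova.api.auth code, and hard-coding the 'keystone' pipeline is an assumption made for illustration:

    def pipeline_factory_sketch(loader, global_conf, **local_conf):
        # Pick the pipeline for the configured auth strategy ('keystone' assumed here)
        pipeline = local_conf['keystone'].split()
        # The last name is the app, everything before it is a filter
        app = loader.get_app(pipeline[-1])
        filters = [loader.get_filter(name) for name in pipeline[:-1]]
        # Apply the filters in reverse so the first one listed ends up outermost
        for filt in reversed(filters):
            app = filt(app)
        return app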

APIRouterV3 loads the extensions configured in setup.cfg:

        # Initialize api_extension_manager and load all the extensions registered under API_EXTENSION_NAMESPACE.
        # What gets loaded here is the nova.api.v3.extensions section configured in setup.cfg, for example:
        #   nova.api.v3.extensions =
        #       access_ips = nova.api.openstack.compute.plugins.v3.access_ips:AccessIPs
        #       admin_actions = nova.api.openstack.compute.plugins.v3.admin_actions:AdminActions
        #       admin_password = nova.api.openstack.compute.plugins.v3.admin_password:AdminPassword
        #       agents = nova.api.openstack.compute.plugins.v3.agents:Agents
        #       ...
        # If a third-party package also declares extensions under this namespace, they are loaded here as well.
        # check_func here is _check_load_extension, whose main jobs are:
        #   a) check that the extension is a V3APIExtensionBase
        #   b) apply the extension blacklist/whitelist filtering and validation
        self.api_extension_manager = stevedore.enabled.EnabledExtensionManager(
            namespace=self.API_EXTENSION_NAMESPACE, # 'nova.api.v3.extensions'
            check_func=_check_load_extension,
            invoke_on_load=True,
            invoke_kwds={"extension_info": self.loaded_extension_info})

        mapper = PlainMapper()
        self.resources = {}

        # NOTE(cyeoh) Core API support is rewritten as extensions
        # but conceptually still have core
        if list(self.api_extension_manager):
            # NOTE(cyeoh): Stevedore raises an exception if there are
            # no plugins detected. I wonder if this is a bug.
            # Extensions define what resources they want to add through a get_resources function
            self.api_extension_manager.map(self._register_resources,
                                           mapper=mapper)
            # Extensions define what resources they want to add through
            # a get_controller_extensions function
            self.api_extension_manager.map(self._register_controllers)

        # Check that all core extensions were loaded successfully
        missing_core_extensions = self.get_missing_core_extensions(
            self.loaded_extension_info.get_extensions().keys())
        if not self.init_only and missing_core_extensions:
            LOG.critical(_("Missing core API extensions: %s"),
                         missing_core_extensions)
            raise exception.CoreAPIMissing(
                missing_apis=missing_core_extensions)

Take the servers = nova.api.openstack.compute.plugins.v3.servers:Servers extension as an example of how loading works. As the code above shows, for each extension the two functions get_resources and get_controller_extensions are called; they are responsible for adding new resources and for extending existing resources with additional actions, respectively.

class Servers(extensions.V3APIExtensionBase):
    """Servers."""

    name = "Servers"
    alias = "servers"
    version = 1

    def get_resources(self):
        member_actions = {'action': 'POST'}
        collection_actions = {'detail': 'GET'}
        resources = [
            extensions.ResourceExtension(
                'servers',
                ServersController(extension_info=self.extension_info),
                member_name='server', collection_actions=collection_actions,
                member_actions=member_actions)]

        return resources

    def get_controller_extensions(self):
        return []
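
The member_actions and collection_actions above end up as extra routes on the underlying Routes mapper. As a rough, self-contained illustration (plain Routes rather than nova's PlainMapper, so this is only an approximation of what nova registers):

    import routes

    mapper = routes.Mapper()
    # Register a 'servers' resource with one extra member action and one extra
    # collection action, mirroring member_actions={'action': 'POST'} and
    # collection_actions={'detail': 'GET'} above
    mapper.resource('server', 'servers',
                    member={'action': 'POST'},
                    collection={'detail': 'GET'})

    # This generates routes roughly like:
    #   GET  /servers             -> index
    #   GET  /servers/detail      -> detail
    #   POST /servers             -> create
    #   GET  /servers/{id}        -> show
    #   POST /servers/{id}/action -> action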

One thing to note here is that the V2 API distinguishes between core APIs and extension APIs, whereas in the V3 API everything is provided as an extension. For example, servers is a core API in V2, but in V3 it is just an ordinary extension: its get_resources method returns the servers resource object, and its get_controller_extensions method returns an empty list.

When an extension such as disk_config needs to add actions to the servers resource, all it has to do is return controller extensions for the servers resource from get_controller_extensions:

class Disk_config(extensions.ExtensionDescriptor):
    """Disk Management Extension."""

    name = "DiskConfig"
    alias = ALIAS
    namespace = XMLNS_DCF
    updated = "2011-09-27T00:00:00Z"

    def get_controller_extensions(self):
        servers_extension = extensions.ControllerExtension(
                self, 'servers', ServerDiskConfigController())

        images_extension = extensions.ControllerExtension(
                self, 'images', ImageDiskConfigController())

        return [servers_extension, images_extension]

Finally, let's look at how the V3 API avoids code coupling between the individual extensions.

In V3, Stevedore is the main mechanism used to keep the extensions decoupled from one another and thereby avoid the coupling problems described above.
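
As a quick, self-contained reminder of the Stevedore pattern (not nova code; the namespace below is made up): plugins register entry points under a namespace in their setup.cfg, and the host loads and invokes them by namespace without ever importing them directly.

    from stevedore import enabled

    def _check_load(ext):
        # check_func: decide per extension whether it should be loaded
        return True

    mgr = enabled.EnabledExtensionManager(
        namespace='my.project.hooks',   # hypothetical namespace from some setup.cfg
        check_func=_check_load,
        invoke_on_load=True)

    # map() calls the given function once per loaded extension; guard with
    # list(mgr) because map() raises when nothing was loaded.
    if list(mgr):
        mgr.map(lambda ext: print(ext.name))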

Again, look at how the create method of the core "virtual machine" resource is extended. In the controller for the servers resource, nova.api.openstack.compute.plugins.v3.servers:ServersController, create is given its own extension namespace, and all of its extensions are loaded at initialization time:

        # Look for implementation of extension point of server creation
        self.create_extension_manager = \
          stevedore.enabled.EnabledExtensionManager(
              namespace=self.EXTENSION_CREATE_NAMESPACE, #nova.api.v3.extensions.server.create
              check_func=_check_load_extension('server_create'),
              invoke_on_load=True,
              invoke_kwds={"extension_info": self.extension_info},
              propagate_map_exceptions=True)
        if not list(self.create_extension_manager):
            LOG.debug("Did not find any server create extensions")

Then, in the create method, the manager's map method is used to call every extension's server_create method in turn:

        # Query extensions which want to manipulate the keyword
        # arguments.
        # NOTE(cyeoh): This is the hook that extensions use
        # to replace the extension specific code below.
        # When the extensions are ported this will also result
        # in some convenience function from this class being
        # moved to the extension
        if list(self.create_extension_manager):
            self.create_extension_manager.map(self._create_extension_point,
                                              server_dict, create_kwargs)


    def _create_extension_point(self, ext, server_dict, create_kwargs):
        handler = ext.obj
        LOG.debug("Running _create_extension_point for %s", ext.obj)

        handler.server_create(server_dict, create_kwargs)

Nova's setup.cfg also registers all the extensions for this namespace, for example nova.api.openstack.compute.plugins.v3.config_drive:ConfigDrive. Its server_create method, shown below, updates the create_kwargs dict that ServersController.create passes in:

    def server_create(self, server_dict, create_kwargs):
        create_kwargs['config_drive'] = server_dict.get(ATTRIBUTE_NAME)

Finally, ServersController.create calls the internal API to create the VM in the following way, passing along the create_kwargs that the extensions have filled in, without having to know what it actually contains.

            (instances, resv_id) = self.compute_api.create(context,
                            inst_type,
                            image_uuid,
                            display_name=name,
                            display_description=name,
                            metadata=server_dict.get('metadata', {}),
                            admin_password=password,
                            requested_networks=requested_networks,
                            **create_kwargs)

From the analysis above, all of these so-called extensions ultimately still depend on a single internal API. If that API is not itself extensible, then every extension can only build on whatever functionality the internal API already supports. For example, the internal create API for provisioning VMs mentioned above takes a fixed set of parameters:

    def create(self, context, instance_type,
               image_href, kernel_id=None, ramdisk_id=None,
               min_count=None, max_count=None,
               display_name=None, display_description=None,
               key_name=None, key_data=None, security_group=None,
               availability_zone=None, user_data=None, metadata=None,
               injected_files=None, admin_password=None,
               block_device_mapping=None, access_ip_v4=None,
               access_ip_v6=None, requested_networks=None, config_drive=None,
               auto_disk_config=None, scheduler_hints=None, legacy_bdm=True):
        """Provision instances, sending instance information to the
        scheduler.  The scheduler will determine where the instance(s)
        go and will handle creating the DB entries.

        Returns a tuple of (instances, reservation_id)
        """

