In the first article, we saw that novaclient ultimately issues an HTTP POST request to nova, carrying the parameters that describe the volume attachment.
The attach REST API is documented in detail here: http://api.openstack.org/api-ref-compute-v2-ext.html
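For reference, a request to that os-volume_attachments extension looks roughly like the following; the tenant, server and volume IDs and the device name are made-up illustrative values, and the exact URL prefix depends on your deployment's endpoint:

POST /v2/{tenant_id}/servers/{server_id}/os-volume_attachments
X-Auth-Token: <token>
Content-Type: application/json

{
    "volumeAttachment": {
        "volumeId": "VOLUME-UUID",
        "device": "/dev/vdb"
    }
}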
nova-api startup
Now let us go back and see how nova starts the web service that listens for the HTTP request above. In a running OpenStack nova environment you will find the nova-api program. Opening /usr/bin/nova-api, we can trace the entry point that starts the nova API service to nova/cmd/api.py in the nova source tree. The main flow is as follows:
1. Start the corresponding API services according to the enabled_apis option defined in nova.conf. For example, the setting below enables the EC2, OS compute and metadata APIs:
enabled_apis = ec2,osapi_compute,metadata
2. Each API service is really an instance of WSGIService: server = service.WSGIService(api, use_ssl=should_use_ssl)
During initialization of the WSGIService object, besides the basic wsgi.Server parameter handling, the corresponding Manager class is imported. For example, nova.conf defines the network manager class:
network_manager = nova.network.manager.FlatDHCPManager
3. Launch the service, then wait for it to finish. The call sequence is: launcher.launch_service ==> self.services.add ==> self.tg.add_thread(self.run_service, service, self.done) ==> self.run_service, i.e. a thread is spawned to call the start function defined by the service.
481 @staticmethod
482 def run_service(service, done):
483 """Service start wrapper.
484
485 :param service: service to run
486 :param done: event to wait on until a shutdown is triggered
487 :returns: None
488
489 """
490 service.start()
491 systemd.notify_once()
492 done.wait()
Switching to class WSGIService in nova/service.py, the start function makes four main calls, in order: self.manager.init_host, self.manager.pre_start_hook, self.server.start, and self.manager.post_start_hook. Here self.server is the wsgi.Server created in __init__ (the file is nova/wsgi.py); wsgi.Server.start ultimately spawns a WSGI app to accept and handle HTTP requests. A condensed sketch of the whole startup flow is shown below.
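Putting steps 1-3 together, main() in nova/cmd/api.py boils down to roughly the following. This is a condensed sketch from memory, not the verbatim nova code; it omits logging setup, monkey patching and the EC2-specific arguments:

import sys

from oslo.config import cfg

from nova import config
from nova import service

CONF = cfg.CONF


def main():
    config.parse_args(sys.argv)            # load nova.conf
    launcher = service.process_launcher()
    for api in CONF.enabled_apis:          # e.g. ec2, osapi_compute, metadata
        should_use_ssl = api in CONF.enabled_ssl_apis
        server = service.WSGIService(api, use_ssl=should_use_ssl)
        launcher.launch_service(server, workers=server.workers or 1)
    launcher.wait()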
nova-compute startup
The analysis is analogous, with the same service startup sequence. Open nova/cmd/compute.py to see how the compute service is created.
70 server = service.Service.create(binary='nova-compute',
71 topic=CONF.compute_topic,
72 db_allowed=CONF.conductor.use_local)
73 service.serve(server)
74 service.wait()
In the Service.create class method, the backend compute manager is instantiated (class ComputeManager in nova/compute/manager.py). ComputeManager's __init__ constructor sets up the compute RPC API interfaces and finally loads the compute driver. CONF.compute_driver must be set in nova.conf to tell nova which virtualization backend to use (e.g. compute_driver = libvirt.LibvirtDriver).
572 def __init__(self, compute_driver=None, *args, **kwargs):
573 """Load configuration options and connect to the hypervisor."""
574 self.virtapi = ComputeVirtAPI(self)
575 self.network_api = network.API()
576 self.volume_api = volume.API()
577 self._last_host_check = 0
578 self._last_bw_usage_poll = 0
579 self._bw_usage_supported = True
580 self._last_bw_usage_cell_update = 0
581 self.compute_api = compute.API()
582 self.compute_rpcapi = compute_rpcapi.ComputeAPI()
583 self.conductor_api = conductor.API()
584 self.compute_task_api = conductor.ComputeTaskAPI()
......
599 self.driver = driver.load_compute_driver(self.virtapi, compute_driver)
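The last step, driver.load_compute_driver(self.virtapi, compute_driver), amounts to importing the class named by compute_driver under the nova.virt package and instantiating it. A minimal, generic illustration of that mechanism follows; this is not nova's exact code, just the idea:

import importlib


def load_driver(dotted_path, *args, **kwargs):
    # e.g. 'libvirt.LibvirtDriver' -> module nova.virt.libvirt, class LibvirtDriver
    module_name, class_name = dotted_path.rsplit('.', 1)
    module = importlib.import_module('nova.virt.' + module_name)
    return getattr(module, class_name)(*args, **kwargs)

# driver = load_driver('libvirt.LibvirtDriver', virtapi)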
Then start runs and we enter the four calls described above. Let us first look at self.manager.init_host:
1045 def init_host(self):
1046 """Initialization for a standalone compute service."""
1047 self.driver.init_host(host=self.host)
1048 context = nova.context.get_admin_context()
Here the driver is libvirt.LibvirtDriver, and init_host essentially performs the libvirt host initialization.
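At its core, host initialization on the libvirt backend means talking to the local hypervisor through the libvirt Python bindings; nova wraps this with event registration, retries and eventlet integration. A minimal standalone illustration (the connection URI depends on virt_type in nova.conf):

import libvirt

# Open a connection to the local hypervisor, as nova's libvirt driver does
# internally; print a couple of host facts to show the connection works.
conn = libvirt.open('qemu:///system')
print(conn.getHostname(), conn.getLibVersion())
conn.close()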
Analyzing the API
In nova/compute/api.py we can find attach volume. The nova client request is forwarded to the RPC API, because the nova-api service only handles REST requests, while nova components talk to each other via RPC calls.
2748 def _attach_volume(self, context, instance, volume_id, device,
2749 disk_bus, device_type):
2750 """Attach an existing volume to an existing instance.
2751
2752 This method is separated to make it possible for cells version
2753 to override it.
2754 """
......
2769 self.compute_rpcapi.attach_volume(context, instance=instance,
2770 volume_id=volume_id, mountpoint=device, bdm=volume_bdm)
2783 def attach_volume(self, context, instance, volume_id, device=None,
2784 disk_bus=None, device_type=None):
2785 """Attach an existing volume to an existing instance."""
2786 # NOTE(vish): Fail fast if the device is not going to pass. This
2787 # will need to be removed along with the test if we
2788 # change the logic in the manager for what constitutes
2789 # a valid device.
2790 if device and not block_device.match_device(device):
2791 raise exception.InvalidDevicePath(path=device)
2792 return self._attach_volume(context, instance, volume_id, device,
2793 disk_bus, device_type)
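For completeness: the REST layer reaches this compute API through the os-volume_attachments extension under nova/api/openstack/compute/contrib/. A hypothetical, condensed sketch of what its create handler does (not the verbatim code; the dict keys mirror the request body shown at the top of the article):

def create(self, req, server_id, body):
    # Pull the request context that the auth middleware stored in the WSGI
    # environment, extract the volume id and optional device name, then
    # hand off to the compute API shown above.
    context = req.environ['nova.context']
    volume_id = body['volumeAttachment']['volumeId']
    device = body['volumeAttachment'].get('device')
    instance = self.compute_api.get(context, server_id)
    self.compute_api.attach_volume(context, instance, volume_id, device)
    return {'volumeAttachment': {'serverId': server_id,
                                 'volumeId': volume_id,
                                 'device': device}}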
In nova/compute/rpcapi.py, the RPC client request remote-calls attach_volume on the RPC server side.
326 def attach_volume(self, ctxt, instance, volume_id, mountpoint, bdm=None):
......
338 cctxt = self.client.prepare(server=_compute_host(None, instance),
339 version=version)
340 cctxt.cast(ctxt, 'attach_volume', **kw)
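The prepare()/cast() pair above comes from oslo.messaging. A minimal standalone sketch of that client pattern is shown below; the topic, server name, version numbers and payload values are illustrative, not taken from nova:

from oslo.config import cfg
from oslo import messaging   # newer releases: import oslo_messaging as messaging

# Build an RPC client bound to the 'compute' topic, then narrow it to one
# compute host, mirroring self.client.prepare(server=..., version=...).
transport = messaging.get_transport(cfg.CONF)
target = messaging.Target(topic='compute', version='3.0')
client = messaging.RPCClient(transport, target)
cctxt = client.prepare(server='compute-host-1', version='3.0')

# cast() is fire-and-forget: the API node returns immediately while the
# remote service executes 'attach_volume' asynchronously. call() would
# instead block until the remote method returns a result.
cctxt.cast({}, 'attach_volume',
           instance={'uuid': 'INSTANCE-UUID'},   # illustrative payload
           volume_id='VOLUME-UUID',
           mountpoint='/dev/vdb', bdm=None)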
From the nova-compute startup analysis above we can see that this cctxt.cast request actually remote-calls the attach_volume function in ComputeManager, the management interface for compute instances. The corresponding handler on the RPC server side lives in nova/compute/manager.py. Only at this point is the concrete work of attaching the virtual device handed over to the backend via DriverVolumeBlockDevice.attach.
4157 def attach_volume(self, context, volume_id, mountpoint,
4158 instance, bdm=None):
4159 """Attach a volume to an instance."""
4160 if not bdm:
4161 bdm = block_device_obj.BlockDeviceMapping.get_by_volume_id(
4162 context, volume_id)
4163 driver_bdm = driver_block_device.DriverVolumeBlockDevice(bdm)
4164 try:
4165 return self._attach_volume(context, instance, driver_bdm)
4166 except Exception:
4167 with excutils.save_and_reraise_exception():
4168 bdm.destroy(context)
...
4170 def _attach_volume(self, context, instance, bdm):
4171 context = context.elevated()
4172 LOG.audit(_('Attaching volume %(volume_id)s to %(mountpoint)s'),
4173 {'volume_id': bdm.volume_id,
4174 'mountpoint': bdm['mount_device']},
4175 context=context, instance=instance)
4176 try:
4177 bdm.attach(context, instance, self.volume_api, self.driver,
4178 do_check_attach=False, do_driver_attach=True)
Let us look at the DriverVolumeBlockDevice class in nova/virt/block_device.py.
212 @update_db
213 def attach(self, context, instance, volume_api, virt_driver,
214 do_check_attach=True, do_driver_attach=False):
215 volume = volume_api.get(context, self.volume_id)  # fetch the volume object by its volume id
......
221 # Taking LibvirtDriver as an example: get the backend's volume connector and initialize the connection.
222 connector = virt_driver.get_volume_connector(instance)
223 connection_info = volume_api.initialize_connection(context,
224 volume_id,
225 connector)
......
229 # If do_driver_attach is False, we will attach a volume to an instance
230 # at boot time. So actual attach is done by instance creation code.
231 if do_driver_attach:
232 encryption = encryptors.get_encryption_metadata(
233 context, volume_api, volume_id, connection_info)
234
235 try:
236 virt_driver.attach_volume(
237 context, connection_info, instance,
238 self['mount_device'], disk_bus=self['disk_bus'],
239 device_type=self['device_type'], encryption=encryption)
240 except Exception: # pylint: disable=W0702
241 with excutils.save_and_reraise_exception():
242 LOG.exception(_("Driver failed to attach volume "
243 "%(volume_id)s at %(mountpoint)s"),
244 {'volume_id': volume_id,
245 'mountpoint': self['mount_device']},
246 context=context, instance=instance)
247 volume_api.terminate_connection(context, volume_id,
248 connector)
249 self['connection_info'] = connection_info
250 volume_api.attach(context, volume_id, # callback: nova's side of the work is done; cinder now updates its database, etc.
251 instance['uuid'], self['mount_device'])
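A note on the connector obtained from virt_driver.get_volume_connector(): it describes the compute host to cinder so that initialize_connection() can export the volume to that host. For a libvirt/iSCSI setup it is roughly a dict like the following; the values are made up and the exact keys vary by driver and release:

connector = {
    'ip': '192.168.0.12',                               # compute host's storage IP
    'host': 'compute-host-1',                           # compute hostname
    'initiator': 'iqn.1994-05.com.redhat:2f3a8c9d1e',   # iSCSI initiator IQN
}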
This brings us to the attach_volume function in nova/virt/libvirt/driver.py, which uses libvirt to add the volume as backing storage to the KVM instance's configuration. Roughly: determine what kind of backend storage the volume uses (e.g. NFS, iSCSI, FC) for this KVM hypervisor instance, then generate the corresponding configuration, mainly by adding the connection information of the volume to be attached to the libvirt XML.
LibvirtDriver.attach_volume ==> LibvirtBaseVolumeDriver.connect_volume ==> conf.to_xml() ==> virt_dom.attachDeviceFlags
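conf.to_xml() produces a libvirt <disk> element that virt_dom.attachDeviceFlags() then hot-plugs into the running domain. For an iSCSI-backed volume the generated element looks roughly like this; the device path, target name and serial are illustrative:

<disk type="block" device="disk">
  <driver name="qemu" type="raw" cache="none"/>
  <source dev="/dev/disk/by-path/ip-10.0.0.5:3260-iscsi-iqn.2010-10.org.openstack:volume-VOLUME-UUID-lun-1"/>
  <target dev="vdb" bus="virtio"/>
  <serial>VOLUME-UUID</serial>
</disk>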
Cinder follow up
All volume_api calls map to nova/volume/cinder.py (imported and instantiated via volume.API()). These APIs are essentially REST requests that cinderclient sends to the cinder server.
259 @translate_volume_exception
260 def attach(self, context, volume_id, instance_uuid, mountpoint):
261 cinderclient(context).volumes.attach(volume_id, instance_uuid,
262 mountpoint)
......
268 @translate_volume_exception
269 def initialize_connection(self, context, volume_id, connector):
270 return cinderclient(context).volumes.initialize_connection(volume_id,
271 connector)
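On the wire, those cinderclient calls become volume "action" requests against the cinder API, roughly as follows; the IDs and connector values are illustrative:

POST /v2/{tenant_id}/volumes/{volume_id}/action
{"os-initialize_connection": {"connector": {"ip": "192.168.0.12",
                                            "host": "compute-host-1",
                                            "initiator": "iqn.1994-05.com.redhat:2f3a8c9d1e"}}}

POST /v2/{tenant_id}/volumes/{volume_id}/action
{"os-attach": {"instance_uuid": "INSTANCE-UUID", "mountpoint": "/dev/vdb"}}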
The third article in this series will continue with how the cinder side updates its database records.
Summary and limitations
1. The nova analysis above focuses on the chain of calls behind the attach volume interface and stops at the virtualization driver layer. The concrete steps for adding a disk to an instance differ between virtualization drivers, so readers will need to dig into the corresponding virt driver code themselves.
2. How is the REST service's server actually built up? Readers will need to study WSGI and Paste on their own.
3. RPC and message-queue communication are also well worth studying; they are not covered one by one here.
4. The nova code base is large and has many services; only nova-api and nova-compute are briefly introduced here.
End of the second nova inside article. Please credit the source when reposting.