
Analysis of the Nova-Cinder Interaction Flow

Contents:
1. Survey of existing Nova APIs
2. Analysis of the Nova-Cinder interaction flow
3. APIs to be added
4. Issues to watch out for

http://aspirer2004.blog.163.com/blog/static/106764720134755131463/

This article surveys the interaction flow between Cinder and Nova, and analyzes how to integrate our own block storage system with Nova.

1.   Survey of existing Nova APIs

The block device APIs nova already supports are documented at http://api.openstack.org/api-ref.html, under the Volume Attachments and Volume Extension to Compute sections.

Operations (all delete operations are asynchronous; the user must confirm completion by calling the query APIs):

1)        Create a block device (including restoring one from a snapshot) (a volume AZ can be specified) (requires the user ID)

2)        Delete a block device (requires the user ID and volume ID)

3)        Attach a block device (requires the user ID, instance ID, and volume ID)

4)        Detach a block device (requires the user ID, instance ID, and volume ID)

5)        Snapshot a block device (requires the user ID and volume ID)

6)        Delete a snapshot (requires the user ID and snapshot ID)

Queries:

1)        List the block devices attached to an instance (requires the user ID and instance ID)

2)        Show attachment details, given an instance ID and the ID of a volume attached to it (requires the user ID, instance ID, and volume ID)

3)        List all of a user's block devices (requires the user ID)

4)        Show the details of one of a user's block devices, given its volume ID (requires the user ID and volume ID)

5)        List all of a user's block device snapshots (requires the user ID)

6)        Show the details of a user's block device snapshots (requires the user ID and snapshot ID)

APIs to be added:

1)        A volume-extend (resize) API (we already have experience adding new APIs, so this should be easy to implement)

2.   Analysis of the Nova-Cinder interaction flow

Only two representative interaction flows are analyzed here.

2.1    Creating a block device: the Cinder flow

Volume creation supports restoring a volume from a snapshot.

API URL: POST http://localhost:8774/v1.1/{tenant_id}/os-volumes

    Request parameters

    Parameter        Description

    tenant_id        The unique identifier of the tenant or account.

    volume_id        The unique identifier for a volume.

    volume           A partial representation of a volume that is used to create a volume.

         Create Volume Request: JSON

    {

        "volume":{

            "display_name":"vol-001",

            "display_description":"Another volume.",

            "size":30,

            "volume_type":"289da7f8-6440-407c-9fb4-7db01ec49164",

            "metadata":{"contents":"junk"},

            "availability_zone":"us-east1"

         }

    }

    Create Volume Response: JSON

    {

        "volume":{

            "id":"521752a6-acf6-4b2d-bc7a-119f9148cd8c",

            "display_name":"vol-001",

            "display_description":"Another volume.",

            "size":30,

            "volume_type":"289da7f8-6440-407c-9fb4-7db01ec49164",

            "metadata":{"contents":"junk"},

            "availability_zone":"us-east1",

            "snapshot_id": null,

            "attachments":[],

            "created_at":"2012-02-14T20:53:07Z"

         }

    }
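The request body above can be assembled with a small helper before being POSTed to /v1.1/{tenant_id}/os-volumes. A sketch (the helper name create_volume_body is hypothetical; the field names follow the API reference):

```python
def create_volume_body(name, size_gb, description=None, snapshot_id=None,
                       volume_type=None, metadata=None,
                       availability_zone=None):
    """Build the JSON body for POST /v1.1/{tenant_id}/os-volumes."""
    vol = {"display_name": name, "size": size_gb}
    if description is not None:
        vol["display_description"] = description
    if snapshot_id is not None:  # restore the volume from a snapshot
        vol["snapshot_id"] = snapshot_id
    if volume_type is not None:
        vol["volume_type"] = volume_type
    if metadata is not None:
        vol["metadata"] = metadata
    if availability_zone is not None:  # optional volume AZ
        vol["availability_zone"] = availability_zone
    return {"volume": vol}
```

Only the keys the caller actually passes are serialized, matching the note below that unsupported parameters (such as volume_type) can simply be omitted.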

# nova\api\openstack\compute\contrib\volumes.py:
VolumeController.create()
    @wsgi.serializers(xml=VolumeTemplate)
    @wsgi.deserializers(xml=CreateDeserializer)
    def create(self, req, body):
        """Creates a new volume."""
        context = req.environ['nova.context']
        authorize(context)
        if not self.is_valid_body(body, 'volume'):
            raise exc.HTTPUnprocessableEntity()
        vol = body['volume']
        # Volume type is not supported yet; simply omit the parameter
        vol_type = vol.get('volume_type', None)
        if vol_type:
            try:
                vol_type = volume_types.get_volume_type_by_name(context,
                                                                vol_type)
            except exception.NotFound:
                raise exc.HTTPNotFound()
        metadata = vol.get('metadata', None)
        # To restore a volume from a snapshot, pass in the snapshot ID
        snapshot_id = vol.get('snapshot_id')
        if snapshot_id is not None:
            # The cloud disk service must implement this method;
            # self.volume_api is explained below
            snapshot = self.volume_api.get_snapshot(context, snapshot_id)
        else:
            snapshot = None
        size = vol.get('size', None)
        if size is None and snapshot is not None:
            size = snapshot['volume_size']
        LOG.audit(_("Create volume of %s GB"), size, context=context)
        # Volume AZ information
        availability_zone = vol.get('availability_zone', None)
        # The cloud disk service must implement this method;
        # self.volume_api is explained below
        new_volume = self.volume_api.create(context,
                                            size,
                                            vol.get('display_name'),
                                            vol.get('display_description'),
                                            snapshot=snapshot,
                                            volume_type=vol_type,
                                            metadata=metadata,
                                            availability_zone=availability_zone
                                            )
        # TODO(vish): Instance should be None at db layer instead of
        #             trying to lazy load, but for now we turn it into
        #             a dict to avoid an error.
        retval = _translate_volume_detail_view(context, dict(new_volume))
        result = {'volume': retval}
        location = '%s/%s' % (req.url, new_volume['id'])
        return wsgi.ResponseObject(result, headers=dict(location=location))

    # About self.volume_api
    self.volume_api = volume.API()
    # volume is imported via: from nova import volume
    # nova\volume\__init__.py:
    def API():
        importutils = nova.openstack.common.importutils
        cls = importutils.import_class(nova.flags.FLAGS.volume_api_class)
        return cls()

As can be seen, every method invoked through self.volume_api is determined by the volume_api_class config option; the default is the nova-volume API wrapper class:

    cfg.StrOpt('volume_api_class',
               default='nova.volume.api.API',
               help='The full class name of the volume API class to use'),

    It can also be switched to the Cinder API wrapper class simply by setting volume_api_class=nova.volume.cinder.API. The Cinder wrapper class reaches the Cinder API through the cinder_client library, which wraps calls such as volume creation. The cloud disk service can implement a similar client library, or call its existing APIs directly to perform the same actions (cinder_client is itself just a wrapper around the Cinder API). Using nova\volume\cinder.py as a reference, the cloud disk service can develop its own API wrapper class for NVS to use. Since those APIs are already implemented, this is only a matter of wrapping them, so the workload should be small; the main thing to watch out for is authentication.

    The snapshot-related operations and queries are no different from the flow above; imitating nova\volume\cinder.py is all that is needed.
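As a sketch of what such a wrapper class could look like, the skeleton below mirrors the methods nova calls on self.volume_api in the flows discussed here. The class name, constructor, and the simplified check_attach validation are hypothetical; only the methods used in this article are listed, and most bodies are left to the backend:

```python
class CloudDiskAPI(object):
    """Hypothetical drop-in replacement for nova.volume.cinder.API.
    Enable it with volume_api_class=<module path>.CloudDiskAPI."""

    def __init__(self, client=None):
        # client is whatever HTTP/RPC client wraps the block storage API
        self.client = client

    def get(self, context, volume_id):
        raise NotImplementedError  # fetch one volume by volume_id

    def get_snapshot(self, context, snapshot_id):
        raise NotImplementedError  # needed for create-from-snapshot

    def create(self, context, size, name, description, snapshot=None,
               volume_type=None, metadata=None, availability_zone=None):
        raise NotImplementedError  # create a new volume

    def check_attach(self, context, volume):
        # simplified sanity check; real nova raises exception.InvalidVolume
        if volume['status'] != 'available':
            raise ValueError("volume status must be available")
        if volume['attach_status'] == 'attached':
            raise ValueError("volume is already attached")

    def reserve_volume(self, context, volume):
        raise NotImplementedError  # mark the volume as 'attaching'

    def unreserve_volume(self, context, volume):
        raise NotImplementedError  # roll back the reservation

    def initialize_connection(self, context, volume, connector):
        raise NotImplementedError  # return connection_info for the host

    def terminate_connection(self, context, volume, connector):
        raise NotImplementedError

    def attach(self, context, volume, instance_uuid, mountpoint):
        raise NotImplementedError  # record the attachment server-side
```

Each stub corresponds to a call site shown in the attach/create walkthroughs in this section.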

2.2    Attaching a block device: the Cinder flow

    API URL: POST http://localhost:8774/v2/{tenant_id}/servers/{server_id}/os-volume_attachments

    Request parameters

    Parameter        Description

    tenant_id                 The ID for the tenant or account in a multi-tenancy cloud.

    server_id                 The UUID for the server of interest to you.

    volumeId                  ID of the volume to attach.

    device                    Name of the device e.g. /dev/vdb. Use "auto" for autoassign (if supported).

    volumeAttachment          A dictionary representation of a volume attachment.

Attach Volume to Server Request: JSON

    {

        'volumeAttachment':{

          'volumeId': volume_id,

          'device': device

        }

    }

    Attach Volume to Server Response: JSON

    {

        "volumeAttachment":{

            "device":"/dev/vdd",

            "serverId":"fd783058-0e27-48b0-b102-a6b4d4057cac",

            "id":"5f800cf0-324f-4234-bc6b-e12d5816e962",

            "volumeId":"5f800cf0-324f-4234-bc6b-e12d5816e962"

        }

    }

Note that this API returns synchronously, but actually attaching the volume to the VM is asynchronous.
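Because of this, a caller that needs to know when the attach has completed should poll the attachment list. A sketch of that retry loop, where list_attachments stands in for a call to GET /servers/{server_id}/os-volume_attachments (the function names are hypothetical):

```python
import time

def wait_for_attachment(list_attachments, server_id, volume_id,
                        retries=30, delay=2.0):
    """Poll until volume_id shows up among the server's attachments."""
    for _attempt in range(retries):
        for att in list_attachments(server_id):
            if att["volumeId"] == volume_id:
                return att  # the attach has completed
        time.sleep(delay)
    raise RuntimeError("volume %s never attached to server %s"
                       % (volume_id, server_id))
```

This is exactly the confirmation step the note above requires, and the same pattern applies to the asynchronous delete operations listed in section 1.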

# nova\api\openstack\compute\contrib\volumes.py:
VolumeAttachmentController.create()
    @wsgi.serializers(xml=VolumeAttachmentTemplate)
    def create(self, req, server_id, body):
        """Attach a volume to an instance."""
        context = req.environ['nova.context']
        authorize(context)
        if not self.is_valid_body(body, 'volumeAttachment'):
            raise exc.HTTPUnprocessableEntity()
        volume_id = body['volumeAttachment']['volumeId']
        device = body['volumeAttachment'].get('device')
        msg = _("Attach volume %(volume_id)s to instance %(server_id)s"
                " at %(device)s") % locals()
        LOG.audit(msg, context=context)
        try:
            instance = self.compute_api.get(context, server_id)
            # nova-compute performs the actual attach to the VM
            device = self.compute_api.attach_volume(context, instance,
                                                    volume_id, device)
        except exception.NotFound:
            raise exc.HTTPNotFound()
        # The attach is async
        attachment = {}
        attachment['id'] = volume_id
        attachment['serverId'] = server_id
        attachment['volumeId'] = volume_id
        attachment['device'] = device
        # NOTE(justinsb): And now, we have a problem...
        # The attach is async, so there's a window in which we don't see
        # the attachment (until the attachment completes).  We could also
        # get problems with concurrent requests.  I think we need an
        # attachment state, and to write to the DB here, but that's a bigger
        # change.
        # For now, we'll probably have to rely on libraries being smart
        # TODO(justinsb): How do I return "accepted" here?
        return {'volumeAttachment': attachment}

    # nova\compute\api.py:API.attach_volume()
    @wrap_check_policy
    @check_instance_lock
    def attach_volume(self, context, instance, volume_id, device=None):
        """Attach an existing volume to an existing instance."""
        # NOTE(vish): Fail fast if the device is not going to pass. This
        #             will need to be removed along with the test if we
        #             change the logic in the manager for what constitutes
        #             a valid device.
        if device and not block_device.match_device(device):
            raise exception.InvalidDevicePath(path=device)
        # NOTE(vish): This is done on the compute host because we want
        #             to avoid a race where two devices are requested at
        #             the same time. When db access is removed from
        #             compute, the bdm will be created here and we will
        #             have to make sure that they are assigned atomically.
        device = self.compute_rpcapi.reserve_block_device_name(
            context, device=device, instance=instance)
        try:
            # The cloud disk service must implement this method too;
            # see nova\volume\cinder.py
            volume = self.volume_api.get(context, volume_id)
            # Check whether the volume can be attached
            self.volume_api.check_attach(context, volume)
            # Reserve the volume to guard against concurrent attaches
            self.volume_api.reserve_volume(context, volume)
            # Asynchronous RPC cast to the nova-compute service on the
            # instance's host, which performs the attach
            self.compute_rpcapi.attach_volume(context, instance=instance,
                    volume_id=volume_id, mountpoint=device)
        except Exception:
            with excutils.save_and_reraise_exception():
                self.db.block_device_mapping_destroy_by_instance_and_device(
                        context, instance['uuid'], device)
        # The API returns here
        return device

    # nova\compute\manager.py:ComputeManager.attach_volume()
    @exception.wrap_exception(notifier=notifier, publisher_id=publisher_id())
    @reverts_task_state
    @wrap_instance_fault
    def attach_volume(self, context, volume_id, mountpoint, instance):
        """Attach a volume to an instance."""
        try:
            return self._attach_volume(context, volume_id,
                                       mountpoint, instance)
        except Exception:
            with excutils.save_and_reraise_exception():
                self.db.block_device_mapping_destroy_by_instance_and_device(
                        context, instance.get('uuid'), mountpoint)

    def _attach_volume(self, context, volume_id, mountpoint, instance):
        # Same volume_api.get method as above
        volume = self.volume_api.get(context, volume_id)
        context = context.elevated()
        LOG.audit(_('Attaching volume %(volume_id)s to %(mountpoint)s'),
                  locals(), context=context, instance=instance)
        try:
            # Returns the initiator information; analyzed below
            connector = self.driver.get_volume_connector(instance)
            # The cloud disk service must implement this method;
            # Cinder's concrete implementation is shown below
            connection_info = self.volume_api.initialize_connection(context,
                                                                    volume,
                                                                    connector)
        except Exception:  # pylint: disable=W0702
            with excutils.save_and_reraise_exception():
                msg = _("Failed to connect to volume %(volume_id)s "
                        "while attaching at %(mountpoint)s")
                LOG.exception(msg % locals(), context=context,
                              instance=instance)
                # This method must be implemented as well
                self.volume_api.unreserve_volume(context, volume)
        if 'serial' not in connection_info:
            connection_info['serial'] = volume_id
        try:
            self.driver.attach_volume(connection_info,
                                      instance['name'],
                                      mountpoint)
        except Exception:  # pylint: disable=W0702
            with excutils.save_and_reraise_exception():
                msg = _("Failed to attach volume %(volume_id)s "
                        "at %(mountpoint)s")
                LOG.exception(msg % locals(), context=context,
                              instance=instance)
                self.volume_api.terminate_connection(context,
                                                     volume,
                                                     connector)
        # This method must also be implemented; it updates the volume's
        # state in the cinder database
        self.volume_api.attach(context,
                               volume,
                               instance['uuid'],
                               mountpoint)
        values = {
            'instance_uuid': instance['uuid'],
            'connection_info': jsonutils.dumps(connection_info),
            'device_name': mountpoint,
            'delete_on_termination': False,
            'virtual_name': None,
            'snapshot_id': None,
            'volume_id': volume_id,
            'volume_size': None,
            'no_device': None}
        self.db.block_device_mapping_update_or_create(context, values)

    # nova\virt\libvirt\driver.py:LibvirtDriver.get_volume_connector()
    def get_volume_connector(self, instance):
        if not self._initiator:
            self._initiator = libvirt_utils.get_iscsi_initiator()
            if not self._initiator:
                LOG.warn(_('Could not determine iscsi initiator name'),
                         instance=instance)
        return {
            'ip': FLAGS.my_ip,  # host IP address
            'initiator': self._initiator,
            'host': FLAGS.host  # host name
        }

    # nova\virt\libvirt\utils.py:get_iscsi_initiator()
    def get_iscsi_initiator():
        """Get iscsi initiator name for this machine"""
        # NOTE(vish) openiscsi stores initiator name in a file that
        #            needs root permission to read.
        contents = utils.read_file_as_root('/etc/iscsi/initiatorname.iscsi')
        for l in contents.split('\n'):
            if l.startswith('InitiatorName='):
                return l[l.index('=') + 1:].strip()
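The parsing step above can be exercised standalone. A minimal sketch that takes the file contents directly instead of reading the root-only file (the helper name is hypothetical; the logic is the same line scan):

```python
def parse_initiator_name(contents):
    """Extract the IQN from /etc/iscsi/initiatorname.iscsi contents."""
    for line in contents.split('\n'):
        if line.startswith('InitiatorName='):
            return line[line.index('=') + 1:].strip()
    return None  # no InitiatorName line found
```

Separating the parse from the privileged file read makes the behavior easy to verify without root access.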

    The Cinder API wrapper implementation in nova:

    # nova\volume\cinder.py:API.initialize_connection():

    def initialize_connection(self, context, volume, connector):

        return cinderclient(context).\

                 volumes.initialize_connection(volume['id'], connector)

    This calls initialize_connection in Cinder; the iSCSI driver implementation is as follows:

    # cinder\volume\iscsi.py:LioAdm.initialize_connection()
    def initialize_connection(self, volume, connector):
        volume_iqn = volume['provider_location'].split(' ')[1]
        (auth_method, auth_user, auth_pass) = \
            volume['provider_auth'].split(' ', 3)
        # Add initiator iqns to target ACL
        try:
            self._execute('rtstool', 'add-initiator',
                          volume_iqn,
                          auth_user,
                          auth_pass,
                          connector['initiator'],
                          run_as_root=True)
        except exception.ProcessExecutionError as e:
            LOG.error(_("Failed to add initiator iqn %s to target") %
                      connector['initiator'])
            raise exception.ISCSITargetAttachFailed(volume_id=volume['id'])

    # nova\virt\libvirt\driver.py:LibvirtDriver.attach_volume()
    @exception.wrap_exception()
    def attach_volume(self, connection_info, instance_name, mountpoint):
        virt_dom = self._lookup_by_name(instance_name)
        mount_device = mountpoint.rpartition("/")[2]
        # May need changes; this method is analyzed below
        conf = self.volume_driver_method('connect_volume',
                                         connection_info,
                                         mount_device)
        if FLAGS.libvirt_type == 'lxc':
            self._attach_lxc_volume(conf.to_xml(), virt_dom, instance_name)
        else:
            try:
                # Attach the device to the VM
                virt_dom.attachDevice(conf.to_xml())
            except Exception, ex:
                if isinstance(ex, libvirt.libvirtError):
                    errcode = ex.get_error_code()
                    if errcode == libvirt.VIR_ERR_OPERATION_FAILED:
                        self.volume_driver_method('disconnect_volume',
                                                  connection_info,
                                                  mount_device)
                        raise exception.DeviceIsBusy(device=mount_device)
                with excutils.save_and_reraise_exception():
                    self.volume_driver_method('disconnect_volume',
                                              connection_info,
                                              mount_device)
        # TODO(danms) once libvirt has support for LXC hotplug,
        # replace this re-define with use of the
        # VIR_DOMAIN_AFFECT_LIVE & VIR_DOMAIN_AFFECT_CONFIG flags with
        # attachDevice()
        # Re-define the domain so the attachment is indirectly persisted
        domxml = virt_dom.XMLDesc(libvirt.VIR_DOMAIN_XML_SECURE)
        self._conn.defineXML(domxml)

    # nova\virt\libvirt\driver.py:LibvirtDriver.volume_driver_method()
    def volume_driver_method(self, method_name, connection_info,
                             *args, **kwargs):
        driver_type = connection_info.get('driver_volume_type')
        if not driver_type in self.volume_drivers:
            raise exception.VolumeDriverNotFound(driver_type=driver_type)
        driver = self.volume_drivers[driver_type]
        method = getattr(driver, method_name)
        return method(connection_info, *args, **kwargs)

    def __init__():
        ...
        self.volume_drivers = {}
        for driver_str in FLAGS.libvirt_volume_drivers:
            driver_type, _sep, driver = driver_str.partition('=')
            driver_class = importutils.import_class(driver)
            self.volume_drivers[driver_type] = driver_class(self)

    volume_drivers is determined by the libvirt_volume_drivers config option; the default is:

    cfg.ListOpt('libvirt_volume_drivers',

                default=[

                  'iscsi=nova.virt.libvirt.volume.LibvirtISCSIVolumeDriver',

                  'local=nova.virt.libvirt.volume.LibvirtVolumeDriver',

                  'fake=nova.virt.libvirt.volume.LibvirtFakeVolumeDriver',

                  'rbd=nova.virt.libvirt.volume.LibvirtNetVolumeDriver',

                  'sheepdog=nova.virt.libvirt.volume.LibvirtNetVolumeDriver'

                  ],

                help='Libvirt handlers for remote volumes.'),
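The dispatch mechanism above can be reproduced in isolation: the driver_volume_type key in connection_info selects the handler, exactly as volume_driver_method does (the fake driver class below is illustrative only):

```python
class FakeISCSIVolumeDriver(object):
    """Stand-in for nova.virt.libvirt.volume.LibvirtISCSIVolumeDriver."""
    def connect_volume(self, connection_info, mount_device):
        return "attached %s via iscsi" % mount_device

# this mapping is normally built from the libvirt_volume_drivers option
volume_drivers = {"iscsi": FakeISCSIVolumeDriver()}

def volume_driver_method(method_name, connection_info, *args, **kwargs):
    driver_type = connection_info.get("driver_volume_type")
    if driver_type not in volume_drivers:
        raise LookupError("no volume driver for %r" % driver_type)
    method = getattr(volume_drivers[driver_type], method_name)
    return method(connection_info, *args, **kwargs)
```

Registering a new backend therefore only requires adding one more "type=class" entry to the option; nothing else in the attach path changes.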

    The cloud disk service can use the existing iscsi driver as-is, or implement its own driver modeled on it. The iscsi driver's content is:

    # nova\virt\libvirt\volume.py:LibvirtISCSIVolumeDriver:
    class LibvirtISCSIVolumeDriver(LibvirtVolumeDriver):
        """Driver to attach network volumes to libvirt."""
        def _run_iscsiadm(self, iscsi_properties, iscsi_command, **kwargs):
            check_exit_code = kwargs.pop('check_exit_code', 0)
            (out, err) = utils.execute('iscsiadm', '-m', 'node', '-T',
                                       iscsi_properties['target_iqn'],
                                       '-p', iscsi_properties['target_portal'],
                                       *iscsi_command, run_as_root=True,
                                       check_exit_code=check_exit_code)
            LOG.debug("iscsiadm %s: stdout=%s stderr=%s" %
                      (iscsi_command, out, err))
            return (out, err)

        def _iscsiadm_update(self, iscsi_properties, property_key,
                             property_value, **kwargs):
            iscsi_command = ('--op', 'update', '-n', property_key,
                             '-v', property_value)
            return self._run_iscsiadm(iscsi_properties, iscsi_command,
                                      **kwargs)

        @utils.synchronized('connect_volume')
        def connect_volume(self, connection_info, mount_device):
            """Attach the volume to instance_name"""
            iscsi_properties = connection_info['data']
            # NOTE(vish): If we are on the same host as nova volume, the
            #             discovery makes the target so we don't need to
            #             run --op new. Therefore, we check to see if the
            #             target exists, and if we get 255 (Not Found), then
            #             we run --op new. This will also happen if another
            #             volume is using the same target.
            try:
                self._run_iscsiadm(iscsi_properties, ())
            except exception.ProcessExecutionError as exc:
                # iscsiadm returns 21 for "No records found" after
                # version 2.0-871
                if exc.exit_code in [21, 255]:
                    self._run_iscsiadm(iscsi_properties, ('--op', 'new'))
                else:
                    raise
            if iscsi_properties.get('auth_method'):
                self._iscsiadm_update(iscsi_properties,
                                      "node.session.auth.authmethod",
                                      iscsi_properties['auth_method'])
                self._iscsiadm_update(iscsi_properties,
                                      "node.session.auth.username",
                                      iscsi_properties['auth_username'])
                self._iscsiadm_update(iscsi_properties,
                                      "node.session.auth.password",
                                      iscsi_properties['auth_password'])
            # NOTE(vish): If we have another lun on the same target, we may
            #             have a duplicate login
            self._run_iscsiadm(iscsi_properties, ("--login",),
                               check_exit_code=[0, 255])
            self._iscsiadm_update(iscsi_properties, "node.startup",
                                  "automatic")
            host_device = ("/dev/disk/by-path/ip-%s-iscsi-%s-lun-%s" %
                           (iscsi_properties['target_portal'],
                            iscsi_properties['target_iqn'],
                            iscsi_properties.get('target_lun', 0)))
            # The /dev/disk/by-path/... node is not always present immediately
            # TODO(justinsb): This retry-with-delay is a pattern, move to utils?
            tries = 0
            while not os.path.exists(host_device):
                if tries >= FLAGS.num_iscsi_scan_tries:
                    raise exception.NovaException(
                        _("iSCSI device not found at %s") % (host_device))
                LOG.warn(_("ISCSI volume not yet found at: %(mount_device)s. "
                           "Will rescan & retry.  Try number: %(tries)s") %
                         locals())
                # The rescan isn't documented as being necessary(?),
                # but it helps
                self._run_iscsiadm(iscsi_properties, ("--rescan",))
                tries = tries + 1
                if not os.path.exists(host_device):
                    time.sleep(tries ** 2)
            if tries != 0:
                LOG.debug(_("Found iSCSI node %(mount_device)s "
                            "(after %(tries)s rescans)") %
                          locals())
            connection_info['data']['device_path'] = host_device
            sup = super(LibvirtISCSIVolumeDriver, self)
            return sup.connect_volume(connection_info, mount_device)

        @utils.synchronized('connect_volume')
        def disconnect_volume(self, connection_info, mount_device):
            """Detach the volume from instance_name"""
            sup = super(LibvirtISCSIVolumeDriver, self)
            sup.disconnect_volume(connection_info, mount_device)
            iscsi_properties = connection_info['data']
            # NOTE(vish): Only disconnect from the target if no luns from the
            #             target are in use.
            device_prefix = ("/dev/disk/by-path/ip-%s-iscsi-%s-lun-" %
                             (iscsi_properties['target_portal'],
                              iscsi_properties['target_iqn']))
            devices = self.connection.get_all_block_devices()
            devices = [dev for dev in devices if dev.startswith(device_prefix)]
            if not devices:
                self._iscsiadm_update(iscsi_properties, "node.startup",
                                      "manual", check_exit_code=[0, 255])
                self._run_iscsiadm(iscsi_properties, ("--logout",),
                                   check_exit_code=[0, 255])
                self._run_iscsiadm(iscsi_properties, ('--op', 'delete'),
                                   check_exit_code=[0, 21, 255])

That is, the driver mainly implements two methods: attaching the volume to the host and detaching it from the host.
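A custom driver for the cloud disk service would follow the same two-method shape. A minimal sketch, assuming the backend exposes the host block device path in connection_info; the class name and the simplified dict returned in place of a libvirt config object are hypothetical:

```python
class CloudDiskVolumeDriver(object):
    """Hypothetical libvirt volume driver: connect_volume makes the LUN
    visible on the host and describes the guest disk; disconnect_volume
    tears the host-side session down."""

    def __init__(self, connection):
        self.connection = connection  # the LibvirtDriver instance

    def connect_volume(self, connection_info, mount_device):
        # a real driver would log in to the backend here (cf. the
        # iscsiadm calls above) before the device path exists
        device_path = connection_info["data"]["device_path"]
        return {"type": "block",
                "source_path": device_path,
                "target_dev": mount_device,
                "target_bus": "virtio"}

    def disconnect_volume(self, connection_info, mount_device):
        # log out of the backend once no LUNs on the target are in use
        pass
```

Wiring it in is then one extra entry in libvirt_volume_drivers, e.g. a hypothetical "clouddisk=<module path>.CloudDiskVolumeDriver".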

2.3    Related source files

Source file nova\volume\cinder.py (the methods the cloud disk service must implement, i.e. the APIs to wrap, are all in here): https://github.com/openstack/nova/blob/stable/folsom/nova/volume/cinder.py

Source file nova\virt\libvirt\volume.py (a reference for the driver the cloud disk service needs to implement): https://github.com/openstack/nova/blob/stable/folsom/nova/virt/libvirt/volume.py

# The default driver mapping; iscsi volumes use LibvirtISCSIVolumeDriver
cfg.ListOpt('libvirt_volume_drivers',
            default=[
                'iscsi=nova.virt.libvirt.volume.LibvirtISCSIVolumeDriver',
                'local=nova.virt.libvirt.volume.LibvirtVolumeDriver',
                'fake=nova.virt.libvirt.volume.LibvirtFakeVolumeDriver',
                'rbd=nova.virt.libvirt.volume.LibvirtNetVolumeDriver',
                'sheepdog=nova.virt.libvirt.volume.LibvirtNetVolumeDriver'
                ],
            help='Libvirt handlers for remote volumes.'),

Source file of the Cinder manager class that handles the various API requests: https://github.com/openstack/cinder/blob/master/cinder/volume/manager.py

This manager calls different drivers to carry out the actual work and complete each API request. The iSCSI driver source file is:
https://github.com/openstack/cinder/blob/master/cinder/volume/drivers/lvm.py#L304

    # The default volume driver is cinder.volume.drivers.lvm.LVMISCSIDriver
    cfg.StrOpt('volume_driver',
               default='cinder.volume.drivers.lvm.LVMISCSIDriver',
               help='Driver to use for volume creation'),

It inherits from the two classes LVMVolumeDriver and driver.ISCSIDriver; the latter lives in:
https://github.com/openstack/cinder/blob/master/cinder/volume/driver.py#L199
https://github.com/openstack/cinder/blob/master/cinder/volume/driver.py#L339
The self.tgtadm used there is initialized at:
https://github.com/openstack/cinder/blob/master/cinder/volume/drivers/lvm.py#L321
and calls the methods in:
https://github.com/openstack/cinder/blob/master/cinder/volume/iscsi.py#L460

iscsi_helper defaults to tgtadm:

    cfg.StrOpt('iscsi_helper',

               default='tgtadm',

               help='iscsi target user-land tool to use'),

3.   APIs to be added

1)        An API to extend (resize) a cloud disk (alternatively, the cloud disk service's existing API could be called directly, but adding one to nova is recommended, so that the cloud disk service does not need to expose any API externally and everything can be forwarded through nova.)

4.   Issues to watch out for

1)        Some of the error-recovery and exception-handling logic previously implemented in the cloud disk agent must be reimplemented inside nova

2)        The mount point seen inside the guest may differ from the one reported outside (because nova's attach is asynchronous, the mount point returned to the user is the one libvirt sees, not the actual device name inside the VM; the current plan is to return the final mount point through the volume-details query API)

3)        Users and authentication (the cloud disk service previously used the management platform's own authentication logic; switching to the nova interfaces requires keystone authentication. Whether this can be bridged at the management-platform layer remains to be seen)

Overall, the changes required of the cloud disk service should be small. The main work is wrapping the existing APIs and providing a client (see https://github.com/openstack/nova/blob/stable/folsom/nova/volume/cinder.py); in addition, the driver (see https://github.com/openstack/nova/blob/stable/folsom/nova/virt/libvirt/volume.py) must implement the resize logic, which should be able to reuse the existing code in the agent.
