Note: if the VM on vSphere uses a SCSI virtual disk, the virtio modules must be installed and loaded before migration; an IDE disk does not need this. (If the disk uses the IDE interface, installing and loading virtio into the system will not help: the VM still fails to boot in OpenStack!!!)
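To confirm which controller actually backs the guest's disk before deciding whether the virtio steps below are needed, a quick check inside the guest can help (this check is my suggestion, not part of the original article):

lspci | grep -i -E 'ide|scsi'    # lists the storage controllers, e.g. an IDE or an LSI SCSI controller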
I. Install and load the virtio modules
/sbin/dracut --force --verbose --add-drivers "virtio virtio_ring virtio_pci" /boot/initramfs-3.10.0-327.el7.x86_64.img 3.10.0-327.el7.x86_64
modprobe virtio
modprobe virtio_pci
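Whether the drivers actually made it into the rebuilt initramfs and the running kernel can be verified with something like the following (a sketch; the image path must match the one rebuilt above):

lsinitrd /boot/initramfs-3.10.0-327.el7.x86_64.img | grep virtio    # drivers packed into the image
lsmod | grep virtio                                                 # modules loaded right now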
II. Export the OVF template from vCenter with ovftool
1. Download the tool from http://ftp.tucha13.net/pub/software/VMware-ovftool-4.1.0/VMware-ovftool-4.1.0-2459827-lin.x86_64.bundle
2. Make it executable: chmod a+x VMware-ovftool-4.1.0-2459827-lin.x86_64.bundle
3. Install the tool: ./VMware-ovftool-4.1.0-2459827-lin.x86_64.bundle
4. Export the VM's OVF template:
[root@compute04 5_126]# ovftool --disableVerification --noSSLVerify --powerOffSource vi://user:password@ip/path/to/Resources/05.xxxx/5.126-xxxx1-centos7 5_126.ovf
Opening VI source: vi://administrator%40vc.com@ip/path/to/Resources/05.xxxxxx5.126-xxxxxx1-centos7
Powering off VM: 5.126-1-centos7
Opening OVF target: 5_126.ovf
Writing OVF package: 5_126.ovf
Transfer Completed
Completed successfully
[root@compute04 5_126]# ls
5_126-disk1.vmdk 5_126-disk2.vmdk 5_126.mf 5_126.ovf
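Before moving multi-gigabyte disks any further, the exported package can be probed; as far as I know, running ovftool with only a source argument validates the package and prints its summary (this check is my addition, not part of the original workflow):

ovftool 5_126.ovf    # probe mode: validates the OVF and lists its disks and networks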
III. Create the volumes
1. Create the system volume
Note: the volume you create must be at least as large as the VM's disk.
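To pick a safe size, the virtual size of the exported disk can be read first (a quick sketch using the file produced by the export above):

qemu-img info 5_126-disk1.vmdk    # "virtual size" is the lower bound for the cinder volume size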
# cinder create --display-name 5_126_os 20
+--------------------------------+--------------------------------------+
| Property | Value |
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2017-04-01T06:16:33.000000 |
| description | None |
| encrypted | False |
| id | 2be0eaee-ca53-4d03-96a4-caae1c011a55 |
| metadata | {} |
| migration_status | None |
| multiattach | False |
| name | 5_126_os |
| os-vol-host-attr:host | None |
| os-vol-mig-status-attr:migstat | None |
| os-vol-mig-status-attr:name_id | None |
| os-vol-tenant-attr:tenant_id | f3419d1896284d15af004b1ad6222a9a |
| replication_status | disabled |
| size | 20 |
| snapshot_id | None |
| source_volid | None |
| status | creating |
| updated_at | None |
| user_id | 0e28446136b742a8849c6a54675e6ee8 |
| volume_type | None |
2. Import the system disk
# cinder set-bootable 2be0eaee-ca53-4d03-96a4-caae1c011a55 true
# cinder list
+--------------------------------------+-----------+----------------+------+-------------+----------+--------------------------------------+
| ID | Status | Name | Size | Volume Type | Bootable | Attached to |
| 231b0af3-5a28-4ded-a211-517bbc0c5e41 | available | tomcat_volume1 | 10 | - | false | |
| 2be0eaee-ca53-4d03-96a4-caae1c011a55 | available | 5_126_os | 20 | - | true | |
| 2ea90de4-fad9-4185-af93-6e0580edc846 | in-use | test02 | 20 | - | false | 340a66bf-2c0c-45e3-91c0-7108931a78e3 |
[root@compute04 5_126]# rbd -p volumes rm volume-2be0eaee-ca53-4d03-96a4-caae1c011a55
2017-04-01 14:21:17.472336 7f85e4ce17c0 -1 asok(0x410f9c0) AdminSocketConfigObs::init: failed: AdminSocket::bind_and_listen: failed to bind the UNIX domain socket to '/var/run/ceph/guests/ceph-client.admin.10234.68221936.asok': (2) No such file or directory
Removing image: 100% complete...done.
[root@compute04 5_126]# qemu-img convert -p /path/to/5_126-disk1.vmdk -O rbd rbd:volumes/volume-2be0eaee-ca53-4d03-96a4-caae1c011a55
    (100.00/100%)
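A quick sanity check that the conversion really replaced the image in Ceph (a hedged sketch; the pool name volumes is the one used throughout this article):

rbd -p volumes info volume-2be0eaee-ca53-4d03-96a4-caae1c011a55    # size and format of the imported image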
3. Create the data volume
# cinder create --display-name 5_126_data 20
| created_at | 2017-04-01T06:28:07.000000 |
| id | 675d155f-43be-4465-8e48-44a5ec3c12bf |
| name | 5_126_data |
[root@compute04 5_126]# rbd -p volumes rm volume-675d155f-43be-4465-8e48-44a5ec3c12bf
2017-04-01 14:29:10.726135 7fe708cf27c0 -1 asok(0x3e0a9c0) AdminSocketConfigObs::init: failed: AdminSocket::bind_and_listen: failed to bind the UNIX domain socket to '/var/run/ceph/guests/ceph-client.admin.10554.65055728.asok': (2) No such file or directory
4. Import the data disk
[root@compute04 5_126]# qemu-img convert -p /path/to/5_126/5_126-disk2.vmdk -O rbd rbd:volumes/volume-675d155f-43be-4465-8e48-44a5ec3c12bf
IV. Boot the VM from the volume
1. Boot the VM from the system volume:
# nova boot --flavor 2 boot_vol --boot-volume 2be0eaee-ca53-4d03-96a4-caae1c011a55 --availability-zone nova:compute07 --security-groups default --nic net-id=4df49be9-ace6-413a-9c6b-0ec055056c76
+--------------------------------------+----------------------------------------------------------------------------------+
| Property | Value |
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | - |
| OS-EXT-SRV-ATTR:hostname | boot-vol |
| OS-EXT-SRV-ATTR:hypervisor_hostname | - |
| OS-EXT-SRV-ATTR:instance_name | instance-00000073 |
| OS-EXT-SRV-ATTR:kernel_id | |
| OS-EXT-SRV-ATTR:launch_index | 0 |
| OS-EXT-SRV-ATTR:ramdisk_id | |
| OS-EXT-SRV-ATTR:reservation_id | r-1ww9vdyg |
| OS-EXT-SRV-ATTR:root_device_name | - |
| OS-EXT-SRV-ATTR:user_data | - |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | - |
| OS-SRV-USG:terminated_at | - |
| accessIPv4 | |
| accessIPv6 | |
| adminPass | Xk4PCWqDzksU |
| config_drive | |
| created | 2017-04-01T06:31:09Z |
| description | - |
| flavor | m1.small (2) |
| hostId | |
| host_status | |
| id | e9001768-5f7a-4bdb-b4ab-9bb53be6361b |
| image | Attempt to boot from volume - no image supplied |
| key_name | - |
| locked | False |
| metadata | {} |
| name | boot_vol |
| os-extended-volumes:volumes_attached | [{"id": "2be0eaee-ca53-4d03-96a4-caae1c011a55", "delete_on_termination": false}] |
| progress | 0 |
| security_groups | default |
| status | BUILD |
| tenant_id | f3419d1896284d15af004b1ad6222a9a |
| updated | 2017-04-01T06:31:10Z |
| user_id | 0e28446136b742a8849c6a54675e6ee8 |
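The instance comes up in BUILD; a loop along these lines waits until it turns ACTIVE before the data volume is attached (a sketch using the instance ID from the output above):

while [ "$(nova show e9001768-5f7a-4bdb-b4ab-9bb53be6361b | awk '/ status /{print $4}')" != "ACTIVE" ]; do
    sleep 5    # poll every few seconds until nova reports ACTIVE
done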
2. Attach the data volume:
# nova volume-attach e9001768-5f7a-4bdb-b4ab-9bb53be6361b 675d155f-43be-4465-8e48-44a5ec3c12bf
+----------+--------------------------------------+
| Property | Value |
| device | /dev/vdb |
| id | 675d155f-43be-4465-8e48-44a5ec3c12bf |
| serverId | e9001768-5f7a-4bdb-b4ab-9bb53be6361b |
| volumeId | 675d155f-43be-4465-8e48-44a5ec3c12bf |
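Inside the guest the volume appears as /dev/vdb, as the table shows. Since the disk carries the filesystem of the original data disk, it only needs to be mounted, not formatted (a sketch; the mount point /mnt matches the verification below):

mount /dev/vdb /mnt    # mount the migrated data disk; do NOT mkfs, the data is already on it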
V. Verify the result
# ssh root@10.1.200.105
The authenticity of host '10.1.200.105 (10.1.200.105)' can't be established.
ECDSA key fingerprint is 63:ae:8b:0c:4b:3f:92:73:18:d4:47:db:cf:ff:1a:e2.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.1.200.105' (ECDSA) to the list of known hosts.
root@10.1.200.105's password:
[root@host-10-1-200-105 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos-root 14G 986M 13G 7% /
devtmpfs 910M 0 910M 0% /dev
tmpfs 920M 0 920M 0% /dev/shm
tmpfs 920M 8.4M 912M 1% /run
tmpfs 920M 0 920M 0% /sys/fs/cgroup
/dev/vdb 16G 33M 16G 1% /mnt
/dev/vda1 497M 124M 374M 25% /boot
tmpfs 184M 0 184M 0% /run/user/0
[root@host-10-1-200-105 ~]# cd /tmp/
[root@host-10-1-200-105 tmp]# cd /mnt/
[root@host-10-1-200-105 mnt]# ls
123
[root@host-10-1-200-105 mnt]#
This article was reposted from jinlinger's 51CTO blog. Original link: http://blog.51cto.com/essun/1912445. For reprinting, please contact the original author.