Installing OpenStack Juno on CentOS 7.0

Reference: the official documentation at http://docs.openstack.org/juno/install-guide/install/yum/content/#

This deployment uses the FlatDHCP networking mode.

IP addresses of the three nodes:

controller:172.17.80.210

compute2:172.17.80.212

compute1:172.17.80.211

Edit /etc/hosts on each node accordingly.

Verify that each node can reach openstack.org.

1. Configure the NTP service

1.1. On the controller node

# yum install -y ntp

Edit /etc/ntp.conf as follows:

    server NTP_SERVER iburst  # keep the default

    restrict -4 default kod notrap nomodify

    restrict -6 default kod notrap nomodify

# systemctl enable ntpd.service    # start at boot

# systemctl start ntpd.service

1.2. On the other nodes

# yum install ntp

Edit /etc/ntp.conf as follows:

    server controller iburst

Enable and start the service at boot:

# systemctl enable ntpd.service

# systemctl start ntpd.service
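To confirm a node is actually syncing from the controller, check the `ntpq` peer table. The sketch below runs the same check over a hypothetical sample of that output (the refid and timing numbers are made up); the peer whose line begins with `*` is the currently selected sync source.

```shell
# Hypothetical sample of `ntpq -c peers` output; on a real node run: ntpq -c peers
sample='     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*controller     192.0.2.10       3 u   45   64  377    0.210    0.012   0.004'

# A peer whose line begins with "*" is the currently selected sync source.
if printf '%s\n' "$sample" | grep -q '^\*controller'; then
    sync_source=controller
else
    sync_source=none
fi
echo "sync source: $sync_source"
```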

1.3. OpenStack package prerequisites (on all nodes)

Install the yum-plugin-priorities package to enable assignment of relative priorities within repositories:

# yum install -y yum-plugin-priorities 

Install the epel-release package to enable the EPEL repository:

# yum install http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm

Install the rdo-release-juno package to enable the RDO repository:

# yum install http://rdo.fedorapeople.org/openstack-juno/rdo-release-juno.rpm

Upgrade the packages on your system:

# yum upgrade

# reboot

RHEL and CentOS enable SELinux by default. Install the openstack-selinux package to automatically manage security policies for OpenStack services:

# yum install openstack-selinux

If the installation reports an error, install the package directly from the RPM:

#yum install http://repos.fedorapeople.org/repos/openstack/openstack-juno/epel-7/openstack-selinux-0.5.19-2.el7ost.noarch.rpm

1.4. Install and configure the database server

# yum install -y mariadb mariadb-server MySQL-python

Because of licensing, CentOS 7 ships MariaDB in place of MySQL.

Edit /etc/my.cnf:

    [mysqld]

    bind-address = 127.0.0.1  # the official guide binds to the controller's management IP (here 172.17.80.210) so that other nodes can connect

    default-storage-engine = innodb

    innodb_file_per_table

    collation-server = utf8_general_ci

    init-connect = 'SET NAMES utf8'

    character-set-server = utf8

# systemctl enable mariadb.service

# systemctl start mariadb.service

# mysql_secure_installation    # secure the MySQL installation

1.5. Install the RabbitMQ message broker (on the controller node)

# yum install rabbitmq-server

# systemctl enable rabbitmq-server.service

# systemctl start rabbitmq-server.service

An error occurred the first time the service was started.

# rabbitmqctl change_password guest 123456

This command may also report an error.

2. Install and configure the Identity service

2.1. Install and configure

# mysql 

> create database keystone;

> grant all privileges on keystone.* to 'keystone'@'localhost' identified by 'test01';

> grant all privileges on keystone.* to 'keystone'@'%' identified by 'test01';

# openssl rand -hex 10 

Note: the command above generates a random string to use as the admin_token in keystone.conf; you can also just make one up. This installation sets admin_token to 123456.
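A quick sketch of how a generated value ends up in the config; the scratch file here stands in for /etc/keystone/keystone.conf (and this guide ultimately uses the fixed value 123456 instead of a random one):

```shell
# Generate a random hex string for admin_token (same command as the guide).
ADMIN_TOKEN=$(openssl rand -hex 10)

# Write the token into a scratch copy of the config; on a real system the
# target file is /etc/keystone/keystone.conf.
conf=$(mktemp)
printf '[DEFAULT]\nadmin_token = %s\n' "$ADMIN_TOKEN" > "$conf"

grep '^admin_token' "$conf"
```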

# yum install -y openstack-keystone python-keystoneclient

Edit /etc/keystone/keystone.conf:

[DEFAULT]

...

admin_token = 123456

verbose = True

[database]

connection = mysql://keystone:test01@localhost/keystone

[token]

provider = keystone.token.providers.uuid.Provider

driver = keystone.token.persistence.backends.sql.Token

# keystone-manage pki_setup --keystone-user keystone --keystone-group keystone

# chown -R keystone:keystone /var/log/keystone

# chown -R keystone:keystone /etc/keystone/ssl

# chmod -R o-rwx /etc/keystone/ssl

# su -s /bin/sh -c "keystone-manage db_sync" keystone    # or simply: keystone-manage db_sync

# systemctl enable openstack-keystone

# systemctl start openstack-keystone

# (crontab -l -u keystone 2>&1 | grep -q token_flush) || echo '@hourly /usr/bin/keystone-manage token_flush >/var/log/keystone/keystone-tokenflush.log 2>&1' >> /var/spool/cron/keystone    # periodically purge expired tokens
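The cron one-liner above is idempotent: it appends the job only if no token_flush entry exists yet. The same guard pattern, demonstrated on a scratch file rather than the real keystone crontab:

```shell
# Scratch file standing in for /var/spool/cron/keystone.
cron=$(mktemp)
job='@hourly /usr/bin/keystone-manage token_flush >/var/log/keystone/keystone-tokenflush.log 2>&1'

# Append only when the entry is absent; running it twice adds the job once.
grep -q token_flush "$cron" || echo "$job" >> "$cron"
grep -q token_flush "$cron" || echo "$job" >> "$cron"   # second run is a no-op

grep -c token_flush "$cron"
```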

2.2. Create tenants, users, and roles

# export OS_SERVICE_TOKEN=123456

# export OS_SERVICE_ENDPOINT=http://controller:35357/v2.0

Create the admin tenant:

# keystone tenant-create --name admin --description "Admin Tenant"


Create the admin user:

# keystone user-create --name admin --pass test01 --email [email protected]


Create the admin role:

# keystone role-create --name admin


Add the admin role to the admin tenant and user:

# keystone user-role-add --user admin --tenant admin --role admin

2.3. Create a demo tenant and user for typical operations in your environment:

Create the demo tenant:

# keystone tenant-create --name demo --description "Demo Tenant"


Create the demo user under the demo tenant:

# keystone user-create --name demo --tenant demo --pass test01 --email [email protected]


Create the service tenant:

# keystone tenant-create --name service --description "Service Tenant"


Create the service entity for the Identity service:

# keystone service-create --name keystone --type identity --description "Openstack Identity"


Create the Identity service API endpoints:

# keystone endpoint-create --service-id $(keystone service-list | awk '/ identity / {print $2}') --publicurl http://controller:5000/v2.0 --internalurl http://controller:5000/v2.0 --adminurl http://controller:35357/v2.0 --region regionOne
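The `--service-id` argument is filled in by scraping the table that `keystone service-list` prints. The sketch below runs the same awk filter over a hypothetical sample of that table (the id value is made up); awk splits on whitespace, so field 2 is the id column between the first two `|` separators.

```shell
# Hypothetical `keystone service-list` output; the real command queries the API.
sample='+----------------------------------+----------+----------+--------------------+
|                id                |   name   |   type   |    description     |
+----------------------------------+----------+----------+--------------------+
| 1a2b3c4d5e6f7a8b9c0d1e2f3a4b5c6d | keystone | identity | Openstack Identity |
+----------------------------------+----------+----------+--------------------+'

# Match the row whose type column is " identity " and print field 2, the id.
service_id=$(printf '%s\n' "$sample" | awk '/ identity / {print $2}')
echo "service id: $service_id"
```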


2.4. Verify operation

# unset OS_SERVICE_TOKEN OS_SERVICE_ENDPOINT

# keystone --os-tenant-name admin --os-username admin --os-password test01 --os-auth-url http://controller:35357/v2.0 token-get


# keystone --os-tenant-name admin --os-username admin --os-password test01 --os-auth-url http://controller:35357/v2.0 tenant-list


# keystone --os-tenant-name admin --os-username admin --os-password test01 --os-auth-url http://controller:35357/v2.0 user-list


# keystone --os-tenant-name admin --os-username admin --os-password test01 --os-auth-url http://controller:35357/v2.0 role-list


# keystone --os-tenant-name demo --os-username demo --os-password test01 --os-auth-url http://controller:35357/v2.0 token-get


# keystone --os-tenant-name demo --os-username demo --os-password test01 --os-auth-url http://controller:35357/v2.0 user-list

2.5. Create OpenStack client environment scripts

vi admin-openrc.sh

export OS_TENANT_NAME=admin

export OS_USERNAME=admin

export OS_PASSWORD=test01

export OS_AUTH_URL=http://controller:35357/v2.0

vi demo-openrc.sh

export OS_TENANT_NAME=demo

export OS_USERNAME=demo

export OS_PASSWORD=test01

export OS_AUTH_URL=http://controller:5000/v2.0
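Once written, these scripts are consumed with `source` (or `.`), and the keystone client picks the credentials up from the OS_* environment variables whenever the --os-* flags are omitted. A self-contained sketch using a scratch copy of admin-openrc.sh:

```shell
# Write a scratch copy of admin-openrc.sh (contents match the guide) and
# source it into the current shell.
rc=$(mktemp)
cat > "$rc" <<'EOF'
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=test01
export OS_AUTH_URL=http://controller:35357/v2.0
EOF
. "$rc"

echo "auth as ${OS_USERNAME}@${OS_TENANT_NAME} via ${OS_AUTH_URL}"
```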

3. Add the Image Service (on the controller node)

3.1. Configure prerequisites

#mysql -u root

> create database glance;

> grant all privileges on glance.* to 'glance'@'localhost' identified by 'test01';

> grant all privileges on glance.* to 'glance'@'%' identified by 'test01';

> quit

# source admin-openrc.sh 

# keystone user-create --name glance --pass test01 --email [email protected]

# keystone user-role-add --user glance --tenant service --role admin

# keystone service-create --name glance --type image --description "Openstack Image Service"

# keystone endpoint-create --service-id $(keystone service-list | awk '/ image / {print $2}') --publicurl http://controller:9292 --internalurl http://controller:9292 --adminurl http://controller:9292 --region regionOne

3.2. Install and configure the Image Service components

# yum install openstack-glance python-glanceclient

# vi /etc/glance/glance-api.conf

[DEFAULT]

verbose = True

notification_driver = noop

[database]

connection = mysql://glance:test01@controller/glance

[keystone_authtoken]

auth_uri = http://controller:5000/v2.0

identity_uri = http://controller:35357

admin_tenant_name = service

admin_user = glance

admin_password = test01

[paste_deploy]

flavor = keystone

[glance_store]

default_store = file

filesystem_store_datadir = /var/lib/glance/images/

# vi /etc/glance/glance-registry.conf

[keystone_authtoken]

auth_uri = http://controller:5000/v2.0

[paste_deploy]

flavor = keystone

# su -s /bin/sh -c "glance-manage db_sync" glance

# systemctl enable openstack-glance-api.service openstack-glance-registry.service 

# systemctl start openstack-glance-api.service openstack-glance-registry.service 

# mkdir /tmp/images

# cd /tmp/images

# wget http://cdn.download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-disk.img

# source admin-openrc.sh    # run this from the directory containing admin-openrc.sh; here it is under /root/

# glance image-create --name "cirros-0.3.3-x86_64" --file cirros-0.3.3-x86_64-disk.img --disk-format qcow2 --container-format bare --is-public True --progress


4. Add the Compute service (on the controller node)

4.1. Install and configure the controller node

#mysql -u root -p

>create database nova;

>grant all privileges on nova.* to 'nova'@'localhost' identified by 'test01';

>grant all privileges on nova.* to 'nova'@'%' identified by 'test01';

# source admin-openrc.sh

# keystone user-create --name nova --pass test01


# keystone user-role-add --user nova --tenant service --role admin

# keystone service-create --name nova --type compute --description "OpenStack Compute"


# keystone endpoint-create \

  --service-id $(keystone service-list | awk '/ compute / {print $2}') \

  --publicurl http://controller:8774/v2/%\(tenant_id\)s \

  --internalurl http://controller:8774/v2/%\(tenant_id\)s \

  --adminurl http://controller:8774/v2/%\(tenant_id\)s \

  --region regionOne


# yum install openstack-nova-api openstack-nova-cert openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler python-novaclient

Edit /etc/nova/nova.conf:

connection=mysql://nova:test01@controller/nova

Note: this entry is not present in the Juno configuration file by default and must be added manually; to avoid mistakes, append it at the end of the file, under a [database] section.

rpc_backend=rabbit

rabbit_host=controller

rabbit_password=123456

auth_strategy=keystone

auth_uri=http://controller:5000/v2.0

identity_uri=http://controller:35357

admin_user=nova

my_ip=172.17.80.210

vncserver_listen=172.17.80.210

vncserver_proxyclient_address=172.17.80.210

host=controller

# su -s /bin/sh -c "nova-manage db sync" nova

Note: syncing the database takes a few minutes, so be patient.

# systemctl enable openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

# systemctl start openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

4.2. Install and configure the compute node

# yum install -y openstack-nova-compute sysfsutils

vi /etc/nova/nova.conf

rpc_backend = rabbit

rabbit_host = controller

rabbit_password =123456

auth_strategy = keystone

my_ip=172.17.80.211

host=compute1

vnc_enabled = True

vncserver_listen = 0.0.0.0

vncserver_proxyclient_address = 172.17.80.211

novncproxy_base_url = http://controller:6080/vnc_auto.html

identity_uri = http://controller:35357

admin_tenant_name = service

admin_user = nova

admin_password =test01

[glance]

host = controller

Check whether the node supports hardware acceleration:

    # egrep -c '(vmx|svm)' /proc/cpuinfo 

    1. If this command returns one or more, the compute node supports hardware acceleration and usually needs no extra configuration.

    2. If it returns zero, the compute node does not support hardware acceleration, and you must configure libvirt to use QEMU instead of KVM.

    Edit /etc/nova/nova.conf:

    [libvirt]

    ...

    virt_type = qemu
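The decision above is easy to script. This sketch applies the same vmx/svm test to a canned cpuinfo line (a hypothetical sample without virtualization flags; a real compute node would grep /proc/cpuinfo instead):

```shell
# Hypothetical cpuinfo flags line; on a real node use:
#   egrep -c '(vmx|svm)' /proc/cpuinfo
cpuinfo='flags : fpu vme de pse tsc msr pae mce'

# vmx (Intel VT-x) or svm (AMD-V) means KVM hardware acceleration is available;
# otherwise fall back to QEMU software emulation.
if printf '%s\n' "$cpuinfo" | grep -Eq 'vmx|svm'; then
    virt_type=kvm
else
    virt_type=qemu
fi
echo "virt_type = $virt_type"
```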

# systemctl enable libvirtd.service openstack-nova-compute.service

# systemctl start libvirtd.service openstack-nova-compute.service

4.3. Test on the controller node

5. Install and configure networking (nova-network)

5.1. On the controller node

Add the following to /etc/nova/nova.conf:

network_api_class = nova.network.api.API

security_group_api = nova

# systemctl restart openstack-nova-api.service openstack-nova-scheduler.service \

  openstack-nova-conductor.service

5.2. On the compute nodes

# yum install -y openstack-nova-network openstack-nova-api

# vi /etc/nova/nova.conf 

firewall_driver = nova.virt.libvirt.firewall.IptablesFirewallDriver

network_manager = nova.network.manager.FlatDHCPManager

network_size = 254

allow_same_net_traffic = False

multi_host = True

send_arp_for_ha = True

share_dhcp_address = True

force_dhcp_release = True

flat_network_bridge = br100

flat_interface = eth1

public_interface = eth0

# systemctl enable openstack-nova-network.service openstack-nova-metadata-api.service

# systemctl start openstack-nova-network.service openstack-nova-metadata-api.service

6. Install and configure the dashboard

# yum install -y openstack-dashboard httpd mod_wsgi memcached python-memcached

vi /etc/openstack-dashboard/local_settings

OPENSTACK_HOST = "controller"

ALLOWED_HOSTS = ['*']

CACHES = {

    'default': {

        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',

        'LOCATION': '127.0.0.1:11211',

    }

}

# setsebool -P httpd_can_network_connect on

# chown -R apache:apache /usr/share/openstack-dashboard/static

# systemctl enable httpd.service memcached.service

# systemctl start httpd.service memcached.service

Testing

Log in at http://172.17.80.211/dashboard (the credentials were created earlier: admin / test01).

The most common error when launching an instance: No valid host was found.

First check whether all the services started normally.

Usually you need to examine the log files under /var/log/nova/ to pin down the exact error:

tail /var/log/nova/nova-conductor.log 

7. Install and configure Block Storage

7.1. On the controller node

# mysql -u root -p

>  create database cinder;

> grant all privileges on cinder.* to 'cinder'@'localhost' identified by 'test01';

> grant all privileges on cinder.* to 'cinder'@'%' identified by 'test01';

# keystone user-create --name cinder --pass test01


# keystone user-role-add --user cinder --tenant service --role admin

# keystone service-create --name cinder --type volume --description "openstack block storage"

# keystone service-create --name cinderv2 --type volumev2 --description "openstack block storage"


# keystone endpoint-create \

  --service-id $(keystone service-list | awk '/ volume / {print $2}') \

  --publicurl http://controller:8776/v1/%\(tenant_id\)s \

  --internalurl http://controller:8776/v1/%\(tenant_id\)s \

  --adminurl http://controller:8776/v1/%\(tenant_id\)s \

  --region regionOne

# keystone endpoint-create \

  --service-id $(keystone service-list | awk '/ volumev2 / {print $2}') \

  --publicurl http://controller:8776/v2/%\(tenant_id\)s \

  --internalurl http://controller:8776/v2/%\(tenant_id\)s \

  --adminurl http://controller:8776/v2/%\(tenant_id\)s \

  --region regionOne

# yum install -y openstack-cinder python-cinderclient python-oslo-db

vi /etc/cinder/cinder.conf

my_ip = 172.17.80.210

connection = mysql://cinder:test01@controller/cinder

admin_user = cinder

# su -s /bin/sh -c "cinder-manage db sync" cinder

# systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service

# systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service

7.2. Install and configure the storage node

IP address: 172.17.80.214

For the prerequisite packages, refer to the basic environment section above.

# yum install http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm -y

# yum install http://rdo.fedorapeople.org/openstack-juno/rdo-release-juno.rpm -y

# yum upgrade -y

# yum install openstack-selinux -y 

# If yum cannot find the package, install it from the RPM below:

#yum install http://repos.fedorapeople.org/repos/openstack/openstack-juno/epel-7/openstack-selinux-0.5.19-2.el7ost.noarch.rpm

vi /etc/hosts

172.17.80.214    block1

# yum install lvm2 -y

# systemctl enable lvm2-lvmetad.service

# systemctl start lvm2-lvmetad.service

For this demo, the virtual machine has two disks, one of which is dedicated to storage; adapt this to your own situation.

# pvcreate /dev/xvdb1

# vgcreate cinder-volumes /dev/xvdb1

vi /etc/lvm/lvm.conf

devices {

    filter = [ "a/xvdb/", "r/.*/" ]

}

Warning

If your storage nodes use LVM on the operating system disk, you must also add the associated device to the filter. For example, if the /dev/sda device contains the operating system:


filter = [ "a/sda/", "a/sdb/", "r/.*/"]

Similarly, if your compute nodes use LVM on the operating system disk, you must also modify the filter in the /etc/lvm/lvm.conf file on those nodes to include only the operating system disk. For example, if the /dev/sda device contains the operating system:

filter = [ "a/sda/", "r/.*/"]

# yum install openstack-cinder targetcli python-oslo-db MySQL-python -y

vi /etc/cinder/cinder.conf

iscsi_helper = lioadm

glance_host = controller

my_ip = 172.17.80.214

connection = mysql://cinder:test01@controller/cinder

# systemctl enable openstack-cinder-volume.service target.service

# systemctl start openstack-cinder-volume.service target.service

7.3. Testing

On the controller node:


The block storage node does not show up? Checking the logs on the block node reveals that the MySQL connection is failing.


This happens because MySQL does not accept remote connections by default; on the controller node, allow the block1 node to connect (check the bind-address in /etc/my.cnf and the grants for the cinder user).


Restart the service on the block1 node:

# systemctl restart openstack-cinder-volume.service

The service now shows as up on the controller node.


Create a 20 GB volume:

# cinder create --display-name demo-volume1 20


8. Add Object Storage

8.1. On the controller node

# keystone user-create --name swift --pass test01


# keystone user-role-add --user swift --tenant service --role admin

# keystone service-create --name swift --type object-store   --description "OpenStack Object Storage"


# keystone endpoint-create --service-id $(keystone service-list | awk '/ object-store / {print $2}') --publicurl 'http://controller:8080/v1/AUTH_%(tenant_id)s' --internalurl 'http://controller:8080/v1/AUTH_%(tenant_id)s' --adminurl http://controller:8080 --region regionOne


# yum install openstack-swift-proxy python-swiftclient python-keystone-auth-token \

  python-keystonemiddleware memcached -y

Obtain the proxy service configuration file from the Object Storage source repository:

# curl -o /etc/swift/proxy-server.conf \

  https://raw.githubusercontent.com/openstack/swift/stable/juno/etc/proxy-server.conf-sample


vi /etc/swift/proxy-server.conf

bind_port = 8080

user = swift

swift_dir = /etc/swift

[pipeline:main]

pipeline = authtoken cache healthcheck keystoneauth proxy-logging proxy-server

[app:proxy-server]

allow_account_management = true

account_autocreate = true

[filter:keystoneauth]

use = egg:swift#keystoneauth

operator_roles = admin,_member_

[filter:authtoken]

paste.filter_factory = keystonemiddleware.auth_token:filter_factory

admin_user = swift

admin_password = test01

delay_auth_decision = true

[filter:cache]

memcache_servers = 127.0.0.1:11211

8.2. Install and configure the storage nodes

Basic package installation omitted……

Two disks: one for the operating system, one for storage.

IP address: 172.17.80.213

# yum install -y xfsprogs rsync

# mkfs.xfs /dev/xvdb1   # the VM's second disk

# mkdir -p /srv/node/sdb1

vi /etc/fstab

/dev/xvdb1    /srv/node/sdb1    xfs    noatime,nodiratime,nobarrier,logbufs=8 0 2

# mount /srv/node/sdb1

vi /etc/rsyncd.conf

uid = swift

gid = swift

log file = /var/log/rsyncd.log

pid file = /var/run/rsyncd.pid

address = 172.17.80.213

[account]

max connections = 2

path = /srv/node/

read only = false

lock file = /var/lock/account.lock

[container]

lock file = /var/lock/container.lock

[object]

lock file = /var/lock/object.lock

# systemctl enable rsyncd.service

# systemctl start rsyncd.service

Install the packages:

# yum install -y openstack-swift-account openstack-swift-container openstack-swift-object

# curl -o /etc/swift/account-server.conf \

  https://raw.githubusercontent.com/openstack/swift/stable/juno/etc/account-server.conf-sample

# curl -o /etc/swift/container-server.conf \

   https://raw.githubusercontent.com/openstack/swift/stable/juno/etc/container-server.conf-sample

# curl -o /etc/swift/object-server.conf \

   https://raw.githubusercontent.com/openstack/swift/stable/juno/etc/object-server.conf-sample


vi /etc/swift/account-server.conf

bind_ip = 172.17.80.213

bind_port = 6002

devices = /srv/node

pipeline = healthcheck recon account-server

[filter:recon]

recon_cache_path = /var/cache/swift

vi  /etc/swift/container-server.conf

bind_ip =172.17.80.213

bind_port = 6001

pipeline = healthcheck recon container-server

vi /etc/swift/object-server.conf

bind_port = 6000

pipeline = healthcheck recon object-server

# chown -R swift:swift /srv/node

# mkdir -p /var/cache/swift

# chown -R swift:swift /var/cache/swift

8.3. On the controller node

8.3.1. Account ring

# cd /etc/swift/

Create the base account.builder file:

# swift-ring-builder account.builder create 10 3 1
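The three arguments to `create` set the ring geometry: part_power, replica count, and min_part_hours. A quick sketch of what `10 3 1` means:

```shell
# swift-ring-builder account.builder create 10 3 1
#   part_power = 10   -> the ring has 2^10 partitions
#   replicas   = 3    -> three copies of each object
#   min_part_hours = 1 -> a partition is not moved again within 1 hour
part_power=10
partitions=$(( 1 << part_power ))
echo "partitions: $partitions"
```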

Add each storage node to the ring:

# swift-ring-builder account.builder add r1z1-172.17.80.213:6002/xvdb1 100


Verify the ring contents:

# swift-ring-builder account.builder


# swift-ring-builder account.builder rebalance

8.3.2. Container ring

# cd /etc/swift

# swift-ring-builder container.builder create 10 3 1

# swift-ring-builder container.builder add r1z1-172.17.80.213:6001/xvdb1 100

# swift-ring-builder container.builder

# swift-ring-builder container.builder rebalance

8.3.3. Object ring

# swift-ring-builder object.builder create 10 3 1

# swift-ring-builder object.builder add r1z1-172.17.80.213:6000/xvdb1 100

# swift-ring-builder object.builder

# swift-ring-builder object.builder rebalance

Distribute ring configuration files

Copy the account.ring.gz, container.ring.gz, and object.ring.gz files to the /etc/swift directory on each storage node and any additional nodes running the proxy service.

This completes the ring setup.

8.4. On the Object Storage nodes

Configure the hashes and the default storage policy:

# curl -o /etc/swift/swift.conf \

  https://raw.githubusercontent.com/openstack/swift/stable/juno/etc/swift.conf-sample

vi /

[swift-hash]

swift_hash_path_suffix = test01

swift_hash_path_prefix = test01

[storage-policy:0]

name = Policy-0

default = yes

Copy the swift.conf file to the /etc/swift directory on each storage node and any additional nodes running the proxy service.

# chown -R swift:swift /etc/swift    # on all nodes
