Preface:
Deploying OpenStack is actually quite simple, but that simplicity rests on a solid theoretical foundation. I have always believed that in this field theory should guide practice. The web is full of setup guides, and any of them can get you a basic private cloud, but have you noticed how much of the configuration is duplicated? Why the duplication? Which settings belong in which file, and what exactly should each parameter be? Many of those authors cannot answer these questions themselves; this article is my attempt to set the record straight.
If anything is unclear you can reach me by email: [email protected]
Overview: this walkthrough is a basic three-node deployment; a clustered (HA) variant will be written up later when time permits.
I: Networks:
1. Management network: 172.16.209.0/24
2. Data network: 1.1.1.0/24
II: Operating system: CentOS Linux release 7.2.1511 (Core)
III: Kernel: 3.10.0-327.el7.x86_64
IV: OpenStack release: Mitaka
Topology diagram (image): OpenStack Mitaka deployment
Conventions:
1. When editing a config file, never append a comment to the end of a settings line; put comments on their own line above or below instead.
2. Always add new settings on fresh lines right after the section header; do not edit values inside existing commented-out lines.
PART1: Environment preparation (run on all nodes)
I:
Give every host a static IP, add name resolution entries to /etc/hosts, set a hostname on each machine, and disable firewalld and SELinux.
Optional: set up SSH key login from the controller to the other nodes (it makes the following steps much easier, and in a real environment a dedicated management host is well worth having), then edit /etc/hosts on the controller and scp it to the other nodes, as sketched below.
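A minimal sketch of that optional step (hostnames taken from the /etc/hosts file below):
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa      # key pair without a passphrase
for host in compute01 network02; do
    ssh-copy-id root@$host                    # push the public key
    scp /etc/hosts root@$host:/etc/hosts      # distribute the hosts file
done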
/etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.209.115 controller01
172.16.209.117 compute01
172.16.209.119 network02
II: Configure the package source (on all nodes), i.e. the yum repos. Pick one of the two options below depending on your situation; option 1 is recommended.
Option 1: a self-hosted yum repository
I built this repo myself from packages downloaded from the official site. The benefit of a self-hosted repo is strict control over package versions, keeping every host on the platform consistent and predictable. The steps:
- pick a server to act as the yum source (it can double as the cobbler/PXE host)
- upload openstack-mitaka-rpms.tar.gz
- tar xvf openstack-mitaka-rpms.tar.gz -C /
- install httpd on that machine, start it, and enable it at boot (see the commands after this list)
- ln -s /mitaka-rpms /var/www/html/
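The httpd step as commands (stock CentOS 7 package names):
yum install httpd -y
systemctl enable httpd.service
systemctl start httpd.service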
Then configure the yum repo on every node:
[mitaka]
name=mitaka repo
baseurl=http://172.16.209.100/mitaka-rpms/
enabled=1
gpgcheck=0
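One way to put that in place on each node (the file name /etc/yum.repos.d/mitaka.repo is my choice; any .repo file under that directory works):
cat > /etc/yum.repos.d/mitaka.repo <<'EOF'
[mitaka]
name=mitaka repo
baseurl=http://172.16.209.100/mitaka-rpms/
enabled=1
gpgcheck=0
EOF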
Option 2: install the upstream repo from the official site
The packages in a custom repo all come from upstream anyway; the difference is that upstream updates packages frequently, and a single package bump can introduce compatibility problems, which is why option 1 is recommended. For a test environment rather than production, though, option 2 is a bit more convenient.
On CentOS, run on all nodes:
yum install centos-release-openstack-mitaka -y
On Red Hat, run on all nodes:
# remove the epel repo on Red Hat first, otherwise it conflicts with this one
yum install https://rdoproject.org/repos/rdo-release.rpm -y
III: Build the yum cache and update the system (on all nodes)
yum makecache && yum install vim net-tools -y && yum update -y
A quick note:
yum -y update
yum -y upgrade
These do almost the same thing: upgrade is simply update with obsoletes processing forced on, so it also removes packages that newer ones have made obsolete. Both update every package, kernel included; the often-repeated claim that only one of them touches the kernel or system configuration is not accurate.
IV: Disable yum auto-updates (on all nodes)
With yum-cron present, CentOS 7 will download updates automatically by default; most production systems do not want that, so turn it off manually.
[root@engine cron.weekly]# cd /etc/yum
[root@engine yum]# ls
fssnap.d pluginconf.d protected.d vars version-groups.conf yum-cron.conf yum-cron-hourly.conf
Edit yum-cron.conf and change download_updates = yes to no, e.g. with the one-liner below.
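A one-liner for that edit (path as shown in the listing above):
sed -i 's/^download_updates = yes/download_updates = no/' /etc/yum/yum-cron.conf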
PS: if you would rather keep auto-updates on, install yum-plugin-priorities (yum install yum-plugin-priorities -y) and use repo priorities so updates come from the official repo rather than assorted third-party sources.
V: Pre-install common packages (on all nodes)
yum install python-openstackclient -y
yum install openstack-selinux -y
VI: Deploy the time service
yum install chrony -y  (on all nodes)
On the controller node:
edit the config:
/etc/chrony.conf
server ntp.staging.kycloud.lan iburst
# allow the management network segment
allow 172.16.209.0/24
Start the service:
systemctl enable chronyd.service
systemctl start chronyd.service
On the remaining nodes, point chrony at the controller in /etc/chrony.conf:
server 172.16.209.115 iburst
then enable and start chronyd the same way.
If the timezone is not Asia/Shanghai, change it:
# timedatectl set-local-rtc 1                 (keep the hardware clock on local time; 0 means UTC)
# timedatectl set-timezone Asia/Shanghai      (set the system timezone to Shanghai)
Ignoring per-distro differences, at the lowest level changing the timezone is simpler than it looks:
# cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
Verify:
run on every node:
chronyc sources
A * in the S column means that source is synchronized (it can take a few minutes; do not continue until the clocks really are in sync).
VII: Deploy the MariaDB database
yum install mariadb mariadb-server python2-PyMySQL -y
Edit:
/etc/my.cnf.d/openstack.cnf
[mysqld]
# the controller's management-network IP
bind-address = 172.16.209.115
default-storage-engine = innodb
innodb_file_per_table
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
Start the service:
systemctl enable mariadb.service
systemctl start mariadb.service
mysql_secure_installation
VIII: Deploy MongoDB for the Telemetry service
yum install mongodb-server mongodb -y
Edit /etc/mongod.conf:
# the controller's management-network IP
bind_ip = 172.16.209.115
smallfiles = true
Start the service:
systemctl enable mongod.service
systemctl start mongod.service
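A quick connectivity check (assuming the bind_ip above):
mongo --host 172.16.209.115 --eval 'db.version()'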
IX: Deploy the RabbitMQ message queue (to verify later: http://172.16.209.104:15672/ user guest, password guest; the web UI needs the management plugin, see below)
yum install rabbitmq-server -y
systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service
Create the RabbitMQ user and password:
rabbitmqctl add_user openstack che001
Grant the new openstack user full configure/write/read permissions:
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
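The management web UI mentioned above is not enabled by default; it ships with rabbitmq-server and is switched on with:
rabbitmq-plugins enable rabbitmq_management
systemctl restart rabbitmq-server.service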
X: Deploy the memcached cache (keystone uses it to cache tokens)
yum install memcached python-memcached -y
systemctl enable memcached.service
systemctl start memcached.service
PART2: Deploying the identity service (keystone)
I: Install and configure the service
1. Create the database and user
mysql -u root -p
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
IDENTIFIED BY 'che001';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
IDENTIFIED BY 'che001';
flush privileges;
2.yum install openstack-keystone httpd mod_wsgi -y
3. Edit /etc/keystone/keystone.conf
[DEFAULT]
# tip: generate a real token instead with: openssl rand -hex 10
admin_token = che001
[database]
connection = mysql+pymysql://keystone:che001@controller01/keystone
[token]
provider = fernet
# token providers: UUID, PKI, PKIZ, or Fernet; background: http://blog.csdn.net/miss_yang_cloud/article/details/49633719
4. Sync the changes into the database
su -s /bin/sh -c "keystone-manage db_sync" keystone
5. Initialize the fernet keys
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
6. Configure the Apache service
Edit /etc/httpd/conf/httpd.conf:
ServerName controller01
Edit /etc/httpd/conf.d/wsgi-keystone.conf and add:
Listen 5000
Listen 35357
<VirtualHost *:5000>
WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
WSGIProcessGroup keystone-public
WSGIScriptAlias / /usr/bin/keystone-wsgi-public
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
ErrorLogFormat "%{cu}t %M"
ErrorLog /var/log/httpd/keystone-error.log
CustomLog /var/log/httpd/keystone-access.log combined
<Directory /usr/bin>
Require all granted
</Directory>
</VirtualHost>
<VirtualHost *:35357>
WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
WSGIProcessGroup keystone-admin
WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
ErrorLogFormat "%{cu}t %M"
ErrorLog /var/log/httpd/keystone-error.log
CustomLog /var/log/httpd/keystone-access.log combined
<Directory /usr/bin>
Require all granted
</Directory>
</VirtualHost>
7. Start the service:
systemctl enable httpd.service
systemctl restart httpd.service   # restart rather than start, in case httpd is already running (e.g. if this host also serves the custom http-based yum repo)
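A quick check that both endpoints answer (each returns a small JSON version document):
curl http://controller01:5000/v3
curl http://controller01:35357/v3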
II: Create the service entity and API endpoints
1. Export temporary admin credentials as environment variables; these grant the authorization used for everything created below
export OS_TOKEN=che001
export OS_URL=http://controller01:35357/v3
export OS_IDENTITY_API_VERSION=3
2. Using that authorization, create the identity service entity (the service catalog entry)
openstack service create \
--name keystone --description "OpenStack Identity" identity
3. Against the entity just created, create its three API endpoints
openstack endpoint create --region RegionOne \
  identity public http://controller01:5000/v3
openstack endpoint create --region RegionOne \
  identity internal http://controller01:5000/v3
openstack endpoint create --region RegionOne \
  identity admin http://controller01:35357/v3
III: Create a domain, project (tenant), user, and role, and tie the four together
Create a shared domain:
openstack domain create --description "Default Domain" default
The administrator: admin
openstack project create --domain default \
--description "Admin Project" admin
openstack user create --domain default \
--password-prompt admin
openstack role create admin
openstack role add --project admin --user admin admin
The regular user: demo
openstack project create --domain default \
  --description "Demo Project" demo
openstack user create --domain default \
  --password-prompt demo
openstack role create user
openstack role add --project demo --user demo user
Create the shared service project used by every service that follows
Explanation: each new service deployed later needs four keystone operations: 1. create a project 2. create a user 3. create a role 4. associate them.
All later services share the single project service and reuse the existing admin role, so for each service install only operations 2 and 4 remain.
openstack project create --domain default \
  --description "Service Project" service
IV: Verify:
Edit /etc/keystone/keystone-paste.ini:
from the [pipeline:public_api], [pipeline:admin_api] and [pipeline:api_v3] sections
remove admin_token_auth
unset OS_TOKEN OS_URL
openstack --os-auth-url http://controller01:35357/v3 \
--os-project-domain-name default --os-user-domain-name default \
--os-project-name admin --os-username admin token issue
Password:
+------------+--------------------------------------------------------------+
| Field      | Value                                                        |
+------------+--------------------------------------------------------------+
| expires    | 2016-08-17T08:29:18.528637Z                                  |
| id         | gAAAAABXtBJO-mItMcPR15TSELJVB2iwelryjAGGpaCaWTW3YuEnPpUeg799klo0DaTfhFBq69AiFB2CbFF4CE6qgIKnTauOXhkUkoQBL6iwJkpmwneMo5csTBRLAieomo4z2vvvoXfuxg2FhPUTDEbw-DPgponQO-9FY1IAEJv_QV1qRaCRAY0 |
| project_id | 9783750c34914c04900b606ddaa62920                             |
| user_id    | 8bc9b323a3b948758697cb17da304035                             |
+------------+--------------------------------------------------------------+
V: Create client environment scripts
The admin user: admin-openrc
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=che001
export OS_AUTH_URL=http://controller01:35357/v3
export OS_IMAGE_API_VERSION=2
The regular user: demo-openrc (the same variables as admin-openrc, except the lines below and OS_PASSWORD, which takes the demo user's password)
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_AUTH_URL=http://controller01:5000/v3
The result:
source admin-openrc
[root@controller01 ~]# openstack token issue
PART3: Deploying the image service (glance)
1. Create the database and user (mysql -u root -p)
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
IDENTIFIED BY 'che001';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
IDENTIFIED BY 'che001';
flush privileges;
2. The keystone side:
As noted above, every subsequent service lives in the shared service project; for each we create a user, grant it the admin role, and associate the two.
. admin-openrc
openstack user create --domain default --password-prompt glance
openstack role add --project service --user glance admin
Create the service entity
openstack service create --name glance \
  --description "OpenStack Image" image
Create the endpoints
openstack endpoint create --region RegionOne \
  image public http://controller01:9292
openstack endpoint create --region RegionOne \
  image internal http://controller01:9292
openstack endpoint create --region RegionOne \
  image admin http://controller01:9292
3. Install the packages
yum install openstack-glance -y
4. Initialize the image store. Here we use plain local storage, but whatever backend you pick must exist before glance starts: glance probes its stores through the backend drivers at startup, so a store created after startup is invisible until glance is restarted. That is why this step comes before the configuration.
Create the directory:
mkdir -p /var/lib/glance/images/
chown glance. /var/lib/glance/images/
5. Edit the config:
Edit /etc/glance/glance-api.conf
[database]
# this connection is what db_sync uses to build the schema; without it no tables get created
# glance-api itself can serve VM creation without [database], but the metadata definitions break;
# the log then shows: ERROR glance.api.v2.metadef_namespaces
connection = mysql+pymysql://glance:che001@controller01/glance
[keystone_authtoken]
auth_url = http://controller01:5000
memcached_servers = controller01:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = che001
[paste_deploy]
flavor = keystone
[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
Edit /etc/glance/glance-registry.conf
[database]
# glance-registry uses this connection to look up image metadata; give it the same
# [database], [keystone_authtoken] (username glance) and [paste_deploy] settings as glance-api
connection = mysql+pymysql://glance:che001@controller01/glance
Sync the database (it prints some deprecation warnings about 'future'; ignore them):
su -s /bin/sh -c "glance-manage db_sync" glance
systemctl enable openstack-glance-api.service \
openstack-glance-registry.service
systemctl start openstack-glance-api.service \
openstack-glance-registry.service
II: Verify:
wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
(local mirror: wget http://172.16.209.100/cirros-0.3.4-x86_64-disk.img)
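Optionally confirm the file really is qcow2 before uploading (needs the qemu-img tool from the qemu-img package):
qemu-img info cirros-0.3.4-x86_64-disk.img
# look for: file format: qcow2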
openstack image create "cirros" \
--file cirros-0.3.4-x86_64-disk.img \
--disk-format qcow2 --container-format bare \
--public
openstack image list
PART4: Deploying the compute service (nova)
I: Controller node configuration
1. Create the databases and user (mysql -u root -p)
CREATE DATABASE nova_api;
CREATE DATABASE nova;
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
IDENTIFIED BY 'che001';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
IDENTIFIED BY 'che001';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
IDENTIFIED BY 'che001';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
IDENTIFIED BY 'che001';
flush privileges;
2. The keystone side
openstack user create --domain default \
  --password-prompt nova
openstack role add --project service --user nova admin
openstack service create --name nova \
--description "OpenStack Compute" compute
openstack endpoint create --region RegionOne \
  compute public http://controller01:8774/v2.1/%\(tenant_id\)s
openstack endpoint create --region RegionOne \
  compute internal http://controller01:8774/v2.1/%\(tenant_id\)s
openstack endpoint create --region RegionOne \
  compute admin http://controller01:8774/v2.1/%\(tenant_id\)s
3. Install the packages:
yum install openstack-nova-api openstack-nova-conductor \
openstack-nova-console openstack-nova-novncproxy \
openstack-nova-scheduler -y
4. Edit the config:
Edit /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
rpc_backend = rabbit
auth_strategy = keystone
# my_ip is the controller's management-network IP
my_ip = 172.16.209.115
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api_database]
connection = mysql+pymysql://nova:che001@controller01/nova_api
[database]
connection = mysql+pymysql://nova:che001@controller01/nova
[oslo_messaging_rabbit]
rabbit_host = controller01
rabbit_userid = openstack
rabbit_password = che001
[keystone_authtoken]
# the rest of this block mirrors the glance one above (auth_url, memcached_servers,
# auth_type = password, default domains, project_name = service); only the user differs
username = nova
password = che001
[vnc]
vncserver_listen = 172.16.209.115
vncserver_proxyclient_address = 172.16.209.115
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
5. Sync the databases (ignore the warnings about 'future'):
su -s /bin/sh -c "nova-manage api_db sync" nova
su -s /bin/sh -c "nova-manage db sync" nova
6. Start the services
systemctl enable openstack-nova-api.service \
openstack-nova-consoleauth.service openstack-nova-scheduler.service \
openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl start openstack-nova-api.service \
openstack-nova-consoleauth.service openstack-nova-scheduler.service \
openstack-nova-conductor.service openstack-nova-novncproxy.service
II: Compute node configuration
1. Install the packages:
yum install openstack-nova-compute libvirt-daemon-lxc -y
2. Edit the config (/etc/nova/nova.conf; the [DEFAULT] rabbit and keystone_authtoken settings match the controller's, then):
[DEFAULT]
# the compute node's management-network IP
my_ip = 172.16.209.117
[vnc]
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = 172.16.209.117
# the URL below points at the controller's management-network IP
novncproxy_base_url = http://172.16.209.115:6080/vnc_auto.html
[glance]
api_servers = http://controller01:9292
3. If you are deploying nova on a machine without hardware virtualization support, check:
egrep -c '(vmx|svm)' /proc/cpuinfo
If the result is 0, edit /etc/nova/nova.conf:
[libvirt]
virt_type = qemu
4. Start the services
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service
III: Verify
On the controller node:
[root@controller01 ~]# source admin-openrc
[root@controller01 ~]# openstack compute service list
+----+------------------+--------------+----------+---------+-------+----------------------------+
| Id | Binary           | Host         | Zone     | Status  | State | Updated At                 |
+----+------------------+--------------+----------+---------+-------+----------------------------+
| 1  | nova-consoleauth | controller01 | internal | enabled | up    | 2016-08-17T08:51:37.000000 |
| 2  | nova-conductor   | controller01 | internal | enabled | up    | 2016-08-17T08:51:29.000000 |
| 8  | nova-scheduler   | controller01 | internal | enabled | up    | 2016-08-17T08:51:38.000000 |
| 12 | nova-compute     | compute01    | nova     | enabled | up    | 2016-08-17T08:51:30.000000 |
+----+------------------+--------------+----------+---------+-------+----------------------------+
PART5: Deploying the networking service (neutron)
I: Controller node configuration
1. Create the database and user (mysql -u root -p)
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
IDENTIFIED BY 'che001';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
IDENTIFIED BY 'che001';
flush privileges;
2. The keystone side
openstack user create --domain default --password-prompt neutron
openstack role add --project service --user neutron admin
openstack service create --name neutron \
--description "OpenStack Networking" network
openstack endpoint create --region RegionOne \
  network public http://controller01:9696
openstack endpoint create --region RegionOne \
  network internal http://controller01:9696
openstack endpoint create --region RegionOne \
  network admin http://controller01:9696
3. Install the packages
yum install openstack-neutron openstack-neutron-ml2 python-neutronclient which -y
4. Configure the server component
Edit the /etc/neutron/neutron.conf file and complete the following:
[DEFAULT]
core_plugin = ml2
service_plugins = router
# the next setting enables overlapping IP address support
allow_overlapping_ips = True
auth_strategy = keystone
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
[database]
connection = mysql+pymysql://neutron:che001@controller01/neutron
[keystone_authtoken]
# mirrors the glance block above; only the user differs
username = neutron
password = che001
[nova]
# nova credentials so neutron can notify nova of port changes; the block also needs
# auth_url, auth_type = password, default domains, project_name = service,
# username = nova, password = che001
region_name = RegionOne
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
Edit the /etc/neutron/plugins/ml2/ml2_conf.ini file
[ml2]
type_drivers = flat,vlan,vxlan,gre
tenant_network_types = vxlan
mechanism_drivers = openvswitch,l2population
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider
[ml2_type_vxlan]
vni_ranges = 1:1000
[securitygroup]
enable_ipset = True
Edit the /etc/nova/nova.conf file (still on the controller):
[neutron]
url = http://controller01:9696
# plus the neutron credentials (auth_url, auth_type = password, default domains,
# project_name = service, username = neutron, password = che001) and:
service_metadata_proxy = True
metadata_proxy_shared_secret = che001
5. Create the plugin symlink
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
6. Sync the database (ignore the warnings about 'future'):
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
7. Restart the nova API service
systemctl restart openstack-nova-api.service
8. Start the neutron service
systemctl enable neutron-server.service
systemctl start neutron-server.service
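A quick check that neutron-server is answering (lists the loaded extensions; assumes admin-openrc is sourced):
. admin-openrc
neutron ext-list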
II: Network node configuration
1. Edit /etc/sysctl.conf
net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
2. Apply it immediately:
sysctl -p
3. Install the packages:
yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch -y
4. Configure the components
Edit the /etc/neutron/neutron.conf file (the [DEFAULT] rabbit/keystone settings match the controller's; the agents talk to neutron-server rather than the database, so the [database] connection can stay commented out)
5. Edit the /etc/neutron/plugins/ml2/openvswitch_agent.ini file:
[ovs]
# local_ip is the network node's data-network IP
local_ip=1.1.1.119
bridge_mappings=external:br-ex
[agent]
tunnel_types=gre,vxlan
#l2_population=True
prevent_arp_spoofing=True
6. Configure the L3 agent. Edit the /etc/neutron/l3_agent.ini file:
[DEFAULT]
interface_driver=neutron.agent.linux.interface.OVSInterfaceDriver
external_network_bridge=br-ex
7. Configure the DHCP agent. Edit the /etc/neutron/dhcp_agent.ini file:
[DEFAULT]
# interface driver for OVS (present in the official guide; required so the agent can plug into the bridge)
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver=neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata=True
8. Configure the metadata agent. Edit the /etc/neutron/metadata_agent.ini file:
[DEFAULT]
nova_metadata_ip=controller01
metadata_proxy_shared_secret=che001
9. Start the services (start the services first, then create the br-ex bridge)
On the network node:
systemctl enable neutron-openvswitch-agent.service neutron-l3-agent.service \
neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl start neutron-openvswitch-agent.service neutron-l3-agent.service \
neutron-dhcp-agent.service neutron-metadata-agent.service
10. Create the bridge
Note: if NICs are scarce and you want to use the network node's management NIC as the physical interface bound to br-ex,
# strip the IP from the management NIC and give br-ex its own config file carrying the old management IP
[root@network01 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=Ethernet
ONBOOT="yes"
BOOTPROTO="none"
NM_CONTROLLED=no
[root@network01 ~]# cat /etc/sysconfig/network-scripts/ifcfg-br-ex
DEVICE=br-ex
#HWADDR=bc:ee:7b:78:7b:a7
IPADDR=172.16.209.10
GATEWAY=172.16.209.1
NETMASK=255.255.255.0
DNS1=202.106.0.20
DNS2=8.8.8.8
# be sure to set NM_CONTROLLED=no, otherwise the interface may fail to come up
NM_CONTROLLED=no
ovs-vsctl add-br br-ex
# add the physical port to br-ex before restarting the network service
# (eth2 here; if you reused the management NIC as in the example above, it would be eth0)
ovs-vsctl add-port br-ex eth2
# when restarting the network service, the enslaved NIC must have no IP (or simply be down),
# and NM_CONTROLLED=no must be set, or the service will fail to start
systemctl restart network
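To confirm the wiring, list the bridges and their ports (br-int and br-tun are created by the openvswitch agent itself):
ovs-vsctl show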
III: Compute node configuration
1. Edit /etc/sysctl.conf with the same three lines as on the network node
2. sysctl -p
3. yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch -y
4. Edit the /etc/neutron/neutron.conf file (same settings as on the network node)
5. Edit /etc/neutron/plugins/ml2/openvswitch_agent.ini
[ovs]
# local_ip is the compute node's data-network IP
local_ip = 1.1.1.117
#bridge_mappings = vlan:br-vlan
[agent]
tunnel_types = gre,vxlan
# l2_population receives the (new VM) ARP entries pushed out from the neutron server side and
# pre-populates them on every tunnel endpoint (compute node), suppressing most of the initial
# ARP broadcast flood ("initial" because even without l2pop a VM caches the answer after one
# broadcast); details: https://assafmuller.com/2014/05/21/ovs-arp-responder-theory-and-practice/
l2_population = True
# arp_responder turns br-tun into an ARP proxy: ARP requests from this node for other VMs
# (as opposed to physical hosts) are answered locally by br-tun; powerful stuff
arp_responder = True
prevent_arp_spoofing = True
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True
6. Edit /etc/nova/nova.conf: add the same [neutron] section as on the controller (url plus the neutron credentials), so nova-compute can ask neutron-server for ports
7. Start the services
systemctl enable neutron-openvswitch-agent.service
systemctl start neutron-openvswitch-agent.service
systemctl restart openstack-nova-compute.service
PART6: Deploying the dashboard (horizon)
On the controller node
1. Install the package
yum install openstack-dashboard -y
2. Configure /etc/openstack-dashboard/local_settings
OPENSTACK_HOST = "controller01"
ALLOWED_HOSTS = ['*', ]
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': 'controller01:11211',
}
}
# note: it must be v3, not v3.0
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
TIME_ZONE = "UTC"
3. Start the services
systemctl enable httpd.service memcached.service
systemctl restart httpd.service memcached.service
4. Verify:
http://172.16.209.115/dashboard
Summary:
- only each service's API layer talks to keystone, so do not scatter auth configuration everywhere
- when an instance is built, it is nova-compute that calls the other services' APIs, so there is nothing extra to configure on the controller for those calls
- ml2 is neutron's core plugin and only needs configuring on the controller node
- the network node only needs its agents configured
- each component's API does more than accept requests, e.g. it validates them: the controller's nova.conf needs neutron's API address and credentials because nova boot must validate the network the user submitted; the controller's neutron.conf needs nova's API and credentials because deleting a network port requires asking nova-api whether an instance still uses it; the compute node's nova.conf needs the neutron settings because nova-compute asks neutron-server to create the port. "Port" here means a port on the virtual switch.
- Not sure why, or not following? Go study how the OpenStack components communicate and walk through the instance-boot workflow.
Network troubleshooting:
On the network node:
[root@network02 ~]# ip netns show
qdhcp-e63ab886-0835-450f-9d88-7ea781636eb8
qdhcp-b25baebb-0a54-4f59-82f3-88374387b1ec
qrouter-ff2ddb48-86f7-4b49-8bf4-0335e8dbaa83
[root@network02 ~]# ip netns exec qrouter-ff2ddb48-86f7-4b49-8bf4-0335e8dbaa83 bash
[root@network02 ~]# ping -c2 www.baidu.com
PING www.a.shifen.com (61.135.169.125) 56(84) bytes of data.
64 bytes from 61.135.169.125: icmp_seq=1 ttl=52 time=33.5 ms
64 bytes from 61.135.169.125: icmp_seq=2 ttl=52 time=25.9 ms
If the ping fails, exit the namespace, rebuild the bridges, and restart the agents (they recreate br-tun, the ports and the flows):
ovs-vsctl del-br br-ex
ovs-vsctl del-br br-int
ovs-vsctl del-br br-tun
ovs-vsctl add-br br-int
ovs-vsctl add-br br-ex
ovs-vsctl add-port br-ex eth0
systemctl restart neutron-openvswitch-agent.service neutron-l3-agent.service \
neutron-dhcp-agent.service neutron-metadata-agent.service