
OpenStack Stein Deployment

  • Environment
    • My plan
    • Security
    • Hostnames, yum repos, etc.
    • NTP (optional)
      • chrony deployment
    • Database (controller)
    • Message queue (controller)
    • Install Memcached (controller)
    • Install Etcd (controller)
  • Identity service (controller)
    • Database
    • Install and configure
  • Image service (glance, controller)
    • Database
    • Add the service to OpenStack
    • Install and configure
    • Verify
  • Placement service installation
    • Database
    • Add the service to OpenStack
    • Install and configure
    • Verify
  • nova (controller)
    • Database
    • OpenStack configuration
    • Install and configure
  • nova (compute)
    • Install and configure
    • Verify (controller)
  • Networking service neutron (controller)
    • Database
    • OpenStack service
    • Configure networking
    • Configure the remaining sections
  • neutron on the compute node (compute)
    • Install and configure
    • Configure networking
    • Remaining sections
    • Verify
  • Web UI horizon (controller)
  • Troubleshooting
    • placement errors
    • keystone errors
    • glance errors
    • dashboard errors
      • Another error caused by the type driver
    • Instance creation errors
      • 1. Host 'compute' is not mapped to any cell
      • 2. 4336 ERROR nova.compute.manager [req-7c1bc64e-74da-4152-84fd-45eff53ed5ee 25b99096b60849e9b5a66dde8ce879cb dba851953a1446cfb651022214d6d486 - default default] [instance: ee763bf9-25ae-48e2-8a52-524695e9b4f1] Failed to allocate network(s): VirtualInterfaceCreateException: Virtual Interface creation failed
      • 3. There are not enough hosts available.
      • 4. oslo.messaging._drivers.impl_rabbit [-] Unexpected error during heartbeat thread processing, retrying... error

Environment

My plan

Note: this guide still contains a number of mistakes; if you find one, please leave me a comment.

controller IP: 192.168.3.104 (management IP), 192.168.101.131 (provider IP)

compute IP: 192.168.3.103 (management IP), 192.168.101.130 (provider IP)

All passwords are: 123456

Configuration used in the official documentation:

controller IP: 10.0.0.11

compute IP: 10.0.0.31

Tip:

You can use grep to filter the file you are about to edit; many of these files have no effective content, only comments and section headers:

grep -vE '^#|^$' <file path>
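For example (the path here is only an illustration), to show just the effective settings in keystone.conf:

grep -vE '^#|^$' /etc/keystone/keystone.conf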

Configuration progress

controller: configured up through glance

compute: fully configured

Security

controller: 
	openssl rand -hex 10 > rand.pass
The official documentation recommends using a string generated by this command as the admin password
           

Hostnames, yum repos, etc.

All nodes:

hostnamectl set-hostname controller
hostnamectl set-hostname compute
yum -y install epel-release centos-release-openstack-stein 
yum -y install python-openstackclient openstack-selinux
yum  -y  upgrade
Configure the hosts file
Remove the UUID and HWADDR entries from the NIC configuration files
Check that the time is consistent across nodes
	date
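A minimal /etc/hosts for this layout, assuming the management IPs listed above (adjust to your own addressing), could look like:

	192.168.3.104 controller
	192.168.3.103 compute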
           

NTP (optional)

chrony deployment

(Alibaba NTP servers)

ntp1.aliyun.com

ntp2.aliyun.com

ntp3.aliyun.com

ntp4.aliyun.com

ntp5.aliyun.com

controller node

yum install chrony -y

vim /etc/chrony.conf

server 127.0.0.1 iburst

allow 0.0.0.0/0

other nodes

yum -y install chrony

vim /etc/chrony.conf

server controller iburst

Verify

systemctl restart chronyd

systemctl enable chronyd

chronyc sources

Database (controller)

yum -y install mariadb mariadb-server python2-PyMySQL

vim  /etc/my.cnf.d/openstack.cnf

[mysqld]
bind-address = 192.168.3.104

default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8

systemctl enable mariadb.service
systemctl start mariadb.service

Run mysql_secure_installation to set the database root password
           

Message queue (controller)

yum install rabbitmq-server  -y 
systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service

rabbitmqctl add_user openstack 123456
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
(Output showing: Setting permissions for user "openstack" in vhost "/" means it succeeded)
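As an optional sanity check (not part of the original steps), the user and its permissions can be listed:

rabbitmqctl list_users
rabbitmqctl list_permissions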
           

Install Memcached (controller)

yum install memcached python-memcached -y 
vim /etc/sysconfig/memcached (edit)
OPTIONS="-l 127.0.0.1,::1,controller"

systemctl enable memcached.service
systemctl start memcached.service
           

Install Etcd (controller)

yum -y install etcd 
Edit:
	vim  /etc/etcd/etcd.conf 
	Modify: ETCD_INITIAL_CLUSTER, 
	ETCD_INITIAL_ADVERTISE_PEER_URLS, 
	ETCD_ADVERTISE_CLIENT_URLS, 
	ETCD_LISTEN_CLIENT_URLS 

	Something like the following:
		#[Member]
		ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
		ETCD_LISTEN_PEER_URLS="http://192.168.3.104:2380"
		ETCD_LISTEN_CLIENT_URLS="http://192.168.3.104:2379"
		ETCD_NAME="controller"
		#[Clustering]
		ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.3.104:2380"
		ETCD_ADVERTISE_CLIENT_URLS="http://192.168.3.104:2379"
		ETCD_INITIAL_CLUSTER="controller=http://192.168.3.104:2380"
		ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
		ETCD_INITIAL_CLUSTER_STATE="new"
	
systemctl enable etcd
systemctl start etcd
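As an optional check (the syntax here assumes the etcd v2 client shipped with CentOS 7), you can ask etcd for its member list and health:

etcdctl --endpoints=http://192.168.3.104:2379 member list
etcdctl --endpoints=http://192.168.3.104:2379 cluster-health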
           

Identity service (controller)

Database

mysql -uroot -p123456

CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost'  IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%'  IDENTIFIED BY  '123456';
           

Install and configure

yum install openstack-keystone httpd mod_wsgi -y 
           

vim /etc/keystone/keystone.conf

[database]
connection = mysql+pymysql://keystone:123456@controller/keystone
[token]
provider = fernet

Back at the command line:
	su -s /bin/sh -c "keystone-manage db_sync" keystone
	keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
	keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
	keystone-manage bootstrap --bootstrap-password 123456 \
	  --bootstrap-admin-url http://controller:5000/v3/ \
	  --bootstrap-internal-url http://controller:5000/v3/ \
	  --bootstrap-public-url http://controller:5000/v3/ \
	  --bootstrap-region-id RegionOne

Configure the Apache server
	vim /etc/httpd/conf/httpd.conf
		ServerName controller
	ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

systemctl enable httpd.service
systemctl start httpd.service
		
Set up the admin user's environment variables:
export OS_USERNAME=admin
export OS_PASSWORD=123456
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
Verify: openstack token issue


Create domains, projects, users, and roles:
openstack domain create --description "An Example Domain" example
openstack project create --domain default \
  --description "Service Project" service
openstack project create --domain default \
  --description "Demo Project" myproject
openstack user create --domain default \
  --password-prompt myuser
openstack role create myrole
openstack role add --project myproject --user myuser myrole
           

Verify:

unset OS_AUTH_URL OS_PASSWORD
openstack --os-auth-url http://controller:5000/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name admin --os-username admin token issue
  
openstack --os-auth-url http://controller:5000/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name myproject --os-username myuser token issue
You will be prompted for the password; if a token is printed, everything is working.
           

Image service (glance, controller)

Database

CREATE DATABASE glance;
 GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost'    IDENTIFIED BY '123456';
 GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%'    IDENTIFIED BY '123456';
           

Add the service to OpenStack

openstack user create --domain default --password-prompt glance
openstack role add --project service --user glance admin
openstack service create --name glance  --description "OpenStack Image" image

Create the glance API endpoints:
 openstack endpoint create --region RegionOne \
  image public http://controller:9292
 openstack endpoint create --region RegionOne \
  image internal http://controller:9292
 openstack endpoint create --region RegionOne \
  image admin http://controller:9292
           

Install and configure

yum install openstack-glance -y

vim /etc/glance/glance-api.conf

[database]
	connection = mysql+pymysql://glance:123456@controller/glance

	[keystone_authtoken]

	www_authenticate_uri  = http://controller:5000
	auth_url = http://controller:5000
	memcached_servers = controller:11211
	auth_type = password
	project_domain_name = Default
	user_domain_name = Default
	project_name = service
	username = glance
	password = 123456
	
	[paste_deploy]
	flavor = keystone
	[glance_store]
	stores = file,http
	default_store = file
	filesystem_store_datadir = /var/lib/glance/images/
           

vim /etc/glance/glance-registry.conf

[database]
connection = mysql+pymysql://glance:123456@controller/glance
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = 123456

[paste_deploy]
flavor = keystone

 su -s /bin/sh -c "glance-manage db_sync" glance
systemctl enable openstack-glance-api.service  openstack-glance-registry.service
systemctl start openstack-glance-api.service    openstack-glance-registry.service
           

Verify

wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
openstack image create "cirros"   --file cirros-0.4.0-x86_64-disk.img   --disk-format qcow2 --container-format bare   --public
openstack image list
           

Placement service installation

Database

CREATE DATABASE placement;
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost'    IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%'    IDENTIFIED BY '123456';
           

Add the service to OpenStack

openstack user create --domain default --password-prompt placement
openstack role add --project service --user placement admin
Create the Placement API entry in the service catalog:
openstack service create --name placement \
  --description "Placement API" placement

Create the Placement API service endpoints:
openstack endpoint create --region RegionOne \
  placement public http://controller:8778
openstack endpoint create --region RegionOne \
  placement internal http://controller:8778
openstack endpoint create --region RegionOne \
  placement admin http://controller:8778
           

Install and configure

yum install openstack-placement-api -y
vim /etc/placement/placement.conf
	[placement_database]
	connection = mysql+pymysql://placement:123456@controller/placement
	[api]
	auth_strategy = keystone
	
	[keystone_authtoken]
	auth_url = http://controller:5000/v3
	memcached_servers = controller:11211
	auth_type = password
	project_domain_name = Default
	user_domain_name = Default
	project_name = service
	username = placement
	password = 123456
su -s /bin/sh -c "placement-manage db sync" placement
systemctl restart httpd
           

Verify:

placement-status upgrade check
pip install osc-placement
openstack resource provider list 
openstack --os-placement-api-version 1.2 resource class list --sort-column name
openstack --os-placement-api-version 1.6 trait list --sort-column name

pip configuration
	Install pip
	yum -y install python-pip
	vim /root/.pip/pip.conf
		[global]
		index-url = http://mirrors.aliyun.com/pypi/simple/
		[install]
		trusted-host=mirrors.aliyun.com
pip install --upgrade pip
           

nova(controller)

Database

CREATE DATABASE nova_api;
CREATE DATABASE nova;
CREATE DATABASE nova_cell0;
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
  IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
  IDENTIFIED BY '123456';
  
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
  IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
  IDENTIFIED BY '123456';
  
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
  IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
  IDENTIFIED BY '123456';
           

OpenStack configuration

openstack user create --domain default --password-prompt nova
openstack role add --project service --user nova admin

Create the service entity:
openstack service create --name nova \
  --description "OpenStack Compute" compute
Create the Compute API service endpoints:
openstack endpoint create --region RegionOne \
  compute public http://controller:8774/v2.1

openstack endpoint create --region RegionOne \
  compute internal http://controller:8774/v2.1

openstack endpoint create --region RegionOne \
  compute admin http://controller:8774/v2.1

           

Install and configure

yum install openstack-nova-api openstack-nova-conductor \
  openstack-nova-novncproxy openstack-nova-scheduler -y

vim /etc/nova/nova.conf
	[DEFAULT]
	enabled_apis = osapi_compute,metadata
	transport_url = rabbit://openstack:123456@controller
	my_ip = 192.168.3.104
	use_neutron = true
	firewall_driver = nova.virt.firewall.NoopFirewallDriver
	[api_database]
	# ...
	connection = mysql+pymysql://nova:123456@controller/nova_api
	
	[database]
	# ...
	connection = mysql+pymysql://nova:123456@controller/nova
	[api]
	# ...
	auth_strategy = keystone
	
	[keystone_authtoken]
	# ...
	auth_url = http://controller:5000/v3
	memcached_servers = controller:11211
	auth_type = password
	project_domain_name = Default
	user_domain_name = Default
	project_name = service
	username = nova
	password = 123456

vim /etc/nova/nova.conf
	[neutron]
	[vnc]
	enabled = true
	# ...
	server_listen = $my_ip
	server_proxyclient_address = $my_ip
	
	[glance]
	# ...
	api_servers = http://controller:9292
	
	[oslo_concurrency]
	# ...
	lock_path = /var/lib/nova/tmp
	
	[placement]
	# ...
	region_name = RegionOne
	project_domain_name = Default
	project_name = service
	auth_type = password
	user_domain_name = Default
	auth_url = http://controller:5000/v3
	username = placement
	password = 123456

Populate the databases
su -s /bin/sh -c "nova-manage api_db sync" nova
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
su -s /bin/sh -c "nova-manage db sync" nova

Verify that cell0 and cell1 are registered correctly:
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova

Finalize the installation:
systemctl enable openstack-nova-api.service \
  openstack-nova-consoleauth openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service

systemctl start openstack-nova-api.service \
  openstack-nova-consoleauth openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service

Note: openstack-nova-consoleauth no longer seems to be supported, so starting it will throw an error; just drop consoleauth from the commands above.
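With consoleauth dropped, the enable/start commands become (my adjustment based on the note above, not the wording of the official guide):

systemctl enable openstack-nova-api.service \
  openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service

systemctl start openstack-nova-api.service \
  openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service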

           

nova(compute)

Install and configure

yum install openstack-nova-compute -y 

vim /etc/nova/nova.conf
	[DEFAULT]
	# ...
	enabled_apis = osapi_compute,metadata
	transport_url = rabbit://openstack:123456@controller
	my_ip = 192.168.3.103
	use_neutron = true
	firewall_driver = nova.virt.firewall.NoopFirewallDriver
	[api]
	# ...
	auth_strategy = keystone
	
	[keystone_authtoken]
	# ...
	auth_url = http://controller:5000/v3
	memcached_servers = controller:11211
	auth_type = password
	project_domain_name = Default
	user_domain_name = Default
	project_name = service
	username = nova
	password = 123456

vim /etc/nova/nova.conf
	[neutron]
	[vnc]
	enabled = true
	server_listen = 0.0.0.0
	server_proxyclient_address = $my_ip
	novncproxy_base_url = http://controller:6080/vnc_auto.html
	
	[glance]
	api_servers = http://controller:9292
	
	[oslo_concurrency]
	lock_path = /var/lib/nova/tmp
	
	[placement]
	region_name = RegionOne
	project_domain_name = Default
	project_name = service
	auth_type = password
	user_domain_name = Default
	auth_url = http://controller:5000/v3
	username = placement
	password = 123456
	
	[libvirt]
	virt_type = qemu
	
Start the services
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service


*Add the compute node to the cell database (run on the controller)*
	openstack compute service list --service nova-compute
	
	Discover compute hosts
	su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
	
	Periodically discover compute nodes (optional)
	vim /etc/nova/nova.conf
		[scheduler]
		discover_hosts_in_cells_interval = 300

           

Verify (controller)

openstack compute service list

or re-run host discovery: nova-manage cell_v2 discover_hosts --verbose

openstack catalog list

openstack image list

nova-status upgrade check

Networking service neutron (controller)

Database

CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
  IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
  IDENTIFIED BY '123456';
           

OpenStack service

openstack user create --domain default --password-prompt neutron
openstack role add --project service --user neutron admin

Create the OpenStack service entity:
openstack service create --name neutron \
  --description "OpenStack Networking" network

Create the OpenStack API endpoints:
openstack endpoint create --region RegionOne \
  network public http://controller:9696
openstack endpoint create --region RegionOne \
  network internal http://controller:9696
openstack endpoint create --region RegionOne \
  network admin http://controller:9696
           

Configure networking

yum install openstack-neutron openstack-neutron-ml2 \
  openstack-neutron-linuxbridge ebtables -y

vim /etc/neutron/neutron.conf
	[database]
	# ...
	connection = mysql+pymysql://neutron:123456@controller/neutron
	
	[DEFAULT]
	# ...
	core_plugin = ml2
	service_plugins = router
	allow_overlapping_ips = true
	transport_url = rabbit://openstack:123456@controller
	auth_strategy = keystone
	notify_nova_on_port_status_changes = true
	notify_nova_on_port_data_changes = true
	
	[keystone_authtoken]
	# ...
	www_authenticate_uri = http://controller:5000
	auth_url = http://controller:5000
	memcached_servers = controller:11211
	auth_type = password
	project_domain_name = default
	user_domain_name = default
	project_name = service
	username = neutron
	password = 123456
	
	[nova]
	# ...
	auth_url = http://controller:5000
	auth_type = password
	project_domain_name = default
	user_domain_name = default
	region_name = RegionOne
	project_name = service
	username = nova
	password = 123456
	
	[oslo_concurrency]
	# ...
	lock_path = /var/lib/neutron/tmp


vim /etc/neutron/plugins/ml2/ml2_conf.ini
	[ml2]
	# ...
	type_drivers = flat,vlan,vxlan
	tenant_network_types = vxlan
	mechanism_drivers = linuxbridge,l2population
	extension_drivers = port_security
	
	[ml2_type_flat]
	# ...
	flat_networks = provider
	
	[ml2_type_vxlan]
	# ...
	vni_ranges = 1:1000
	
	[securitygroup]
	# ...
	enable_ipset = true


vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini

	[linux_bridge]
	physical_interface_mappings = provider:eth1            (provider network interface)
	
	[vxlan]
	enable_vxlan = true
	local_ip = 192.168.3.104        (management IP)
	l2_population = true
	
	[securitygroup]
	# ...
	enable_security_group = true
	firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Enable net.bridge.bridge-nf-call-iptables
modprobe bridge
modprobe br_netfilter
vim /etc/sysctl.conf
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
sysctl -p 
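Note that the modprobe commands above do not persist across reboots. One way to load the modules at boot (the file name here is my own choice) is:

vim /etc/modules-load.d/neutron.conf
	bridge
	br_netfilter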

Configure the layer-3 (L3) agent
vim /etc/neutron/l3_agent.ini
	[DEFAULT]
	# ...
	interface_driver = linuxbridge
	
Configure the DHCP agent
vim /etc/neutron/dhcp_agent.ini
	[DEFAULT]
	# ...
	interface_driver = linuxbridge
	dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
	enable_isolated_metadata = true
           

Configure the remaining sections

vim /etc/neutron/metadata_agent.ini
	[DEFAULT]
	nova_metadata_host = controller
	metadata_proxy_shared_secret = METADATA_SECRET      (this value can be changed; I am leaving it as-is)
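If you do want a real secret instead of the METADATA_SECRET placeholder, it can be generated the same way as the admin password earlier, for example:

openssl rand -hex 10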

vim /etc/nova/nova.conf
	[neutron]
	url = http://controller:9696
	auth_url = http://controller:5000
	auth_type = password
	project_domain_name = default
	user_domain_name = default
	region_name = RegionOne
	project_name = service
	username = neutron
	password = 123456
	service_metadata_proxy = true
	metadata_proxy_shared_secret = METADATA_SECRET          (must match the value above)

Finish the installation
 ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
Populate the database
 su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

Restart the Compute API service and enable/start the Networking services
systemctl restart openstack-nova-api.service

systemctl enable neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service  neutron-l3-agent.service

systemctl start neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service neutron-l3-agent.service
           

neutron on the compute node (compute)

Install and configure

yum install openstack-neutron-linuxbridge ebtables ipset -y

vim /etc/neutron/neutron.conf
	[DEFAULT]
	# ...
	transport_url = rabbit://openstack:123456@controller
	auth_strategy = keystone
	
	[keystone_authtoken]
	# ...
	www_authenticate_uri = http://controller:5000
	auth_url = http://controller:5000
	memcached_servers = controller:11211
	auth_type = password
	project_domain_name = default
	user_domain_name = default
	project_name = service
	username = neutron
	password = 123456
	
	[oslo_concurrency]
	# ...
	lock_path = /var/lib/neutron/tmp

           

Configure networking

vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
	[linux_bridge]
	physical_interface_mappings = provider:eth1       (provider network interface)
	[vxlan]
	enable_vxlan = true
	local_ip = 192.168.3.103
	l2_population = true
	[securitygroup]
	# ...
	enable_security_group = true
	firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Enable net.bridge.bridge-nf-call-iptables
modprobe bridge
modprobe br_netfilter
vim /etc/sysctl.conf
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
sysctl -p 
           

Remaining sections

vim /etc/nova/nova.conf
	[neutron]
	# ...
	url = http://controller:9696
	auth_url = http://controller:5000
	auth_type = password
	project_domain_name = default
	user_domain_name = default
	region_name = RegionOne
	project_name = service
	username = neutron
	password = 123456

Finish the installation
systemctl restart openstack-nova-compute.service
systemctl enable neutron-linuxbridge-agent.service
systemctl start neutron-linuxbridge-agent.service
           

Verify:

openstack extension list --network

openstack network agent list

(if the L3 and DHCP agents show up in the list, everything is working)

Web UI horizon (controller)

Horizon apparently needs Django installed first

yum install openstack-dashboard -y 
vim  /etc/openstack-dashboard/local_settings
	OPENSTACK_HOST = "controller"
	ALLOWED_HOSTS = ['*',]

	Configure memcached session storage
	SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

	CACHES = {
	    'default': {
	         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
	         'LOCATION': 'controller:11211',
	    }
	}

	OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
	
	OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
	
	OPENSTACK_API_VERSIONS = {
	    "identity": 3,
	    "image": 2,
	    "volume": 3,
	}

	OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
	
	OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
	
	OPENSTACK_NEUTRON_NETWORK = {
	    ...
	    'enable_router': False,
	    'enable_quotas': False,
	    'enable_distributed_router': False,
	    'enable_ha_router': False,
	    'enable_lb': False,
	    'enable_firewall': False,
	    'enable_vpn': False,
	    'enable_fip_topology_check': False,
	}

vim /etc/httpd/conf.d/openstack-dashboard.conf
Add:
	WSGIApplicationGroup %{GLOBAL}

Finish the installation
systemctl restart httpd.service memcached.service

           

Access: http://controller/dashboard

Troubleshooting

placement errors


openstack resource provider list returns no output

Fix:

Add the following to the config file /etc/httpd/conf.d/00-placement-api.conf:
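The exact content from the original screenshot is not preserved; the snippet commonly added for this problem (placed inside the <VirtualHost> block of 00-placement-api.conf; this is my reconstruction, not necessarily what was used originally) is:

	<Directory /usr/bin>
	   <IfVersion >= 2.4>
	      Require all granted
	   </IfVersion>
	   <IfVersion < 2.4>
	      Order allow,deny
	      Allow from all
	   </IfVersion>
	</Directory>

Then restart Apache: systemctl restart httpd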


Run openstack resource provider list again to verify.


keystone errors

The log reports errors when syncing the keystone database

# su -s /bin/sh -c "keystone-manage db_sync" keystone


A search on Zhihu suggested this might be an /etc/hosts problem, but at first my hosts file seemed fine (controller was mapped to an IP other than the management IP). My guess is that the earlier steps were configured with the management IP while hosts resolved controller to a different address, so keystone looked up the wrong IP. I changed the controller entry in /etc/hosts to the management IP, re-ran the sync command, and it succeeded.

(Screenshot of the original hosts file omitted; see the reconstruction after the explanation below.)

Because the controller management IP is 192.168.10.150, keystone resolved controller to the first matching entry, 192.168.101.150, which caused the error; deleting that first line (or moving it below the management-IP entry) solves the problem.
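Reconstructed from the description above (the original screenshot is not preserved), the file looked roughly like this, and keeping only the management-IP line fixes it:

	Before:
	192.168.101.150 controller
	192.168.10.150  controller

	After:
	192.168.10.150  controller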

glance errors

After finishing the configuration, uploading an image failed with an internal server error. The glance log seemed to point at a database error, but the database settings looked correct to me. If anyone spots what was misconfigured, please leave a comment.

Error message:

HTTP 500 Internal Server Error: The server has either erred or is incapable of performing the requested operation.


Problem solved: (same root cause as the keystone database sync failure)

Embarrassingly, my hosts file contained two entries for controller; after deleting the extra one and keeping only the management-IP mapping, the error went away.

dashboard errors

An error occurred when creating a network


It turned out I had misconfigured /etc/neutron/plugins/ml2/ml2_conf.ini, putting settings under the wrong section, the same situation as in the article referenced below.

Reference: https://blog.csdn.net/ranweizheng/article/details/87857936?ops_request_misc=%25257B%252522request%25255Fid%252522%25253A%252522160837126416780273393490%252522%25252C%252522scm%252522%25253A%25252220140713.130102334…%252522%25257D&request_id=160837126416780273393490&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2allsobaiduend~default-2-87857936.first_rank_v2_pc_rank_v29&utm_term=No%20tenant%20network%20is%20available%20for%20allocation.

Another error caused by the type driver

There is an option called type_drivers; only network types whose drivers are listed there can actually be created.
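For example, with the ml2 settings used earlier only flat, vlan, and vxlan networks can be created; if you wanted to experiment with local networks as well (a hypothetical tweak, not part of this guide), the driver would have to be listed first:

vim /etc/neutron/plugins/ml2/ml2_conf.ini
	[ml2]
	type_drivers = local,flat,vlan,vxlan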

Instance creation errors

1. Host 'compute' is not mapped to any cell


Following a tutorial found online:

nova-manage cell_v2 discover_hosts --verbose

Creating the instance again seemed to work.

2. 4336 ERROR nova.compute.manager [req-7c1bc64e-74da-4152-84fd-45eff53ed5ee 25b99096b60849e9b5a66dde8ce879cb dba851953a1446cfb651022214d6d486 - default default] [instance: ee763bf9-25ae-48e2-8a52-524695e9b4f1] Failed to allocate network(s): VirtualInterfaceCreateException: Virtual Interface creation failed


Creating the VM failed, and at the same time the network was down.


A VM created on a vxlan network was created and ran successfully, but creating one on a Local network threw this error; it is probably down to how the local network type is configured.

3. There are not enough hosts available.


Too many VMs had already been created on the compute node.

4. oslo.messaging._drivers.impl_rabbit [-] Unexpected error during heartbeat thread processing, retrying... error


Online sources suggest adding a few extra settings (the screenshot showing them is not preserved here).
