
Building a Highly Available OpenStack (Queens) Cluster (Part 6): Deploying the Neutron Controller/Network Node Cluster

I. Building a Highly Available OpenStack (Queens) Cluster: Deploying the Neutron Controller/Network Node Cluster

  I. Introduction to OpenStack Neutron

  1. Overview

  OpenStack Networking (neutron) allows you to create and attach interface devices that are managed by other OpenStack services.

  OpenStack Networking interacts mainly with OpenStack Compute to provide network connectivity to its instances.

  2. Components of neutron

    (1) neutron-server

        Accepts API requests and routes them to the appropriate OpenStack Networking plug-in for action.

    (2) OpenStack Networking plug-ins and agents

        Plug and unplug ports, create networks and subnets, and provide IP addressing. The plug-ins and agents differ depending on the vendor and the technology used, e.g. Linux Bridge or Open vSwitch.

    (3) Messaging queue

        Used by most OpenStack Networking installations to route messages between neutron-server and the various agents. It also acts as a database for certain plug-ins.

  3. Network modes and concepts (virtualized networking)

[ KVM ] Four simple network models

[ KVM network virtualization ] Open vSwitch

  II. Deploying the Neutron Controller/Network Node Cluster

  Adjust the NIC details below to match your own environment.

  1. Create the neutron database

  Create the database on any controller node; the data is replicated to the other nodes automatically in the background.

  mysql -u root -p

CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '123456';
flush privileges;
exit;      
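  Optionally, confirm that the new database has replicated to another Galera member; a minimal check, assuming controller02 is another member of the MariaDB/Galera cluster built in the earlier chapters:

mysql -h controller02 -u neutron -p123456 -e "SHOW DATABASES LIKE 'neutron';"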

  2. Create neutron-api

  Run on any controller node.

  Calling the neutron service requires authentication; source the environment script:

. admin-openrc      
    1. Create the neutron user

  The service project was already created in the glance chapter; the neutron user lives in the "default" domain.

[root@controller01 ~]# openstack user create --domain default --password=neutron_pass neutron
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | ...                              |
+---------------------+----------------------------------+
    2. Grant admin rights to neutron

  Grant the admin role to the neutron user (the command produces no output):

openstack role add --project service --user neutron admin      
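  Because the role assignment prints nothing, a quick optional check with the standard client confirms it took effect (output varies by deployment):

openstack role assignment list --user neutron --project service --names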
    3. Create the neutron service entity

  The neutron service entity is of type "network":

[root@controller01 ~]# openstack service create --name neutron --description "OpenStack Networking" network
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Networking             |
| enabled     | True                             |
| id          | ...                              |
+-------------+----------------------------------+
    4. Create the neutron-api endpoints

  Notes:

  1. The region must match the one generated when the admin user was initialized;
  2. All API addresses use the VIP; if public/internal/admin use different VIPs, keep them distinct;
  3. The neutron-api service type is network.
# the neutron-api service type is network
# public api
[root@controller01 ~]# openstack endpoint create --region RegionTest network public http://controller:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 87bb1951240b4cce8b56406642a0d169 |
| interface    | public                           |
| region       | RegionTest                       |
| region_id    | RegionTest                       |
| service_id   | db519ab1d6654bf8af0cccabddf5a0cc |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+

# internal api
[root@controller01 ~]# openstack endpoint create --region RegionTest network internal http://controller:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | ab8bfd0e17b945e7bd54c44514965d9f |
| interface    | internal                         |
| region       | RegionTest                       |
| region_id    | RegionTest                       |
| service_id   | db519ab1d6654bf8af0cccabddf5a0cc |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+

# admin api
[root@controller01 ~]# openstack endpoint create --region RegionTest network admin http://controller:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | adbcdd77fd9347ed95023a93b62edcff |
| interface    | admin                            |
| region       | RegionTest                       |
| region_id    | RegionTest                       |
| service_id   | db519ab1d6654bf8af0cccabddf5a0cc |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+
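  As an optional sanity check, the three endpoints can be listed with the service filter:

openstack endpoint list --service network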

  3. Install neutron

  Install the neutron-related services on all controller nodes.

# typical Queens package set for a controller/network node running the linuxbridge agent
yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables

  4. Configure neutron.conf

  Run on all controller nodes.

  Notes:

  1. Adjust the "bind_host" parameter per node;
  2. The neutron.conf file ownership must be root:neutron.
cp -rp /etc/neutron/neutron.conf{,.bak}
egrep -v "^$|^#" /etc/neutron/neutron.conf
[DEFAULT]
bind_host = 10.20.9.189
auth_strategy = keystone
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
# L3 HA can run in either VRRP mode or DVR mode;
# in VRRP mode, active/standby virtual routers are set up on the network nodes (here the network nodes are co-located with the controller nodes); when the master fails, the virtual router itself does not migrate, instead the VIP the router serves is failed over to a standby router;
# in DVR mode, L3 forwarding and NAT are distributed onto the compute nodes, i.e. the compute nodes also take on network-node duties; DVR still cannot remove the centralized virtual router, however, and SNAT stays on the network nodes to conserve public IPv4 addresses;
# VRRP mode and DVR mode cannot be used at the same time
# "l3_ha = true" enables the L3 HA feature
l3_ha = true
# maximum number of l3 agents on which an HA router is created
max_l3_agents_per_router = 3
# minimum number of healthy l3 agents required to create an HA router
min_l3_agents_per_router = 2
# VRRP heartbeat network
l3_ha_net_cidr = 169.254.192.0/18
# "router_distributed" controls whether routers created by ordinary users default to DVR; it defaults to "false", and since VRRP mode is used here the parameter can stay commented out
# although from Mitaka onward this parameter can be enabled together with l3_ha, DVR mode also requires matching settings in l3_agent.ini and ml2_conf.ini on the network and compute nodes
# router_distributed = true
# DHCP HA: one DHCP server is spawned on each of the 3 network nodes
dhcp_agents_per_network = 3
# when fronted by haproxy, services connecting to rabbitmq may hit connection timeouts and reconnects, visible in the service and rabbitmq logs;
# transport_url = rabbit://openstack:openstack@controller:5673
# rabbitmq has its own clustering and the official docs recommend connecting to the rabbitmq cluster directly; services occasionally fail to start this way for unknown reasons, but if you do not see that, connecting directly to the cluster rather than through the haproxy frontend is strongly recommended
transport_url=rabbit://openstack:openstack@controller01:5672,openstack:openstack@controller02:5672,openstack:openstack@controller03:5672
[agent]
[cors]
[database]
connection = mysql+pymysql://neutron:123456@controller/neutron
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller01:11211,controller02:11211,controller03:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron_pass
[matchmaker_redis]
[nova]
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionTest
project_name = service
username = nova
password = nova_pass
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[quotas]
[ssl]      
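  If the file was edited or copied as root, restore the ownership mentioned in the note above; a minimal sketch (adjust the path if your layout differs):

chown root:neutron /etc/neutron/neutron.conf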

  5. Configure ml2_conf.ini

  Run on all controller nodes.

  Note: the ml2_conf.ini file ownership must be root:neutron.

    With a single NIC, set: flat_networks = provider

cp -rp /etc/neutron/plugins/ml2/ml2_conf.ini{,.bak}      
cat> /etc/neutron/plugins/ml2/ml2_conf.ini<<EOF
[DEFAULT]
[l2pop]
[ml2]
type_drivers = flat,vlan,vxlan
# ml2 mechanism_driver list; l2population is effective for gre/vxlan tenant networks
mechanism_drivers = linuxbridge,l2population
# multiple tenant network types can be configured; the first value is the default when an ordinary tenant creates a network, and it is also the default network type for the master router heartbeat traffic
tenant_network_types = vlan,vxlan,flat
extension_drivers = port_security
[ml2_type_flat]
# name the flat network type "external"; "*" means any network, an empty value disables flat networks
flat_networks = external
[ml2_type_geneve]
[ml2_type_gre]
[ml2_type_vlan]
# name the vlan network type "vlan"; if no VLAN ID range is given the range is unrestricted
network_vlan_ranges = vlan:3001:3500
[ml2_type_vxlan]
vni_ranges = 10001:20000
[securitygroup]
enable_ipset = true
EOF      
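  For illustration only: once the agents are running (later in this guide), the flat name "external" and the vlan range "vlan:3001:3500" defined above are referenced by name when creating provider networks. "ext-net" and "vlan-net-3001" are hypothetical network names:

openstack network create --external --provider-network-type flat --provider-physical-network external ext-net
openstack network create --provider-network-type vlan --provider-physical-network vlan --provider-segment 3001 vlan-net-3001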

  Service initialization reads the configuration in ml2_conf.ini, but it does so through the /etc/neutron/plugin.ini file, so create a symlink:

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

  6. Configure linuxbridge_agent.ini

  Run on all controller nodes.

    1. Configure linuxbridge_agent.ini

  Note: the linuxbridge_agent.ini file ownership must be root:neutron.

    With a single NIC, set: physical_interface_mappings = provider:ens192

cp -rp /etc/neutron/plugins/ml2/linuxbridge_agent.ini{,.bak}
cat>/etc/neutron/plugins/ml2/linuxbridge_agent.ini<<EOF
[DEFAULT]
[agent]
[linux_bridge]
# map network type names to physical NICs: the flat "external" network maps to the planned eth1 and the "vlan" tenant network maps to the planned eth3; when creating networks the network name, not the NIC name, is used;
# the physical NIC is local to each host; set it to whatever NIC the host actually uses;
# there is also a "bridge_mappings" parameter for mapping to existing bridges
physical_interface_mappings = external:eth1,vlan:eth3
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
enable_security_group = true
[vxlan]
enable_vxlan = true
# VTEP endpoint for the tunnel (vxlan) tenant network, i.e. the address of the planned eth2; adjust per node
local_ip = 10.0.0.31
l2_population = true
EOF      
sed -i 's/10.0.0.31/10.20.9.189/g' /etc/neutron/plugins/ml2/linuxbridge_agent.ini      
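  Instead of hard-coding the VTEP address with sed on every node, a small sketch like the following derives it from the tunnel NIC (assuming eth2 is the planned tunnel interface, as described in the comments above):

# read the first IPv4 address of eth2 and write it into local_ip
LOCAL_IP=$(ip -4 addr show eth2 | awk '/inet /{print $2}' | cut -d/ -f1)
sed -i "s#^local_ip = .*#local_ip = ${LOCAL_IP}#" /etc/neutron/plugins/ml2/linuxbridge_agent.ini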
    2. Configure kernel parameters
  • bridge: whether bridged traffic is allowed (passed through the host firewall);
  • if "sysctl -p" fails with a "No such file or directory" error, the kernel module "br_netfilter" must be loaded;
  • use "modinfo br_netfilter" to inspect the kernel module;
  • use "modprobe br_netfilter" to load the kernel module
echo "# bridge" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.conf
sysctl -p      

  The error looks like this:

# sysctl -p
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory      

  Fix:

[root@controller01 ml2]#  modprobe br_netfilter
[root@controller01 ml2]#  ls /proc/sys/net/bridge
bridge-nf-call-arptables  bridge-nf-call-iptables        bridge-nf-filter-vlan-tagged
bridge-nf-call-ip6tables  bridge-nf-filter-pppoe-tagged  bridge-nf-pass-vlan-input-dev
[root@controller01 ml2]#  sysctl -p
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1      
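  modprobe only loads the module for the current boot; on CentOS 7 (systemd), it can be made persistent so the bridge sysctls keep working after a reboot:

echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf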

  7. Configure l3_agent.ini (self-service networking)

  Run on all controller nodes.

  Note: the l3_agent.ini file ownership must be root:neutron.

cp -rp /etc/neutron/l3_agent.ini{,.bak}

# egrep -v "^$|^#"  /etc/neutron/l3_agent.ini

cat>/etc/neutron/l3_agent.ini<<EOF
[DEFAULT]
interface_driver = linuxbridge
[agent]
[ovs]
EOF      

  8. Configure dhcp_agent.ini

  Run on all controller nodes.

  dnsmasq is used to provide the DHCP service;

  the dhcp_agent.ini file ownership must be root:neutron.

cp -rp /etc/neutron/dhcp_agent.ini{,.bak}

# egrep -v "^$|^#" /etc/neutron/dhcp_agent.ini

cat>/etc/neutron/dhcp_agent.ini<<EOF
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
[agent]
[ovs]
EOF      

  9. Configure metadata_agent.ini

  Run on all controller nodes.

  Notes:

  1. metadata_proxy_shared_secret: must match the corresponding parameter in /etc/nova/nova.conf;
  2. The metadata_agent.ini file ownership must be root:neutron.
cp -rp /etc/neutron/metadata_agent.ini{,.bak}

# egrep -v "^$|^#"  /etc/neutron/metadata_agent.ini
cat>/etc/neutron/metadata_agent.ini<<EOF
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = neutron_metadata_secret
[agent]
[cache]
EOF      
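  The value "neutron_metadata_secret" is just an example; a random secret can be generated as below, as long as the same value is written into nova.conf in the next step:

openssl rand -hex 16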

  10. Configure nova.conf

  Run on all controller nodes.

  Notes:

  1. The change only touches the "[neutron]" section of nova.conf;
  2. metadata_proxy_shared_secret: must match the parameter in /etc/neutron/metadata_agent.ini;
  3. Add the following to /etc/nova/nova.conf:
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionTest
project_name = service
username = neutron
password = neutron_pass
service_metadata_proxy = true
metadata_proxy_shared_secret = neutron_metadata_secret      
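  As an alternative sketch, the same [neutron] block can be applied non-interactively on each controller with crudini (assuming the crudini package is installed; values as above):

crudini --set /etc/nova/nova.conf neutron url http://controller:9696
crudini --set /etc/nova/nova.conf neutron auth_url http://controller:35357
crudini --set /etc/nova/nova.conf neutron auth_type password
crudini --set /etc/nova/nova.conf neutron project_domain_name default
crudini --set /etc/nova/nova.conf neutron user_domain_name default
crudini --set /etc/nova/nova.conf neutron region_name RegionTest
crudini --set /etc/nova/nova.conf neutron project_name service
crudini --set /etc/nova/nova.conf neutron username neutron
crudini --set /etc/nova/nova.conf neutron password neutron_pass
crudini --set /etc/nova/nova.conf neutron service_metadata_proxy true
crudini --set /etc/nova/nova.conf neutron metadata_proxy_shared_secret neutron_metadata_secret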

  11. Populate the neutron database

  Run on any controller node.

  (This takes a while; an "OK" at the end indicates success.)

su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head"      

  Verify:

mysql -h controller01 -u neutron -p123456 -e "use neutron;show tables;"      

  12. Start the services

  Run on all controller nodes.

    1. Because the nova configuration file was changed, the nova service needs to be restarted first
systemctl restart openstack-nova-api.service
systemctl status openstack-nova-api.service      
    2. Start the neutron services and enable them at boot
systemctl enable neutron-server.service \
 neutron-linuxbridge-agent.service \
 neutron-l3-agent.service \
 neutron-dhcp-agent.service \
 neutron-metadata-agent.service

systemctl restart neutron-server.service
systemctl restart neutron-linuxbridge-agent.service
systemctl restart neutron-l3-agent.service
systemctl restart neutron-dhcp-agent.service
systemctl restart neutron-metadata-agent.service      
    3、檢視服務狀态
systemctl  status neutron-server.service \
 neutron-linuxbridge-agent.service \
 neutron-l3-agent.service \
 neutron-dhcp-agent.service \
 neutron-metadata-agent.service      

  13. Verify

. admin-openrc      

  List the loaded network extensions (the output is lengthy, so it is not shown here):

openstack extension list --network      

  List the network agents:

[root@controller01 neutron]# openstack network agent list      
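  Once all agents report alive, the l3_ha behaviour configured in neutron.conf can be spot-checked with a throwaway router; "ha-test" is a hypothetical name, and the legacy neutron CLI (still shipped in Queens) is used because it shows the scheduling directly:

openstack router create ha-test
# expect the router to be hosted on multiple l3 agents, one active and the others standby
neutron l3-agent-list-hosting-router ha-test
openstack router delete ha-test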

  14. Configure pcs resources

    1. Add the neutron-server, neutron-linuxbridge-agent, neutron-l3-agent, neutron-dhcp-agent, and neutron-metadata-agent resources
pcs resource create neutron-server systemd:neutron-server --clone interleave=true
pcs resource create neutron-linuxbridge-agent systemd:neutron-linuxbridge-agent --clone interleave=true
pcs resource create neutron-l3-agent systemd:neutron-l3-agent --clone interleave=true
pcs resource create neutron-dhcp-agent systemd:neutron-dhcp-agent --clone interleave=true
pcs resource create neutron-metadata-agent systemd:neutron-metadata-agent --clone interleave=true      
    2. Check the pcs resources
[root@controller01 neutron]# pcs resource
 vip    (ocf::heartbeat:IPaddr2):    Started controller01
 Clone Set: lb-haproxy-clone [lb-haproxy]
     Started: [ controller01 ]
     Stopped: [ controller02 controller03 ]
 Clone Set: openstack-keystone-clone [openstack-keystone]
     Started: [ controller01 controller02 controller03 ]
 Clone Set: openstack-glance-api-clone [openstack-glance-api]
     Started: [ controller01 controller02 controller03 ]
 Clone Set: openstack-glance-registry-clone [openstack-glance-registry]
     Started: [ controller01 controller02 controller03 ]
 Clone Set: openstack-nova-api-clone [openstack-nova-api]
     Started: [ controller01 controller02 controller03 ]
 Clone Set: openstack-nova-consoleauth-clone [openstack-nova-consoleauth]
     Started: [ controller01 controller02 controller03 ]
 Clone Set: openstack-nova-scheduler-clone [openstack-nova-scheduler]
     Started: [ controller01 controller02 controller03 ]
 Clone Set: openstack-nova-conductor-clone [openstack-nova-conductor]
     Started: [ controller01 controller02 controller03 ]
 Clone Set: openstack-nova-novncproxy-clone [openstack-nova-novncproxy]
     Started: [ controller01 controller02 controller03 ]
 Clone Set: neutron-server-clone [neutron-server]
     Started: [ controller01 controller02 controller03 ]
 Clone Set: neutron-linuxbridge-agent-clone [neutron-linuxbridge-agent]
     Started: [ controller01 controller02 controller03 ]
 Clone Set: neutron-l3-agent-clone [neutron-l3-agent]
     Started: [ controller01 controller02 controller03 ]
 Clone Set: neutron-dhcp-agent-clone [neutron-dhcp-agent]
     Started: [ controller01 controller02 controller03 ]
 Clone Set: neutron-metadata-agent-clone [neutron-metadata-agent]
     Started: [ controller01 controller02 controller03 ]      

II. Cleaning Up OpenStack Networks and Routers
