Table of Contents
- Preface
- 1. High-Availability Cluster
  - 1.1 HA Cluster Technical Details
- 2. Deploying the High-Availability Cluster
  - 2.1 Prepare the Environment
  - 2.2 Deploy keepalived on All Master Nodes
    - 2.2.1 Install Dependencies and keepalived
    - 2.2.2 Configure the Master Nodes
    - 2.2.3 Start and Verify
  - 2.3 Deploy haproxy
  - 2.4 Install Docker/kubeadm/kubelet on All Nodes
    - 2.4.1 Install Docker
    - 2.4.2 Add the Kubernetes Repository
    - 2.4.3 Install kubeadm, kubelet, and kubectl
  - 2.5 Deploy the Kubernetes Master
    - 2.5.1 Create the kubeadm Configuration File
  - 2.6 Install the Cluster Network
  - 2.7 Join master2 to the Cluster
    - 2.7.1 Copy Certificates and Related Files
    - 2.7.2 Join master2 to the Cluster
  - 2.8 Join the Kubernetes Node
Preface
The cluster we built previously had only a single master node. When that master goes down, the cluster can no longer be accessed or managed through it; since the master's role is managing the cluster, the entire cluster stops providing service.
1. High-Availability Cluster
To remove this single point of failure, we will build a high-availability cluster with multiple master nodes.
Between the nodes and the master nodes there must also be a LoadBalancer component, which:
- balances load across the master nodes
- checks the health of the master nodes
Externally, the cluster is reached through a single unified VIP (virtual IP).
1.1 HA Cluster Technical Details
- keepalived: provides the virtual IP and checks node health
- haproxy: load-balancing service (similar to nginx)
- apiserver, controller-manager, scheduler: the Kubernetes control-plane components, which run on every master node
2. Deploying the High-Availability Cluster
2.1 Prepare the Environment

Role | IP
---|---
master1 | 192.168.88.10
master2 | 192.168.88.11
node1 | 192.168.88.12
VIP (virtual IP) | 192.168.88.100

The preparation steps below must be performed on all three nodes.
# Disable the firewall
[root@master1 ~]$ systemctl stop firewalld && systemctl disable firewalld
# Disable SELinux: permanently via the config file, temporarily via setenforce
[root@master1 ~]$ sed -i 's/enforcing/disabled/' /etc/selinux/config && setenforce 0
# Disable swap
# Temporarily
[root@master1 ~]$ swapoff -a
# Permanently
[root@master1 ~]$ sed -ri 's/.*swap.*/#&/' /etc/fstab
# Set the hostname as planned [run on master1]
[root@master1 ~]$ hostnamectl set-hostname master1
# Set the hostname as planned [run on master2]
[root@master2 ~]$ hostnamectl set-hostname master2
# Set the hostname as planned [run on node1]
[root@node1 ~]$ hostnamectl set-hostname node1
# Add hosts entries [run on all nodes; every node must resolve the control-plane name master.k8s.io to the VIP, which is used by kubeadm later]
[root@master1 ~]$ cat >> /etc/hosts << EOF
192.168.88.100 master.k8s.io k8smaster
192.168.88.10 master01.k8s.io master1
192.168.88.11 master02.k8s.io master2
192.168.88.12 node01.k8s.io node1
EOF
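A quick optional check before continuing: getent reads /etc/hosts, so the name resolves even though the VIP itself is not up yet.
# Should print 192.168.88.100
[root@master1 ~]$ getent hosts master.k8s.io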
# Pass bridged IPv4 traffic to iptables chains [run on all 3 nodes]
[root@master1 ~]$ cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# Apply the settings
[root@master1 ~]$ sysctl --system
# Synchronize the clocks
[root@master1 ~]$ yum install ntpdate -y
[root@master1 ~]$ ntpdate time.windows.com
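Optionally, verify that the prerequisites took effect on each node before moving on:
[root@master1 ~]$ getenforce                                  # Permissive now, Disabled after a reboot
[root@master1 ~]$ free -h | grep -i swap                      # the Swap line should show 0B
[root@master1 ~]$ sysctl net.bridge.bridge-nf-call-iptables   # should print 1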
2.2 Deploy keepalived on All Master Nodes
2.2.1 Install Dependencies and keepalived
[root@master1 ~]$ yum install -y conntrack-tools libseccomp libtool-ltdl
[root@master1 ~]$ yum install -y keepalived
2.2.2 Configure the Master Nodes
master1 node configuration (state MASTER, higher priority):
[root@master1 ~]$ cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived

global_defs {
    router_id k8s
}

# Health check: "killall -0 haproxy" succeeds only while haproxy is running
vrrp_script check_haproxy {
    script "killall -0 haproxy"
    interval 3
    weight -2
    fall 10
    rise 2
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 250
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass ceb1b3ec013d66163d6ab
    }
    virtual_ipaddress {
        192.168.88.100    # vip
    }
    track_script {
        check_haproxy
    }
}
EOF
master2 node configuration (state BACKUP, lower priority):
[root@master2 ~]$ cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived

global_defs {
    router_id k8s
}

vrrp_script check_haproxy {
    script "killall -0 haproxy"
    interval 3
    weight -2
    fall 10
    rise 2
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 200
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass ceb1b3ec013d66163d6ab
    }
    virtual_ipaddress {
        192.168.88.100
    }
    track_script {
        check_haproxy
    }
}
EOF
2.2.3 Start and Verify
Run on both master nodes:
# Start keepalived and enable it at boot
[root@master1 ~]$ systemctl start keepalived.service && systemctl enable keepalived.service
# Check the status
[root@master1 ~]$ systemctl status keepalived.service
After starting, check master1's NIC; the VIP 192.168.88.100 should be bound to ens33:
[root@master1 ~]$ ip a s ens33
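An optional failover test: stopping keepalived on master1 should move the VIP to master2 within a few seconds, and because master1 has the higher priority (250 vs 200) it reclaims the VIP when restarted.
# On master1: simulate a failure
[root@master1 ~]$ systemctl stop keepalived
# On master2: the VIP should now appear on ens33
[root@master2 ~]$ ip a s ens33 | grep 192.168.88.100
# On master1: restore keepalived; the VIP moves back
[root@master1 ~]$ systemctl start keepalived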
2.3 Deploy haproxy
Install:
[root@master1 ~]$ yum install -y haproxy
Configure:
The configuration is identical on both master nodes. It declares the two master apiservers as backends and binds haproxy to port 16443, so port 16443 becomes the cluster's entry point.
[root@master1 ~]$ cat > /etc/haproxy/haproxy.cfg << EOF
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    # 1) configure syslog to accept network log events. This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #    file. A line like the following can be added to
    #    /etc/sysconfig/syslog
    #
    #    local2.*    /var/log/haproxy.log
    #
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000
#---------------------------------------------------------------------
# kubernetes apiserver frontend which proxies to the backends
#---------------------------------------------------------------------
frontend kubernetes-apiserver
    mode            tcp
    bind            *:16443
    option          tcplog
    default_backend kubernetes-apiserver
#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend kubernetes-apiserver
    mode    tcp
    balance roundrobin
    server  master01.k8s.io 192.168.88.10:6443 check
    server  master02.k8s.io 192.168.88.11:6443 check
#---------------------------------------------------------------------
# collect haproxy statistics
#---------------------------------------------------------------------
listen stats
    bind          *:1080
    stats auth    admin:awesomePassword
    stats refresh 5s
    stats realm   HAProxy\ Statistics
    stats uri     /admin?stats
EOF
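Optionally validate the configuration file before starting the service:
# -c checks the configuration and exits
[root@master1 ~]$ haproxy -c -f /etc/haproxy/haproxy.cfg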
Start it on both masters:
# Enable at boot and start
[root@master1 ~]$ systemctl enable haproxy && systemctl start haproxy
# Check the status
[root@master1 ~]$ systemctl status haproxy
Check the port:
[root@master1 ~]$ netstat -lntup | grep haproxy
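Optionally, confirm haproxy is answering on the stats port configured above (the apiserver backends will show as DOWN until the control plane is initialized in section 2.5):
# Credentials come from the "stats auth" line in haproxy.cfg
[root@master1 ~]$ curl -u admin:awesomePassword 'http://192.168.88.10:1080/admin?stats'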
2.4 Install Docker/kubeadm/kubelet on All Nodes
Kubernetes uses Docker as its default CRI (container runtime), so install Docker first.
2.4.1 Install Docker
# Configure the Aliyun yum repository for Docker
[root@master1 ~]$ wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
# Install docker via yum
[root@master1 ~]$ yum -y install docker-ce-18.06.1.ce-3.el7
# Start docker
[root@master1 ~]$ systemctl enable docker && systemctl start docker
[root@master1 ~]$ docker --version
Docker version 18.06.1-ce, build e68fc7a
Configure a registry mirror for docker:
[root@master1 ~]$ cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF
Then restart docker:
[root@master1 ~]$ systemctl restart docker
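Optionally, verify that the mirror was picked up:
# The mirror URL should be listed under "Registry Mirrors"
[root@master1 ~]$ docker info | grep -A 1 'Registry Mirrors'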
2.4.2 Add the Kubernetes Repository
Configure the Aliyun yum repository for Kubernetes:
[root@master1 ~]$ cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
2.4.3 Install kubeadm, kubelet, and kubectl
[root@master1 ~]$ yum install -y kubelet-1.18.0 kubeadm-1.18.0 kubectl-1.18.0
[root@master1 ~]$ systemctl enable kubelet && systemctl start kubelet
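A quick version check (optional):
[root@master1 ~]$ kubeadm version -o short   # should print v1.18.0
[root@master1 ~]$ kubelet --version          # Kubernetes v1.18.0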
2.5 Deploy the Kubernetes Master
2.5.1 Create the kubeadm Configuration File
Perform the initialization on the master that currently holds the VIP, here master1.
# Create the directory
[root@master1 ~]$ mkdir -p /usr/local/kubernetes/manifests
# Change to the manifests directory
[root@master1 ~]$ cd /usr/local/kubernetes/manifests/
# Create the yaml file (kubernetesVersion must match the 1.18.0 packages installed above)
[root@master1 ~]$ cat > kubeadm-config.yaml << EOF
apiServer:
  certSANs:
    - master1
    - master2
    - master.k8s.io
    - 192.168.88.10
    - 192.168.88.11
    - 192.168.88.12
    - 127.0.0.1
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta1
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "master.k8s.io:16443"
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.18.0
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.1.0.0/16
scheduler: {}
EOF
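Optionally pre-pull the control-plane images so the init step does not stall on downloads:
[root@master1 ~]$ kubeadm config images pull --config kubeadm-config.yaml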
Run on the master1 node:
[root@master1 ~]$ kubeadm init --config kubeadm-config.yaml
Following the output's instructions, configure the environment so the kubectl tool can be used:
# Run the commands below
[root@master1 ~]$ mkdir -p $HOME/.kube
[root@master1 ~]$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master1 ~]$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
# View the nodes
[root@master1 ~]$ kubectl get nodes
# View the pods
[root@master1 ~]$ kubectl get pods -n kube-system
Also save the join command from the output; it will be needed shortly:
# Note: the token and hash are different for every cluster
kubeadm join master.k8s.io:16443 --token jv5z7n.3y1zi95p952y9p65 \
    --discovery-token-ca-cert-hash sha256:403bca185c2f3a4791685013499e7ce58f9848e2213e27194b75a2e3293d8812 \
    --control-plane
--control-plane: present only when a master (control-plane) node is being added
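Join tokens expire after 24 hours by default. If the saved command no longer works, a fresh worker join command can be generated on master1 (for a control-plane join, append --control-plane and copy the certificates as in section 2.7.1):
[root@master1 ~]$ kubeadm token create --print-join-command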
Check the cluster state:
# Component status
[root@master1 ~]$ kubectl get cs
# View the pods
[root@master1 ~]$ kubectl get pods -n kube-system
2.6 Install the Cluster Network
Fetch the flannel yaml and apply it on master1:
# Create a working directory
[root@master1 ~]$ mkdir flannel && cd flannel
# Download the yaml file (this mirror can be used directly)
[root@master1 ~]$ wget https://www.chenleilei.net/soft/k8s/kube-flannel.yaml
# Replace the image registry: quay.io is unreachable from China, so switch to quay-mirror.qiniu.com
[root@master1 ~]$ sed -i 's#quay.io#quay-mirror.qiniu.com#g' kube-flannel.yaml
[root@master1 ~]$ kubectl apply -f kube-flannel.yaml
Check:
[root@master1 ~]$ kubectl get pods -n kube-system
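Before joining the remaining nodes, it is worth waiting until the flannel and coredns pods report Running and master1 shows Ready:
[root@master1 ~]$ kubectl get pods -n kube-system | grep -E 'flannel|coredns'
[root@master1 ~]$ kubectl get nodes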
2.7 Join master2 to the Cluster
2.7.1 Copy Certificates and Related Files
Copy the certificates and related files from master1 to master2:
[root@master1 ~]$ ssh root@192.168.88.11 mkdir -p /etc/kubernetes/pki/etcd
[root@master1 ~]$ scp /etc/kubernetes/admin.conf root@192.168.88.11:/etc/kubernetes
[root@master1 ~]$ scp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} root@192.168.88.11:/etc/kubernetes/pki
[root@master1 ~]$ scp /etc/kubernetes/pki/etcd/ca.* root@192.168.88.11:/etc/kubernetes/pki/etcd
2.7.2 Join master2 to the Cluster
On master2, run the join command that kubeadm init printed on master1, keeping the
--control-plane
flag, which joins the node as a control-plane (master) node:
[root@master2 ~]$ kubeadm join master.k8s.io:16443 --token ckf7bs.30576l0okocepg8b --discovery-token-ca-cert-hash sha256:19afac8b11182f61073e254fb57b9f19ab4d798b70501036fc69ebef46094aba --control-plane
Check the state:
[root@master1 ~]$ kubectl get node
[root@master1 ~]$ kubectl get pods --all-namespaces
2.8 Join the Kubernetes Node
Run on node1.
To add a new worker node to the cluster, run the kubeadm join command printed by kubeadm init:
[root@node1 ~]$ kubeadm join master.k8s.io:16443 --token ckf7bs.30576l0okocepg8b --discovery-token-ca-cert-hash sha256:19afac8b11182f61073e254fb57b9f19ab4d798b70501036fc69ebef46094aba
Because a new node was added, re-apply the cluster network yaml so that flannel also runs on it.
Check the state:
[root@master1 ~]$ kubectl get node
[root@master1 ~]$ kubectl get pods --all-namespaces
Test the Kubernetes cluster
Create a pod in the cluster and verify that it runs correctly:
# Create an nginx deployment
[root@master1 ~]$ kubectl create deployment nginx --image=nginx
# Expose the port
[root@master1 ~]$ kubectl expose deployment nginx --port=80 --type=NodePort
# Check the status
[root@master1 ~]$ kubectl get pod,svc
The nginx welcome page is now reachable through any node's IP at http://NodeIP:Port.
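The same check from the command line (a sketch; 192.168.88.12 is node1's IP from section 2.1, and any node IP works):
# Look up the randomly assigned NodePort and request the page
[root@master1 ~]$ NODE_PORT=$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')
[root@master1 ~]$ curl http://192.168.88.12:${NODE_PORT}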