1. Overview
In earlier posts I walked through deploying a k8s environment step by step. Since that process is fairly involved, a one-click deployment is worth having, so here we use an Ansible playbook to automate the whole k8s deployment and make it fast and repeatable. For the traditional, manual deployment process, see these posts of mine:
- Kubernetes (k8s) installation and k8s-Dashboard setup in detail
- "Cloud Native" Kubernetes (k8s) complete environment deployment (V1.24.1)
For an introduction to Ansible, see these posts:
- Ansible introduction and hands-on demo
- Ansible playbook explained with hands-on practice
Node information:
Hostname | IP | Role | OS |
local-168-182-110 | 192.168.182.110 | master, ansible | CentOS 7 |
local-168-182-111 | 192.168.182.111 | master | CentOS 7 |
local-168-182-112 | 192.168.182.112 | master | CentOS 7 |
local-168-182-113 | 192.168.182.113 | node | CentOS 7 |
k8s architecture diagram:
Flow of the ansible-based k8s deployment:
2. Deploy Ansible
yum -y install epel-release
yum -y install ansible
ansible --version
1) Enable logging
Config file: /etc/ansible/ansible.cfg
vi /etc/ansible/ansible.cfg
# remove the leading '#'
#log_path = /var/log/ansible.log ==> log_path = /var/log/ansible.log
2) Skip the first-time SSH host-key confirmation
vi /etc/ansible/ansible.cfg
# again, just remove the leading '#'
# host_key_checking = False ==> host_key_checking = False
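Both edits can also be applied non-interactively with sed instead of vi. The sketch below demonstrates the substitutions on a scratch copy of the file; on a real control node you would point CFG at /etc/ansible/ansible.cfg instead.

```shell
# Demonstrate the two ansible.cfg edits on a scratch copy;
# set CFG=/etc/ansible/ansible.cfg on a real control node.
CFG=$(mktemp)
printf '#log_path = /var/log/ansible.log\n#host_key_checking = False\n' > "$CFG"
# Strip the leading '#' from both settings in one pass
sed -i -e 's|^#log_path|log_path|' -e 's|^#host_key_checking|host_key_checking|' "$CFG"
grep -E '^(log_path|host_key_checking)' "$CFG"
```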
3) Configure the inventory
Config file: /etc/ansible/hosts
[master1]
192.168.182.110
[master2]
192.168.182.111
192.168.182.112
[node]
192.168.182.113
[k8s:children]
master1
master2
node
[k8s:vars]
ansible_ssh_user=root
ansible_ssh_pass=1331301116
ansible_ssh_port=22
# k8s version
k8s_version=1.23.6
Test connectivity:
ansible k8s -m ping
3. Composing the ansible playbook
1) Create the directory structure
mkdir -pv ./install-k8s/{init,install-docker,install-k8s,master-init,install-cni,install-ipvs,master-join,node-join,install-ingress-nginx,install-nfs-provisioner,install-harbor,install-metrics-server,uninstall-k8s}/{files,templates,vars,tasks,handlers,meta,default}
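The brace expansion above fans out into a files/templates/vars/tasks/... skeleton under every role. Before touching the real tree, the pattern can be sanity-checked in a scratch directory (a throwaway sketch, not part of the deployment):

```shell
# Recreate a slice of the role skeleton under a scratch root
# and count the leaf directories it produces.
ROOT=$(mktemp -d)
mkdir -pv "$ROOT"/install-k8s/{init,install-docker}/{files,templates,tasks}
find "$ROOT" -mindepth 3 -type d | wc -l   # 2 roles x 3 subdirs = 6
```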
2) Node initialization
- Prepare the install-k8s/init/files/hosts file:
192.168.182.110 local-168-182-110
192.168.182.111 local-168-182-111
192.168.182.112 local-168-182-112
192.168.182.113 local-168-182-113
- Prepare the script install-k8s/init/templates/init.sh with the following content:
#!/usr/bin/env bash
### [Step 1] Set the hostname
# look this host's name up in the hosts file
hostnamectl set-hostname $(grep `hostname -i` /tmp/hosts|awk '{print $2}')
### [Step 2] Configure /etc/hosts
# remove any existing entries first
for line in `cat /tmp/hosts`
do
sed -i "/$line/d" /etc/hosts
done
# then append
cat /tmp/hosts >> /etc/hosts
### [Step 3] Set up SSH trust between nodes
# generate a key pair first
ssh-keygen -f ~/.ssh/id_rsa -P '' -q
# install expect
yum -y install expect
# push the public key to every node
for line in `cat /tmp/hosts`
do
ip=`echo $line|awk '{print $1}'`
password={{ ansible_ssh_pass }}
expect <<-EOF
spawn ssh-copy-id -i /root/.ssh/id_rsa.pub $ip
expect {
"(yes/no)?"
{
send "yes\n"
expect "*assword:" { send "$password\n"}
}
"*assword:"
{
send "$password\n"
}
}
expect eof
EOF
done
### [Step 4] Time synchronization
yum install chrony -y
systemctl start chronyd
systemctl enable chronyd
chronyc sources
### [Step 5] Stop the firewall
systemctl stop firewalld
systemctl disable firewalld
### [Step 6] Disable swap
# temporary; swap is disabled mainly for performance
swapoff -a
# permanent
sed -ri 's/.*swap.*/#&/' /etc/fstab
### [Step 7] Disable SELinux
# temporary
setenforce 0
# permanent
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
### [Step 8] Let iptables see bridged traffic
sudo modprobe br_netfilter
lsmod | grep br_netfilter
# remove any previous config first
rm -rf /etc/modules-load.d/k8s.conf
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
rm -rf /etc/sysctl.d/k8s.conf
# sysctl settings that survive a reboot
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
# apply the sysctl settings without rebooting
sudo sysctl --system
- Task file install-k8s/init/tasks/main.yml
- name: cp hosts
copy: src=hosts dest=/tmp/hosts
- name: init cp
template: src=init.sh dest=/tmp/init.sh
- name: init install
shell: sh /tmp/init.sh
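Before wiring this role into the full playbook, it can be exercised on its own with a minimal playbook (a hypothetical test-init.yaml, assuming the inventory groups configured earlier):

```yaml
# test-init.yaml — run only the init role against the k8s group
- hosts: k8s
  remote_user: root
  roles:
    - init
```

Running `ansible-playbook test-init.yaml` then initializes all nodes without touching the later docker/k8s steps, which makes debugging the init script much quicker.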
3) Install Docker
- install-k8s/install-docker/files/install-docker.sh
#!/usr/bin/env bash
### Install docker
# configure the yum repo
cd /etc/yum.repos.d ; mkdir bak; mv CentOS-*.repo bak/
# centos7
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
# centos8
# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-8.repo
# install the yum-config-manager utility
yum -y install yum-utils
# add the docker-ce yum repo
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# install docker-ce
yum install -y docker-ce
# enable at boot and start now
systemctl enable --now docker
# configure a registry mirror:
# edit /etc/docker/daemon.json (create it if it does not exist),
# add the content below, then restart the docker service:
cat >/etc/docker/daemon.json<<EOF
{
"registry-mirrors": ["http://hub-mirror.c.163.com"],
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
# restart
systemctl restart docker
# check
systemctl status docker containerd
- Task file install-k8s/install-docker/tasks/main.yml
- name: install docker cp
copy: src=install-docker.sh dest=/tmp/install-docker.sh
- name: install docker
shell: sh /tmp/install-docker.sh
4) Install the k8s components
- install-k8s/install-k8s/templates/install-k8s.sh
#!/usr/bin/env bash
# skip if kubelet is already installed
yum list installed kubelet
if [ $? -eq 0 ];then
exit 0
fi
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[k8s]
name=k8s
enabled=1
gpgcheck=0
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
EOF
# disableexcludes=kubernetes: disable every repo except this kubernetes one
yum install -y kubelet-{{ k8s_version }} kubeadm-{{ k8s_version }} kubectl-{{ k8s_version }} --disableexcludes=kubernetes
# enable at boot and start now (--now: start the service immediately)
systemctl enable --now kubelet
# check the status; startup is a bit slow, so wait a moment before checking
systemctl status kubelet
# pre-pull the images
docker pull registry.aliyuncs.com/google_containers/kube-apiserver:v{{ k8s_version }}
docker pull registry.aliyuncs.com/google_containers/kube-controller-manager:v{{ k8s_version }}
docker pull registry.aliyuncs.com/google_containers/kube-scheduler:v{{ k8s_version }}
docker pull registry.aliyuncs.com/google_containers/kube-proxy:v{{ k8s_version }}
docker pull registry.aliyuncs.com/google_containers/pause:3.6
docker pull registry.aliyuncs.com/google_containers/etcd:3.5.1-0
docker pull registry.aliyuncs.com/google_containers/coredns:v1.8.6
- Task file install-k8s/install-k8s/tasks/main.yml
- name: install k8s cp
template: src=install-k8s.sh dest=/tmp/install-k8s.sh
- name: install k8s
shell: sh /tmp/install-k8s.sh
5) Initialize the k8s master
- install-k8s/master-init/templates/master-init.sh
#!/usr/bin/env bash
# skip if this master has already been initialized
kubectl get nodes 2>/dev/null | grep -q `hostname`
if [ $? -eq 0 ];then
exit 0
fi
ip=`hostname -i`
kubeadm init \
--apiserver-advertise-address=$ip \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v{{ k8s_version }} \
--control-plane-endpoint=$ip \
--service-cidr=10.1.0.0/16 \
--pod-network-cidr=10.244.0.0/16 \
--v=5
mkdir -p $HOME/.kube
rm -rf $HOME/.kube/config
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
- Task file install-k8s/master-init/tasks/main.yml
- name: k8s master init cp
template: src=master-init.sh dest=/tmp/master-init.sh
- name: k8s master init
shell: sh /tmp/master-init.sh
6) Install the CNI (flannel)
- install-k8s/install-cni/files/install-flannel.sh
#!/usr/bin/env bash
# remove the master taints
kubectl taint nodes `hostname` node-role.kubernetes.io/master:NoSchedule- 2>/dev/null
kubectl taint nodes `hostname` node.kubernetes.io/not-ready:NoSchedule- 2>/dev/null
# For Kubernetes v1.17+
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/v0.20.2/Documentation/kube-flannel.yml
# check
kubectl get all -n kube-flannel
# keep checking until the flannel pods are ready
while true
do
kubectl get pods -n kube-flannel|grep -q '0/1'
if [ $? -ne 0 ];then
echo "flannel started"
break
else
echo "flannel starting..."
fi
sleep 1
done
- Task file install-k8s/install-cni/tasks/main.yml
- name: install cni flannel cp
copy: src=install-flannel.sh dest=/tmp/install-flannel.sh
- name: install cni flannel
shell: sh /tmp/install-flannel.sh
7) Join the other master nodes to the cluster
- install-k8s/master-join/files/master-join.sh
#!/usr/bin/env bash
# get the master IP; the first node in the hosts file is assumed to be the master
# if the certificates have expired, the upload-certs command below regenerates and uploads them, printing the certificate key used later
master_ip=`head -1 /tmp/hosts |awk '{print $1}'`
# skip if this node has already joined
ssh $master_ip "kubectl get nodes|grep -q `hostname`"
if [ $? -eq 0 ];then
exit 0
fi
CERT_KEY=`ssh $master_ip "kubeadm init phase upload-certs --upload-certs|tail -1"`
join_str=`ssh $master_ip kubeadm token create --print-join-command`
$( echo $join_str " --control-plane --certificate-key $CERT_KEY --v=5")
# the printed join command is what gets executed on the node being added
# --control-plane tells kubeadm join to create a new control plane; it is required when joining as a master
# --certificate-key downloads the control-plane certificates from the kubeadm-certs Secret in the cluster and decrypts them with the given key, i.e. the key printed by `kubeadm init phase upload-certs --upload-certs` above
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# remove the master taints
kubectl taint nodes `hostname` node-role.kubernetes.io/master:NoSchedule- 2>/dev/null
kubectl taint nodes `hostname` node.kubernetes.io/not-ready:NoSchedule- 2>/dev/null
- Task file install-k8s/master-join/tasks/main.yml
- name: master join cp
copy: src=master-join.sh dest=/tmp/master-join.sh
- name: master join
shell: sh /tmp/master-join.sh
8) Join the worker nodes to the cluster
- install-k8s/node-join/files/node-join.sh
#!/usr/bin/env bash
# get the master IP; the first node in the hosts file is assumed to be the master
master_ip=`head -1 /tmp/hosts |awk '{print $1}'`
# skip if this node has already joined
ssh $master_ip "kubectl get nodes|grep -q `hostname`"
if [ $? -eq 0 ];then
exit 0
fi
CERT_KEY=`ssh $master_ip "kubeadm init phase upload-certs --upload-certs|tail -1"`
join_str=`ssh $master_ip kubeadm token create --print-join-command`
$( echo $join_str " --certificate-key $CERT_KEY --v=5")
- Task file install-k8s/node-join/tasks/main.yml
- name: node join cp
copy: src=node-join.sh dest=/tmp/node-join.sh
- name: node join
shell: sh /tmp/node-join.sh
9) Install ingress-nginx
- install-k8s/install-ingress-nginx/files/ingress-nginx.sh
#!/usr/bin/env bash
# wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.2.0/deploy/static/provider/cloud/deploy.yaml -O /tmp/deploy.yaml
# pull the images ahead of time, then install
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:v1.2.0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.1.1
kubectl apply -f /tmp/deploy.yaml
- Task file install-k8s/install-ingress-nginx/tasks/main.yml
- name: ingress-nginx deploy cp
copy: src=deploy.yaml dest=/tmp/deploy.yaml
- name: install ingress-nginx cp
copy: src=ingress-nginx.sh dest=/tmp/ingress-nginx.sh
- name: install ingress-nginx
shell: sh /tmp/ingress-nginx.sh
10) Install nfs shared storage
- install-k8s/install-nfs-provisioner/files/nfs-provisioner.sh
#!/usr/bin/env bash
### Install helm
# download the package
wget https://get.helm.sh/helm-v3.7.1-linux-amd64.tar.gz -O /tmp/helm-v3.7.1-linux-amd64.tar.gz
# extract it
tar -xf /tmp/helm-v3.7.1-linux-amd64.tar.gz -C /root/
# symlink helm into PATH
rm -rf /usr/local/bin/helm
ln -s /root/linux-amd64/helm /usr/local/bin/helm
# skip if already deployed
helm list -n nfs-provisioner|grep -q nfs-provisioner
if [ $? -eq 0 ];then
exit 0
fi
### Install nfs-provisioner
# add the helm repo
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
#### Install nfs
yum -y install nfs-utils rpcbind
# server side
mkdir -p /opt/nfsdata
# open up permissions on the shared directory (a directory needs the execute bit to be traversable)
chmod 777 /opt/nfsdata
cat > /etc/exports<<EOF
/opt/nfsdata *(rw,no_root_squash,no_all_squash,sync)
EOF
# apply the export config
exportfs -r
systemctl enable --now rpcbind
systemctl enable --now nfs-server
# client side
for line in `cat /tmp/hosts`
do
ip=`echo $line|awk '{print $1}'`
master_ip=`head -1 /tmp/hosts|awk '{print $1}'`
if [ "$ip" != "$master_ip" ];then
ssh $ip "yum -y install rpcbind"
ssh $ip "systemctl enable --now rpcbind"
fi
done
### Install the nfs provisioner via helm
ip=`hostname -i`
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
--namespace=nfs-provisioner \
--create-namespace \
--set image.repository=willdockerhub/nfs-subdir-external-provisioner \
--set image.tag=v4.0.2 \
--set replicaCount=2 \
--set storageClass.name=nfs-client \
--set storageClass.defaultClass=true \
--set nfs.server=${ip} \
--set nfs.path=/opt/nfsdata
# check
kubectl get pods,deploy,sc -n nfs-provisioner
# keep checking until the pods are ready
while true
do
kubectl get pods -n nfs-provisioner|grep -q '0/1'
if [ $? -ne 0 ];then
echo "nfs-provisioner started"
break
else
echo "nfs-provisioner starting..."
fi
sleep 1
done
- Task file install-k8s/install-nfs-provisioner/tasks/main.yml
- name: install nfs-provisioner cp
copy: src=nfs-provisioner.sh dest=/tmp/nfs-provisioner.sh
- name: install nfs-provisioner
shell: sh /tmp/nfs-provisioner.sh
11) Orchestrate the installation roles
- install-k8s.yaml
- hosts: k8s
remote_user: root
roles:
- init
- hosts: k8s
remote_user: root
roles:
- install-docker
- hosts: k8s
remote_user: root
roles:
- install-k8s
- hosts: master1
remote_user: root
roles:
- master-init
- hosts: master1
remote_user: root
roles:
- install-cni
- hosts: master2
remote_user: root
roles:
- master-join
- hosts: node
remote_user: root
roles:
- node-join
- hosts: master1
remote_user: root
roles:
- install-ingress-nginx
- hosts: master1
remote_user: root
roles:
- install-nfs-provisioner
Run the installation:
# add -vvv for more verbose output
ansible-playbook install-k8s.yaml
kubectl get nodes
kubectl get pods -A
12) Uninstall the k8s environment
- install-k8s/uninstall-k8s/files/uninstall-k8s.sh
#!/usr/bin/env bash
expect <<-EOF
spawn kubeadm reset
expect "*y/N*"
send "y\n"
expect eof
EOF
rm -rf /etc/kubernetes/*
rm -fr ~/.kube
rm -fr /var/lib/etcd
- Task file install-k8s/uninstall-k8s/tasks/main.yaml
- name: uninstall k8s cp
copy: src=uninstall-k8s.sh dest=/tmp/uninstall-k8s.sh
- name: uninstall k8s
shell: sh /tmp/uninstall-k8s.sh
13) Orchestrate the uninstall roles
- uninstall-k8s.yaml
- hosts: k8s
remote_user: root
roles:
- uninstall-k8s
Run the uninstall:
ansible-playbook uninstall-k8s.yaml
Tips:
- The directory skeleton could also be created with the ansible-galaxy tool, which can additionally install role packages that others have published online, which is very convenient.
- Only k8s v1.23.6 has been verified here; higher and lower versions will be validated later. Also, each copy + shell pair could be replaced with a single script task, which makes the playbooks more concise; internally, script copies the file over, runs it, and cleans it up afterwards.
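As a sketch of that script-module simplification (note that script ships files from the role's files/ directory as-is, so it suits plain scripts like install-docker.sh but not Jinja templates like init.sh, which still need the template module), the two docker tasks could collapse into one:

```yaml
# install-k8s/install-docker/tasks/main.yml — hypothetical single-task variant
- name: install docker
  script: install-docker.sh
```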
That wraps up one-click k8s deployment with Ansible. I will keep improving it, adding more components and validating more versions, to make deploying a k8s environment even simpler. Follow my official account 【大資料與雲原生技術分享】 and reply "k8s" to get the download link.