How cloud nodes are added to a hybrid cluster depends on how the self-managed Kubernetes cluster in the on-premises data center was built, for example with kubeadm, with Kubernetes binaries, or with the Rancher platform. This topic describes how to write a custom node addition script.
System environment variables delivered by the ACK registered cluster
When writing a custom node addition script, you must accept the system environment variables delivered by the ACK registered cluster, as listed below.
- ALIBABA_CLOUD_PROVIDER_ID
The custom node addition script must accept and configure this variable; otherwise the registered cluster cannot manage the node properly. Example:
ALIBABA_CLOUD_PROVIDER_ID=cn-shenzhen.i-wz92ewt14n9wx9mol2cd. The corresponding kubelet parameter is --provider-id=${ALIBABA_CLOUD_PROVIDER_ID}.
- ALIBABA_CLOUD_NODE_NAME
The custom node addition script must accept and configure this variable; otherwise the node's state in the registered cluster's node pool becomes abnormal. Example: ALIBABA_CLOUD_NODE_NAME=cn-shenzhen.192.168.1.113. The corresponding kubelet parameter is --hostname-override=${ALIBABA_CLOUD_NODE_NAME}.
- ALIBABA_CLOUD_LABELS
The custom node addition script must accept and configure this variable; otherwise node pool management breaks and workloads can no longer be scheduled across cloud and on-premises nodes. Example: ALIBABA_CLOUD_LABELS=alibabacloud.com/nodepool-id=np0e2031e952c4492bab32f512ce1422f6,ack.aliyun.com=cc3df6d939b0d4463b493b82d0d670c66,alibabacloud.com/instance-id=i-wz960ockeekr3dok06kr,alibabacloud.com/external=true,workload=cpu. Here workload=cpu is the custom node label configured in the node pool; the others are system-delivered labels. The corresponding kubelet parameter is --node-labels=${ALIBABA_CLOUD_LABELS}.
- ALIBABA_CLOUD_TAINTS
The custom node addition script must accept and configure this variable; otherwise the taints configured in the node pool do not take effect. Example: ALIBABA_CLOUD_TAINTS=workload=ack:NoSchedule. The corresponding kubelet parameter is --register-with-taints=${ALIBABA_CLOUD_TAINTS}.
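A custom script can fail fast when a required variable was not delivered, instead of producing a half-configured node. A minimal sketch, assuming the variable names above; the require() helper and the hard-coded example values are illustrative only (in a real run the registered cluster delivers the values):

```shell
#!/bin/sh
# Illustrative helper: abort early when a required variable is unset or empty.
require() {
    eval "value=\${$1}"
    if [ -z "$value" ]; then
        echo "missing required environment variable: $1" >&2
        exit 1
    fi
}

# Example values matching the formats shown above (hypothetical; delivered
# by the registered cluster in a real run, never hard-coded).
ALIBABA_CLOUD_PROVIDER_ID="cn-shenzhen.i-wz92ewt14n9wx9mol2cd"
ALIBABA_CLOUD_NODE_NAME="cn-shenzhen.192.168.1.113"
ALIBABA_CLOUD_LABELS="workload=cpu"

require ALIBABA_CLOUD_PROVIDER_ID
require ALIBABA_CLOUD_NODE_NAME
require ALIBABA_CLOUD_LABELS
# ALIBABA_CLOUD_TAINTS may legitimately be empty when no taints are configured.
echo "required variables present"
```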
kubelet configuration example
The following is a sample kubelet configuration:
cat >/usr/lib/systemd/system/kubelet.service <<EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service
[Service]
ExecStart=/data0/kubernetes/bin/kubelet \\
--node-ip=${ALIBABA_CLOUD_NODE_NAME} \\
--hostname-override=${ALIBABA_CLOUD_NODE_NAME} \\
--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf \\
--config=/var/lib/kubelet/config.yaml \\
--kubeconfig=/etc/kubernetes/kubelet.conf \\
--cert-dir=/etc/kubernetes/pki/ \\
--cni-bin-dir=/opt/cni/bin \\
--cni-cache-dir=/opt/cni/cache \\
--cni-conf-dir=/etc/cni/net.d \\
--logtostderr=false \\
--log-dir=/var/log/kubernetes/logs \\
--log-file=/var/log/kubernetes/logs/kubelet.log \\
--node-labels=${ALIBABA_CLOUD_LABELS} \\
--root-dir=/var/lib/kubelet \\
--provider-id=${ALIBABA_CLOUD_PROVIDER_ID} \\
--register-with-taints=${ALIBABA_CLOUD_TAINTS} \\
--v=4
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
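The unit file above passes every variable unconditionally, but ALIBABA_CLOUD_TAINTS may be empty when the node pool defines no taints. A minimal sketch of assembling only the flags whose variables are non-empty; EXTRA_ARGS and the hard-coded example values are illustrative, not part of any ACK component:

```shell
#!/bin/sh
# Example values (delivered by the registered cluster in a real run):
ALIBABA_CLOUD_PROVIDER_ID="cn-shenzhen.i-wz92ewt14n9wx9mol2cd"
ALIBABA_CLOUD_NODE_NAME="cn-shenzhen.192.168.1.113"
ALIBABA_CLOUD_LABELS="workload=cpu"
ALIBABA_CLOUD_TAINTS=""   # no taints configured in this example

# Emit a kubelet flag only when its variable is non-empty.
EXTRA_ARGS=""
[ -n "$ALIBABA_CLOUD_PROVIDER_ID" ] && EXTRA_ARGS="$EXTRA_ARGS --provider-id=${ALIBABA_CLOUD_PROVIDER_ID}"
[ -n "$ALIBABA_CLOUD_NODE_NAME" ] && EXTRA_ARGS="$EXTRA_ARGS --hostname-override=${ALIBABA_CLOUD_NODE_NAME}"
[ -n "$ALIBABA_CLOUD_LABELS" ] && EXTRA_ARGS="$EXTRA_ARGS --node-labels=${ALIBABA_CLOUD_LABELS}"
[ -n "$ALIBABA_CLOUD_TAINTS" ] && EXTRA_ARGS="$EXTRA_ARGS --register-with-taints=${ALIBABA_CLOUD_TAINTS}"
echo "kubelet flags:${EXTRA_ARGS}"
```

The resulting string can then be substituted into the ExecStart line of a unit file like the one above.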
Storing the custom script
The script can be stored on any HTTP file server, for example in an OSS bucket:
https://kubelet-liusheng.oss-ap-southeast-3-internal.aliyuncs.com/attachnode.sh
Using the custom script
在成功将本地資料中心叢集接入注冊叢集後,注冊叢集的agent元件會自動在kube-system下建立名為 ack-agent-config 的ConfigMap,初始化配置如下:
apiVersion: v1
data:
  addNodeScriptPath: ""
  enableNodepool: "true"
  isInit: "true"
kind: ConfigMap
metadata:
  name: ack-agent-config
  namespace: kube-system
您需要将自定義節點添加腳本的路徑
配置到addNodeScriptPath字段區域并儲存即可。如下所示:
apiVersion: v1
data:
  addNodeScriptPath: https://kubelet-liusheng.oss-ap-southeast-3-internal.aliyuncs.com/attachnode.sh
  enableNodepool: "true"
  isInit: "true"
kind: ConfigMap
metadata:
  name: ack-agent-config
  namespace: kube-system
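Instead of editing the ConfigMap by hand, the same change can be applied with kubectl (a sketch; it assumes kubeconfig access to the registered cluster, and reuses the script URL from the example above):

```shell
# Point addNodeScriptPath at the uploaded script with a strategic merge patch.
kubectl -n kube-system patch configmap ack-agent-config --type merge \
  -p '{"data":{"addNodeScriptPath":"https://kubelet-liusheng.oss-ap-southeast-3-internal.aliyuncs.com/attachnode.sh"}}'
```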
Kubeadm initialization example
1) For a cluster built with kubeadm, a sample script is shown below.
```
#!/bin/bash
# Remove old versions
yum remove -y docker \
docker-client \
docker-client-latest \
docker-ce-cli \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-selinux \
docker-engine-selinux \
docker-engine
# Set up the yum repository
yum install -y yum-utils \
device-mapper-persistent-data \
lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Install and start docker
yum install -y docker-ce-19.03.13 docker-ce-cli-19.03.13 containerd.io-1.4.3 conntrack
# Restart Docker
systemctl enable docker
systemctl restart docker
# Disable swap
swapoff -a
yes | cp /etc/fstab /etc/fstab_bak
cat /etc/fstab_bak |grep -v swap > /etc/fstab
# Update /etc/sysctl.conf
# Modify the entries if they already exist
sed -i "s#^net.ipv4.ip_forward.*#net.ipv4.ip_forward=1#g" /etc/sysctl.conf
sed -i "s#^net.bridge.bridge-nf-call-ip6tables.*#net.bridge.bridge-nf-call-ip6tables=1#g" /etc/sysctl.conf
sed -i "s#^net.bridge.bridge-nf-call-iptables.*#net.bridge.bridge-nf-call-iptables=1#g" /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.all.disable_ipv6.*#net.ipv6.conf.all.disable_ipv6=1#g" /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.default.disable_ipv6.*#net.ipv6.conf.default.disable_ipv6=1#g" /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.lo.disable_ipv6.*#net.ipv6.conf.lo.disable_ipv6=1#g" /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.all.forwarding.*#net.ipv6.conf.all.forwarding=1#g" /etc/sysctl.conf
# Otherwise, append them
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.default.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.lo.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.forwarding = 1" >> /etc/sysctl.conf
# Apply the changes
sysctl -p
# Configure the Kubernetes yum repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum remove -y kubelet kubeadm kubectl
# Install kubelet, kubeadm and kubectl
yum install -y kubelet-1.19.4 kubeadm-1.19.4 kubectl-1.19.4
# Configure node labels, taints, node name and provider id
KUBEADM_CONFIG_FILE="/usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf"
if [[ $ALIBABA_CLOUD_LABELS != "" ]];then
    option="--node-labels"
    if grep -- "${option}=" $KUBEADM_CONFIG_FILE &> /dev/null;then
        sed -i "s@${option}=@${option}=${ALIBABA_CLOUD_LABELS},@g" $KUBEADM_CONFIG_FILE
    elif grep "KUBELET_EXTRA_ARGS=" $KUBEADM_CONFIG_FILE &> /dev/null;then
        sed -i "s@KUBELET_EXTRA_ARGS=@KUBELET_EXTRA_ARGS=${option}=${ALIBABA_CLOUD_LABELS} @g" $KUBEADM_CONFIG_FILE
    else
        sed -i "/^\[Service\]/a\Environment=\"KUBELET_EXTRA_ARGS=${option}=${ALIBABA_CLOUD_LABELS}\"" $KUBEADM_CONFIG_FILE
    fi
fi
if [[ $ALIBABA_CLOUD_TAINTS != "" ]];then
    option="--register-with-taints"
    if grep -- "${option}=" $KUBEADM_CONFIG_FILE &> /dev/null;then
        sed -i "s@${option}=@${option}=${ALIBABA_CLOUD_TAINTS},@g" $KUBEADM_CONFIG_FILE
    elif grep "KUBELET_EXTRA_ARGS=" $KUBEADM_CONFIG_FILE &> /dev/null;then
        sed -i "s@KUBELET_EXTRA_ARGS=@KUBELET_EXTRA_ARGS=${option}=${ALIBABA_CLOUD_TAINTS} @g" $KUBEADM_CONFIG_FILE
    else
        sed -i "/^\[Service\]/a\Environment=\"KUBELET_EXTRA_ARGS=${option}=${ALIBABA_CLOUD_TAINTS}\"" $KUBEADM_CONFIG_FILE
    fi
fi
if [[ $ALIBABA_CLOUD_NODE_NAME != "" ]];then
    option="--hostname-override"
    if grep -- "${option}=" $KUBEADM_CONFIG_FILE &> /dev/null;then
        sed -i "s@${option}=@${option}=${ALIBABA_CLOUD_NODE_NAME},@g" $KUBEADM_CONFIG_FILE
    elif grep "KUBELET_EXTRA_ARGS=" $KUBEADM_CONFIG_FILE &> /dev/null;then
        sed -i "s@KUBELET_EXTRA_ARGS=@KUBELET_EXTRA_ARGS=${option}=${ALIBABA_CLOUD_NODE_NAME} @g" $KUBEADM_CONFIG_FILE
    else
        sed -i "/^\[Service\]/a\Environment=\"KUBELET_EXTRA_ARGS=${option}=${ALIBABA_CLOUD_NODE_NAME}\"" $KUBEADM_CONFIG_FILE
    fi
fi
if [[ $ALIBABA_CLOUD_PROVIDER_ID != "" ]];then
    option="--provider-id"
    if grep -- "${option}=" $KUBEADM_CONFIG_FILE &> /dev/null;then
        sed -i "s@${option}=@${option}=${ALIBABA_CLOUD_PROVIDER_ID},@g" $KUBEADM_CONFIG_FILE
    elif grep "KUBELET_EXTRA_ARGS=" $KUBEADM_CONFIG_FILE &> /dev/null;then
        sed -i "s@KUBELET_EXTRA_ARGS=@KUBELET_EXTRA_ARGS=${option}=${ALIBABA_CLOUD_PROVIDER_ID} @g" $KUBEADM_CONFIG_FILE
    else
        sed -i "/^\[Service\]/a\Environment=\"KUBELET_EXTRA_ARGS=${option}=${ALIBABA_CLOUD_PROVIDER_ID}\"" $KUBEADM_CONFIG_FILE
    fi
fi
# Reload systemd, then enable and start kubelet
systemctl daemon-reload
systemctl enable kubelet && systemctl start kubelet
kubeadm join --node-name $ALIBABA_CLOUD_NODE_NAME --token 2q3s0u.w3d10wtsndqjitrg 172.16.0.153:6443 --discovery-token-unsafe-skip-ca-verification
```