How cloud nodes are added to a hybrid cluster depends on how the self-managed Kubernetes cluster in your on-premises data center was built, for example with kubeadm, from Kubernetes binary files, or with the Rancher platform. This topic describes how to write a custom node-addition script.
System environment variables delivered by the Alibaba Cloud registered cluster
A custom node-addition script must accept the system environment variables delivered by the Alibaba Cloud registered cluster, as listed below.
- ALIBABA_CLOUD_PROVIDER_ID
The custom node-addition script must accept and apply this variable; otherwise the registered cluster cannot manage the node correctly. Example:
ALIBABA_CLOUD_PROVIDER_ID=cn-shenzhen.i-wz92ewt14n9wx9mol2cd. The corresponding kubelet parameter is --provider-id=${ALIBABA_CLOUD_PROVIDER_ID}.
- ALIBABA_CLOUD_NODE_NAME
The custom node-addition script must accept and apply this variable; otherwise the node shows an abnormal state in the registered cluster's node pool. Example: ALIBABA_CLOUD_NODE_NAME=cn-shenzhen.192.168.1.113. The corresponding kubelet parameter is --hostname-override=${ALIBABA_CLOUD_NODE_NAME}.
- ALIBABA_CLOUD_LABELS
The custom node-addition script must accept and apply this variable; otherwise node pool management breaks and subsequent scheduling of workloads across cloud and on-premises nodes is affected. Example: ALIBABA_CLOUD_LABELS=alibabacloud.com/nodepool-id=np0e2031e952c4492bab32f512ce1422f6,ack.aliyun.com=cc3df6d939b0d4463b493b82d0d670c66,alibabacloud.com/instance-id=i-wz960ockeekr3dok06kr,alibabacloud.com/external=true,workload=cpu. Here workload=cpu is the user-defined node label configured in the node pool; the others are system-delivered labels. The corresponding kubelet parameter is --node-labels=${ALIBABA_CLOUD_LABELS}.
- ALIBABA_CLOUD_TAINTS
The custom node-addition script must accept and apply this variable; otherwise the taints configured in the node pool do not take effect. Example: ALIBABA_CLOUD_TAINTS=workload=ack:NoSchedule. The corresponding kubelet parameter is --register-with-taints=${ALIBABA_CLOUD_TAINTS}.
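Taken together, a script typically validates these variables before writing any kubelet configuration. The sketch below is a minimal illustration of that step: the `build_kubelet_flags` helper name is our own, not part of the delivered environment. It fails fast when a required variable is missing and treats taints as optional, since a node pool may define none:

```shell
#!/bin/bash
# Hypothetical helper: validate the variables delivered by the registered
# cluster and assemble the matching kubelet flags.
build_kubelet_flags() {
    # These three variables are required; abort if any is empty.
    for v in ALIBABA_CLOUD_PROVIDER_ID ALIBABA_CLOUD_NODE_NAME ALIBABA_CLOUD_LABELS; do
        if [[ -z "${!v}" ]]; then
            echo "missing required variable: $v" >&2
            return 1
        fi
    done
    local flags="--provider-id=${ALIBABA_CLOUD_PROVIDER_ID}"
    flags+=" --hostname-override=${ALIBABA_CLOUD_NODE_NAME}"
    flags+=" --node-labels=${ALIBABA_CLOUD_LABELS}"
    # Taints are optional: only pass the flag when the node pool defines any.
    if [[ -n "${ALIBABA_CLOUD_TAINTS}" ]]; then
        flags+=" --register-with-taints=${ALIBABA_CLOUD_TAINTS}"
    fi
    echo "$flags"
}
```

The resulting string can then be appended to the kubelet invocation of whatever installation method the script uses.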
kubelet configuration example
The following is an example of the kubelet service configuration:
cat >/usr/lib/systemd/system/kubelet.service <<EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service
[Service]
ExecStart=/data0/kubernetes/bin/kubelet \\
--hostname-override=${ALIBABA_CLOUD_NODE_NAME} \\
--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf \\
--config=/var/lib/kubelet/config.yaml \\
--kubeconfig=/etc/kubernetes/kubelet.conf \\
--cert-dir=/etc/kubernetes/pki/ \\
--cni-bin-dir=/opt/cni/bin \\
--cni-cache-dir=/opt/cni/cache \\
--cni-conf-dir=/etc/cni/net.d \\
--logtostderr=false \\
--log-dir=/var/log/kubernetes/logs \\
--log-file=/var/log/kubernetes/logs/kubelet.log \\
--node-labels=${ALIBABA_CLOUD_LABELS} \\
--root-dir=/var/lib/kubelet \\
--provider-id=${ALIBABA_CLOUD_PROVIDER_ID} \\
--register-with-taints=${ALIBABA_CLOUD_TAINTS} \\
--v=4
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
Storing the custom script
The script can be stored on an HTTP file server, for example in an OSS bucket:
https://kubelet-liusheng.oss-ap-southeast-3-internal.aliyuncs.com/attachnode.sh
Using the custom script
After the on-premises data center cluster is connected to the registered cluster, the registered cluster's agent component automatically creates a ConfigMap named ack-agent-config in the kube-system namespace, with the following initial content:
apiVersion: v1
data:
  addNodeScriptPath: ""
  enableNodepool: "true"
  isInit: "true"
kind: ConfigMap
metadata:
  name: ack-agent-config
  namespace: kube-system
Set the addNodeScriptPath field to the path of your custom node-addition script and save the ConfigMap, as shown below:
apiVersion: v1
data:
  addNodeScriptPath: https://kubelet-liusheng.oss-ap-southeast-3-internal.aliyuncs.com/attachnode.sh
  enableNodepool: "true"
  isInit: "true"
kind: ConfigMap
metadata:
  name: ack-agent-config
  namespace: kube-system
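Instead of editing the ConfigMap by hand, the same change can be applied with a kubectl patch. The sketch below builds the merge-patch payload (the URL is the example from this topic); the actual kubectl command is commented out because it requires access to the registered cluster:

```shell
#!/bin/bash
# Build a JSON merge patch that sets addNodeScriptPath to the script URL.
SCRIPT_URL="https://kubelet-liusheng.oss-ap-southeast-3-internal.aliyuncs.com/attachnode.sh"
PATCH=$(printf '{"data":{"addNodeScriptPath":"%s"}}' "$SCRIPT_URL")
echo "$PATCH"
# Apply it to the agent ConfigMap (requires kubectl access to the cluster):
# kubectl -n kube-system patch configmap ack-agent-config --type merge -p "$PATCH"
```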
Kubeadm initialization example
1) For clusters built with kubeadm, an example script is shown below.
```
#!/bin/bash
# Remove old versions
yum remove -y docker \
docker-client \
docker-client-latest \
docker-ce-cli \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-selinux \
docker-engine-selinux \
docker-engine
# Set up the yum repository
yum install -y yum-utils \
device-mapper-persistent-data \
lvm2
yum-config-manager --add-repo \
    http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Install and start docker
yum install -y docker-ce-19.03.13 docker-ce-cli-19.03.13 containerd.io-1.4.3 conntrack
# Restart Docker
systemctl enable docker
systemctl restart docker
# Turn off swap
swapoff -a
yes | cp /etc/fstab /etc/fstab_bak
cat /etc/fstab_bak |grep -v swap > /etc/fstab
# Update /etc/sysctl.conf
# If a setting already exists, modify it in place
sed -i "s#^net.ipv4.ip_forward.*#net.ipv4.ip_forward=1#g" /etc/sysctl.conf
sed -i "s#^net.bridge.bridge-nf-call-ip6tables.*#net.bridge.bridge-nf-call-ip6tables=1#g" /etc/sysctl.conf
sed -i "s#^net.bridge.bridge-nf-call-iptables.*#net.bridge.bridge-nf-call-iptables=1#g" /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.all.disable_ipv6.*#net.ipv6.conf.all.disable_ipv6=1#g" /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.default.disable_ipv6.*#net.ipv6.conf.default.disable_ipv6=1#g" /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.lo.disable_ipv6.*#net.ipv6.conf.lo.disable_ipv6=1#g" /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.all.forwarding.*#net.ipv6.conf.all.forwarding=1#g" /etc/sysctl.conf
# If a setting may be missing, append it
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.default.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.lo.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.forwarding = 1" >> /etc/sysctl.conf
# Apply the changes
sysctl -p
# Configure the Kubernetes yum repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum remove -y kubelet kubeadm kubectl
# Install kubelet, kubeadm, and kubectl
yum install -y kubelet-1.19.4 kubeadm-1.19.4 kubectl-1.19.4
# Configure node labels, taints, node name, and provider id
KUBEADM_CONFIG_FILE="/usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf"
if [[ $ALIBABA_CLOUD_LABELS != "" ]];then
    option="--node-labels"
    if grep -- "${option}=" $KUBEADM_CONFIG_FILE &> /dev/null;then
        sed -i "s@${option}=@${option}=${ALIBABA_CLOUD_LABELS},@g" $KUBEADM_CONFIG_FILE
    elif grep "KUBELET_EXTRA_ARGS=" $KUBEADM_CONFIG_FILE &> /dev/null;then
        sed -i "s@KUBELET_EXTRA_ARGS=@KUBELET_EXTRA_ARGS=${option}=${ALIBABA_CLOUD_LABELS} @g" $KUBEADM_CONFIG_FILE
    else
        sed -i "/^\[Service\]/a\Environment=\"KUBELET_EXTRA_ARGS=${option}=${ALIBABA_CLOUD_LABELS}\"" $KUBEADM_CONFIG_FILE
    fi
fi
if [[ $ALIBABA_CLOUD_TAINTS != "" ]];then
    option="--register-with-taints"
    if grep -- "${option}=" $KUBEADM_CONFIG_FILE &> /dev/null;then
        sed -i "s@${option}=@${option}=${ALIBABA_CLOUD_TAINTS},@g" $KUBEADM_CONFIG_FILE
    elif grep "KUBELET_EXTRA_ARGS=" $KUBEADM_CONFIG_FILE &> /dev/null;then
        sed -i "s@KUBELET_EXTRA_ARGS=@KUBELET_EXTRA_ARGS=${option}=${ALIBABA_CLOUD_TAINTS} @g" $KUBEADM_CONFIG_FILE
    else
        sed -i "/^\[Service\]/a\Environment=\"KUBELET_EXTRA_ARGS=${option}=${ALIBABA_CLOUD_TAINTS}\"" $KUBEADM_CONFIG_FILE
    fi
fi
if [[ $ALIBABA_CLOUD_NODE_NAME != "" ]];then
    option="--hostname-override"
    if grep -- "${option}=" $KUBEADM_CONFIG_FILE &> /dev/null;then
        sed -i "s@${option}=@${option}=${ALIBABA_CLOUD_NODE_NAME},@g" $KUBEADM_CONFIG_FILE
    elif grep "KUBELET_EXTRA_ARGS=" $KUBEADM_CONFIG_FILE &> /dev/null;then
        sed -i "s@KUBELET_EXTRA_ARGS=@KUBELET_EXTRA_ARGS=${option}=${ALIBABA_CLOUD_NODE_NAME} @g" $KUBEADM_CONFIG_FILE
    else
        sed -i "/^\[Service\]/a\Environment=\"KUBELET_EXTRA_ARGS=${option}=${ALIBABA_CLOUD_NODE_NAME}\"" $KUBEADM_CONFIG_FILE
    fi
fi
if [[ $ALIBABA_CLOUD_PROVIDER_ID != "" ]];then
    option="--provider-id"
    if grep -- "${option}=" $KUBEADM_CONFIG_FILE &> /dev/null;then
        sed -i "s@${option}=@${option}=${ALIBABA_CLOUD_PROVIDER_ID},@g" $KUBEADM_CONFIG_FILE
    elif grep "KUBELET_EXTRA_ARGS=" $KUBEADM_CONFIG_FILE &> /dev/null;then
        sed -i "s@KUBELET_EXTRA_ARGS=@KUBELET_EXTRA_ARGS=${option}=${ALIBABA_CLOUD_PROVIDER_ID} @g" $KUBEADM_CONFIG_FILE
    else
        sed -i "/^\[Service\]/a\Environment=\"KUBELET_EXTRA_ARGS=${option}=${ALIBABA_CLOUD_PROVIDER_ID}\"" $KUBEADM_CONFIG_FILE
    fi
fi
# Reload systemd and start kubelet
systemctl daemon-reload
systemctl enable kubelet && systemctl start kubelet
kubeadm join --node-name $ALIBABA_CLOUD_NODE_NAME --token 2q3s0u.w3d10wtsndqjitrg 172.16.0.153:6443 --discovery-token-unsafe-skip-ca-verification
```