
Kubernetes Upgrade Guide, Part 2

Continuing from the previous post:

Upgrade Guide, Part 1

Upgrading from 1.19 to 1.20.15

Docker compatibility

Docker version  Notes
18.09.9         Verified
19.03.15        Verified

OS compatibility

CentOS version  Notes
7.9             Verified
7.8             Verified

Component changes

addons

  • Calico v3.15.2
  • etcd v3.4.13
  • kubeadm

The control-plane node label node-role.kubernetes.io/master has been deprecated and replaced by node-role.kubernetes.io/control-plane.

kubeadm alpha certs has been renamed to kubeadm certs.

  • kube-apiserver

The following flags will be removed in v1.24:

  • --address and --insecure-bind-address
  • --port and --insecure-port

To address ServiceAccount security issues, TokenRequest and TokenRequestProjection have gone GA, and the following flags must now be added to the API server:

  • --service-account-issuer, a URL at which the API server can be reliably reached
  • --service-account-key-file
  • --service-account-signing-key-file

For example:

--service-account-issuer=https://kubernetes.default.svc.cluster.local
 --service-account-key-file=/etc/kubernetes/pki/sa.pub
 --service-account-signing-key-file=/etc/kubernetes/pki/sa.key      
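Before restarting the API server it is worth confirming that the flags actually made it into the static pod manifest. A minimal sketch (the helper name and the default manifest path are my own assumptions, not from the post):

```shell
# Hypothetical helper: check that all three ServiceAccount token flags
# appear in a kube-apiserver static pod manifest.
check_sa_flags() {
  local manifest="${1:-/etc/kubernetes/manifests/kube-apiserver.yaml}"
  local flag missing=0
  for flag in service-account-issuer service-account-key-file service-account-signing-key-file; do
    grep -q -- "--${flag}=" "$manifest" || { echo "missing --${flag}" >&2; missing=1; }
  done
  return "$missing"
}
```

Usage: `check_sa_flags && echo "all three flags present"`.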

Related issue:

https://github.com/kelseyhightower/kubernetes-the-hard-way/issues/626

Alibaba Cloud documentation:

https://help.aliyun.com/document_detail/160384.html

Kubernetes is a mature product at this point, so the upgrade steps are essentially the same across versions.

Upgrade the first master

yum install -y kubeadm-1.20.15 kubelet-1.20.15 kubectl-1.20.15 --disableexcludes=kubernetes 
kubeadm upgrade plan
kubeadm upgrade apply v1.20.15
systemctl daemon-reload
systemctl restart kubelet      
To skip certificate renewal and keep the existing configuration:
kubeadm upgrade apply v1.20.15 --certificate-renewal=false      

Upgrade the remaining masters

yum install -y kubeadm-1.20.15 kubelet-1.20.15 kubectl-1.20.15 --disableexcludes=kubernetes 
kubeadm upgrade node
systemctl daemon-reload
systemctl restart kubelet      
To skip certificate renewal and keep the existing configuration:
kubeadm upgrade node --certificate-renewal=false      

Upgrade the worker nodes

# I did not drain the nodes here; do so if your workloads require it
yum install -y kubeadm-1.20.15 kubelet-1.20.15 kubectl-1.20.15 --disableexcludes=kubernetes
kubeadm upgrade node
systemctl daemon-reload 
systemctl restart kubelet      

Upgrading from 1.18.20 to 1.19.16

Docker compatibility

Docker version  Notes
18.09.9         Verified
19.03.15        Verified

OS compatibility

CentOS version  Notes
7.9             Verified
7.8             Verified

Kubernetes is a mature product at this point, so the upgrade steps are essentially the same across versions.

Upgrade the first master

yum install -y kubeadm-1.19.16 kubelet-1.19.16 kubectl-1.19.16 --disableexcludes=kubernetes 
kubeadm upgrade plan
kubeadm upgrade apply v1.19.16
systemctl daemon-reload
systemctl restart kubelet      
To skip certificate renewal and keep the existing configuration:
kubeadm upgrade apply v1.19.16 --certificate-renewal=false      

Upgrade the remaining masters

yum install -y kubeadm-1.19.16 kubelet-1.19.16 kubectl-1.19.16 --disableexcludes=kubernetes 
kubeadm upgrade node
systemctl daemon-reload
systemctl restart kubelet      
To skip certificate renewal and keep the existing configuration:
kubeadm upgrade node --certificate-renewal=false      

Upgrade the worker nodes

# I did not drain the nodes here; do so if your workloads require it
yum install -y kubeadm-1.19.16 kubelet-1.19.16 kubectl-1.19.16 --disableexcludes=kubernetes
kubeadm upgrade node
systemctl daemon-reload 
systemctl restart kubelet      

Notes

  • --cgroup-driver is no longer set in /var/lib/kubelet/kubeadm-flags.env; it moves into the kubelet's config.yaml
  • kubeadm config view is gone; use kubectl get cm -o yaml -n kube-system kubeadm-config instead

Issues

kube-proxy reports IPVS errors, leaving IPVS unusable. 1.19 changed the logic: on kernels older than 4.1 it only spams the log, while on 4.1 and newer it updates the parameter.

https://github.com/kubernetes/kubernetes/pull/88541

On kernels older than 4.1, kube-proxy logs this error:

# Upgrade the kernel to 4.1 or newer
can't set sysctl net/ipv4/vs/conn_reuse_mode, kernel version must be at least 4.1

On typical older kernels, set:

net.ipv4.vs.conntrack=1
net.ipv4.vs.conn_reuse_mode=0
net.ipv4.vs.expire_nodest_conn=1      
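To make these settings survive a reboot, one option is a sysctl drop-in file (the file name below is my choice, not from the post):

```shell
# Persist the IPVS sysctls and apply them immediately (run as root).
cat <<'EOF' > /etc/sysctl.d/90-kube-proxy-ipvs.conf
net.ipv4.vs.conntrack = 1
net.ipv4.vs.conn_reuse_mode = 0
net.ipv4.vs.expire_nodest_conn = 1
EOF
sysctl --system
```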

On kernels 5.9 and newer, change them as follows:

https://github.com/kubernetes/kubernetes/issues/93297

net.ipv4.vs.conntrack = 1 
net.ipv4.vs.conn_reuse_mode = 1 (previously 0) 
net.ipv4.vs.expire_nodest_conn = 1 (unchanged) 
net.ipv4.vs.expire_quiescent_template = 1 (unchanged)      
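The kernel thresholds above can be summed up in a small helper, a sketch of my own (not from the post), that picks the conn_reuse_mode value for a given kernel version:

```shell
# Map a kernel version to the appropriate net.ipv4.vs.conn_reuse_mode value:
# below 4.1 the sysctl cannot be set at all, 4.1-5.8 should use 0,
# and 5.9+ can go back to the default of 1.
conn_reuse_mode_for() {
  local ver="${1:-$(uname -r)}" major minor
  major="${ver%%.*}"
  minor="${ver#*.}"; minor="${minor%%.*}"
  if [ "$major" -lt 4 ] || { [ "$major" -eq 4 ] && [ "$minor" -lt 1 ]; }; then
    echo "unsupported"   # kernel too old to set the sysctl
  elif [ "$major" -lt 5 ] || { [ "$major" -eq 5 ] && [ "$minor" -lt 9 ]; }; then
    echo 0
  else
    echo 1
  fi
}
```

For example, `conn_reuse_mode_for 3.10.0` prints `unsupported` on the stock CentOS 7 kernel.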

Upgrading from 1.17.17 to 1.18.20

Docker compatibility

Docker version  Notes
18.09.9         Verified
19.03.15        Verified

OS compatibility

CentOS version  Notes
7.9             Verified
7.8             Verified
7.7             Verified

Upgrade the first master

yum install -y kubeadm-1.18.20 kubelet-1.18.20 kubectl-1.18.20 --disableexcludes=kubernetes 
kubeadm upgrade plan
kubeadm upgrade apply v1.18.20
systemctl daemon-reload
systemctl restart kubelet      
To skip certificate renewal and keep the existing configuration:
kubeadm upgrade apply v1.18.20 --certificate-renewal=false      

Upgrade the remaining masters

yum install -y kubeadm-1.18.20 kubelet-1.18.20 kubectl-1.18.20 --disableexcludes=kubernetes 
kubeadm upgrade node
systemctl daemon-reload
systemctl restart kubelet      
To skip certificate renewal and keep the existing configuration:
kubeadm upgrade node --certificate-renewal=false      

Upgrade the worker nodes

yum install -y kubeadm-1.18.20 kubelet-1.18.20 kubectl-1.18.20 --disableexcludes=kubernetes
kubeadm upgrade node
systemctl daemon-reload 
systemctl restart kubelet      

Notes

Issues:

1. Attacks exploiting a CNI vulnerability: https://github.com/kubernetes/kubernetes/issues/91507

The following versions contain the fix:

  • kubelet v1.19.0+ (master branch #91370)
  • kubelet v1.18.4+ (#91387)
  • kubelet v1.17.7+ (#91386)
  • kubelet v1.16.11+ (#91388)

The complete fix landed in 1.19.
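As a quick way to apply the list above, here is a sketch (the helper name is mine, not from the post) that checks whether a given kubelet version already includes the CNI fix:

```shell
# Return success if the given kubelet version (major assumed to be 1)
# contains the CNI fix, per the patched releases listed above.
cni_fix_present() {
  local major minor patch
  IFS=. read -r major minor patch <<< "${1#v}"
  case "$minor" in
    16) [ "$patch" -ge 11 ] ;;
    17) [ "$patch" -ge 7 ] ;;
    18) [ "$patch" -ge 4 ] ;;
    *)  [ "$minor" -ge 19 ] ;;
  esac
}
```

Usage: `cni_fix_present 1.18.20 && echo patched`.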

2. Starting with 1.18, Kubernetes uses a newer IPVS kernel interface, which breaks kube-proxy's IPVS mode on the cluster:

https://github.com/kubernetes/kubernetes/issues/89520

https://github.com/kubernetes/kubernetes/issues/82065

Workarounds:

  • Upgrade the kernel to something newer than 3.18
  • Upgrade Kubernetes to 1.18.2: https://github.com/kubernetes/kubernetes/pull/90555
  • Even after upgrading to 1.18.9, pods on the host network ("dnsPolicy": "ClusterFirstWithHostNet") still trigger the bug; if you must stay on the 3.10 kernel, upgrade Kubernetes to 1.19

The root cause is that newer Kubernetes versions call a newer IPVS kernel interface that old kernels do not support, so traffic to a Service's endpoints cannot be forwarded.

How to hide or customize the managedFields section shown in YAML output

See the official blog post for details.

# Note: these use yq v3 syntax ("yq d"); yq v4 replaced it with "yq 'del(...)'".
kubectl_yaml() {
  kubectl "$@" -o yaml \
    | yq d - 'items[*].metadata.managedFields' \
    | yq d - 'metadata.managedFields' \
    | yq d - 'items[*].metadata.ownerReferences' \
    | yq d - 'metadata.ownerReferences' \
    | yq d - 'items[*].status' \
    | yq d - 'status'
}

kubectl_json() {
  kubectl "$@" -o json \
    | jq 'del(.items[]?.metadata.managedFields)' \
    | jq 'del(.metadata.managedFields)' \
    | jq 'del(.items[]?.metadata.ownerReferences)' \
    | jq 'del(.metadata.ownerReferences)' \
    | jq 'del(.items[]?.status)' \
    | jq 'del(.status)'
}

kubectl_yaml get pods
kubectl_json get service my-service      
