
Manually installing a Kubernetes 1.13 high-availability cluster with kubeadm (using the Calico network)

Initialize the cluster:

Configure the hosts file

vim /etc/hosts

192.168.3.147 test-master01
192.168.3.148 test-master02
192.168.3.149 test-master03
192.168.3.150 test-work01
           

Configure passwordless SSH login

ssh-keygen
ssh-copy-id test-master01
ssh-copy-id test-master02
ssh-copy-id test-master03
ssh-copy-id test-work01
           
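Optionally, the hosts file can then be distributed to the remaining nodes in one step. A minimal sketch, assuming root SSH access was configured as above and the node names match the hosts file:

for host in test-master02 test-master03 test-work01; do
  scp /etc/hosts root@${host}:/etc/hosts
done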

Configure system parameters

  • Disable the firewall

    systemctl stop firewalld

    systemctl disable firewalld

  • Disable swap

    swapoff -a

    sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

This edits /etc/fstab to comment out the automatic swap mount; use free -m to confirm that swap is disabled.

  • Disable SELinux

    sed -i 's/SELINUX=permissive/SELINUX=disabled/' /etc/sysconfig/selinux

    setenforce 0

  • Configure forwarding-related kernel parameters, otherwise errors may occur later

    cat <<EOF > /etc/sysctl.d/k8s.conf

    net.bridge.bridge-nf-call-ip6tables = 1

    net.bridge.bridge-nf-call-iptables = 1

    net.ipv4.ip_forward=1

    net.ipv4.tcp_tw_recycle=0

    vm.swappiness=0

    vm.overcommit_memory=1

    vm.panic_on_oom=0

    fs.inotify.max_user_watches=89100

    fs.file-max=52706963

    fs.nr_open=52706963

    net.ipv6.conf.all.disable_ipv6=1

    net.netfilter.nf_conntrack_max=2310720

    EOF

    sysctl --system

Run the above commands on all Kubernetes nodes to make the changes take effect.
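If sysctl --system reports that net.bridge.bridge-nf-call-iptables (or the ip6tables variant) does not exist, the br_netfilter kernel module is probably not loaded yet. Loading it, making it persistent, and re-applying the settings resolves this:

modprobe br_netfilter
echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf
sysctl --system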

  • Enable IPVS for kube-proxy

Run on all worker nodes:

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

    The script above creates the /etc/sysconfig/modules/ipvs.modules file so that the required modules are loaded automatically after a node reboot. Use lsmod | grep -e ip_vs -e nf_conntrack_ipv4 to verify that the required kernel modules have been loaded correctly.

    Next, make sure the ipset package is installed on every node (yum install ipset). To make it easier to inspect the IPVS proxy rules, it is also worth installing the management tool ipvsadm (yum install ipvsadm).

yum install ipset -y
yum install ipvsadm -y
           

If these prerequisites are not met, kube-proxy falls back to iptables mode even when IPVS mode is enabled in its configuration.

  • System tuning parameters

    systemctl enable ntpdate.service

    echo '*/30 * * * * /usr/sbin/ntpdate time7.aliyun.com >/dev/null 2>&1' > /tmp/crontab2.tmp

    crontab /tmp/crontab2.tmp

    systemctl start ntpdate.service

    echo "* soft nofile 65536" >> /etc/security/limits.conf

    echo "* hard nofile 65536" >> /etc/security/limits.conf

    echo "* soft nproc 65536" >> /etc/security/limits.conf

    echo "* hard nproc 65536" >> /etc/security/limits.conf

    echo "* soft memlock unlimited" >> /etc/security/limits.conf

    echo "* hard memlock unlimited" >> /etc/security/limits.conf

Install Docker

yum install -y epel-release
yum install -y yum-utils device-mapper-persistent-data lvm2 net-tools conntrack-tools wget vim ntpdate libseccomp libtool-ltdl telnet rsync bind-utils
yum install -y https://mirrors.aliyun.com/docker-ce/linux/centos/7/x86_64/stable/Packages/docker-ce-18.06.1.ce-3.el7.x86_64.rpm 
           

Configure a Docker registry mirror for China:

Install Docker on all nodes.

Edit /etc/docker/daemon.json and add the following entry:

{
"registry-mirrors":["https://registry.docker-cn.com"]
}
           

Restart Docker

systemctl daemon-reload
systemctl enable docker
systemctl start docker
           
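To confirm that the mirror and storage driver settings have taken effect, docker info can be inspected, for example:

docker info | grep -i -A1 'Registry Mirrors'
docker info | grep -i 'Storage Driver'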

Note: if using the overlay2 storage driver, daemon.json can be written as:

daemon.json
{
    "log-driver": "json-file",
    "log-opts": {
        "max-size": "100m",
        "max-file": "10"
    },
    "registry-mirrors": ["https://pqbap4ya.mirror.aliyuncs.com"],
        "storage-driver": "overlay2",
    "storage-opts":["overlay2.override_kernel_check=true"]
}
           

To use overlay2, the backing filesystem should be ext4; if xfs is used, the disk must be formatted with mkfs.xfs -n ftype=1 /path/to/your/device, i.e. the ftype parameter must be set to 1.
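Whether an existing xfs filesystem already has ftype=1 can be checked with xfs_info on the mount point that holds /var/lib/docker (the df call below just resolves that mount point):

xfs_info $(df --output=target /var/lib/docker | tail -1) | grep ftype
# the output should contain ftype=1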

Install keepalived + haproxy

On the three master nodes.

There are plenty of guides online, so the details are skipped here; a minimal example configuration is sketched below.

VIP : 192.168.3.80
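For reference, a minimal sketch of the two configuration fragments used for this VIP; the master IPs follow the hosts file above, while the interface name, virtual_router_id, and priority are assumptions that must be adapted to your environment:

# /etc/haproxy/haproxy.cfg (fragment): TCP passthrough from VIP:8443 to the apiservers on 6443
frontend k8s-apiserver
    bind *:8443
    mode tcp
    default_backend k8s-masters

backend k8s-masters
    mode tcp
    balance roundrobin
    server test-master01 192.168.3.147:6443 check
    server test-master02 192.168.3.148:6443 check
    server test-master03 192.168.3.149:6443 check

# /etc/keepalived/keepalived.conf (fragment): floats the VIP between the masters
vrrp_instance VI_1 {
    state MASTER                 # BACKUP on the other two masters
    interface eth0               # assumption: adjust to the real NIC name
    virtual_router_id 51
    priority 100                 # use lower priorities on the backups
    advert_int 1
    virtual_ipaddress {
        192.168.3.80
    }
}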

Install kubeadm, kubelet, and kubectl

Run on all nodes.

Configure the yum repository

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
           

Install the components

yum install -y kubelet-1.13.1 kubeadm-1.13.1 kubectl-1.13.1
           

Enable at boot

systemctl enable kubelet.service
           

Initialize the Kubernetes cluster

Edit the kubeadm configuration file:

The configuration below has kubeadm install etcd itself (stacked etcd on the masters):

cat > kubeadm-config.yaml << EOF
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.13.1
apiServer:
  certSANs:
    - "192.168.3.80"
controlPlaneEndpoint: "192.168.3.80:8443"
networking:
  podSubnet: "10.50.0.0/16"
imageRepository: "harbor.oneitfarm.com/k8s-cluster-images"
EOF
           

Calico is used as the CNI, so podSubnet is set to "10.50.0.0/16".

192.168.3.80 is the VIP of the haproxy + keepalived setup installed earlier.

Initialize the first master

kubeadm init --config kubeadm-config.yaml
...
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
           

Install the network plugin

Following the official documentation:

Installing with the Kubernetes API datastore—50 nodes or less:

kubectl apply -f \
https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml

kubectl apply -f \
https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
           

It is recommended to wget the manifests above first, since the configuration needs to be adjusted for your own network:

- name: CALICO_IPV4POOL_CIDR
  value: "10.50.0.0/16"
           
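One possible way to do this, assuming the default pool CIDR in the v3.3 manifest is 192.168.0.0/16 (check the downloaded file before substituting):

wget https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
wget https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
# point the pool CIDR at the podSubnet from kubeadm-config.yaml
sed -i 's#192.168.0.0/16#10.50.0.0/16#g' calico.yaml
kubectl apply -f rbac-kdd.yaml
kubectl apply -f calico.yaml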

Copy the relevant files to the other master nodes

# repeat for each of the other master nodes, replacing <master> with the node's name or IP
ssh root@<master> mkdir -p /etc/kubernetes/pki/etcd
scp /etc/kubernetes/admin.conf root@<master>:/etc/kubernetes
scp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} root@<master>:/etc/kubernetes/pki
scp /etc/kubernetes/pki/etcd/ca.* root@<master>:/etc/kubernetes/pki/etcd
           

Deploy the other masters

Run the command below on the other master nodes to join them to the cluster as control-plane nodes (note the --experimental-control-plane flag):

kubeadm join 192.168.3.80:8443 --token pv2a9n.uh2yx1082ffpdf7n --discovery-token-ca-cert-hash sha256:872cac35b0bfec28fab8f626a727afa6529e2a63e3b7b75a3397e6412c06ebc5 --experimental-control-plane
           
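The bootstrap token printed by kubeadm init expires after 24 hours. If it is no longer valid, a fresh join command can be generated on the first master; for additional masters, --experimental-control-plane still has to be appended manually:

kubeadm token create --print-join-command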

Enable IPVS for kube-proxy

Edit config.conf in the kube-system/kube-proxy ConfigMap and set mode: "ipvs":

kubectl edit configmap kube-proxy -n kube-system
kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" --grace-period=0 --force -n kube-system")}'
           
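Once the kube-proxy pods have been recreated, whether IPVS is really in use can be verified from the proxy rules and the kube-proxy logs; a quick check, assuming the pods carry the usual k8s-app=kube-proxy label:

ipvsadm -Ln
kubectl logs -n kube-system $(kubectl get pod -n kube-system -l k8s-app=kube-proxy -o name | head -1) | grep -i ipvs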

Check and test

Check the Kubernetes cluster status

kubectl get nodes -o wide
kubectl get cs
    NAME                 STATUS    MESSAGE              ERROR
    controller-manager   Healthy   ok
    scheduler            Healthy   ok
    etcd-0               Healthy   {"health": "true"}
           

Check the etcd cluster status

In this guide etcd is installed automatically by kubeadm, i.e. it runs as containers, so you can exec into a container to inspect it:

kubectl exec -ti -n kube-system etcd-an-master01 sh
/ # export ETCDCTL_API=3
/ # etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key member list
           
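The overall health of the members can be checked from the same shell, e.g.:

/ # etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key endpoint health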

Clean up the cluster after a failed install

If cluster initialization runs into problems, the following commands can be used to clean up:

kubeadm reset
systemctl stop kubelet
systemctl stop docker
rm -rf /var/lib/cni/
rm -rf /var/lib/kubelet/*
rm -rf /etc/cni/
ifconfig cni0 down
ifconfig flannel.1 down
ifconfig docker0 down
ip link delete cni0
ip link delete flannel.1
systemctl start docker
           
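Note that the flannel.1 interface above only exists on flannel-based clusters; on a Calico cluster with IPIP enabled, the leftover device is typically tunl0 instead. A possible way to remove it (the ipip module may refuse to unload while still in use):

ip link set tunl0 down
modprobe -r ipip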

Configure scheduling

For a cluster initialized with kubeadm, Pods are not scheduled onto master nodes for security reasons, i.e. the masters do not take on workloads. This is because the master nodes carry the node-role.kubernetes.io/master:NoSchedule taint:

kubectl describe node master01 | grep Taint
Taints:             node-role.kubernetes.io/master:NoSchedule
           

Check the master and worker nodes that have joined the cluster; if the roles or taints are not as expected, they can be set as follows:

kubectl get nodes
NAME          STATUS   ROLES    AGE     VERSION
an-master01   Ready    master   4h39m   v1.13.1
an-master02   Ready    <none>   4h32m   v1.13.1
an-master03   Ready    master   86m     v1.13.1
an-work01     Ready    <none>   85m     v1.13.1

檢視目前狀态:
kubectl describe nodes/an-master02 |grep -E '(Roles|Taints)'
Roles:              <none>
Taints:             <none>

Mark as a master node that does not schedule workloads:
kubectl label node an-master02 node-role.kubernetes.io/master=
kubectl taint nodes an-master02 node-role.kubernetes.io/master=:NoSchedule
To remove the restriction:
kubectl taint nodes an-master03 node-role.kubernetes.io/master-

Worker node settings:
kubectl label node an-work01 node-role.kubernetes.io/work=
kubectl describe nodes/an-work01 |grep -E '(Roles|Taints)'
Roles:              work
Taints:             <none>
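A quick way to verify the scheduling behaviour is to start a small test deployment and check which nodes its pods land on; the nginx image and the sched-test name here are just examples:

kubectl run sched-test --image=nginx --replicas=4
kubectl get pods -o wide | grep sched-test
kubectl delete deployment sched-test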