Kubernetes Introduction: Installing and Deploying a Highly Available Kubernetes Cluster

Node information for the Kubernetes cluster:

Hostname        Host IP
k8s-master      172.24.211.217
k8s-master-1    172.24.211.220
k8s-node-1      172.24.211.218
k8s-node-2      172.24.211.219

1. Downloading and preparing the server-side files

cd /opt/k8s/work
wget https://dl.k8s.io/v1.15.0/kubernetes-server-linux-amd64.tar.gz
tar -xzvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes
tar -xzvf  kubernetes-src.tar.gz 
           

The file is a little over 400 MB. If the download is too slow, you can fetch it manually in a browser and upload it to the corresponding directory on the server. After downloading, extract the archive, copy the binaries to /opt/k8s/bin, and make them executable. (Note: since only four servers are available, this deployment uses two master nodes, with the other two serving as worker nodes. The two master nodes already run the etcd cluster deployed earlier, and kube-apiserver sits behind an Nginx proxy that exposes https://127.0.0.1:8443 as the service address. The previous article covered how to set up the Nginx proxy.)
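As a reminder, that proxy is a plain TCP (stream) pass-through in front of both apiservers. A minimal sketch of such an Nginx configuration is shown below; the upstream IPs are the two masters in this deployment, but the surrounding nginx.conf layout follows the previous article and is an assumption here:

#Sketch: stream block in nginx.conf (assumes Nginx was built with the stream module)
stream {
    upstream kube-apiserver {
        server 172.24.211.217:6443 max_fails=3 fail_timeout=30s;
        server 172.24.211.220:6443 max_fails=3 fail_timeout=30s;
    }
    server {
        #Clients on this machine connect to https://127.0.0.1:8443
        listen 127.0.0.1:8443;
        proxy_connect_timeout 1s;
        proxy_pass kube-apiserver;
    }
}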

2. Master deployment

Copy the files:

cp kubernetes/server/bin/{apiextensions-apiserver,cloud-controller-manager,kube-apiserver,kube-controller-manager,kube-proxy,kube-scheduler,kubeadm,kubectl,kubelet,mounter} /opt/k8s/bin/
chmod +x /opt/k8s/bin/*

scp kubernetes/server/bin/{apiextensions-apiserver,cloud-controller-manager,kube-apiserver,kube-controller-manager,kube-proxy,kube-scheduler,kubeadm,kubectl,kubelet,mounter} [email protected]:/opt/k8s/bin/
ssh [email protected] "chmod +x /opt/k8s/bin/*"
           

The following components need to run on the masters: kube-apiserver, kube-scheduler, and kube-controller-manager.

2.1 Deploying kube-apiserver

(Note: all of the configuration-generation steps below are performed in the /opt/k8s/work directory.)

Prepare the kubernetes certificate signing configuration and generate the kubernetes certificate. (10.254.0.1 is the Kubernetes service IP, normally the first IP of the SERVICE_CIDR range; SERVICE_CIDR is the --service-cluster-ip-range option in the kube-apiserver configuration file and denotes the service network.):

cat > kubernetes-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "172.24.211.220",
    "172.24.211.217",
    "10.254.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local."
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=/opt/k8s/work/ca.pem -ca-key=/opt/k8s/work/ca-key.pem -config=/opt/k8s/work/ca-config.json -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
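To sanity-check the generated certificate before distributing it (for example, that all of the SANs above made it in), it can be inspected locally; cfssl-certinfo ships alongside cfssl, and openssl works just as well:

#Either command prints the subject and the Subject Alternative Names of the new certificate
cfssl-certinfo -cert kubernetes.pem
openssl x509 -in kubernetes.pem -noout -text | grep -A 1 'Subject Alternative Name'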
           

Copy the generated certificate and private key to all master nodes:

scp kubernetes*.pem [email protected]:/etc/kubernetes/cert/
scp kubernetes*.pem [email protected]:/etc/kubernetes/cert/
           

Create the encryption configuration file and copy it to all master nodes (head -c 32 /dev/urandom | base64 produces 32 random bytes encoded as a base64 string):

cat > encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: $(head -c 32 /dev/urandom | base64)
      - identity: {}
EOF

scp encryption-config.yaml [email protected]:/etc/kubernetes/encryption-config.yaml
scp encryption-config.yaml [email protected]:/etc/kubernetes/encryption-config.yaml
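Once kube-apiserver is running (end of this section), encryption at rest can be verified by reading a Secret straight from etcd; a hedged check, assuming etcdctl v3 is available and etcd accepts the client certificate generated above:

#Create a test secret, then fetch it directly from etcd; the stored value
#should start with k8s:enc:aescbc:v1:key1 rather than plain text
kubectl create secret generic test-enc --from-literal=foo=bar
ETCDCTL_API=3 etcdctl \
  --endpoints=https://172.24.211.217:2379 \
  --cacert=/etc/kubernetes/cert/ca.pem \
  --cert=/etc/kubernetes/cert/kubernetes.pem \
  --key=/etc/kubernetes/cert/kubernetes-key.pem \
  get /registry/secrets/default/test-enc | hexdump -C | head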
           

Create the audit policy file and copy it to all master nodes. (This policy keeps Kubernetes from logging large volumes of unnecessary entries; the default behavior audits every request. See the official Kubernetes documentation for details.):

cat > audit-policy.yaml <<EOF
apiVersion: audit.k8s.io/v1beta1
kind: Policy
rules:
  # The following requests were manually identified as high-volume and low-risk, so drop them.
  - level: None
    resources:
      - group: ""
        resources:
          - endpoints
          - services
          - services/status
    users:
      - 'system:kube-proxy'
    verbs:
      - watch

  - level: None
    resources:
      - group: ""
        resources:
          - nodes
          - nodes/status
    userGroups:
      - 'system:nodes'
    verbs:
      - get

  - level: None
    namespaces:
      - kube-system
    resources:
      - group: ""
        resources:
          - endpoints
    users:
      - 'system:kube-controller-manager'
      - 'system:kube-scheduler'
      - 'system:serviceaccount:kube-system:endpoint-controller'
    verbs:
      - get
      - update

  - level: None
    resources:
      - group: ""
        resources:
          - namespaces
          - namespaces/status
          - namespaces/finalize
    users:
      - 'system:apiserver'
    verbs:
      - get

  # Don't log HPA fetching metrics.
  - level: None
    resources:
      - group: metrics.k8s.io
    users:
      - 'system:kube-controller-manager'
    verbs:
      - get
      - list

  # Don't log these read-only URLs.
  - level: None
    nonResourceURLs:
      - '/healthz*'
      - /version
      - '/swagger*'

  # Don't log events requests.
  - level: None
    resources:
      - group: ""
        resources:
          - events

  # node and pod status calls from nodes are high-volume and can be large, don't log responses for expected updates from nodes
  - level: Request
    omitStages:
      - RequestReceived
    resources:
      - group: ""
        resources:
          - nodes/status
          - pods/status
    users:
      - kubelet
      - 'system:node-problem-detector'
      - 'system:serviceaccount:kube-system:node-problem-detector'
    verbs:
      - update
      - patch

  - level: Request
    omitStages:
      - RequestReceived
    resources:
      - group: ""
        resources:
          - nodes/status
          - pods/status
    userGroups:
      - 'system:nodes'
    verbs:
      - update
      - patch

  # deletecollection calls can be large, don't log responses for expected namespace deletions
  - level: Request
    omitStages:
      - RequestReceived
    users:
      - 'system:serviceaccount:kube-system:namespace-controller'
    verbs:
      - deletecollection

  # Secrets, ConfigMaps, and TokenReviews can contain sensitive & binary data,
  # so only log at the Metadata level.
  - level: Metadata
    omitStages:
      - RequestReceived
    resources:
      - group: ""
        resources:
          - secrets
          - configmaps
      - group: authentication.k8s.io
        resources:
          - tokenreviews
  # Get responses can be large; skip them.
  - level: Request
    omitStages:
      - RequestReceived
    resources:
      - group: ""
      - group: admissionregistration.k8s.io
      - group: apiextensions.k8s.io
      - group: apiregistration.k8s.io
      - group: apps
      - group: authentication.k8s.io
      - group: authorization.k8s.io
      - group: autoscaling
      - group: batch
      - group: certificates.k8s.io
      - group: extensions
      - group: metrics.k8s.io
      - group: networking.k8s.io
      - group: policy
      - group: rbac.authorization.k8s.io
      - group: scheduling.k8s.io
      - group: settings.k8s.io
      - group: storage.k8s.io
    verbs:
      - get
      - list
      - watch

  # Default level for known APIs
  - level: RequestResponse
    omitStages:
      - RequestReceived
    resources:
      - group: ""
      - group: admissionregistration.k8s.io
      - group: apiextensions.k8s.io
      - group: apiregistration.k8s.io
      - group: apps
      - group: authentication.k8s.io
      - group: authorization.k8s.io
      - group: autoscaling
      - group: batch
      - group: certificates.k8s.io
      - group: extensions
      - group: metrics.k8s.io
      - group: networking.k8s.io
      - group: policy
      - group: rbac.authorization.k8s.io
      - group: scheduling.k8s.io
      - group: settings.k8s.io
      - group: storage.k8s.io
      
  # Default level for all other requests.
  - level: Metadata
    omitStages:
      - RequestReceived
EOF
           

Distribute the audit policy file to all master nodes:

scp audit-policy.yaml [email protected]:/etc/kubernetes/audit-policy.yaml
scp audit-policy.yaml [email protected]:/etc/kubernetes/audit-policy.yaml
           

Create the certificate that will later be used to access metrics-server:

#Note: the CN must match the --requestheader-allowed-names option in the kube-apiserver configuration below
#Certificate signing request file
cat > proxy-client-csr.json <<EOF
{
  "CN": "aggregator",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

#Generate the certificate
cfssl gencert -ca=/etc/kubernetes/cert/ca.pem -ca-key=/etc/kubernetes/cert/ca-key.pem  -config=/etc/kubernetes/cert/ca-config.json  -profile=kubernetes proxy-client-csr.json | cfssljson -bare proxy-client

#Distribute the certificate to the master nodes
scp proxy-client*.pem [email protected]:/etc/kubernetes/cert/
scp proxy-client*.pem [email protected]:/etc/kubernetes/cert/
           

Create the startup-arguments configuration file (remember to substitute your own IPs):

vim /etc/kubernetes/kube-apiserver.conf

#Start of configuration. The private IP (172.24.211.217 here) can be obtained with: ifconfig | grep -A 1 eth0 | grep inet | awk '{print $2}'
KUBE_API_ARGS="--advertise-address=172.24.211.217 \
  --default-not-ready-toleration-seconds=360 \
  --default-unreachable-toleration-seconds=360 \
  --feature-gates=DynamicAuditing=true \
  --max-mutating-requests-inflight=2000 \
  --max-requests-inflight=4000 \
  --default-watch-cache-size=200 \
  --delete-collection-workers=2 \
  --encryption-provider-config=/etc/kubernetes/encryption-config.yaml \
  --etcd-cafile=/etc/kubernetes/cert/ca.pem \
  --etcd-certfile=/etc/kubernetes/cert/kubernetes.pem \
  --etcd-keyfile=/etc/kubernetes/cert/kubernetes-key.pem \
  --etcd-servers=https://172.24.211.217:2379,https://172.24.211.218:2379,https://172.24.211.219:2379 \
  --bind-address=172.24.211.217 \
  --secure-port=6443 \
  --tls-cert-file=/etc/kubernetes/cert/kubernetes.pem \
  --tls-private-key-file=/etc/kubernetes/cert/kubernetes-key.pem \
  --insecure-port=8080 \
  --audit-log-maxage=15 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/data/k8s/kube-apiserver/audit.log \
  --audit-policy-file=/etc/kubernetes/audit-policy.yaml \
  --anonymous-auth=false \
  --client-ca-file=/etc/kubernetes/cert/ca.pem \
  --enable-bootstrap-token-auth \
  --requestheader-allowed-names="aggregator" \
  --requestheader-client-ca-file=/etc/kubernetes/cert/ca.pem \
  --requestheader-extra-headers-prefix="X-Remote-Extra-" \
  --requestheader-group-headers=X-Remote-Group \
  --requestheader-username-headers=X-Remote-User \
  --service-account-key-file=/etc/kubernetes/cert/ca.pem \
  --authorization-mode=Node,RBAC \
  --runtime-config=api/all=true \
  --enable-admission-plugins=NodeRestriction \
  --allow-privileged=true \
  --apiserver-count=3 \
  --event-ttl=168h \
  --kubelet-certificate-authority=/etc/kubernetes/cert/ca.pem \
  --kubelet-client-certificate=/etc/kubernetes/cert/kubernetes.pem \
  --kubelet-client-key=/etc/kubernetes/cert/kubernetes-key.pem \
  --kubelet-https=true \
  --kubelet-timeout=10s \
  --proxy-client-cert-file=/etc/kubernetes/cert/proxy-client.pem \
  --proxy-client-key-file=/etc/kubernetes/cert/proxy-client-key.pem \
  --service-cluster-ip-range=10.254.0.0/16 \
  --service-node-port-range=20000-32767 \
  --logtostderr=true \
  --v=2"
  
#Distribute to the other master node (change the private IP accordingly: 172.24.211.220)
scp /etc/kubernetes/kube-apiserver.conf [email protected]:/etc/kubernetes/kube-apiserver.conf
           

Create the kube-apiserver systemd unit file:

vim /usr/lib/systemd/system/kube-apiserver.service
#Start of configuration
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
EnvironmentFile=/etc/kubernetes/kube-apiserver.conf
ExecStart=/opt/k8s/bin/kube-apiserver $KUBE_API_ARGS
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

#Distribute to the other master
scp /usr/lib/systemd/system/kube-apiserver.service [email protected]:/usr/lib/systemd/system/kube-apiserver.service
           

At this point the kube-apiserver configuration is complete. Run the following command on each master node to start it:

systemctl daemon-reload && systemctl enable kube-apiserver && systemctl restart kube-apiserver
           

Check whether startup succeeded; once it is active on all master nodes, the kube-apiserver deployment is complete:

systemctl status kube-apiserver |grep 'Active:'
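Beyond systemctl, you can confirm that the API itself answers through the Nginx-proxied address (this assumes the admin kubeconfig from the previous articles is already in place for kubectl):

#Both calls go through https://127.0.0.1:8443
kubectl cluster-info
#scheduler and controller-manager will show Unhealthy until sections 2.2 and 2.3 are done
kubectl get componentstatuses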
           

Finally, when commands such as kubectl exec, run, or logs are executed, the apiserver forwards the request to the kubelet's HTTPS port. Define an RBAC rule here that grants the user of the certificate used by the apiserver (kubernetes.pem, CN: kubernetes) access to the kubelet API:

kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes
           

2.2 Deploying kube-controller-manager

The Kubernetes Controller Manager consists of kube-controller-manager and cloud-controller-manager. It is the brain of Kubernetes: it watches the state of the entire cluster through the apiserver and keeps the cluster in its desired working state.

cloud-controller-manager is only needed when Kubernetes runs with a Cloud Provider enabled.

This section covers only deploying and starting kube-controller-manager. Here kube-controller-manager runs on two nodes; after startup, a leader is chosen through a competitive election while the other instances block. When the leader becomes unavailable, the blocked instances hold a new election and produce a new leader, which keeps the service available.
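Once both instances are running, a quick way to see which one currently holds the lease (in v1.15 the lock is kept as an annotation on an Endpoints object in kube-system; holderIdentity names the leader):

kubectl -n kube-system get endpoints kube-controller-manager -o yaml | grep holderIdentity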

Create the certificate and private key for kube-controller-manager:

In the CSR file, both CN and O are system:kube-controller-manager; the built-in ClusterRoleBinding system:kube-controller-manager grants kube-controller-manager the permissions it needs to work.

cd /opt/k8s/work
cat > kube-controller-manager-csr.json <<EOF
{
    "CN": "system:kube-controller-manager",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "hosts": [
      "127.0.0.1",
      "172.24.211.220",
      "172.24.211.217"
    ],
    "names": [
      {
        "C": "CN",
        "ST": "BeiJing",
        "L": "BeiJing",
        "O": "system:kube-controller-manager",
        "OU": "System"
      }
    ]
}
EOF

#Generate the certificate
cfssl gencert -ca=/opt/k8s/work/ca.pem -ca-key=/opt/k8s/work/ca-key.pem -config=/opt/k8s/work/ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
           

Distribute the certificate to all master nodes:

scp kube-controller-manager*.pem [email protected]:/etc/kubernetes/cert/
scp kube-controller-manager*.pem [email protected]:/etc/kubernetes/cert/
           

kube-controller-manager accesses the apiserver through a kubeconfig file, so one must be created for it; the file supplies the apiserver address, the embedded CA certificate, and the kube-controller-manager certificate. Run the following four commands in order:

kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/k8s/work/ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:8443 \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-credentials system:kube-controller-manager \
  --client-certificate=kube-controller-manager.pem \
  --client-key=kube-controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-context system:kube-controller-manager \
  --cluster=kubernetes \
  --user=system:kube-controller-manager \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
           

Distribute kube-controller-manager.kubeconfig to all master nodes:

scp kube-controller-manager.kubeconfig [email protected]:/etc/kubernetes/
scp kube-controller-manager.kubeconfig [email protected]:/etc/kubernetes/
           

Create the systemd unit file kube-controller-manager.service:

cat > kube-controller-manager.service <<EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
WorkingDirectory=/data/k8s/kube-controller-manager
ExecStart=/opt/k8s/bin/kube-controller-manager \\
  --profiling \\
  --cluster-name=kubernetes \\
  --controllers=*,bootstrapsigner,tokencleaner \\
  --kube-api-qps=1000 \\
  --kube-api-burst=2000 \\
  --leader-elect \\
  --use-service-account-credentials \\
  --concurrent-service-syncs=2 \\
  --bind-address=127.0.0.1 \\
  --secure-port=10252 \\
  --tls-cert-file=/etc/kubernetes/cert/kube-controller-manager.pem \\
  --tls-private-key-file=/etc/kubernetes/cert/kube-controller-manager-key.pem \\
  --port=0 \\
  --authentication-kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \\
  --client-ca-file=/etc/kubernetes/cert/ca.pem \\
  --requestheader-allowed-names="" \\
  --requestheader-client-ca-file=/etc/kubernetes/cert/ca.pem \\
  --requestheader-extra-headers-prefix="X-Remote-Extra-" \\
  --requestheader-group-headers=X-Remote-Group \\
  --requestheader-username-headers=X-Remote-User \\
  --authorization-kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \\
  --cluster-signing-cert-file=/etc/kubernetes/cert/ca.pem \\
  --cluster-signing-key-file=/etc/kubernetes/cert/ca-key.pem \\
  --experimental-cluster-signing-duration=876000h \\
  --horizontal-pod-autoscaler-sync-period=10s \\
  --concurrent-deployment-syncs=10 \\
  --concurrent-gc-syncs=30 \\
  --node-cidr-mask-size=24 \\
  --service-cluster-ip-range=10.254.0.0/16 \\
  --pod-eviction-timeout=6m \\
  --terminated-pod-gc-threshold=10000 \\
  --root-ca-file=/etc/kubernetes/cert/ca.pem \\
  --service-account-private-key-file=/etc/kubernetes/cert/ca-key.pem \\
  --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \\
  --logtostderr=true \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
           

Distribute kube-controller-manager.service to each master node:

scp kube-controller-manager.service [email protected]:/usr/lib/systemd/system/kube-controller-manager.service
scp kube-controller-manager.service [email protected]:/usr/lib/systemd/system/kube-controller-manager.service
           

On each master node, create the required working directory and then run the start commands:

mkdir -p /data/k8s/kube-controller-manager
systemctl daemon-reload && systemctl enable kube-controller-manager && systemctl restart kube-controller-manager
           

Check that it is running:

systemctl status kube-controller-manager|grep Active
           

2.3 Deploying kube-scheduler

Generate the kube-scheduler certificate and private key:

#CSR configuration
cat > kube-scheduler-csr.json <<EOF
{
    "CN": "system:kube-scheduler",
    "hosts": [
      "127.0.0.1",
      "172.24.211.217",
      "172.24.211.220"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "ST": "BeiJing",
        "L": "BeiJing",
        "O": "system:kube-scheduler",
        "OU": "System"
      }
    ]
}
EOF

#Generate the certificate and private key
cfssl gencert -ca=/opt/k8s/work/ca.pem -ca-key=/opt/k8s/work/ca-key.pem -config=/opt/k8s/work/ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler

#Distribute
scp kube-scheduler*.pem [email protected]:/etc/kubernetes/cert/
scp kube-scheduler*.pem [email protected]:/etc/kubernetes/cert/
           

kube-scheduler accesses the apiserver through a kubeconfig file, so one must be created for it; the file supplies the apiserver address, the embedded CA certificate, and the kube-scheduler certificate. Run the following four commands in order:

kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/k8s/work/ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:8443 \
  --kubeconfig=kube-scheduler.kubeconfig 
  
kubectl config set-credentials system:kube-scheduler \
  --client-certificate=kube-scheduler.pem \
  --client-key=kube-scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-scheduler.kubeconfig
  
kubectl config set-context system:kube-scheduler \
  --cluster=kubernetes \
  --user=system:kube-scheduler \
  --kubeconfig=kube-scheduler.kubeconfig

kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
           

Distribute kube-scheduler.kubeconfig to all master nodes:

scp kube-scheduler.kubeconfig [email protected]:/etc/kubernetes/
scp kube-scheduler.kubeconfig [email protected]:/etc/kubernetes/
           

Create the kube-scheduler configuration template (##NODE_IP## will be replaced with each master node's IP via sed):

cat >kube-scheduler.yaml.template <<EOF
apiVersion: kubescheduler.config.k8s.io/v1alpha1
kind: KubeSchedulerConfiguration
bindTimeoutSeconds: 600
clientConnection:
  burst: 200
  kubeconfig: "/etc/kubernetes/kube-scheduler.kubeconfig"
  qps: 100
enableContentionProfiling: false
enableProfiling: true
hardPodAffinitySymmetricWeight: 1
healthzBindAddress: ##NODE_IP##:10251
leaderElection:
  leaderElect: true
metricsBindAddress: ##NODE_IP##:10251
EOF
           

Generate the configuration files from the template and distribute them to the corresponding master nodes:

sed  -e "s/##NODE_IP##/172.24.211.217/" kube-scheduler.yaml.template > kube-scheduler-master0.yaml
scp kube-scheduler-master0.yaml [email protected]:/etc/kubernetes/kube-scheduler.yaml
sed  -e "s/##NODE_IP##/172.24.211.220/" kube-scheduler.yaml.template > kube-scheduler-master1.yaml
scp kube-scheduler-master1.yaml [email protected]:/etc/kubernetes/kube-scheduler.yaml
           

Create the systemd unit file kube-scheduler.service:

cat > kube-scheduler.service <<EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
WorkingDirectory=/data/k8s/kube-scheduler
EnvironmentFile=/etc/kubernetes/kube-scheduler.conf
ExecStart=/opt/k8s/bin/kube-scheduler $KUBE_SCHEDULER_ARGS
Restart=always
RestartSec=5
StartLimitInterval=0

[Install]
WantedBy=multi-user.target
EOF
           

Distribute kube-scheduler.service to each master node:

scp kube-scheduler.service [email protected]:/usr/lib/systemd/system/kube-scheduler.service
scp kube-scheduler.service [email protected]:/usr/lib/systemd/system/kube-scheduler.service
           

Add the configuration file kube-scheduler.conf, which sets the $KUBE_SCHEDULER_ARGS runtime parameters used above:

#First create a template, then substitute each node's IP with sed
vim kube-scheduler.conf.template
KUBE_SCHEDULER_ARGS="--config=/etc/kubernetes/kube-scheduler.yaml \
  --bind-address=##NODE_IP## \
  --secure-port=10259 \
  --port=0 \
  --tls-cert-file=/etc/kubernetes/cert/kube-scheduler.pem \
  --tls-private-key-file=/etc/kubernetes/cert/kube-scheduler-key.pem \
  --authentication-kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \
  --client-ca-file=/etc/kubernetes/cert/ca.pem \
  --requestheader-allowed-names="" \
  --requestheader-client-ca-file=/etc/kubernetes/cert/ca.pem \
  --requestheader-extra-headers-prefix="X-Remote-Extra-" \
  --requestheader-group-headers=X-Remote-Group \
  --requestheader-username-headers=X-Remote-User \
  --authorization-kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \
  --logtostderr=true \
  --v=2"
  
#Generate the final .conf files and distribute them to the corresponding master nodes
sed -e "s/##NODE_IP##/172.24.211.217/" kube-scheduler.conf.template > kube-scheduler-master0.conf
sed -e "s/##NODE_IP##/172.24.211.220/" kube-scheduler.conf.template > kube-scheduler-master1.conf
scp kube-scheduler-master0.conf [email protected]:/etc/kubernetes/kube-scheduler.conf
scp kube-scheduler-master1.conf [email protected]:/etc/kubernetes/kube-scheduler.conf
           

On each master node, create the required working directory and then run the start commands:

mkdir -p /data/k8s/kube-scheduler
systemctl daemon-reload && systemctl enable kube-scheduler && systemctl restart kube-scheduler
           

Check that it is running:

systemctl status kube-scheduler|grep Active
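With the healthzBindAddress and metricsBindAddress configured above, each instance should also answer over plain HTTP on its node IP; a quick sketch against the first master (substitute each node's IP):

#healthz should return ok; metrics returns Prometheus-format output
curl -s http://172.24.211.217:10251/healthz
curl -s http://172.24.211.217:10251/metrics | head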
           

3. Worker node deployment

Copy the files:

scp kubernetes/server/bin/{apiextensions-apiserver,cloud-controller-manager,kube-apiserver,kube-controller-manager,kube-proxy,kube-scheduler,kubeadm,kubectl,kubelet,mounter} [email protected]:/opt/k8s/bin/
ssh [email protected] "chmod +x /opt/k8s/bin/*"

scp kubernetes/server/bin/{apiextensions-apiserver,cloud-controller-manager,kube-apiserver,kube-controller-manager,kube-proxy,kube-scheduler,kubeadm,kubectl,kubelet,mounter} [email protected]:/opt/k8s/bin/
ssh [email protected] "chmod +x /opt/k8s/bin/*"
           

The components that still need to be deployed on the worker nodes are kubelet and kube-proxy.

(All of the following is performed on the k8s-master node.)

3.1 Deploying kubelet

1) Configure the kubeconfig (node-1 as the example)

Prepare a BOOTSTRAP_TOKEN:

#Generate one for each worker node; reference command:
export BOOTSTRAP_TOKEN=$(kubeadm token create \
      --description kubelet-bootstrap-token \
      --groups system:bootstrappers:k8s-node-1 \
      --kubeconfig ~/.kube/config)
 
#Verify; the token looks something like: qpfh6q.gmbvj5c50omhl1oc
echo ${BOOTSTRAP_TOKEN}
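Tokens created this way are stored as Secrets in kube-system and can be listed at any time; a quick check (not required for the deployment):

#Lists bootstrap tokens together with their expiry and the groups set above
kubeadm token list --kubeconfig ~/.kube/config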
           

Create the kubelet bootstrap kubeconfig file:

#Set cluster parameters
kubectl config set-cluster kubernetes \
      --certificate-authority=/etc/kubernetes/cert/ca.pem \
      --embed-certs=true \
      --server=https://127.0.0.1:8443 \
      --kubeconfig=kubelet-bootstrap-node-1.kubeconfig
      
# Set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=kubelet-bootstrap-node-1.kubeconfig

# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=kubelet-bootstrap-node-1.kubeconfig

# Set the default context
kubectl config use-context default --kubeconfig=kubelet-bootstrap-node-1.kubeconfig

#Distribute to node-1
scp kubelet-bootstrap-node-1.kubeconfig [email protected]:/etc/kubernetes/kubelet-bootstrap.kubeconfig
           

Following the same steps, every other node also needs its kubeconfig configured; the sketch below automates this.
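Since the per-node steps differ only in the node name, they can be scripted; a sketch under the same paths as above (the loop variable and file naming are illustrative):

#Generate a bootstrap kubeconfig for every worker node
for node in k8s-node-1 k8s-node-2; do
  token=$(kubeadm token create \
    --description kubelet-bootstrap-token \
    --groups system:bootstrappers:${node} \
    --kubeconfig ~/.kube/config)
  kubectl config set-cluster kubernetes \
    --certificate-authority=/etc/kubernetes/cert/ca.pem \
    --embed-certs=true \
    --server=https://127.0.0.1:8443 \
    --kubeconfig=kubelet-bootstrap-${node}.kubeconfig
  kubectl config set-credentials kubelet-bootstrap \
    --token=${token} \
    --kubeconfig=kubelet-bootstrap-${node}.kubeconfig
  kubectl config set-context default \
    --cluster=kubernetes \
    --user=kubelet-bootstrap \
    --kubeconfig=kubelet-bootstrap-${node}.kubeconfig
  kubectl config use-context default --kubeconfig=kubelet-bootstrap-${node}.kubeconfig
done
#Afterwards, scp each file to its node as /etc/kubernetes/kubelet-bootstrap.kubeconfig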

2) Create and distribute the kubelet configuration file:

#podCIDR must match the network segment configured for flannel; clusterDomain must not end in "."; clusterDNS is an IP pre-allocated from 10.254.0.0/16
cat > kubelet-config.yaml.template <<EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: "##NODE_IP##"
staticPodPath: ""
syncFrequency: 1m
fileCheckFrequency: 20s
httpCheckFrequency: 20s
staticPodURL: ""
port: 10250
readOnlyPort: 0
rotateCertificates: true
serverTLSBootstrap: true
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/etc/kubernetes/cert/ca.pem"
authorization:
  mode: Webhook
registryPullQPS: 0
registryBurst: 20
eventRecordQPS: 0
eventBurst: 20
enableDebuggingHandlers: true
enableContentionProfiling: true
healthzPort: 10248
healthzBindAddress: "##NODE_IP##"
clusterDomain: "cluster.local"
clusterDNS:
  - "10.254.0.2"
nodeStatusUpdateFrequency: 10s
nodeStatusReportFrequency: 1m
imageMinimumGCAge: 2m
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
volumeStatsAggPeriod: 1m
kubeletCgroups: ""
systemCgroups: ""
cgroupRoot: ""
cgroupsPerQOS: true
cgroupDriver: cgroupfs
runtimeRequestTimeout: 10m
hairpinMode: promiscuous-bridge
maxPods: 220
podCIDR: "172.30.0.0/16"
podPidsLimit: -1
resolvConf: /etc/resolv.conf
maxOpenFiles: 1000000
kubeAPIQPS: 1000
kubeAPIBurst: 2000
serializeImagePulls: false
evictionHard:
  memory.available:  "100Mi"
  nodefs.available:  "10%"
  nodefs.inodesFree: "5%"
  imagefs.available: "15%"
evictionSoft: {}
enableControllerAttachDetach: true
failSwapOn: true
containerLogMaxSize: 20Mi
containerLogMaxFiles: 10
systemReserved: {}
kubeReserved: {}
systemReservedCgroup: ""
kubeReservedCgroup: ""
enforceNodeAllocatable: ["pods"]
EOF
           

Substitute with sed and distribute:

sed -e "s/##NODE_IP##/172.24.211.218/" kubelet-config.yaml.template > kubelet-config-node-1.yaml.template
sed -e "s/##NODE_IP##/172.24.211.219/" kubelet-config.yaml.template > kubelet-config-node-2.yaml.template

scp kubelet-config-node-1.yaml.template [email protected]:/etc/kubernetes/kubelet-config.yaml
scp kubelet-config-node-2.yaml.template [email protected]:/etc/kubernetes/kubelet-config.yaml
           

3) Create the systemd unit file:

cat > kubelet.service.template <<EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/data/k8s/kubelet
ExecStart=/opt/k8s/bin/kubelet \\
  --allow-privileged=true \\
  --bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \\
  --cert-dir=/etc/kubernetes/cert \\
  --cni-conf-dir=/etc/cni/net.d \\
  --container-runtime=docker \\
  --container-runtime-endpoint=unix:///var/run/dockershim.sock \\
  --root-dir=/data/k8s/kubelet \\
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
  --config=/etc/kubernetes/kubelet-config.yaml \\
  --hostname-override=##NODE_NAME## \\
  --pod-infra-container-image=registry.cn-beijing.aliyuncs.com/k8s_images/pause-amd64:3.1 \\
  --image-pull-progress-deadline=15m \\
  --volume-plugin-dir=/data/k8s/kubelet/kubelet-plugins/volume/exec/ \\
  --logtostderr=true \\
  --v=2
Restart=always
RestartSec=5
StartLimitInterval=0

[Install]
WantedBy=multi-user.target
EOF
           

Substitute with sed and distribute:

sed -e "s/##NODE_NAME##/k8s-node-1/" kubelet.service.template > kubelet-node-1.service
sed -e "s/##NODE_NAME##/k8s-node-2/" kubelet.service.template > kubelet-node-2.service

scp kubelet-node-1.service [email protected]:/etc/systemd/system/kubelet.service
scp kubelet-node-2.service [email protected]:/etc/systemd/system/kubelet.service
           

4) Resolving the Bootstrap Token Auth problem:

When kubelet starts, it checks whether the file given by --kubeconfig exists. If it does not, kubelet uses the kubeconfig specified by --bootstrap-kubeconfig to send a certificate signing request (CSR) to kube-apiserver. When kube-apiserver receives the CSR, it validates the embedded token; on success, it sets the request's user to system:bootstrap:<token-id> and its group to system:bootstrappers. This process is called Bootstrap Token Auth. By default, this user and group have no permission to create CSRs, so kubelet would fail to start. Running the following command on any node with kubectl installed creates a clusterrolebinding that resolves the problem:

kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --group=system:bootstrappers
           

5) Create the directories that will be used, then start:

mkdir -p /data/k8s/kubelet/kubelet-plugins/volume/exec/
systemctl daemon-reload && systemctl enable kubelet && systemctl restart kubelet
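After kubelet starts it submits a CSR, which must be approved before the node can join; and because serverTLSBootstrap is enabled in kubelet-config.yaml, the kubelet's serving-certificate CSRs in particular always need manual approval. A hedged flow, run on a master:

#List pending CSRs, then approve them individually...
kubectl get csr
kubectl certificate approve <csr-name>
#...or approve everything pending in one shot (convenient but indiscriminate)
kubectl get csr -o name | xargs -r kubectl certificate approve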
           

3.2 Deploying kube-proxy

Note: every node needs the ipvsadm and ipset commands installed and the ip_vs kernel modules loaded; a preparation sketch follows.
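A hedged sketch for preparing this on each node (package installation assumes a yum-based system; on newer kernels nf_conntrack_ipv4 is simply nf_conntrack):

yum install -y ipvsadm ipset
modprobe -a ip_vs ip_vs_rr nf_conntrack_ipv4
#Verify the modules are present
lsmod | grep ip_vs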

1) Create the kube-proxy certificate:

cd /opt/k8s/work

cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=/opt/k8s/work/ca.pem -ca-key=/opt/k8s/work/ca-key.pem -config=/opt/k8s/work/ca-config.json -profile=kubernetes  kube-proxy-csr.json | cfssljson -bare kube-proxy
           

2) Create and distribute the kubeconfig file:

kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/k8s/work/ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:8443 \
  --kubeconfig=kube-proxy.kubeconfig
  
kubectl config set-credentials kube-proxy \
  --client-certificate=kube-proxy.pem \
  --client-key=kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
  
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

#Distribute
scp kube-proxy.kubeconfig [email protected]:/etc/kubernetes/
scp kube-proxy.kubeconfig [email protected]:/etc/kubernetes/
           

3) Create the kube-proxy configuration file:

Since v1.10, some kube-proxy parameters can be set in a configuration file.

cat > kube-proxy-config.yaml.template <<EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  burst: 200
  kubeconfig: "/etc/kubernetes/kube-proxy.kubeconfig"
  qps: 100
bindAddress: ##NODE_IP##
healthzBindAddress: ##NODE_IP##:10256
metricsBindAddress: ##NODE_IP##:10249
enableProfiling: true
clusterCIDR: 172.30.0.0/16
hostnameOverride: ##NODE_NAME##
mode: "ipvs"
portRange: ""
kubeProxyIPTablesConfiguration:
  masqueradeAll: false
kubeProxyIPVSConfiguration:
  scheduler: rr
  excludeCIDRs: []
EOF
           

Substitute with sed and distribute:

sed -e "s/##NODE_NAME##/k8s-node-1/" -e "s/##NODE_IP##/172.24.211.218/" kube-proxy-config.yaml.template > kube-proxy-config-node-1.yaml.template
sed -e "s/##NODE_NAME##/k8s-node-2/" -e "s/##NODE_IP##/172.24.211.219/" kube-proxy-config.yaml.template > kube-proxy-config-node-2.yaml.template

scp kube-proxy-config-node-1.yaml.template [email protected]:/etc/kubernetes/kube-proxy-config.yaml
scp kube-proxy-config-node-2.yaml.template [email protected]:/etc/kubernetes/kube-proxy-config.yaml
           

4) Create the systemd unit file:

cat > kube-proxy.service <<EOF
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=/data/k8s/kube-proxy
ExecStart=/opt/k8s/bin/kube-proxy \\
  --config=/etc/kubernetes/kube-proxy-config.yaml \\
  --logtostderr=true \\
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

#Distribute
scp kube-proxy.service [email protected]:/etc/systemd/system/
scp kube-proxy.service [email protected]:/etc/systemd/system/
           

5) Create the directory, then start the kube-proxy service:

mkdir -p /data/k8s/kube-proxy
modprobe ip_vs_rr
systemctl daemon-reload && systemctl enable kube-proxy && systemctl restart kube-proxy
systemctl status kube-proxy|grep Active
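Once kube-proxy runs in ipvs mode, the service rules become visible to ipvsadm; on any worker node, expect a virtual server for 10.254.0.1:443 (the kubernetes service) using the rr scheduler configured above:

ipvsadm -Ln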
           

