
k8s Notes 1 -- Quickly deploying a k8s v1.15.4 cluster with kubeadm


  • 1 k8s basics
  • 2 Environment preparation and kubeadm
  • 3 Deploying the master and joining nodes to the cluster
  • 4 Deploying the k8s UI
  • 5 Notes
  • 5.1 coredns errors
  • 5.2 Installation steps for kubeadm 1.19.0
  • 5.3 Installing metrics-server
  • 6 Remarks

This article is a deployment log from when I was first learning k8s. I will keep refining and improving the kubeadm deployment content on top of it. It is posted here for easy reference, and may also be useful to anyone who needs it.

1 k8s basics

  1. k8s architecture
  • Control and management

    K8s can be controlled and managed in three ways: kubectl, the UI, and the API.

  • Master

    The master mainly consists of the scheduler, apiserver, and controller-manager components.

  • Nodes

    Each node runs three components: kubelet, kube-proxy, and the Docker Engine.

  • Etcd Cluster

    The etcd cluster stores all of the cluster's network configuration and object state information.

  2. k8s deployment methods
  • kubeadm

    Recommended for beginners; kubeadm is the method used for the deployment below.

  • Binary

    The most widely used approach in production; recommended for experienced operators, since it makes later troubleshooting easier.

  • minikube

    Mostly used for testing.

  • yum

    Rarely used.

2 Environment preparation and kubeadm

1 master (192.168.2.132) and 2 worker nodes (192.168.2.133-134); non-master nodes need at least 1 CPU / 2 GB RAM, and the master needs at least 2 CPUs / 2 GB RAM.

1) Disable swap

swapoff -a disables swap temporarily (until the next reboot).

For a permanent change it is better to comment out the swap entry in /etc/fstab directly, as sketched below.
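A minimal sketch of both steps (the sed expression is an assumption about how a standard fstab swap line looks; review /etc/fstab by hand if unsure):

```bash
# disable swap for the running system (does not survive a reboot)
swapoff -a
# comment out any active swap entries so the change persists across reboots
sed -ri 's@^([^#].*\sswap\s.*)$@# \1@' /etc/fstab
```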

2) Set the hostname and hosts

Set the following on the master; the nodes are configured the same way (a shell sketch follows the file contents below).

/etc/hostname

k8s01

/etc/hosts

127.0.1.1 k8s01

192.168.2.132 k8s01
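The same settings can also be applied from the shell; a sketch for this 3-node layout (the k8s02/k8s03 names and the 192.168.2.133-134 addresses follow the environment described above):

```bash
# on the master (run the matching command with k8s02 / k8s03 on the nodes)
hostnamectl set-hostname k8s01
# make every machine resolvable from every other machine
cat >> /etc/hosts <<'EOF'
192.168.2.132 k8s01
192.168.2.133 k8s02
192.168.2.134 k8s03
EOF
```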

3) Configure the Tsinghua mirror and install the base packages

Write the Ubuntu 16.04 (xenial) mirror entries into /etc/apt/sources.list (a sample is sketched below), then run apt-get update.
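As an illustration, the TUNA (Tsinghua) entries for xenial typically look like the following; this is an assumption based on the mirror's standard layout, so verify it against the mirror's help page before use:

```bash
cp /etc/apt/sources.list /etc/apt/sources.list.bak   # keep a backup of the original list
cat > /etc/apt/sources.list <<'EOF'
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ xenial main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ xenial-updates main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ xenial-backports main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ xenial-security main restricted universe multiverse
EOF
apt-get update
```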

apt-get -y install apt-transport-https ca-certificates curl software-properties-common

4) Install Docker

Step 1: install the Docker GPG key
curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | apt-key add -
Step 2: add the Docker apt repository
add-apt-repository "deb [arch=amd64] http://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
Step 3: update and install Docker CE
apt-get -y update
List the available Docker CE versions, then install a specific one:
apt-cache madison docker-ce
sudo apt-get -y install docker-ce=[VERSION]   # installation format
apt-get -y install docker-ce=18.06.3~ce~3-0~ubuntu
Step 4: configure registry mirrors for Docker Hub
tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors":["https://docker.mirrors.ustc.edu.cn","http://hub-mirror.c.163.com"]
}
EOF
systemctl daemon-reload && systemctl restart docker
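The guide keeps Docker's default cgroupfs driver, which kubeadm only warns about in the init output below. Switching to the recommended systemd driver is a common optional step; a sketch that merges it into the same daemon.json (this assumes no other daemon.json settings are in use):

```bash
tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://docker.mirrors.ustc.edu.cn", "http://hub-mirror.c.163.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl daemon-reload && systemctl restart docker
docker info | grep -i cgroup   # should now report the systemd cgroup driver
```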
5) Install kubeadm
Add the kubeadm apt repository:
```bash
add-apt-repository "deb [arch=amd64] https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main"
apt-get update
# list the available kubelet/kubectl/kubeadm versions
apt-cache madison kubelet kubectl kubeadm |grep '1.15.4-00'
apt install -y kubelet=1.15.4-00 kubectl=1.15.4-00 kubeadm=1.15.4-00
```
This installs a pinned version. Since this was my first installation, I followed the version other users reported as working instead of the latest release, to avoid compatibility issues.
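To keep routine apt upgrades from moving the cluster to a different version unexpectedly, the packages can also be pinned (the 1.19.0 notes in section 5.2 use the same apt-mark hold idea):

```bash
apt-mark hold kubelet kubeadm kubectl
# to allow upgrades again later:
# apt-mark unhold kubelet kubeadm kubectl
```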

3 Deploying the master and joining nodes to the cluster

  1. Initialize the master

    Note: the master must have at least 2 CPU cores, otherwise kubeadm reports an error.

kubeadm init \
  --apiserver-advertise-address=192.168.2.132 \
  --kubernetes-version=v1.15.4 \
  --image-repository registry.aliyuncs.com/google_containers \
  --pod-network-cidr=10.24.0.0/16 \
  --ignore-preflight-errors=Swap      
Since swap stays enabled on this machine, the kubelet is first told to tolerate it, and then init is run:
root@k8s01:/home/xg# tee /etc/default/kubelet <<-'EOF'
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
EOF
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
root@k8s01:/home/xg# systemctl daemon-reload && systemctl restart kubelet
root@k8s01:/home/xg# kubeadm init   --apiserver-advertise-address=192.168.2.132   --kubernetes-version=v1.15.4   --image-repository registry.aliyuncs.com/google_containers   --pod-network-cidr=10.24.0.0/16   --ignore-preflight-errors=Swap
[init] Using Kubernetes version: v1.15.4
[preflight] Running pre-flight checks
  [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'

[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.2.132]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s01 localhost] and IPs [192.168.2.132 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s01 localhost] and IPs [192.168.2.132 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 21.014975 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s01 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: omno4a.rgnhd0lfkoxm0yns
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.2.132:6443 --token omno4a.rgnhd0lfkoxm0yns \
    --discovery-token-ca-cert-hash sha256:783aac372134879f6f5daf1439c21ffe1cd651a43c9a98e00da6b89be0702276 
root@k8s01:/home/xg#      

Set up kubectl access for a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

At this point the master is still coming up; because the pod network is not installed yet, the node shows as NotReady:

# kubectl get nodes      

Install the pod network:

# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# kubectl apply -f kube-flannel.yml      

After a short while the master node switches to Ready and a series of system pods start up.


On the master, check which flannel image versions the manifest references, then pull the matching image on each node:

# grep -i image kube-flannel.yml
        image: quay.io/coreos/flannel:v0.12.0-amd64
        image: quay.io/coreos/flannel:v0.12.0-amd64
        image: quay.io/coreos/flannel:v0.12.0-arm64
        image: quay.io/coreos/flannel:v0.12.0-arm64
        image: quay.io/coreos/flannel:v0.12.0-arm
        image: quay.io/coreos/flannel:v0.12.0-arm
        image: quay.io/coreos/flannel:v0.12.0-ppc64le
        image: quay.io/coreos/flannel:v0.12.0-ppc64le
        image: quay.io/coreos/flannel:v0.12.0-s390x
        image: quay.io/coreos/flannel:v0.12.0-s390x
# docker pull quay.io/coreos/flannel:v0.12.0-amd64      

2) Join the nodes to the cluster

Run the join command on each node:

# kubeadm join 192.168.2.132:6443 --token omno4a.rgnhd0lfkoxm0yns \
>     --discovery-token-ca-cert-hash sha256:783aac372134879f6f5daf1439c21ffe1cd651a43c9a98e00da6b89be0702276
[preflight] Running pre-flight checks
  [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Running kubectl get nodes on the master shows that all the nodes have joined successfully:
# kubectl get nodes
NAME    STATUS   ROLES    AGE    VERSION
k8s01   Ready    master   46m    v1.15.4
k8s02   Ready    <none>   107s   v1.15.4
k8s03   Ready    <none>      

4 Deploying the k8s UI

Run the following on the master:

wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta4/aio/deploy/recommended.yaml

Change 1:
In the kubernetes-dashboard Service spec, add type: NodePort so the dashboard can be reached from outside the cluster (an equivalent kubectl patch is sketched after the apply step below).
Change 2:
Pin both dashboard pods to the master node:
# cat recommended.yaml |grep -C 2 k8s01
        k8s-app: kubernetes-dashboard
    spec:
      nodeName: k8s01 # set this to the master
      containers:
        - name: kubernetes-dashboard
--
        k8s-app: dashboard-metrics-scraper
    spec:
      nodeName: k8s01 # set this to the master

kubectl apply -f recommended.yaml
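If editing the manifest by hand is inconvenient, Change 1 can also be applied after the fact with a patch; a sketch (the service and namespace names are those used by the v2.0.0-beta4 manifest above):

```bash
# switch the dashboard Service to NodePort and print the allocated port
kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'
kubectl -n kubernetes-dashboard get svc kubernetes-dashboard
```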

Checking the pods at this point shows that all the services have started normally.


Create an nginx instance:

kubectl create deployment nginx --image=nginx

kubectl expose deployment nginx --port=80 --type=NodePort

kubectl get pod,svc


The nginx service can now be reached through any node's IP on the assigned NodePort; a quick check follows.
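A command-line sketch of that check, assuming 192.168.2.133 is one of the worker nodes from the environment above:

```bash
# look up the NodePort assigned to the nginx Service
NODE_PORT=$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')
# fetch the nginx welcome page through a node IP
curl -I http://192.168.2.133:${NODE_PORT}
```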


Dashboard UI issues:

When I first deployed the dashboard it kept failing with errors,

so I deleted the kubernetes-dashboard namespace, recreated it, and restarted the pods:

kubectl delete namespace kubernetes-dashboard

kubectl apply -f recommended.yaml


Add a role, generate the corresponding client certificate, and import it into the browser:

# generate client-certificate-data
grep 'client-certificate-data' ~/.kube/config | head -n 1 | awk '{print $2}' | base64 -d >> kubecfg.crt
# generate client-key-data
grep 'client-key-data' ~/.kube/config | head -n 1 | awk '{print $2}' | base64 -d >> kubecfg.key
# generate the p12 bundle
openssl pkcs12 -export -clcerts -inkey kubecfg.key -in kubecfg.crt -out kubecfg.p12 -name "kubernetes-client"
... certificate and secret details omitted here
Type:  kubernetes.io/service-account-token

Data
====      
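The dashboard can also be logged into with a service-account token instead of the browser certificate; the truncated kubernetes.io/service-account-token output above appears to come from describing such a secret. The same commands are used in section 5.2:

```bash
# create an admin service account and bind it to the cluster-admin role
kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
# print the secret that contains the login token
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
```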

Configure the certificate in the browser:

Go to Privacy and security -> Your certificates -> Import and import the kubecfg.p12 certificate generated above.

k8s筆記1--使用kubeadm快速部署一套k8s-v1.15.4

It finally starts up normally at:

https://192.168.2.132:6443/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy


5 Notes

5.1 coredns errors

The second time around, when installing version 1.19, I found that coredns did not come up; the specific errors were:

0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn’t tolerate.

Readiness probe failed: HTTP probe failed with statuscode: 503

Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container “1e82d567d26941b05c94d29dedd2fc358838034de91b73706e6fc8b21efaaa9b” network for pod “coredns-6d56c8448f-lkdxd”: networkPlugin cni failed to set up pod “coredns-6d56c8448f-lkdxd_kube-system” network: open /run/flannel/subnet.env: no such file or directory

Solution:

kubectl get pod --all-namespaces -o wide showed that coredns was not running on the master,

so I reset all 3 nodes, re-initialized the master, set up the network again, and then re-joined the nodes to the cluster (a condensed sketch follows the note below).

kubeadm reset

Note: for a first-time setup it is recommended to flush the firewall rules with iptables -F, otherwise coredns may fail to start.
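A condensed sketch of that recovery sequence (the init flags follow the 1.19.0 command in section 5.2; the join token and hash are whatever the new kubeadm init prints):

```bash
# on all three nodes: tear down the old state and flush firewall rules
kubeadm reset -f
iptables -F
# on the master: re-initialize, bring up flannel first, then join the workers
kubeadm init --apiserver-advertise-address=192.168.2.132 --kubernetes-version=v1.19.0 \
  --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.0.0.0/16 \
  --ignore-preflight-errors=Swap
kubectl apply -f kube-flannel.yml
# on each worker node:
# kubeadm join 192.168.2.132:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
```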

5.2 Installation steps for kubeadm 1.19.0

The following is my record of installing version 1.19.0; the overall approach is the same as for 1.15.4 above, with only a few differences.

apt install -y kubelet kubectl kubeadm --allow-unauthenticated
apt-mark hold kubelet kubeadm kubectl docker

Pull the flannel image on all 3 nodes:
docker pull quay.io/coreos/flannel:v0.12.0-amd64

docker save -o flannel-v0.12.0-amd64.tar.gz quay.io/coreos/flannel:v0.12.0-amd64
docker load -i flannel-v0.12.0-amd64.tar.gz
# the etcd image (registry.aliyuncs.com/google_containers/etcd:3.4.9-1) is exported and imported the same way:
docker save -o etcd-3.4.9-1.tar.gz registry.aliyuncs.com/google_containers/etcd:3.4.9-1
docker load -i etcd-3.4.9-1.tar.gz

kubeadm init \
  --apiserver-advertise-address=192.168.2.132 \
  --kubernetes-version=v1.19.0 \
  --image-repository registry.aliyuncs.com/google_containers \
  --pod-network-cidr=10.0.0.0/16 \
  --ignore-preflight-errors=Swap

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

On the master machine k8s01:
kubectl apply -f kube-flannel.yml

Let the nodes join the cluster only after flannel is up; otherwise coredns may end up scheduled off the master and the network misbehaves:
Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.2.132:6443 --token t04ywd.m6hau0x92qhmqn9e \
    --discovery-token-ca-cert-hash sha256:88c94e64151a236d2cd3282da36f6b59fbb1ca90836be947fa3e5947f07b6ced

kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')

ca.crt:     1066 bytes
namespace:  11 bytes
token:         eyJhbGciOiJSUzI1NiIsImtpZCI6IlJ0MWRMdVlMYmtjMHYzb3hROVcxS3R1dk00VXdZeVpLSTYyUGN5RFRtVTgifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tY205ZjQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNjQyNGFlOGYtM2EzMi00OWFmLTljYzktNDgzODMyZjNlMzc1Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.HJ1HSr52BzaPv_lqiU9yqITokd5Upvq7atIezSRLgw1ygpIjAuHTJB0i3ikTOwRyzBY_zNNuGWdiQ6z_TuDeuoKYB3hL8-wd52Ifh365lihV7_erwxT7CyB-hQ7hgpWFpKQ5GbLUiUmHJhdo43vB9i1H8NKT4xpux33K6t0H2wgEtidrvVKqS-zq1t23RjoBUSAnU9WtBsxp-sQcNcN8mZBQgZkB0FUBVfwS3QIatR00McX0QniIp-WtzVWZTsprD0ab4I2z7xyb5zKOZBpllNY_pjwqrcENh1dOg48WAYFLppcBBmDPmAzTN7YNvurP1nZHwGZp3-A-0VFC_3L2ag

grep 'client-certificate-data' ~/.kube/config | head -n 1 | awk '{print $2}' | base64 -d >> kubecfg.crt

grep 'client-key-data' ~/.kube/config | head -n 1 | awk '{print $2}' | base64 -d >> kubecfg.key

openssl pkcs12 -export -clcerts -inkey kubecfg.key -in kubecfg.crt -out kubecfg.p12 -name "kubernetes-client"      

5.3 Installing metrics-server

After deploying k8s with kubeadm, metrics-server is not installed by default, so checking resource usage with top node fails;

kubectl top node reports: error: Metrics API not available

Installation:

# kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created

After installation:
kube-system            metrics-server-5d5c49f488-m7p2m                           0/1     CrashLoopBackOff   6          5m37s
The pod is not coming up; the error is:
Readiness probe failed: Get "https://10.244.2.8:4443/readyz": dial tcp 10.244.2.8:4443: connect: connection refused
The official documentation shows that the --kubelet-insecure-tls flag is missing, so add that argument to the deployment's container args.

After checking again, the pod is now up and running:
# kubectl get pods -A|grep metrics
kube-system            metrics-server-56c59cf9ff-zhr6k                           1/1     Running   0          3m23s

Once metrics-server is running normally, resource usage can be viewed with top node|pod:
xg@xgmac ~ % kubectl top node                        
NAME                              CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
test01.i.xxx.net   1057m        4%     16872Mi         26%       
test02.i.xxx.net   1442m        6%     12243Mi         19%       
test03.i.xxx.net   749m         3%     14537Mi         22%       
xg@xgmac ~ % kubectl top pod 
NAME                     CPU(cores)   MEMORY(bytes)      

Solution:

Edit the deployment and add --kubelet-insecure-tls to the container args (a sketch follows).
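A sketch of the same fix as a one-line patch instead of an interactive edit (it assumes metrics-server is the first container in the deployment, as in the official manifest):

```bash
kubectl -n kube-system patch deployment metrics-server --type='json' \
  -p='[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--kubelet-insecure-tls"}]'
# or edit interactively: kubectl -n kube-system edit deployment metrics-server
```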


6 Remarks

  1. Reference documents

    Kubernetes dashboard installation errors

  2. Deploying single-node k8s on ubuntu 18.04 with kubeadm
  3. Kubernetes/K8S in 1 day; Quickly deploy a Kubernetes cluster with kubeadm (v1.18); setup/cri container runtime setup
  4. Software versions

    Deployment OS: ubuntu 16.04 server

    Docker version: Docker version 18.06.3-ce, build d7080c1

    k8s component versions: kubelet=1.15.4-00 kubectl=1.15.4-00 kubeadm=1.15.4-00

  5. Configuration files

    Since many machines cannot reach raw.githubusercontent.com, I have uploaded the related resources to CSDN for anyone who needs them; they are still under review and should be searchable by name or by link in a couple of days.

    Quickly deploy a k8s cluster - configuration files