
Installing Kubernetes from binaries: fixing a calico deployment that will not come up

If you cannot troubleshoot a problem like this on your own, ask for help and provide the following information:

1. Include the following information with your question:

Installation method: kubeadm or binary

k8s version: e.g. 1.19.0

Server OS version: e.g. CentOS 8.0

Server kernel version: e.g. 4.18

Virtual or physical machine, and which virtualization platform

System log: tail -f /var/log/messages. Search the error entries yourself; ideally open two windows, one to restart the failing component and one to watch the log.

Failing container log: kubectl logs -f xxxxx -n xxxxxx; the error message is usually near the top. (A command sketch for gathering all of the above follows after item 3.)

2. Describe your problem in detail, including what led up to it and what happened afterwards.

For example: I installed the cluster, then used create -f to create the metrics resources, but metrics-server keeps restarting, and logs -f shows no obvious error.

3. What you have already tried

Describe the solutions you have already attempted. If you have not tried anything yet, please do not ask in the group.
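As a rough sketch, the information in item 1 can be collected with commands along these lines (the pod name and namespace are placeholders to replace with your own):

kubectl version                           # client and server k8s versions
cat /etc/os-release                       # server OS version
uname -r                                  # server kernel version
tail -f /var/log/messages                 # system log, watch in one window
kubectl get pods -A | grep -v Running     # list pods that are not Running
kubectl logs -f <pod-name> -n <namespace> # log of the failing container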

1    Problem description

I followed the binary installation steps all the way to the metrics-server chapter.

The calico pods fail to run. I have already added the following to calico-etcd.yaml to select the network interface, but the problem persists:

            - name: IP_AUTODETECTION_METHOD
              value: "interface=ens160"
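For reference, this entry belongs in the env list of the calico-node container inside the calico-node DaemonSet in calico-etcd.yaml; a minimal sketch of the surrounding context, with field names assumed from the stock manifest and all other fields left unchanged:

    containers:
      - name: calico-node
        # image and the remaining fields stay as shipped in calico-etcd.yaml
        env:
          # ... other env entries from the stock manifest ...
          - name: IP_AUTODETECTION_METHOD
            value: "interface=ens160"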

[root@k8s-master01 calico]# k logs -f calico-kube-controllers-cdd5755b9-jldd7 -n kube-system

2021-09-10 05:57:37.942 [INFO][1] main.go 88: Loaded configuration from environment config=&config.Config{LogLevel:"info", WorkloadEndpointWorkers:1, ProfileWorkers:1, PolicyWorkers:1, NodeWorkers:1, Kubeconfig:"", DatastoreType:"etcdv3"}

I0910 05:57:37.943785       1 client.go:360] parsed scheme: "endpoint"

I0910 05:57:37.943853       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://192.168.1.121:2379 0  <nil>} {https://192.168.1.122:2379 0  <nil>} {https://192.168.1.123:2379 0  <nil>}]

W0910 05:57:37.943935       1 client_config.go:543] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.

2021-09-10 05:57:37.947 [INFO][1] main.go 109: Ensuring Calico datastore is initialized

W0910 05:57:37.957741       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://192.168.1.122:2379 0  <nil>}. Err :connection error: desc = "transport: authentication handshake failed: x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"etcd\")". Reconnecting...

……

W0910 05:57:43.411781       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://192.168.1.123:2379 0  <nil>}. Err :connection error: desc = "transport: authentication handshake failed: x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"etcd\")". Reconnecting...

W0910 05:57:43.479821       1 ... <nil>}. Err :connection error: desc = "transport: authentication handshake failed: x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"etcd\")". Reconnecting...

W0910 05:57:43.750584       1 ... {https://192.168.1.121:2379 0

W0910 05:57:46.986261       1

W0910 05:57:47.439900       1

W0910 05:57:47.515303       1

{"level":"warn","ts":"2021-09-10T05:57:47.948Z","caller":"clientv3/retry_interceptor.go:62","msg":"retryingof unary invoker failed","target":"endpoint://client-7da517f6-373e-4568-a8d6-6effe3ff783e/192.168.1.121:2379","attempt":0,"error":"rpc

error: code = DeadlineExceeded desc = latest connection error: connection

error: desc = \"transport: authentication handshake failed: x509:

certificate signed by unknown authority (possibly because of \\\"crypto/rsa:

verification error\\\" while trying to verify candidate authority

certificate \\\"etcd\\\")\""}

2021-09-10 05:57:47.948 [ERROR][1] client.go 261: Error getting cluster information config ClusterInformation="default" error=context deadline exceeded

2021-09-10 05:57:47.948 [FATAL][1] main.go 114: Failed to initialize Calico datastore error=context deadline exceeded

[root@k8s-master01 calico]# k describe pod calico-kube-controllers-cdd5755b9-jldd7 -n kube-system

Name:                 calico-kube-controllers-cdd5755b9-jldd7

Namespace:            kube-system

Priority:             2000000000

Priority Class Name:  system-cluster-critical

Node:                 k8s-node02/192.168.1.125

Start Time:           Fri, 10 Sep 2021 13:56:24 +0800

Labels:               k8s-app=calico-kube-controllers

                     pod-template-hash=cdd5755b9

Annotations:          <none>

Status:               Running

IP:                   192.168.1.125

IPs:

  IP:          192.168.1.125

Controlled By:  ReplicaSet/calico-kube-controllers-cdd5755b9

Containers:

  calico-kube-controllers:

    Container ID:  docker://ab3d09c8c85e8937f2501e6ea70f3ad3f087a7c0ab9f73d3832340d79a3a1d46

    Image:         registry.cn-beijing.aliyuncs.com/dotbalo/kube-controllers:v3.15.3

    Image ID:      docker-pullable://registry.cn-beijing.aliyuncs.com/dotbalo/kube-controllers@sha256:e59cc3287cb44ef835ef78e51b3835eabcddf8b16239a4d78abba5bb8260281c

    Port:           <none>

    Host Port:      <none>

    State:          Waiting

      Reason:       CrashLoopBackOff

    Last State:     Terminated

      Reason:       Error

      Exit Code:    1

      Started:      Fri, 10 Sep 2021 14:08:22 +0800

      Finished:     Fri, 10 Sep 2021 14:08:32 +0800

    Ready:          False

    Restart Count:  7

    Readiness:      exec [/usr/bin/check-status -r] delay=0s timeout=1s period=10s #success=1 #failure=3

    Environment:

      ETCD_ENDPOINTS:       <set to the key 'etcd_endpoints' of config map 'calico-config'>  Optional: false
      ETCD_CA_CERT_FILE:    <set to the key 'etcd_ca' of config map 'calico-config'>         Optional: false
      ETCD_KEY_FILE:        <set to the key 'etcd_key' of config map 'calico-config'>        Optional: false
      ETCD_CERT_FILE:       <set to the key 'etcd_cert' of config map 'calico-config'>       Optional: false
      ENABLED_CONTROLLERS:  policy,namespace,serviceaccount,workloadendpoint,node

    Mounts:

      /calico-secrets from etcd-certs (rw)

      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5pqcv (ro)

Conditions:

  Type              Status

  Initialized       True

  Ready             False

  ContainersReady   False

  PodScheduled      True

Volumes:

  etcd-certs:

    Type:       Secret (a volume populated by a Secret)

    SecretName: calico-etcd-secrets

    Optional:   false

  kube-api-access-5pqcv:

    Type:                    Projected (a volume that contains injected data from multiple sources)

    TokenExpirationSeconds:  3607

    ConfigMapName:           kube-root-ca.crt

    ConfigMapOptional:       <nil>

    DownwardAPI:             true

QoSClass:                   BestEffort

Node-Selectors:              kubernetes.io/os=linux

Tolerations:                 CriticalAddonsOnly op=Exists

                            node-role.kubernetes.io/master:NoSchedule

                            node.kubernetes.io/not-ready:NoExecute op=Exists for 300s

                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s

Events:

  Type    Reason     Age                 From               Message

  ----    ------     ----                ----               -------

  Normal   Scheduled  15m                 default-scheduler  Successfully assigned kube-system/calico-kube-controllers-cdd5755b9-jldd7 to k8s-node02

  Normal   Pulled     14m (x4 over 15m)   kubelet            Container image "registry.cn-beijing.aliyuncs.com/dotbalo/kube-controllers:v3.15.3" already present on machine

  Normal   Created    14m (x4 over 15m)   kubelet            Created container calico-kube-controllers

  Normal   Started    14m (x4 over 15m)   kubelet            Started container

  Warning  Unhealthy  14m (x8 over 15m)   kubelet            Readiness probe failed: Failed to read status file status.json: open status.json: no such file or directory

  Warning  BackOff    34s (x64 over 15m)  kubelet            Back-off restarting failed container

2    Fresh cluster installation from binaries

2.1   Kubernetes version

[root@k8s-master01 selinux]# kubectl version

Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.3", GitCommit:"ca643a4d1f7bfe34773c74f79527be4afd95bf39", GitTreeState:"clean", BuildDate:"2021-07-15T21:04:39Z", GoVersion:"go1.16.6", Compiler:"gc", Platform:"linux/amd64"}

Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.3", GitCommit:"ca643a4d1f7bfe34773c74f79527be4afd95bf39", GitTreeState:"clean", BuildDate:"2021-07-15T20:59:07Z", GoVersion:"go1.16.6", Compiler:"gc", Platform:"linux/amd64"}

2.2   Docker version

[root@k8s-master01 selinux]# docker version

Client: Docker Engine - Community

 Version:           20.10.8

 API version:       1.41

 Go version:        go1.16.6

 Git commit:        3967b7d

 Built:            Fri Jul 30 19:55:49 2021

 OS/Arch:           linux/amd64

 Context:           default

 Experimental:      true

Server: Docker Engine - Community

 Engine:

  Version:          20.10.8

  API version:      1.41 (minimum version 1.12)

  Go version:      go1.16.6

  Git commit:       75249d8

  Built:            Fri Jul 30 19:54:13 2021

  OS/Arch:          linux/amd64

  Experimental:     false

 containerd:

  Version:          1.4.9

  GitCommit:       e25210fe30a0a703442421b0f60afac609f950a3

 runc:

  Version:          1.0.1

  GitCommit:        v1.0.1-0-g4144b63

 docker-init:

  Version:          0.19.0

  GitCommit:        de40ad0

2.3   Server OS version

CentOS 7.9

2.4   Kernel version

[root@k8s-master01 selinux]# uname -a

Linux k8s-master01 4.19.12-1.el7.elrepo.x86_64 #1 SMP Fri Dec 21 11:06:36 EST 2018 x86_64 x86_64 x86_64 GNU/Linux

2.5   Platform

VMware ESXi 6.0, six virtual machines in total, not cloned

2.6   Network layout

2.6.1  Host network

192.168.1.121 - 126; the apiserver address is 192.168.1.120

2.6.2  Pod network

K8s Pod CIDR: 172.16.0.0/12

2.6.3  Service network

K8s Service CIDR: 10.10.0.0/16
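To confirm the running components actually use these ranges, the relevant flags can be pulled out of the process list; a rough sketch, with the expected values taken from the layout above:

ps -ef | grep [k]ube-apiserver | tr ' ' '\n' | grep service-cluster-ip-range
# expected: --service-cluster-ip-range=10.10.0.0/16
ps -ef | grep [k]ube-controller-manager | tr ' ' '\n' | grep cluster-cidr
# expected: --cluster-cidr=172.16.0.0/12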

2.7   Component status

2.7.1  etcd

[root@k8s-master01 ~]# export ETCDCTL_API=3

[root@k8s-master01 ~]# etcdctl --endpoints="192.168.1.123:2379,192.168.1.122:2379,192.168.1.121:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem endpoint status --write-out=table

+---------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|       ENDPOINT      |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+---------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| 192.168.1.123:2379  |  43697987d6800e1 |  3.4.13 |  2.7 MB |      true |      false |      1053 |      33548 |              33548 |        |
| 192.168.1.122:2379  | 17de01a4b21e79c5 |  3.4.13 |  2.7 MB |     false |      false |      1053 |      33548 |              33548 |        |
| 192.168.1.121:2379  | 74ae75b016cc5110 |  3.4.13 |  2.7 MB |     false |      false |      1053 |      33548 |              33548 |        |
+---------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+

2.7.2  kubelet

[root@k8s-master01 calico]# systemctl status kubelet -l

● kubelet.service - Kubernetes Kubelet
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubelet.conf
   Active: active (running) since Fri 2021-09-10 10:37:09 CST; 2h 11min ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 1492 (kubelet)
    Tasks: 28
   Memory: 182.2M
   CGroup: /system.slice/kubelet.service
           ├─ 1492 /usr/local/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --config=/etc/kubernetes/kubelet-conf.yml --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.4.1 --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --node-labels=node.kubernetes.io/node= --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --image-pull-progress-deadline=30m
           ├─22843 /opt/cni/bin/calico
           └─22878 /opt/cni/bin/calico

Sep 10 12:48:05 k8s-master01 kubelet[1492]: E0910 12:48:05.671381    1492 cadvisor_stats_provider.go:151] "Unable to fetch pod etc hosts stats" err="failed to get stats failed command 'du' ($ nice -n 19 du -x -s -B 1) on path /var/lib/kubelet/pods/717c76ea-76f4-4955-aae1-b81523fc6833/etc-hosts with error exit status 1" pod="kube-system/metrics-server-64c6c494dc-cjlww"

Sep 10 12:48:05 k8s-master01 kubelet[1492]: E0910 12:48:05.675672    1492 cadvisor_stats_provider.go:151] ... /var/lib/kubelet/pods/7514f3ad-79e9-42de-8fc7-d3c66ae63586/etc-hosts with error exit status 1" pod="kube-system/coredns-684d86ff88-v2x7v"

Sep 10 12:48:12 k8s-master01 kubelet[1492]: I0910 12:48:12.818836    1492 scope.go:111] "RemoveContainer" containerID="48628d9c91bcdd9f6ed48a7db3e5dd744dccf0512242d06b58e63fb9ef4b2d4e"

Sep 10 12:48:12 k8s-master01 kubelet[1492]: E0910 12:48:12.819844    1492 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=calico-node pod=calico-node-ssqbk_kube-system(b3e3dc8e-10bc-42ea-a5cf-04da5cc50a6b)\"" pod="kube-system/calico-node-ssqbk" podUID=b3e3dc8e-10bc-42ea-a5cf-04da5cc50a6b

Sep 10 12:48:15 k8s-master01 kubelet[1492]: E0910 12:48:15.717323    1492 cadvisor_stats_provider.go:151]

Sep 10 12:48:15 k8s-master01 kubelet[1492]: E0910 12:48:15.721152    1492 cadvisor_stats_provider.go:151]

Sep 10 12:48:23 k8s-master01 kubelet[1492]: I0910 12:48:23.821023    1492 scope.go:111]

Sep 10 12:48:23 k8s-master01 kubelet[1492]: E0910 12:48:23.822048    1492 pod_workers.go:190] "Error ... restarting failed container=calico-node pod=calico-node-ssqbk_kube-system(b3e3dc8e-10bc-42ea-a5cf-04da5cc50a6b)\""

Sep 10 12:48:25 k8s-master01 kubelet[1492]: E0910 12:48:25.747377    1492 cadvisor_stats_provider.go:151]

Sep 10 12:48:25 k8s-master01 kubelet[1492]: E0910 12:48:25.752155    1492 cadvisor_stats_provider.go:151]

The kubelet service on every node except master01 shows errors like the following:

[root@k8s-master02 ~]# systemctl status kubelet -l

● kubelet.service - Kubernetes Kubelet
   Active: active (running) since Fri 2021-09-10 10:37:01 CST; 2h 12min ago
 Main PID: 1571 (kubelet)
    Tasks: 15
   Memory: 162.6M
           └─1571 /usr/local/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

Sep 10 12:48:45 k8s-master02 kubelet[1571]: I0910 12:48:45.437795    1571 scope.go:111] ... containerID="87f8e22822a7b98c896d49604f854e239583fe3412e104aaff15d0a14781c3ac"

Sep 10 12:48:45 k8s-master02 kubelet[1571]: E0910 12:48:45.438864    1571 pod_workers.go:190] "Error ... pod=calico-node-8dwz2_kube-system(24f3455e-25fc-45da-b7ef-9d4c13538358)\"" pod="kube-system/calico-node-8dwz2" podUID=24f3455e-25fc-45da-b7ef-9d4c13538358

Sep 10 12:48:50 k8s-master02 kubelet[1571]: I0910 12:48:50.437436    1571 scope.go:111] ... containerID="d87459075069d32e963986883a2a3de417c99e833f5b51eba4c1fca313e5bec9"

Sep 10 12:48:50 k8s-master02 kubelet[1571]: E0910 12:48:50.438035    1571 pod_workers.go:190] "Error ... for \"calico-kube-controllers\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=calico-kube-controllers pod=calico-kube-controllers-cdd5755b9-48f28_kube-system(802dd17f-50d3-44d3-b945-3588d30b7cda)\"" pod="kube-system/calico-kube-controllers-cdd5755b9-48f28" podUID=802dd17f-50d3-44d3-b945-3588d30b7cda

Sep 10 12:48:57 k8s-master02 kubelet[1571]: I0910 12:48:57.437418    1571 scope.go:111] "RemoveContainer"

Sep 10 12:48:57 k8s-master02 kubelet[1571]: E0910 12:48:57.438692    1571 pod_workers.go:190] "Error

Sep 10 12:49:04 k8s-master02 kubelet[1571]: I0910 12:49:04.438012    1571 scope.go:111]

Sep 10 12:49:04 k8s-master02 kubelet[1571]: E0910 12:49:04.438570    1571 pod_workers.go:190] "Error ... for \"calico-kube-controllers\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=calico-kube-controllers pod=calico-kube-controllers-cdd5755b9-48f28_kube-system(802dd17f-50d3-44d3-b945-3588d30b7cda)\""

Sep 10 12:49:10 k8s-master02 kubelet[1571]: I0910 12:49:10.439566    1571 scope.go:111]

Sep 10 12:49:10 k8s-master02 kubelet[1571]: E0910 12:49:10.440623    1571 pod_workers.go:190] "Error

2.7.3  kube-proxy

[root@k8s-master01 calico]# systemctl status kube-proxy -l

● kube-proxy.service - Kubernetes Kube Proxy
   Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2021-09-10 10:36:36 CST; 2h 13min ago
 Main PID: 1000 (kube-proxy)
    Tasks: 5
   Memory: 52.6M
   CGroup: /system.slice/kube-proxy.service
           └─1000 /usr/local/bin/kube-proxy --config=/etc/kubernetes/kube-proxy.conf --v=2

Sep 10 12:48:46 k8s-master01 kube-proxy[1000]: I0910 12:48:46.522496    1000 proxier.go:1034] Not syncing ipvs rules until Services and Endpoints have been received from master

Sep 10 12:48:46 k8s-master01 kube-proxy[1000]: I0910 12:48:46.524785    1000 proxier.go:1034] Not syncing ipvs

Sep 10 12:48:53 k8s-master01 kube-proxy[1000]: E0910 12:48:53.480329    1000 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.EndpointSlice: failed to list *v1beta1.EndpointSlice: Unauthorized

Sep 10 12:49:16 k8s-master01 kube-proxy[1000]: I0910 12:49:16.523566    1000 proxier.go:1034] Not syncing ipvs

Sep 10 12:49:16 k8s-master01 kube-proxy[1000]: I0910 12:49:16.525915    1000 proxier.go:1034] Not syncing ipvs

Sep 10 12:49:16 k8s-master01 kube-proxy[1000]: E0910 12:49:16.870887    1000 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized

Sep 10 12:49:39 k8s-master01 kube-proxy[1000]: E0910 12:49:39.660888    1000 reflector.go:138]

Sep 10 12:49:46 k8s-master01 kube-proxy[1000]: I0910 12:49:46.523929    1000 proxier.go:1034] Not syncing ipvs

Sep 10 12:49:46 k8s-master01 kube-proxy[1000]: I0910 12:49:46.526263    1000 proxier.go:1034] Not syncing ipvs

Sep 10 12:50:08 k8s-master01 kube-proxy[1000]: E0910 12:50:08.267885    1000 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized

2.7.4  kube-apiserver

The kube-apiserver log keeps repeating errors like the ones below:

[root@k8s-master01 calico]# systemctl status kube-apiserver -l

● kube-apiserver.service - Kubernetes API Server
   Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2021-09-10 10:36:36 CST; 2h 14min ago
 Main PID: 999 (kube-apiserver)
    Tasks: 8
   Memory: 432.8M
   CGroup: /system.slice/kube-apiserver.service

           └─999 /usr/local/bin/kube-apiserver --v=2 --logtostderr=true --allow-privileged=true --bind-address=0.0.0.0 --secure-port=6443 --insecure-port=0 --advertise-address=192.168.1.121 --service-cluster-ip-range=10.10.0.0/16 --service-node-port-range=30000-32767 --etcd-servers=https://192.168.1.121:2379,https://192.168.1.122:2379,https://192.168.1.123:2379 --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem --etcd-certfile=/etc/etcd/ssl/etcd.pem --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem --client-ca-file=/etc/kubernetes/pki/ca.pem --tls-cert-file=/etc/kubernetes/pki/apiserver.pem --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-account-signing-key-file=/etc/kubernetes/pki/sa.key --service-account-issuer=https://kubernetes.default.svc.cluster.local --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota --authorization-mode=Node,RBAC --enable-bootstrap-token-auth=true --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem --requestheader-allowed-names=aggregator --requestheader-group-headers=X-Remote-Group --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-username-headers=X-Remote-Usera --feature-gates=EphemeralContainers=true

Sep 10 12:50:27 k8s-master01 kube-apiserver[999]: I0910 12:50:27.652902     999 clientconn.go:948] ClientConn switching balancer to "pick_first"

Sep 10 12:50:27 k8s-master01 kube-apiserver[999]: I0910 12:50:27.653172     999 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc00e506c70, {CONNECTING <nil>}

Sep 10 12:50:27 k8s-master01 kube-apiserver[999]: I0910 12:50:27.678964     999 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc00e506c70, {READY <nil>}

Sep 10 12:50:27 k8s-master01 kube-apiserver[999]: I0910 12:50:27.681622     999 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"

Sep 10 12:50:42 k8s-master01 kube-apiserver[999]: I0910 12:50:42.807898     999 client.go:360] parsed scheme: "passthrough"

Sep 10 12:50:42 k8s-master01 kube-apiserver[999]: I0910 12:50:42.807965     999 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://192.168.1.122:2379  <nil> 0 <nil>}] <nil>

Sep 10 12:50:42 k8s-master01 kube-apiserver[999]: I0910 12:50:42.807988     999 clientconn.go:948] ClientConn

Sep 10 12:50:42 k8s-master01 kube-apiserver[999]: I0910 12:50:42.808211     999 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc00df9a1f0, {CONNECTING

Sep 10 12:50:42 k8s-master01 kube-apiserver[999]: I0910 12:50:42.833075     999 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc00df9a1f0, {READY <nil>}

Sep 10 12:50:42 k8s-master01 kube-apiserver[999]: I0910 12:50:42.834371     999 controlbuf.go:508] transport:

[root@k8s-master01 calico]#

2.7.5  kube-controller-manager

[root@k8s-master01 calico]# systemctl status kube-controller-manager -l

● kube-controller-manager.service - Kubernetes Controller Manager
   Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)
 Main PID: 1006 (kube-controller)
    Tasks: 6
   Memory: 101.1M
   CGroup: /system.slice/kube-controller-manager.service
           └─1006 /usr/local/bin/kube-controller-manager --v=2 --logtostderr=true --address=127.0.0.1 --root-ca-file=/etc/kubernetes/pki/ca.pem --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem --service-account-private-key-file=/etc/kubernetes/pki/sa.key --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig --leader-elect=true --cluster-signing-duration=876000h0m0s --use-service-account-credentials=true --node-monitor-grace-period=40s --node-monitor-period=5s --pod-eviction-timeout=2m0s --controllers=*,bootstrapsigner,tokencleaner --allocate-node-cidrs=true --cluster-cidr=172.16.0.0/12 --node-cidr-mask-size=24 --feature-gates=EphemeralContainers=true

Sep 10 10:37:11 k8s-master01 kube-controller-manager[1006]: W0910 10:37:11.138214    1006 authorization.go:184] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.

Sep 10 10:37:11 k8s-master01 kube-controller-manager[1006]: I0910 10:37:11.138277    1006 controllermanager.go:175] Version: v1.21.3

10:37:11.148097    1006 tlsconfig.go:178] loaded client CA [0/"request-header::/etc/kubernetes/pki/front-proxy-ca.pem"]: "kubernetes" [] issuer="<self>" (2021-09-08 07:50:00 +0000 UTC to 2026-09-07 07:50:00 +0000 UTC (now=2021-09-10 02:37:11.148072604 +0000 UTC))

10:37:11.148376    1006 tlsconfig.go:200] loaded serving cert ["Generated self signed cert"]: "localhost@1631241427" [serving] validServingFor=[127.0.0.1,localhost,localhost] issuer="localhost-ca@1631241426" (2021-09-10 01:37:03 +0000 UTC to 2022-09-10 01:37:03 +0000 UTC (now=2021-09-10 02:37:11.148361186 +0000 UTC))

10:37:11.148619    1006 named_certificates.go:53] loaded SNI cert [0/"self-signed loopback"]: "apiserver-loopback-client@1631241431" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1631241429" (2021-09-10 01:37:07 +0000 UTC to 2022-09-10 01:37:07 +0000 UTC (now=2021-09-10 02:37:11.148602782

10:37:11.148671    1006 secure_serving.go:197] Serving securely on [::]:10257

10:37:11.148771    1006 tlsconfig.go:240] Starting DynamicServingCertificateController

10:37:11.148840    1006 dynamic_cafile_content.go:167] Starting request-header::/etc/kubernetes/pki/front-proxy-ca.pem

10:37:11.149474    1006 deprecated_insecure_serving.go:53] Serving insecurely on 127.0.0.1:10252

10:37:11.149979    1006 leaderelection.go:243] attempting to acquire leader lease kube-system/kube-controller-manager..

2.8   apiserver connectivity test

telnet 192.168.1.120 8443 succeeds from every node:

[root@k8s-master02 etcd]# telnet 192.168.1.120 8443

Trying 192.168.1.120...

Connected to 192.168.1.120.

Escape character is '^]'.

[root@k8s-master02 etcd]#

curl https://192.168.1.120:8443/healthz -k

ok[root@k8s-master02 etcd]#

2.9   Check kubelet certificate issuance

The kubelet client certificate was issued successfully on every node:

ok[root@k8s-node02 pki]# ls -l /var/lib/kubelet/pki/

total 12

-rw------- 1 root root 1248 Sep 10 10:37 kubelet-client-2021-09-10-10-37-40.pem

lrwxrwxrwx 1 root root   59 Sep 10 10:37 kubelet-client-current.pem -> /var/lib/kubelet/pki/kubelet-client-2021-09-10-10-37-40.pem

-rw-r--r-- 1 root root 2266 Sep  8 15:04 kubelet.crt

-rw------- 1 root root 1679 Sep  8 15:04 kubelet.key

3    Attempted fixes

It looks like the etcd certificates are at fault, so I tried regenerating them.
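Before regenerating anything, it is worth checking whether the certificate material calico mounts actually chains to the CA that etcd trusts. A sketch of that check; the secret name comes from the describe output above, while the data keys etcd-ca/etcd-cert are assumptions based on the stock calico-etcd.yaml:

# Does the local etcd certificate verify against the etcd CA?
openssl verify -CAfile /etc/kubernetes/pki/etcd/etcd-ca.pem /etc/kubernetes/pki/etcd/etcd.pem

# Compare the CA calico mounts with the CA etcd actually uses
kubectl get secret calico-etcd-secrets -n kube-system -o jsonpath='{.data.etcd-ca}' | base64 -d | openssl x509 -noout -subject -issuer
openssl x509 -noout -subject -issuer -in /etc/kubernetes/pki/etcd/etcd-ca.pem
# If the two subjects/issuers differ, the secret was built from an older CA.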

3.1   Check configuration consistency across components [problem persists]

I rebuilt the configuration files and systemd service files for each component on master01 and copied them to the other nodes.
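A sketch of the copy step; the node names are assumptions for this cluster, and the paths follow the binary install guide's layout:

for node in k8s-master02 k8s-master03 k8s-node01 k8s-node02; do
  scp /etc/kubernetes/*.kubeconfig /etc/kubernetes/kubelet-conf.yml $node:/etc/kubernetes/   # component configs
  scp /usr/lib/systemd/system/kube*.service $node:/usr/lib/systemd/system/                   # systemd units
  ssh $node "systemctl daemon-reload"
done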

3.2   Delete all previously generated etcd certificate files, regenerate the certificates, and recreate calico [problem persists]
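For reference, a rough sketch of the regeneration, assuming the cfssl workflow and CSR file names used by the binary install guide (adjust -hostname to your etcd members):

cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /etc/kubernetes/pki/etcd/etcd-ca
cfssl gencert \
  -ca=/etc/kubernetes/pki/etcd/etcd-ca.pem \
  -ca-key=/etc/kubernetes/pki/etcd/etcd-ca-key.pem \
  -config=ca-config.json \
  -hostname=127.0.0.1,k8s-master01,k8s-master02,k8s-master03,192.168.1.121,192.168.1.122,192.168.1.123 \
  -profile=kubernetes \
  etcd-csr.json | cfssljson -bare /etc/kubernetes/pki/etcd/etcd
# Afterwards: copy the new certificates to every etcd member, restart etcd,
# and rebuild the calico-etcd-secrets Secret from the new files before re-applying calico.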

4    Final fix: upgrade calico from 3.15 to 3.19, regenerate the kube-proxy credentials, delete and recreate the ServiceAccount, and restart the services

Delete the old calico deployment:

kubectl delete -f calico-etcd.yaml

Download the new calico YAML file locally:

https://gitee.com/dukuan/k8s-ha-install/blob/manual-installation-v1.22.x/calico/calico.yaml

Edit the POD_CIDR field in calico.yaml, change it to your Pod CIDR, and then create -f the manifest (a sketch follows).
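A minimal sketch of that edit, assuming the manifest keeps the POD_CIDR placeholder used in the linked repository:

# Replace the POD_CIDR placeholder with this cluster's Pod network, then apply
sed -i "s#POD_CIDR#172.16.0.0/12#g" calico.yaml
kubectl create -f calico.yaml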

Same symptom as before: the pods keep restarting because the liveness and readiness probes fail.

Running kubectl logs -f against a calico-node pod, it looks like kube-proxy's authentication is the problem, so I regenerated the kube-proxy credentials.

I tried the 3.15 calico-etcd.yaml again and it still failed; with the 3.19 manifest it works. It also appears that, after regenerating the certificates, the kube-proxy ServiceAccount and ClusterRoleBinding have to be deleted and recreated:

kubectl create serviceaccount kube-proxy -n kube-system

kubectl create clusterrolebinding system:kube-proxy --clusterrole system:node-proxier --serviceaccount kube-system:kube-proxy
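Since the fix requires deleting and recreating these objects, the old ones have to be removed first, and kube-proxy's kubeconfig then needs to be rebuilt around the new ServiceAccount token. A sketch of the full sequence, assuming the token-based kube-proxy setup and the file paths used by the binary install guide:

# Remove the stale objects, then recreate them
kubectl delete serviceaccount kube-proxy -n kube-system
kubectl delete clusterrolebinding system:kube-proxy
kubectl create serviceaccount kube-proxy -n kube-system
kubectl create clusterrolebinding system:kube-proxy --clusterrole system:node-proxier --serviceaccount kube-system:kube-proxy

# Rebuild kube-proxy.kubeconfig around the new ServiceAccount token (k8s 1.21 still auto-creates the token secret)
SECRET=$(kubectl get sa kube-proxy -n kube-system -o jsonpath='{.secrets[0].name}')
TOKEN=$(kubectl get secret $SECRET -n kube-system -o jsonpath='{.data.token}' | base64 -d)
K8S_APISERVER=https://192.168.1.120:8443
kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=$K8S_APISERVER --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
kubectl config set-credentials system:kube-proxy --token=$TOKEN --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
kubectl config set-context system:kube-proxy@kubernetes --cluster=kubernetes --user=system:kube-proxy --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
kubectl config use-context system:kube-proxy@kubernetes --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

# Distribute the new kubeconfig to every node and restart kube-proxy
systemctl restart kube-proxy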
