
Setting up a k8s test environment

Update: installation references for v1.16.0:

https://kuboard.cn/install/history-k8s/install-k8s-1.16.0.html

https://www.cnblogs.com/eastonliu/p/11637929.html

Environment preparation

Machines:

10.90.14.125 master
10.90.15.45 node1
10.90.15.43 node2
10.90.15.44 node3
           

HTTP proxy environment variables (optional):

vi /etc/profile
export http_proxy=http://username:password@proxy-server-ip:port/
export no_proxy="10.90.14.125,10.90.14.124,10.90.14.123,10.72.66.37,10.72.66.36,10.96.0.0/12,10.244.0.0/16"
source /etc/profile
           

HTTP proxy settings for yum (optional):

vi /etc/yum.conf
proxy=http://username:password@proxy-server-ip:port/
proxy_username=username
proxy_password=password
           

Use the NetEase yum mirror: download the repo file for your release into /etc/yum.repos.d, then rebuild the cache:

yum clean all
yum makecache
           
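For example, on CentOS 7 the repo file can be fetched directly. A sketch; the exact mirrors.163.com URL is an assumption, so verify it against the mirror's help page first:

```shell
# Hypothetical example: fetch the NetEase (163) CentOS 7 repo file
# into /etc/yum.repos.d (verify the URL on mirrors.163.com first)
wget -O /etc/yum.repos.d/CentOS-Base-163.repo \
  http://mirrors.163.com/.help/CentOS7-Base-163.repo
```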

Install Docker (note: install a version supported by k8s):

yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum list docker-ce.x86_64  --showduplicates |sort -r
yum install -y --setopt=obsoletes=0 docker-ce-18.09.8-3.el7
systemctl start docker
systemctl enable docker
           

Configure kubernetes.repo to use the Alibaba Cloud mirror:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg  http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
           

Install kubelet, kubeadm, and kubectl:

yum install -y kubelet kubeadm kubectl
systemctl enable kubelet.service
           
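This installs the newest packages in the repo. Since the walkthrough below targets v1.15.1, you may prefer to pin the node components to a matching version; a sketch (check the exact version strings from the listing before installing):

```shell
# Optional: pin kubelet/kubeadm/kubectl to match the control-plane version.
# First list the versions available in the configured repo:
yum list kubelet --showduplicates | sort -r
# Then install a specific version (verify the exact string from the listing):
yum install -y kubelet-1.15.1 kubeadm-1.15.1 kubectl-1.15.1
systemctl enable kubelet.service
```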

Initializing the cluster with kubeadm

Easily bootstrap a secure Kubernetes cluster

kubeadm --help

Usage:
  kubeadm [command]

Available Commands:
  alpha       Kubeadm experimental sub-commands
  completion  Output shell completion code for the specified shell (bash or zsh)
  config      Manage configuration for a kubeadm cluster persisted in a ConfigMap in the cluster
  help        Help about any command
  init        Run this command in order to set up the Kubernetes control plane
  join        Run this on any machine you wish to join an existing cluster
  reset       Run this to revert any changes made to this host by 'kubeadm init' or 'kubeadm join'
  token       Manage bootstrap tokens
  upgrade     Upgrade your cluster smoothly to a newer version with this command
  version     Print the version of kubeadm

Flags:
  -h, --help                     help for kubeadm
      --log-file string          If non-empty, use this log file
      --log-file-max-size uint   Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
      --rootfs string            [EXPERIMENTAL] The path to the 'real' host root filesystem.
      --skip-headers             If true, avoid header prefixes in the log messages
      --skip-log-headers         If true, avoid headers when opening log files
  -v, --v Level                  number for the log level verbosity

Use "kubeadm [command] --help" for more information about a command.
           

kubeadm init

When you're first getting started, the help/man output is well worth a read.

[root@esb-edi-test ~]# kubeadm init --help
Run this command in order to set up the Kubernetes control plane

The "init" command executes the following phases:
preflight                  Run pre-flight checks
kubelet-start              Write kubelet settings and (re)start the kubelet
certs                      Certificate generation
  /ca                        Generate the self-signed Kubernetes CA to provision identities for other Kubernetes components
  /apiserver                 Generate the certificate for serving the Kubernetes API
  /apiserver-kubelet-client  Generate the certificate for the API server to connect to kubelet
  /etcd-ca                   Generate the self-signed CA to provision identities for etcd
  /etcd-server               Generate the certificate for serving etcd
  /etcd-peer                 Generate the certificate for etcd nodes to communicate with each other
  /apiserver-etcd-client     Generate the certificate the apiserver uses to access etcd
  /etcd-healthcheck-client   Generate the certificate for liveness probes to healtcheck etcd
  /front-proxy-ca            Generate the self-signed CA to provision identities for front proxy
  /front-proxy-client        Generate the certificate for the front proxy client
  /sa                        Generate a private key for signing service account tokens along with its public key
kubeconfig                 Generate all kubeconfig files necessary to establish the control plane and the admin kubeconfig file
  /admin                     Generate a kubeconfig file for the admin to use and for kubeadm itself
  /kubelet                   Generate a kubeconfig file for the kubelet to use *only* for cluster bootstrapping purposes
  /controller-manager        Generate a kubeconfig file for the controller manager to use
  /scheduler                 Generate a kubeconfig file for the scheduler to use
control-plane              Generate all static Pod manifest files necessary to establish the control plane
  /apiserver                 Generates the kube-apiserver static Pod manifest
  /controller-manager        Generates the kube-controller-manager static Pod manifest
  /scheduler                 Generates the kube-scheduler static Pod manifest
etcd                       Generate static Pod manifest file for local etcd
  /local                     Generate the static Pod manifest file for a local, single-node local etcd instance
upload-config              Upload the kubeadm and kubelet configuration to a ConfigMap
  /kubeadm                   Upload the kubeadm ClusterConfiguration to a ConfigMap
  /kubelet                   Upload the kubelet component config to a ConfigMap
upload-certs               Upload certificates to kubeadm-certs
mark-control-plane         Mark a node as a control-plane
bootstrap-token            Generates bootstrap tokens used to join a node to a cluster
addon                      Install required addons for passing Conformance tests
  /coredns                   Install the CoreDNS addon to a Kubernetes cluster
  /kube-proxy                Install the kube-proxy addon to a Kubernetes cluster

Usage:
  kubeadm init [flags]
  kubeadm init [command]

Available Commands:
  phase       Use this command to invoke single phase of the init workflow

Flags:
      --apiserver-advertise-address string   The IP address the API Server will advertise it's listening on. If not set the default network interface will be used.
      --apiserver-bind-port int32            Port for the API Server to bind to. (default 6443)
      --apiserver-cert-extra-sans strings    Optional extra Subject Alternative Names (SANs) to use for the API Server serving certificate. Can be both IP addresses and DNS names.
      --cert-dir string                      The path where to save and store the certificates. (default "/etc/kubernetes/pki")
      --certificate-key string               Key used to encrypt the control-plane certificates in the kubeadm-certs Secret.
      --config string                        Path to a kubeadm configuration file.
      --cri-socket string                    Path to the CRI socket to connect. If empty kubeadm will try to auto-detect this value; use this option only if you have more than one CRI installed or if you have non-standard CRI socket.
      --dry-run                              Don't apply any changes; just output what would be done.
      --feature-gates string                 A set of key=value pairs that describe feature gates for various features. No feature gates are available in this release.
  -h, --help                                 help for init
      --ignore-preflight-errors strings      A list of checks whose errors will be shown as warnings. Example: 'IsPrivilegedUser,Swap'. Value 'all' ignores errors from all checks.
      --image-repository string              Choose a container registry to pull control plane images from (default "k8s.gcr.io")
      --kubernetes-version string            Choose a specific Kubernetes version for the control plane. (default "stable-1")
      --node-name string                     Specify the node name.
      --pod-network-cidr string              Specify range of IP addresses for the pod network. If set, the control plane will automatically allocate CIDRs for every node.
      --service-cidr string                  Use alternative range of IP address for service VIPs. (default "10.96.0.0/12")
      --service-dns-domain string            Use alternative domain for services, e.g. "myorg.internal". (default "cluster.local")
      --skip-certificate-key-print           Don't print the key used to encrypt the control-plane certificates.
      --skip-phases strings                  List of phases to be skipped
      --skip-token-print                     Skip printing of the default bootstrap token generated by 'kubeadm init'.
      --token string                         The token to use for establishing bidirectional trust between nodes and control-plane nodes. The format is [a-z0-9]{6}\.[a-z0-9]{16} - e.g. abcdef.0123456789abcdef
      --token-ttl duration                   The duration before the token is automatically deleted (e.g. 1s, 2m, 3h). If set to '0', the token will never expire (default 24h0m0s)
      --upload-certs                         Upload control-plane certificates to the kubeadm-certs Secret.

Global Flags:
      --log-file string          If non-empty, use this log file
      --log-file-max-size uint   Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
      --rootfs string            [EXPERIMENTAL] The path to the 'real' host root filesystem.
      --skip-headers             If true, avoid header prefixes in the log messages
      --skip-log-headers         If true, avoid headers when opening log files
  -v, --v Level                  number for the log level verbosity

Use "kubeadm init [command] --help" for more information about a command.
           

The pitfalls begin

[root@esb-edi-test ~]# kubeadm init
W0801 13:32:36.665845  100602 version.go:98] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W0801 13:32:36.666067  100602 version.go:99] falling back to the local client version: v1.15.1
           

Being my first install, even a warning looked alarming, so I got rid of it by adding the --kubernetes-version v1.15.1 flag:

[root@esb-edi-test ~]# kubeadm init --kubernetes-version v1.15.1
[init] Using Kubernetes version: v1.15.1
[preflight] Running pre-flight checks
        [WARNING HTTPProxy]: Connection to "https://10.90.14.125" uses proxy "http://z15075:***@<proxy-ip>:8080/". If that is not intended, adjust your proxy settings
        [WARNING HTTPProxyCIDR]: connection to "10.96.0.0/12" uses proxy "http://z15075:***@<proxy-ip>:8080/". This may lead to malfunctional cluster setup. Make sure that Pod and Services IP ranges specified correctly as exceptions in proxy configuration
           

My VMs are all on an internal network behind an HTTP proxy, and since no whitelist was configured, traffic to internal IPs also went through the proxy. Add the relevant addresses to no_proxy: every node IP, the cluster CIDR, the pod CIDR, and the nameservers. Try again:

vi /etc/profile
export http_proxy=http://username:password@proxy-server-ip:port/
export no_proxy="10.90.14.125,10.90.14.124,10.90.14.123,10.72.66.37,10.72.66.36,10.96.0.0/12,10.244.0.0/16"
source /etc/profile
kubeadm init --kubernetes-version v1.15.1
[init] Using Kubernetes version: v1.15.1
[preflight] Running pre-flight checks
        [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.1. Latest validated version: 18.09
           

The Docker version is too new; uninstall it and reinstall a supported version:

yum -y remove docker-ce.x86_64
yum -y remove docker-ce-cli.x86_64
yum -y remove containerd.io.x86_64
rm -rf /var/lib/docker
yum list docker-ce.x86_64  --showduplicates |sort -r
yum install -y --setopt=obsoletes=0 docker-ce-18.09.8-3.el7
systemctl start docker
systemctl enable docker
           

Try again:

[init] Using Kubernetes version: v1.15.1
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR Swap]: running with swap on is not supported. Please disable swap
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
           

Disable all swap: swapoff -a.
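Note that swapoff -a only lasts until the next reboot; to keep the preflight check passing permanently, also comment out the swap entries in /etc/fstab. A minimal sketch:

```shell
# Disable swap immediately...
swapoff -a
# ...and keep it off across reboots by commenting out any swap
# lines in /etc/fstab (writes a .bak backup first)
sed -i.bak '/\sswap\s/s/^/#/' /etc/fstab
```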

Try again:

[root@esb-edi-test ~]# kubeadm init --kubernetes-version v1.15.1
[init] Using Kubernetes version: v1.15.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.15.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: unexpected EOF
, error: exit status 1
        [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.15.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: unexpected EOF
, error: exit status 1
        [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.15.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
        [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-proxy:v1.15.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: unexpected EOF
, error: exit status 1
        [ERROR ImagePull]: failed to pull image k8s.gcr.io/pause:3.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: unexpected EOF
, error: exit status 1
        [ERROR ImagePull]: failed to pull image k8s.gcr.io/etcd:3.3.10: output: Error response from daemon: Get https://k8s.gcr.io/v2/: unexpected EOF
, error: exit status 1
        [ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:1.3.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: unexpected EOF
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
           

kubeadm failed to pull the images from k8s.gcr.io; the registry is blocked by the firewall. Possible solutions:

  • Configure an HTTP proxy for Docker that can reach k8s.gcr.io
  • Use --image-repository to point kubeadm at a reachable registry
  • Pull all the required images with Docker and re-tag them with the Google names
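
The third option can be sketched as follows, using the image list from the error output above (the Alibaba Cloud mirror path is an assumption, the same one used with --image-repository below):

```shell
# Sketch of option 3: pull each image kubeadm needs from a reachable
# mirror, then re-tag it with the k8s.gcr.io name kubeadm expects.
MIRROR=registry.cn-hangzhou.aliyuncs.com/google_containers
for img in kube-apiserver:v1.15.1 kube-controller-manager:v1.15.1 \
           kube-scheduler:v1.15.1 kube-proxy:v1.15.1 \
           pause:3.1 etcd:3.3.10 coredns:1.3.1; do
  docker pull "$MIRROR/$img"
  docker tag  "$MIRROR/$img" "k8s.gcr.io/$img"
  docker rmi  "$MIRROR/$img"   # optional: drop the mirror tag
done
```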

Going with the second option, try again:

[root@esb-edi-test ~]# kubeadm init --kubernetes-version v1.15.1 --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers
[init] Using Kubernetes version: v1.15.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [esb-edi-test kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.90.14.125]
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [esb-edi-test localhost] and IPs [10.90.14.125 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [esb-edi-test localhost] and IPs [10.90.14.125 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 20.004059 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node esb-edi-test as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node esb-edi-test as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: zszy1a.8zcd3a5ah6p7zb19
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.90.14.125:6443 --token zszy1a.8zcd3a5ah6p7zb19 \
    --discovery-token-ca-cert-hash sha256:956a63dcf70eb07068f7d9bd676602a4195ae8bee07a8337a206a8eb3447aba8
           

Success! Now follow its instructions:

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
           

Check the components:

[root@esb-edi-test ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}
[root@esb-edi-test ~]# kubectl get nodes
NAME           STATUS     ROLES    AGE     VERSION
esb-edi-test   NotReady   master   8m28s   v1.15.1
           

master狀态為NotReady,需要配置網絡插件,流行的有flannel:

[root@esb-edi-test ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
           

Judging from that output alone, something probably went wrong. Check the pod status:

[root@esb-edi-test ~]# kubectl get pods -n kube-system
NAME                                   READY   STATUS              RESTARTS   AGE
coredns-6967fb4995-btkt6               0/1     ContainerCreating   0          19m
coredns-6967fb4995-s85q6               0/1     ContainerCreating   0          19m
etcd-esb-edi-test                      1/1     Running             0          17m
kube-apiserver-esb-edi-test            1/1     Running             0          18m
kube-controller-manager-esb-edi-test   1/1     Running             0          18m
kube-flannel-ds-amd64-ph85z            0/1     CrashLoopBackOff    4          3m34s
kube-proxy-8rsft                       1/1     Running             0          19m
kube-scheduler-esb-edi-test            1/1     Running             0          18m
           

Sure enough, coredns and kube-flannel aren't ready. What now? Look at the logs first:

[root@esb-edi-test kube-flannel]# kubectl --namespace kube-system logs kube-flannel-ds-amd64-ph85z
I0801 07:57:57.784315       1 main.go:514] Determining IP address of default interface
I0801 07:57:57.876204       1 main.go:527] Using interface with name eth0 and address 10.90.14.125
I0801 07:57:57.876270       1 main.go:544] Defaulting external address to interface address (10.90.14.125)
I0801 07:57:57.890683       1 kube.go:126] Waiting 10m0s for node controller to sync
I0801 07:57:57.890888       1 kube.go:309] Starting kube subnet manager
I0801 07:57:58.890987       1 kube.go:133] Node controller sync successful
I0801 07:57:58.891043       1 main.go:244] Created subnet manager: Kubernetes Subnet Manager - esb-edi-test
I0801 07:57:58.891054       1 main.go:247] Installing signal handlers
I0801 07:57:58.891379       1 main.go:386] Found network config - Backend type: vxlan
I0801 07:57:58.891499       1 vxlan.go:120] VXLAN config: VNI=1 Port=0 GBP=false DirectRouting=false
E0801 07:57:58.892231       1 main.go:289] Error registering network: failed to acquire lease: node "esb-edi-test" pod cidr not assigned
I0801 07:57:58.892332       1 main.go:366] Stopping shutdownHandler...
           

"pod cidr not assigned": no pod subnet was allocated? After some Googling, re-run init, this time with the pod CIDR, and the cluster (service) CIDR for good measure:

kubeadm init \
--kubernetes-version v1.15.1 \
--image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
--pod-network-cidr=10.244.0.0/16 \
--service-cidr=10.96.0.0/12 \
--apiserver-advertise-address=0.0.0.0
           
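One caveat when re-running init: kubeadm refuses to initialize a host that already carries control-plane state, so the previous attempt has to be undone first. A sketch:

```shell
# Revert everything the previous 'kubeadm init' did on this host
kubeadm reset -f
# Remove the stale admin kubeconfig copied earlier, if any
rm -f $HOME/.kube/config
```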

Status after reinstalling following the earlier steps:

[root@esb-edi-test ~]# kubectl get pods -n kube-system
NAME                                   READY   STATUS    RESTARTS   AGE
coredns-6967fb4995-8mh8x               1/1     Running   0          15m
coredns-6967fb4995-9mp9d               1/1     Running   0          15m
etcd-esb-edi-test                      1/1     Running   0          14m
kube-apiserver-esb-edi-test            1/1     Running   0          14m
kube-controller-manager-esb-edi-test   1/1     Running   0          14m
kube-flannel-ds-amd64-xpsdj            1/1     Running   0          6m24s
kube-proxy-fwwl5                       1/1     Running   0          15m
kube-scheduler-esb-edi-test            1/1     Running   0          14m
           

Joining nodes

Nodes are added with the kubeadm join command, which needs two parameters: the bootstrap token and the CA certificate hash.

Print the token on the master node:

[root@esb-edi-test ~]# kubeadm token list
TOKEN                     TTL       EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
ydlp5v.hefzcti5tlx8ls1u   52m       2019-08-02T16:40:13+08:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token
           

Print the sha256 hash of the CA certificate on the master node:

openssl x509 -in /etc/kubernetes/pki/ca.crt -noout -pubkey | openssl rsa -pubin -outform DER 2>/dev/null | sha256sum | cut -d' ' -f1
a8b4b7fc4965aa8779b1e8caf22949f123e1d13d87f0307e506a2e0a34c68a9f
           
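Alternatively, instead of extracting the token and hash by hand, kubeadm can print a ready-made join command (supported on recent versions, including 1.15):

```shell
# Creates a fresh bootstrap token and prints the complete
# 'kubeadm join ... --token ... --discovery-token-ca-cert-hash ...' command
kubeadm token create --print-join-command
```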

Join from node1 with kubeadm join:

[root@edi1 ~]# kubeadm join 10.90.14.125:6443 \
> --token ydlp5v.hefzcti5tlx8ls1u \
> --discovery-token-ca-cert-hash sha256:a8b4b7fc4965aa8779b1e8caf22949f123e1d13d87f0307e506a2e0a34c68a9f
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
           

View the node info on the master:

[root@esb-edi-test log]# kubectl get nodes
NAME           STATUS   ROLES    AGE     VERSION
edi1           Ready    <none>   7m19s   v1.15.1
esb-edi-test   Ready    master   23h     v1.15.1
           

Join the remaining nodes the same way:

[root@esb-edi-test log]# kubectl get nodes
NAME           STATUS   ROLES    AGE    VERSION
edi1           Ready    <none>   11m    v1.15.1
edi2           Ready    <none>   61s    v1.15.1
edi3           Ready    <none>   112s   v1.15.1
esb-edi-test   Ready    master   23h    v1.15.1
           

Summary

Installing a cluster with kubeadm is quite convenient, but there are still plenty of pitfalls along the way. Ultimately they come down to my shaky grasp of the fundamentals, container networking in particular, which I need to study properly.
