Kubernetes - Connecting to VMs with Xshell & Building a Basic Kubernetes Cluster
I will build a basic Kubernetes cluster with one master node and two worker nodes.
My virtual machines:
- Three CentOS7 VMs.
- The master node has 2G of virtual memory and the worker nodes have 1G. Kubernetes officially recommends at least 2G of memory, but my computer doesn't have enough RAM, and this is only a demonstration, not a production environment. The worker nodes' memory is on the small side, but a few parameters can be set to avoid problems, so don't worry about it (I have already had to rebuild the cluster several times because I allocated too little memory; the master node really should not have less than 2G, otherwise commands like kubectl get node stop working properly, at least that was my experience).
- All virtual disks are 8G.
Creating the virtual machines:
- Installing CentOS7 on VirtualBox
Now let's assume you have already created the three virtual machines.
Connecting to the VMs with Xshell
Operating the VMs inside VirtualBox to run commands always feels awkward: you can't copy and paste commands in the VM's terminal, and the VM keeps capturing the mouse and keyboard. So I use Xshell to connect to these VMs.
To connect to the VMs with Xshell, you first need their IP addresses (see: introduction to the ip addr command).
Start the VM and run the ip addr command.

The IP address is 192.168.1.238/24; this VM will be the master node of the Kubernetes cluster.
Then shut the VM down normally. I won't demonstrate looking up the IP addresses of the other two VMs; the procedure is the same.
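If you only want the IPv4 lines from that output, a quick filter like the one below works (optional, just a convenience):
# Show only the IPv4 address lines from ip addr
ip -4 addr show | grep inet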
Right-click the VM, click Start, and choose Headless Start, since Xshell will provide the VM's interface.
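If you prefer the command line over the VirtualBox GUI, the same headless start can be done with VBoxManage; a sketch, where "k8s-master" is a placeholder for whatever your VM is named in VirtualBox:
# Start the VM without opening a GUI window; replace "k8s-master" with your VM's name
VBoxManage startvm "k8s-master" --type headless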
Open Xshell and click the New icon.
Enter a name and the VM's IP address, then click Connect.
Click Accept and Save.
Enter the username (usually root), check Remember username, and click OK.
Enter the password, check Remember password, and click OK.
Xshell is now connected to the VM.
Click this icon to open multiple sessions in a single Xshell window.
I won't demonstrate connecting the other two VMs.
Change the hostnames of the two worker nodes to node-1 and node-3 respectively. You can pick any names you like, but the separator cannot be _; - and . are allowed. The point of renaming the worker nodes is that all three VMs must have different hostnames, otherwise problems will occur. If your three machines already have distinct hostnames, you can skip this step.
hostnamectl set-hostname node-1
[root@localhost ~]# hostnamectl set-hostname node-1
[root@node-1 ~]#
hostnamectl set-hostname node-3
[root@localhost ~]# hostnamectl set-hostname node-3
[root@node-3 ~]#
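Optionally (an extra step that is not in the original walkthrough), you can add the worker hostnames to /etc/hosts on each VM so they resolve by name; the kubeadm join output later in this post shows "hostname node-1 could not be reached" warnings, which this avoids. The worker IP addresses below are placeholders; substitute the ones you found with ip addr:
# Placeholder worker IPs -- replace with your actual VM addresses
cat >> /etc/hosts <<'EOF'
192.168.1.239 node-1
192.168.1.240 node-3
EOF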
Installing a container runtime
A container runtime must be installed on all three VMs.
Since v1.14.0, kubeadm automatically detects the container runtime on a Linux node by scanning a list of well-known UNIX domain sockets. The table below lists the runtimes it can detect and their socket paths.
| Runtime | Path to Unix domain socket |
| --- | --- |
| Docker | /var/run/docker.sock |
| containerd | /run/containerd/containerd.sock |
| CRI-O | /var/run/crio/crio.sock |
I chose to install Docker. For installing Docker on CentOS7 you can refer to the blog post below. I suggest not using the automated install script; it is better to install a specific Docker version manually. The Docker version I installed is 19.03.13, and you may want to install the same one, because mismatched Kubernetes and Docker versions can lead to compatibility problems.
- Installing Docker on CentOS 7.3
Now let's assume everyone has Docker installed.
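For reference, a rough sketch of installing that pinned Docker version with yum, assuming the docker-ce repository from the linked post is already configured (the exact package release string may differ on your mirror):
# Install a specific Docker version (requires the docker-ce yum repo to be set up already)
yum install -y docker-ce-19.03.13 docker-ce-cli-19.03.13 containerd.io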
If you hit the problem below, Docker is probably not running. I am using the root user; if you are not root, prefix the commands with sudo. sudo is a Linux system administration tool that lets the administrator allow ordinary users to run some or all root commands.
[root@localhost ~]# docker version
Client: Docker Engine - Community
Version: 19.03.13
API version: 1.40
Go version: go1.13.15
Git commit: 4484c46d9d
Built: Wed Sep 16 17:03:45 2020
OS/Arch: linux/amd64
Experimental: false
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Just start Docker.
[root@localhost ~]# service docker start
Redirecting to /bin/systemctl start docker.service
[root@localhost ~]# docker version
Client: Docker Engine - Community
Version: 19.03.13
API version: 1.40
Go version: go1.13.15
Git commit: 4484c46d9d
Built: Wed Sep 16 17:03:45 2020
OS/Arch: linux/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 19.03.13
API version: 1.40 (minimum version 1.12)
Go version: go1.13.15
Git commit: 4484c46d9d
Built: Wed Sep 16 17:02:21 2020
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.3.7
GitCommit: 8fba4e9a7d01810a393d5d25a3621dc101981175
runc:
Version: 1.0.0-rc10
GitCommit: dc9208a3303feef5b3839f4323d9beb36df0a9dd
docker-init:
Version: 0.18.0
GitCommit: fec3683
Installing kubeadm, kubelet and kubectl
The following packages need to be installed on every VM:
- kubeadm: the command used to bootstrap the cluster.
- kubelet: runs on every node in the cluster and starts pods, containers, and so on.
- kubectl: the command-line tool used to talk to the cluster.
Use Alibaba Cloud's yum mirror; installing from the official yum repository times out:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
[root@localhost ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
> [kubernetes]
> name=Kubernetes
> baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
> enabled=1
> gpgcheck=0
> repo_gpgcheck=0
> gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
> http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
> EOF
Set SELinux to permissive mode (effectively disabling it). This is required to allow containers to access the host filesystem, which is needed by pod networks, for example. You have to do this until SELinux support is improved in the kubelet.
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
[root@localhost ~]# setenforce 0
[root@localhost ~]# sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
Next, install kubeadm, kubelet and kubectl.
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
[root@localhost ~]# yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
已加載插件:fastestmirror
Loading mirror speeds from cached hostfile
* base: mirrors.163.com
* extras: mirrors.163.com
* updates: mirrors.163.com
base | 3.6 kB 00:00:00
docker-ce-stable | 3.5 kB 00:00:00
extras | 2.9 kB 00:00:00
kubernetes | 1.4 kB 00:00:00
updates | 2.9 kB 00:00:00
kubernetes/primary | 83 kB 00:00:00
kubernetes 612/612
正在解決依賴關系
--> 正在檢查事務
---> 軟體包 kubeadm.x86_64.0.1.20.1-0 将被 安裝
--> 正在處理依賴關系 kubernetes-cni >= 0.8.6,它被軟體包 kubeadm-1.20.1-0.x86_64 需要
--> 正在處理依賴關系 cri-tools >= 1.13.0,它被軟體包 kubeadm-1.20.1-0.x86_64 需要
---> 軟體包 kubectl.x86_64.0.1.20.1-0 将被 安裝
---> 軟體包 kubelet.x86_64.0.1.20.1-0 将被 安裝
--> 正在處理依賴關系 socat,它被軟體包 kubelet-1.20.1-0.x86_64 需要
--> 正在處理依賴關系 conntrack,它被軟體包 kubelet-1.20.1-0.x86_64 需要
--> 正在檢查事務
---> 軟體包 conntrack-tools.x86_64.0.1.4.4-7.el7 将被 安裝
--> 正在處理依賴關系 libnetfilter_cttimeout.so.1(LIBNETFILTER_CTTIMEOUT_1.1)(64bit),它被軟體包 conntrack-tools-1.4.4-7.el7.x86_64 需要
--> 正在處理依賴關系 libnetfilter_cttimeout.so.1(LIBNETFILTER_CTTIMEOUT_1.0)(64bit),它被軟體包 conntrack-tools-1.4.4-7.el7.x86_64 需要
--> 正在處理依賴關系 libnetfilter_cthelper.so.0(LIBNETFILTER_CTHELPER_1.0)(64bit),它被軟體包 conntrack-tools-1.4.4-7.el7.x86_64 需要
--> 正在處理依賴關系 libnetfilter_queue.so.1()(64bit),它被軟體包 conntrack-tools-1.4.4-7.el7.x86_64 需要
--> 正在處理依賴關系 libnetfilter_cttimeout.so.1()(64bit),它被軟體包 conntrack-tools-1.4.4-7.el7.x86_64 需要
--> 正在處理依賴關系 libnetfilter_cthelper.so.0()(64bit),它被軟體包 conntrack-tools-1.4.4-7.el7.x86_64 需要
---> 軟體包 cri-tools.x86_64.0.1.13.0-0 将被 安裝
---> 軟體包 kubernetes-cni.x86_64.0.0.8.7-0 将被 安裝
---> 軟體包 socat.x86_64.0.1.7.3.2-2.el7 将被 安裝
--> 正在檢查事務
---> 軟體包 libnetfilter_cthelper.x86_64.0.1.0.0-11.el7 将被 安裝
---> 軟體包 libnetfilter_cttimeout.x86_64.0.1.0.0-7.el7 将被 安裝
---> 軟體包 libnetfilter_queue.x86_64.0.1.0.2-2.el7_2 将被 安裝
--> 解決依賴關系完成
依賴關系解決
=================================================================================================================================================================================================================
Package 架構 版本 源 大小
=================================================================================================================================================================================================================
正在安裝:
kubeadm x86_64 1.20.1-0 kubernetes 8.3 M
kubectl x86_64 1.20.1-0 kubernetes 8.5 M
kubelet x86_64 1.20.1-0 kubernetes 20 M
為依賴而安裝:
conntrack-tools x86_64 1.4.4-7.el7 base 187 k
cri-tools x86_64 1.13.0-0 kubernetes 5.1 M
kubernetes-cni x86_64 0.8.7-0 kubernetes 19 M
libnetfilter_cthelper x86_64 1.0.0-11.el7 base 18 k
libnetfilter_cttimeout x86_64 1.0.0-7.el7 base 18 k
libnetfilter_queue x86_64 1.0.2-2.el7_2 base 23 k
socat x86_64 1.7.3.2-2.el7 base 290 k
事務概要
=================================================================================================================================================================================================================
安裝 3 軟體包 (+7 依賴軟體包)
總下載下傳量:61 M
安裝大小:262 M
Downloading packages:
(1/10): conntrack-tools-1.4.4-7.el7.x86_64.rpm | 187 kB 00:00:00
(2/10): e4c317024d29cf4972b71bde08e7bde5648beb1897005d3f3ebfe363d4d89b1b-kubeadm-1.20.1-0.x86_64.rpm | 8.3 MB 00:00:01
(3/10): f431ef4494e7301b760c73f9e2ea3048c7d6a443bf71602b41c86190a604a479-kubectl-1.20.1-0.x86_64.rpm | 8.5 MB 00:00:01
(4/10): 15c57dcc3d83abca74b887cba9e53c0be2b3329bcf4a0c534b99a76653971810-kubelet-1.20.1-0.x86_64.rpm | 20 MB 00:00:02
(5/10): libnetfilter_cttimeout-1.0.0-7.el7.x86_64.rpm | 18 kB 00:00:00
(6/10): libnetfilter_queue-1.0.2-2.el7_2.x86_64.rpm | 23 kB 00:00:00
(7/10): socat-1.7.3.2-2.el7.x86_64.rpm | 290 kB 00:00:00
(8/10): libnetfilter_cthelper-1.0.0-11.el7.x86_64.rpm | 18 kB 00:00:00
(9/10): db7cb5cb0b3f6875f54d10f02e625573988e3e91fd4fc5eef0b1876bb18604ad-kubernetes-cni-0.8.7-0.x86_64.rpm | 19 MB 00:00:02
(10/10): 14bfe6e75a9efc8eca3f638eb22c7e2ce759c67f95b43b16fae4ebabde1549f3-cri-tools-1.13.0-0.x86_64.rpm | 5.1 MB 00:00:08
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
總計 7.4 MB/s | 61 MB 00:00:08
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
正在安裝 : libnetfilter_cthelper-1.0.0-11.el7.x86_64 1/10
正在安裝 : socat-1.7.3.2-2.el7.x86_64 2/10
正在安裝 : libnetfilter_cttimeout-1.0.0-7.el7.x86_64 3/10
正在安裝 : cri-tools-1.13.0-0.x86_64 4/10
正在安裝 : kubectl-1.20.1-0.x86_64 5/10
正在安裝 : libnetfilter_queue-1.0.2-2.el7_2.x86_64 6/10
正在安裝 : conntrack-tools-1.4.4-7.el7.x86_64 7/10
正在安裝 : kubernetes-cni-0.8.7-0.x86_64 8/10
正在安裝 : kubelet-1.20.1-0.x86_64 9/10
正在安裝 : kubeadm-1.20.1-0.x86_64 10/10
驗證中 : conntrack-tools-1.4.4-7.el7.x86_64 1/10
驗證中 : kubernetes-cni-0.8.7-0.x86_64 2/10
驗證中 : libnetfilter_queue-1.0.2-2.el7_2.x86_64 3/10
驗證中 : kubectl-1.20.1-0.x86_64 4/10
驗證中 : kubeadm-1.20.1-0.x86_64 5/10
驗證中 : cri-tools-1.13.0-0.x86_64 6/10
驗證中 : libnetfilter_cttimeout-1.0.0-7.el7.x86_64 7/10
驗證中 : socat-1.7.3.2-2.el7.x86_64 8/10
驗證中 : libnetfilter_cthelper-1.0.0-11.el7.x86_64 9/10
驗證中 : kubelet-1.20.1-0.x86_64 10/10
已安裝:
kubeadm.x86_64 0:1.20.1-0 kubectl.x86_64 0:1.20.1-0 kubelet.x86_64 0:1.20.1-0
作為依賴被安裝:
conntrack-tools.x86_64 0:1.4.4-7.el7 cri-tools.x86_64 0:1.13.0-0 kubernetes-cni.x86_64 0:0.8.7-0 libnetfilter_cthelper.x86_64 0:1.0.0-11.el7 libnetfilter_cttimeout.x86_64 0:1.0.0-7.el7
libnetfilter_queue.x86_64 0:1.0.2-2.el7_2 socat.x86_64 0:1.7.3.2-2.el7
完畢!
Enable and start kubelet.
systemctl enable --now kubelet
[root@localhost ~]# systemctl enable --now kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
Configuring the VM environment
All three VMs need this configuration.
Stop the firewall on the VM.
systemctl stop firewalld
Check the firewall's status.
systemctl status firewalld
[root@localhost ~]# systemctl stop firewalld
[root@localhost ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
Active: inactive (dead) since 六 2020-12-26 21:41:39 CST; 48min ago
Docs: man:firewalld(1)
Main PID: 658 (code=exited, status=0/SUCCESS)
12月 26 21:39:29 localhost.localdomain firewalld[658]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -t nat -D OUTPUT -m addrtype --dst-type LOCAL -j DOCKER' failed: iptables: No chain/tar...by that name.
12月 26 21:39:29 localhost.localdomain firewalld[658]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -t nat -D PREROUTING' failed: iptables: Bad rule (does a matching rule exist in that chain?).
12月 26 21:39:29 localhost.localdomain firewalld[658]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -t nat -D OUTPUT' failed: iptables: Bad rule (does a matching rule exist in that chain?).
12月 26 21:39:29 localhost.localdomain firewalld[658]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -t filter -X DOCKER' failed: iptables: Too many links.
12月 26 21:39:29 localhost.localdomain firewalld[658]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -t filter -X DOCKER-ISOLATION-STAGE-1' failed: iptables: Too many links.
12月 26 21:39:29 localhost.localdomain firewalld[658]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -t filter -F DOCKER-ISOLATION' failed: iptables: No chain/target/match by that name.
12月 26 21:39:29 localhost.localdomain firewalld[658]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -t filter -X DOCKER-ISOLATION' failed: iptables: No chain/target/match by that name.
12月 26 21:39:29 localhost.localdomain firewalld[658]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -D FORWARD -i docker0 -o docker0 -j DROP' failed: iptables: Bad rule (does a matching r...that chain?).
12月 26 21:41:39 localhost.localdomain systemd[1]: Stopping firewalld - dynamic firewall daemon...
12月 26 21:41:39 localhost.localdomain systemd[1]: Stopped firewalld - dynamic firewall daemon.
Hint: Some lines were ellipsized, use -l to show in full.
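Note that stopping the service only turns the firewall off for the current boot; if you also want it to stay off after a reboot (optional, not part of the original steps), disable the unit as well:
# Prevent firewalld from starting again at boot
systemctl disable firewalld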
Then run the following command so that Docker starts on boot.
systemctl enable docker.service
Then create daemon.json to set Docker's cgroup driver to systemd.
vi /etc/docker/daemon.json
Enter the following content:
{
"exec-opts":["native.cgroupdriver=systemd"]
}
Restart Docker, then check that it is running.
systemctl restart docker
systemctl status docker
[root@localhost ~]# systemctl status docker
● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
Active: active (running) since 六 2020-12-26 22:38:39 CST; 8s ago
Docs: https://docs.docker.com
Main PID: 15618 (dockerd)
Tasks: 21
Memory: 76.2M
CGroup: /system.slice/docker.service
└─15618 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
12月 26 22:38:39 localhost.localdomain dockerd[15618]: time="2020-12-26T22:38:39.233733538+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="...s.TaskDelete"
12月 26 22:38:39 localhost.localdomain dockerd[15618]: time="2020-12-26T22:38:39.233762335+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="...s.TaskDelete"
12月 26 22:38:39 localhost.localdomain dockerd[15618]: time="2020-12-26T22:38:39.427987979+08:00" level=info msg="Removing stale sandbox f97dd1457382757e80c758b2e96a099dc36b43fb1203fa55402ecf7...c703891c889)"
12月 26 22:38:39 localhost.localdomain dockerd[15618]: time="2020-12-26T22:38:39.434206692+08:00" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [...retrying...."
12月 26 22:38:39 localhost.localdomain dockerd[15618]: time="2020-12-26T22:38:39.449258659+08:00" level=info msg="There are old running containers, the network config will not take affect"
12月 26 22:38:39 localhost.localdomain dockerd[15618]: time="2020-12-26T22:38:39.483874720+08:00" level=info msg="Loading containers: done."
12月 26 22:38:39 localhost.localdomain dockerd[15618]: time="2020-12-26T22:38:39.517010578+08:00" level=info msg="Docker daemon" commit=4484c46d9d graphdriver(s)=overlay2 version=19.03.13
12月 26 22:38:39 localhost.localdomain dockerd[15618]: time="2020-12-26T22:38:39.517959158+08:00" level=info msg="Daemon has completed initialization"
12月 26 22:38:39 localhost.localdomain dockerd[15618]: time="2020-12-26T22:38:39.543602109+08:00" level=info msg="API listen on /var/run/docker.sock"
12月 26 22:38:39 localhost.localdomain systemd[1]: Started Docker Application Container Engine.
Hint: Some lines were ellipsized, use -l to show in full.
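To confirm Docker actually picked up the systemd cgroup driver after the restart (a quick optional check):
# Should print "systemd" once the daemon.json change is in effect
docker info --format '{{.CgroupDriver}}'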
Then run the following command.
# Disable swap for the current session
swapoff -a
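Note that swapoff -a only disables swap until the next reboot. Since kubelet by default refuses to run with swap enabled, you may also want to comment out the swap entry in /etc/fstab so it stays off permanently (optional; a sketch using sed):
# Comment out any swap lines so swap stays disabled after a reboot
sed -i '/\sswap\s/ s/^/#/' /etc/fstab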
Creating the cluster with kubeadm
The cluster is created with kubeadm on the master node; the worker nodes do not need to run any of the commands below.
Assuming you have configured the environment on all three VMs, the next step is to create the cluster with kubeadm on the master node.
Check which parameters the kubeadm command takes.
kubeadm --help
[root@localhost ~]# kubeadm --help
┌──────────────────────────────────────────────────────────┐
│ KUBEADM │
│ Easily bootstrap a secure Kubernetes cluster │
│ │
│ Please give us feedback at: │
│ https://github.com/kubernetes/kubeadm/issues │
└──────────────────────────────────────────────────────────┘
Usage:
kubeadm [command]
Available Commands:
alpha Kubeadm experimental sub-commands
certs Commands related to handling kubernetes certificates
completion Output shell completion code for the specified shell (bash or zsh)
config Manage configuration for a kubeadm cluster persisted in a ConfigMap in the cluster
help Help about any command
init Run this command in order to set up the Kubernetes control plane
join Run this on any machine you wish to join an existing cluster
reset Performs a best effort revert of changes made to this host by 'kubeadm init' or 'kubeadm join'
token Manage bootstrap tokens
upgrade Upgrade your cluster smoothly to a newer version with this command
version Print the version of kubeadm
Flags:
--add-dir-header If true, adds the file directory to the header of the log messages
-h, --help help for kubeadm
--log-file string If non-empty, use this log file
--log-file-max-size uint Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
--one-output If true, only write logs to their native severity level (vs also writing to each lower severity level
--rootfs string [EXPERIMENTAL] The path to the 'real' host root filesystem.
--skip-headers If true, avoid header prefixes in the log messages
--skip-log-headers If true, avoid headers when opening log files
-v, --v Level number for the log level verbosity
Use "kubeadm [command] --help" for more information about a command.
Example usage: create a two-node cluster with one control-plane node (that is, the master node, which controls the cluster) and one worker node (where your workloads, such as Pods and Deployments, run). Two commands are involved: kubeadm init (control-plane node) and kubeadm join <arguments-returned-from-init> (worker node).
Example usage:
Create a two-machine cluster with one control-plane node
(which controls the cluster), and one worker node
(where your workloads, like Pods and Deployments run).
┌──────────────────────────────────────────────────────────┐
│ On the first machine: │
├──────────────────────────────────────────────────────────┤
│ control-plane# kubeadm init │
└──────────────────────────────────────────────────────────┘
┌──────────────────────────────────────────────────────────┐
│ On the second machine: │
├──────────────────────────────────────────────────────────┤
│ worker# kubeadm join <arguments-returned-from-init> │
└──────────────────────────────────────────────────────────┘
You can then repeat the second step on as many other machines as you like.
- kubeadm init: run this command to initialize the control plane of a Kubernetes cluster.
- kubeadm join: run this command to join an existing Kubernetes cluster.
Check the parameters of the kubeadm init command.
kubeadm init --help
There are quite a few parameters, so I won't list them all; here are the commonly used kubeadm init parameters:
- --kubernetes-version: specifies the Kubernetes version. I chose v1.20.1 because that version of Kubernetes is compatible with the Docker version I installed.
- --pod-network-cidr: specifies the IP range for the pod network. Its value depends on which network add-on you pick; I chose the Flannel network, so the value must be 10.244.0.0/16. Flannel is simpler than the other networks, needs less configuration, and is still fairly complete, which makes it friendly for a first Kubernetes cluster.
- --apiserver-advertise-address: specifies the IP address the master node advertises; if omitted, a network interface is detected automatically, usually the internal IP.
- --ignore-preflight-errors: a list of checks whose errors will be shown as warnings. The value all ignores errors from every check.
- --image-repository: selects a container registry to pull control-plane images from (the default is k8s.gcr.io, which is so slow to pull from that it times out, so I switch to Alibaba Cloud's registry).
If your local environment cannot meet the hardware requirements of a Kubernetes cluster, you can set --ignore-preflight-errors=all to avoid problems.
Now we can finally run the kubeadm init command. The IP address of my cluster's master node is 192.168.1.238; fill in your own master node's IP address here.
kubeadm init --kubernetes-version=v1.20.1 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.1.238 --ignore-preflight-errors=all --image-repository=registry.aliyuncs.com/google_containers
[root@localhost ~]# kubeadm init --kubernetes-version=v1.20.1 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.1.238 --ignore-preflight-errors=all --image-repository=registry.aliyuncs.com/google_containers
[init] Using Kubernetes version: v1.20.1
[preflight] Running pre-flight checks
[WARNING NumCPU]: the number of available CPUs 1 is less than the required 2
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost.localdomain] and IPs [10.96.0.1 192.168.1.238]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost localhost.localdomain] and IPs [192.168.1.238 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost localhost.localdomain] and IPs [192.168.1.238 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 19.501884 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: ramuta.ko6wlounsq2uxzyt
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.1.238:6443 --token ramuta.ko6wlounsq2uxzyt \
--discovery-token-ca-cert-hash sha256:aacb271cc8b80f1eda32aef55158c83ce69ba391138fd14533f4c05400bbc5c4
If you are the root user, run the following command.
export KUBECONFIG=/etc/kubernetes/admin.conf
If you are not the root user, run these commands.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Then run the following command.
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
I ran into the problem below; if you don't, you can skip the following steps.
[root@localhost ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
The connection to the server raw.githubusercontent.com was refused - did you specify the right host or port?
Many of the fixes suggested online didn't work (even with a VPN, the kubectl apply command still failed), so I turned on the VPN locally, opened https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml directly in a browser, and copied its content.
Create a yaml file on your local machine and paste the content into it.
Click the icon below (Xftp needs to be installed on your local machine).
Then just drag the file into the VM with Xftp.
[root@localhost ~]# ll
總用量 8
-rw-------. 1 root root 1245 12月 25 21:45 anaconda-ks.cfg
-rw-r--r--. 1 root root 5042 12月 27 00:00 kube-flannel.yaml
[root@localhost ~]# kubectl apply -f kube-flannel.yaml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
With that, a Kubernetes cluster with one master node is up and running; next, the worker nodes need to join the cluster.
Joining the worker nodes to the Kubernetes cluster
To join the worker nodes to the Kubernetes cluster, run the following commands on both worker node VMs.
By default, tokens expire after 24 hours. If you need to add a node to the cluster after the current token has expired, you can create a new token by running the following command on the control-plane node:
kubeadm token create
The output is similar to:
5didvk.d09sbcov8ph2amjw
If you do not have the value of --discovery-token-ca-cert-hash, you can get it by running the following command chain on the control-plane node:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
openssl dgst -sha256 -hex | sed 's/^.* //'
The output is similar to:
8cb2de97839780a412b93877f8507ad6c94f73add17d5d7058e91741c9d5ec78
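Alternatively, kubeadm can generate a complete join command (a new token plus the CA cert hash) in one step; run this on the control-plane node:
# Prints a ready-to-use "kubeadm join ... --token ... --discovery-token-ca-cert-hash ..." line
kubeadm token create --print-join-command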
Run the command below on both worker node VMs to join them to the Kubernetes cluster; substitute your own --token and --discovery-token-ca-cert-hash values.
kubeadm join 192.168.1.238:6443 --token ramuta.ko6wlounsq2uxzyt \
--discovery-token-ca-cert-hash sha256:aacb271cc8b80f1eda32aef55158c83ce69ba391138fd14533f4c05400bbc5c4 \
--ignore-preflight-errors=all
node-1 joins the Kubernetes cluster.
[root@localhost ~]# kubeadm join 192.168.1.238:6443 --token ramuta.ko6wlounsq2uxzyt \
> --discovery-token-ca-cert-hash sha256:aacb271cc8b80f1eda32aef55158c83ce69ba391138fd14533f4c05400bbc5c4 \
> --ignore-preflight-errors=all
[preflight] Running pre-flight checks
[WARNING Hostname]: hostname "node-1" could not be reached
[WARNING Hostname]: hostname "node-1": lookup node-1 on 192.168.1.1:53: no such host
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
node-3 joins the Kubernetes cluster.
[root@localhost ~]# kubeadm join 192.168.1.238:6443 --token ramuta.ko6wlounsq2uxzyt \
> --discovery-token-ca-cert-hash sha256:aacb271cc8b80f1eda32aef55158c83ce69ba391138fd14533f4c05400bbc5c4 \
> --ignore-preflight-errors=all
[preflight] Running pre-flight checks
[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[WARNING FileAvailable--etc-kubernetes-bootstrap-kubelet.conf]: /etc/kubernetes/bootstrap-kubelet.conf already exists
[WARNING Hostname]: hostname "node-3" could not be reached
[WARNING Hostname]: hostname "node-3": lookup node-3 on 192.168.1.1:53: no such host
[WARNING Port-10250]: Port 10250 is in use
[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Running the following command on the master node lists all the nodes in the Kubernetes cluster.
kubectl get node
[root@localhost ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
localhost.localdomain Ready control-plane,master 25m v1.20.1
node-1 Ready <none> 18m v1.20.1
node-3 Ready <none> 6m31s v1.20.1
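As an extra health check (not in the original post), you can also make sure the system pods, including the flannel and CoreDNS pods, are all Running:
kubectl get pods -n kube-system -o wide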
That's it for connecting to the VMs with Xshell and building a basic Kubernetes cluster.