Part 1: Introduction and Deployment Planning:
Cloud-native ecosystem: http://dockone.io/article/3006
Latest CNCF landscape: https://landscape.cncf.io/
Overview of the main CNCF cloud-native architecture: https://www.kubernetes.org.cn/5482.html
Kubernetes design architecture: https://www.kubernetes.org.cn/kubernetes%E8%AE%BE%E8%AE%A1%E6%9E%B6%E6%9E%84
Core advantages of K8s:
Automated container creation and deletion driven by YAML files
Faster elastic horizontal scaling of workloads
Dynamic discovery of newly scaled-out containers, automatically exposing them to users
Simpler and faster rollout and rollback of application code
1.1: K8s component overview:
https://k8smeetup.github.io/docs/admin/kube-apiserver/
kube-apiserver: the Kubernetes API server validates and configures data for the API objects, including pods, services, replicationcontrollers, and other API objects. The API server provides the REST operations and the frontend to the cluster's shared state; all other components interact through it.
https://k8smeetup.github.io/docs/admin/kube-scheduler/
kube-scheduler is a policy-rich, topology-aware, workload-specific component that has a large impact on cluster availability, performance, and capacity. The scheduler has to take into account individual and collective resource requirements, quality-of-service requirements, hardware/software/policy constraints, affinity and anti-affinity specifications, data locality, inter-workload interference, deadlines, and so on. Workload-specific requirements can be exposed through the API when necessary.
https://k8smeetup.github.io/docs/admin/kube-controller-manager/
kube-controller-manager: the Controller Manager is the management and control center inside the cluster, responsible for managing Nodes, Pod replicas, service endpoints (Endpoint), namespaces (Namespace), service accounts (ServiceAccount), and resource quotas (ResourceQuota). When a Node goes down unexpectedly, the Controller Manager detects it promptly and runs an automated repair flow, keeping the cluster in its desired working state.
https://k8smeetup.github.io/docs/admin/kube-proxy/
kube-proxy: the Kubernetes network proxy runs on each node. It reflects the services defined in the node's Kubernetes API and can do simple TCP/UDP stream forwarding or round-robin TCP/UDP forwarding across a set of backends. A user must create a service through the apiserver API to configure the proxy; in essence, kube-proxy implements access to Kubernetes services by maintaining network rules on the host and performing connection forwarding.
https://k8smeetup.github.io/docs/admin/kubelet/
kubelet: the primary node agent. It watches the pods that have been assigned to its node, with the following functions:
Report the node's status information to the master
Accept instructions and create docker containers in Pods
Prepare the data volumes a Pod needs
Return the running status of pods
Run container health checks on the node
https://github.com/etcd-io/etcd
etcd: developed by CoreOS, etcd is the key-value data store that Kubernetes uses by default to keep all cluster data. It supports clustered (distributed) operation; in production, set up a periodic backup mechanism for the etcd data.
Component overview for newer versions: https://kubernetes.io/zh/docs/concepts/overview/components/
1.2: Deployment planning diagram:
1.3: Installation methods:
1.3.1: Deployment tools:
Install with batch deployment tools (ansible/saltstack), manual binary installation, kubeadm, or apt-get/yum. The components run as daemons on the host, started via service scripts much like Nginx.
1.3.2: kubeadm:
https://kubernetes.io/zh/docs/setup/independent/create-cluster-kubeadm/ (beta stage)
Automated installation with the official k8s deployment tool kubeadm: install docker and related components on the masters and nodes, then run the initialization; the control-plane services on the masters and the services on the nodes all run as pods.
1.3.3: About kubeadm:
https://kubernetes.io/zh/docs/reference/setup-tools/kubeadm/kubeadm/
kubeadm v1.10 design introduction: https://github.com/kubernetes/kubeadm/blob/master/docs/design/design_v1.10.md
1.3.4: Installation notes:
Note: disable swap and selinux, tune the iptables-related kernel parameters, and enable net.ipv4.ip_forward = 1
Linux ip_forward controls packet forwarding:
https://www.jianshu.com/p/134eeae69281
vim /etc/sysctl.conf
net.ipv4.ip_forward = 1
sysctl -p
echo '* - nofile 65535' >> /etc/security/limits.conf
ulimit -SHn 65535
swapoff -a
vim /etc/fstab
# /swap    #comment out the swap line in /etc/fstab so swap stays off after reboot
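The steps above can be batched into one prep script to run on every master and node; a minimal sketch (values exactly as above, the script name is arbitrary):
#!/bin/bash
# node-prep.sh - consolidate the kernel/limits/swap preparation above
swapoff -a                                                 # turn swap off immediately
sed -ri 's/^([^#].*[ \t]swap[ \t].*)$/#\1/' /etc/fstab     # comment out swap entries so the change survives a reboot
grep -q '^net.ipv4.ip_forward' /etc/sysctl.conf || echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf
sysctl -p                                                  # apply the kernel parameter
echo '* - nofile 65535' >> /etc/security/limits.conf
ulimit -SHn 65535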
1.4: Kubernetes deployment walkthrough:
Steps:
We deploy the current second-newest release because we will later demonstrate a kubernetes version upgrade with kubeadm. The newest official release is currently 1.17.4, so this walkthrough starts with kubernetes 1.17.3 as the example.
1. Prepare the base environment
2. Deploy harbor and the haproxy high-availability reverse proxy
3. Install the specified versions of kubeadm, kubelet, kubectl, and docker on all masters
4. Install the specified versions of kubeadm, kubelet, and docker on all nodes; kubectl is optional on the nodes, depending on whether you need to run kubectl there for cluster and pod management
5. Run the kubeadm init initialization on a master node
6. Verify the master node status
7. On each node, use the kubeadm command to join the k8s master (requires the token generated by the master)
8. Verify the node status
9. Create a pod and test network communication
10. Deploy the Dashboard web service
11. Upgrade the k8s cluster
1.4.1: Base environment preparation:
Server environment:
Start from a minimal base system install; disable the firewall, selinux, and swap; update the package sources; synchronize time; install common utilities; reboot and then verify the base configuration.
1.4.2: harbor and the reverse proxy:
1.4.2.1: keepalived:
# apt update
# apt install keepalived
root@ha1:~# find / -name "keepalived.conf*"
/var/lib/dpkg/info/keepalived.conffiles
/usr/share/man/man5/keepalived.conf.5.gz
/usr/share/doc/keepalived/samples/keepalived.conf.vrrp
# cp /usr/share/doc/keepalived/samples/keepalived.conf.vrrp /etc/keepalived/keepalived.conf
root@ha1:~# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
   router_id lb01
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    garp_master_delay 10
    smtp_alert
    virtual_router_id 56
    priority 150
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.31.3.248 dev ens33 label ens33:1
    }
}
root@ha2:~# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
   router_id lb02
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    garp_master_delay 10
    smtp_alert
    virtual_router_id 56
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.31.3.248 dev ens33 label ens33:1
    }
}
# systemctl restart keepalived.service
ip a    #verify that the VIP 172.31.3.248 is bound to ens33:1
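To confirm failover, a quick check is to stop keepalived on ha1 and watch the VIP move to ha2 (hostnames as above):
root@ha1:~# systemctl stop keepalived    # simulate a failure of the MASTER
root@ha2:~# ip a | grep 172.31.3.248     # the VIP should now be bound on ha2
root@ha1:~# systemctl start keepalived   # ha1 takes the VIP back (higher priority)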
1.4.2.2: haproxy:
# vim /etc/haproxy/haproxy.cfg
listen k8s-api-6443
    bind 172.31.3.248:6443
    mode tcp
    server master1 172.31.3.101:6443 check inter 3s fall 3 rise 5
    server master2 172.31.3.102:6443 check inter 3s fall 3 rise 5
    server master3 172.31.3.103:6443 check inter 3s fall 3 rise 5
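Restart haproxy and confirm the frontend is listening; note that on the BACKUP proxy, binding a VIP that is not currently local may additionally require net.ipv4.ip_nonlocal_bind = 1 in sysctl:
# systemctl restart haproxy
# ss -tnlp | grep 6443    # 172.31.3.248:6443 should be in LISTEN state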
1.4.2.3: harbor:
root@habor:~# bash docker-install.sh
apt install docker-compose
# tar xvf harbor-offline-installer-v1.7.6.tgz -C /usr/local/src    #extract the offline installer (target path assumed from the prompt below)
root@habor:/usr/local/src/harbor# vim harbor.cfg
hostname = harbor.linux39.com
harbor_admin_password = 123456
./install.sh
Add a harbor.linux39.com entry to the Windows hosts file so the registry UI can be reached from the workstation.
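After install.sh finishes, a quick sanity check (the offline installer manages its services with docker-compose; run from the harbor directory):
root@habor:/usr/local/src/harbor# docker-compose ps    # every harbor container should show Up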
1.4.3: Install kubeadm and related components:
Install kubeadm, kubelet, kubectl, docker, and related software on the master and node machines.
1.4.3.1: Install docker on all masters and nodes:
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.17.md#downloads-for-v1174 #install a docker version validated for this release
Install docker 19.03:
# Install the required system utilities
# sudo apt-get update
#apt-get -y install apt-transport-https ca-certificates curl software-properties-common
Install the GPG key:
# curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
Add the apt repository:
# sudo add-apt-repository "deb [arch=amd64] http://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
Update the package index:
# apt-get -y update
List the installable Docker versions:
# apt-cache madison docker-ce docker-ce-cli
Install and start docker 19.03.8:
# apt install docker-ce=5:19.03.8~3-0~ubuntu-bionic docker-ce-cli=5:19.03.8~3-0~ubuntu-bionic -y
# systemctl start docker && systemctl enable docker
Verify the docker version:
# docker version
1.4.3.2: Configure a docker registry mirror on the master nodes:
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://rb55hpi7.mirror.aliyuncs.com"]
}
EOF
# sudo systemctl daemon-reload && sudo systemctl restart docker
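Confirm the mirror is active:
# docker info | grep -A1 'Registry Mirrors'    # should print the aliyuncs mirror configured above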
1.4.3.3: Install kubelet kubeadm kubectl on all nodes (masters and nodes):
https://developer.aliyun.com/mirror/kubernetes?spm=a2c6h.13651102.0.0.3e221b11jNU2W5
apt-get update && apt-get install -y apt-transport-https
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
#type EOF manually
apt-get update
# kubeadm versions map to the k8s versions they can deploy
apt-cache madison kubeadm
apt install kubeadm=1.17.2-00 kubectl=1.17.2-00 kubelet=1.17.2-00 -y
# kubectl is optional on the nodes; skip it if you will not manage k8s from them
Start and verify kubelet:
# systemctl start kubelet && systemctl enable kubelet && systemctl status kubelet
1.4.3.3.3: Verify the kubelet service on the master nodes:
Starting kubelet at this point produces errors; this is expected before kubeadm init, because /var/lib/kubelet/config.yaml has not been generated yet, so kubelet keeps restarting until the node is initialized or joined.
1.4.5: Run the kubeadm init initialization on a master node:
Run the cluster initialization on any one of the three masters; it only needs to be done once.
1.4.5.1: kubeadm command usage:
# kubeadm --help
Available Commands:
alpha #kubeadm commands that are still in testing
completion #bash command completion; requires bash-completion
#mkdir /data/scripts -p
#kubeadm completion bash > /data/scripts/kubeadm_completion.sh
#source /data/scripts/kubeadm_completion.sh
#vim /etc/profile
source /data/scripts/kubeadm_completion.sh
config #manage the kubeadm cluster configuration, which is stored in a ConfigMap in the cluster
#kubeadm config print init-defaults
help Help about any command
init #bootstrap a Kubernetes control-plane node
join #join the node to an existing k8s master
reset #revert the changes kubeadm init or kubeadm join made to the host
token #manage tokens
upgrade #upgrade the k8s version
version #show version information
1.4.5.2: kubeadm init overview:
Command usage:
Cluster initialization:
https://kubernetes.io/zh/docs/reference/setup-tools/kubeadm/kubeadm-init/
root@docker-node1:~# kubeadm init --help
1.4.5.3: Verify the current kubeadm version:
# kubeadm version #show the current kubeadm version
# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.2", GitCommit:"06ad960bfd03b39c8310aaf92d1e7c12ce618213", GitTreeState:"clean", BuildDate:"2020-02-11T18:12:12Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"}
1.4.5.4: Prepare the images:
List the images required to install the specified k8s version:
# kubeadm config images list --kubernetes-version v1.17.2
k8s.gcr.io/kube-apiserver:v1.17.2
k8s.gcr.io/kube-controller-manager:v1.17.2
k8s.gcr.io/kube-scheduler:v1.17.2
k8s.gcr.io/kube-proxy:v1.17.2
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.5
1.4.5.5: Download the images on the master nodes:
It is recommended to pull the images on the masters in advance to cut down installation wait time. By default the images come from Google's registry, which is not directly reachable from mainland China, but they can be pulled ahead of time from Alibaba Cloud's mirror registry, which avoids k8s deployment failures caused by image download errors.
#!/bin/bash
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.17.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.17.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.17.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.17.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.3-0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.6.5
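The init below passes --image-repository pointing at this mirror, so retagging is optional; as an alternative sketch, the pulled images can be retagged to the k8s.gcr.io names that kubeadm expects by default:
#!/bin/bash
# retag the mirror images to the default k8s.gcr.io names (only needed when not using --image-repository)
REPO=registry.cn-hangzhou.aliyuncs.com/google_containers
for img in kube-apiserver:v1.17.2 kube-controller-manager:v1.17.2 kube-scheduler:v1.17.2 \
           kube-proxy:v1.17.2 pause:3.1 etcd:3.4.3-0 coredns:1.6.5; do
  docker tag ${REPO}/${img} k8s.gcr.io/${img}
done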
1.4.5.6: Single-master initialization: (skipped; this document demonstrates high availability, so run the HA initialization in 1.4.5.7 below)
# kubeadm init --apiserver-advertise-address=192.168.7.101 --apiserver-bind-port=6443 --kubernetes-version=v1.17.3 --pod-network-cidr=10.10.0.0/16 --service-cidr=10.20.0.0/16 --service-dns-domain=linux36.local --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers --ignore-preflight-errors=swap
1.4.5.6.1: Initialization result:
1.4.5.7: High-availability master initialization:
First implement the high-availability VIP with keepalived, then make the three k8s masters highly available behind that VIP.
1.4.5.7.1: Initializing the HA masters from the command line:
# master1:
kubeadm init --apiserver-advertise-address=172.31.3.101 --apiserver-bind-port=6443 --control-plane-endpoint=172.31.3.248 --ignore-preflight-errors=swap --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers/ --kubernetes-version=v1.17.2 --pod-network-cidr=10.10.0.0/16 --service-cidr=192.168.1.0/20 --service-dns-domain=linux39.local
Initialization on master1 failed; retrying on master2 succeeded:
root@master2:~# kubeadm init --apiserver-advertise-address=172.31.3.102 --control-plane-endpoint=172.31.3.248 --apiserver-bind-port=6443 --kubernetes-version=v1.17.2 --pod-network-cidr=10.10.0.0/16 --service-cidr=172.26.0.0/16 --service-dns-domain=linux39.local --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers --ignore-preflight-errors=swap
Success:
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join 172.31.3.248:6443 --token w17ocw.41tbr749f4va31ir \
--discovery-token-ca-cert-hash sha256:3a75ac6d2f6340113e14c84dec36d0469e2f265796e29b61cffabcd1b5644b26 \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 172.31.3.248:6443 --token w17ocw.41tbr749f4va31ir \
--discovery-token-ca-cert-hash sha256:3a75ac6d2f6340113e14c84dec36d0469e2f265796e29b61cffabcd1b5644b26
1.4.5.7.2: Initializing the HA master from a config file:
(Both approaches work; after a kubeadm reset, try this one.)
(Everything later in this document builds on this approach.)
# kubeadm config print init-defaults > kubeadm-init.yaml #dump the default configuration to a file
root@master1:~# vim kubeadm-init.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 48h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.31.3.101
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master1
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 172.31.3.248:6443
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.17.2
networking:
  dnsDomain: linux39.local
  podSubnet: 10.10.0.0/16
  serviceSubnet: 192.168.0.0/20
scheduler: {}
kubeadm init --config kubeadm-init.yaml #initialize the k8s master from the file
The file-based initialization is the recommended way.
(The screenshot is wrong; initializing from the config above yields v1.17.2.)
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join 172.31.3.248:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:6c81f84a68c240e6196a05c6e4e16dc608764abdadb2195a38a6019125a0678e \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 172.31.3.248:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:6c81f84a68c240e6196a05c6e4e16dc608764abdadb2195a38a6019125a0678e
1.4.5.7.3: Configure the kube-config file:
The kube-config file contains the kube-apiserver address and the credentials to authenticate to it.
# mkdir -p $HOME/.kube
# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# sudo chown $(id -u):$(id -g) $HOME/.kube/config
# kubectl get node
NAME                     STATUS     ROLES    AGE   VERSION
k8s-master1.magedu.net   NotReady   master   10m   v1.17.3
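As root, kubectl can also be pointed straight at the admin kubeconfig instead of copying it (a convenience, not required):
# export KUBECONFIG=/etc/kubernetes/admin.conf    # add to ~/.profile to make it permanent
# kubectl get node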
1.4.5.7.4: On the current master (master2, where init succeeded) generate the certificates used to add new control-plane nodes:
# kubeadm init phase upload-certs --upload-certs
W0322 11:38:39.512764 18966 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0322 11:38:39.512817 18966 validation.go:28] Cannot validate kubelet config - no validator is available
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
d463abc439854a1f42b0f4c5d811d67a8b1518b9ca675eb3dbd128dcd29b5b72
1.4.5.7.5: Add the new master nodes:
Run the following on the other masters that already have docker, kubeadm, and kubelet installed (the uploaded certificate key is only valid for a limited time, about two hours, so join shortly after generating it):
(It is best to remove the masters that are not up yet from the haproxy backend first, though it seems to work without doing so...)
# master3:
kubeadm join 172.31.3.248:6443 --token w17ocw.41tbr749f4va31ir \
--discovery-token-ca-cert-hash sha256:3a75ac6d2f6340113e14c84dec36d0469e2f265796e29b61cffabcd1b5644b26 \
--control-plane --certificate-key d463abc439854a1f42b0f4c5d811d67a8b1518b9ca675eb3dbd128dcd29b5b72
Add master1 back in as well:
master1:~# kubeadm reset
kubeadm join 172.31.3.248:6443 --token w17ocw.41tbr749f4va31ir --discovery-token-ca-cert-hash sha256:3a75ac6d2f6340113e14c84dec36d0469e2f265796e29b61cffabcd1b5644b26 --control-plane --certificate-key d463abc439854a1f42b0f4c5d811d67a8b1518b9ca675eb3dbd128dcd29b5b72
root@master2:~# kubectl get node
NAME STATUS ROLES AGE VERSION
master1 NotReady master 50s v1.17.2
master2 NotReady master 10h v1.17.2
master3 NotReady master 6m56s v1.17.2
Install the network add-on:
STATUS NotReady means the network add-on still needs to be installed:
https://github.com/coreos/flannel
For Kubernetes v1.7+:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
The default network in this file does not match the subnets we initialized with, so download the file and change it.
(The master2 init above used --service-cidr=172.26.0.0/16, but what flannel must match is the pod CIDR.)
vi kube-flannel.yml
net-conf.json: |
  {
    "Network": "10.10.0.0/16",
The Network value must match the pod CIDR (podSubnet in kubeadm-init.yaml)!
If they differ, containers cannot ping out, and flannel has to be reinstalled:
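Instead of editing by hand, a one-line patch also works before re-applying (upstream kube-flannel.yml ships with 10.244.0.0/16 as the default Network):
# sed -i 's#10.244.0.0/16#10.10.0.0/16#' kube-flannel.yml    # align flannel's Network with podSubnet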
root@master1:~# kubectl delete -f kube-flannel.yml
root@master1:~# kubectl get pod -A
(the old pods show Terminating while they are cleaned up)
root@master1:~# kubectl apply -f kube-flannel.yml
(Applying this file pulls the flannel image; if it cannot be downloaded, change the image address.)
If the flannel image cannot be pulled in k8s:
https://www.cnblogs.com/dalianpai/p/12070902.html
root@master2:~# kubectl apply -f kube-flannel.yml
驗證 master 節點狀态:
# kubectl get node
NAME STATUS ROLES AGE VERSION
master1 Ready master 26m v1.17.2
master2 Ready master 11h v1.17.2
master3 Ready master 32m v1.17.2
1.4.5.7.6: How kubeadm init builds a k8s cluster:
https://k8smeetup.github.io/docs/reference/setup-tools/kubeadm/kubeadm-init/#init-workflow
1.4.6: Verify the master status:
root@master1:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master1 Ready master 3h26m v1.17.2
master2 Ready master 3h10m v1.17.2
master3 Ready master 3h9m v1.17.2
node1 Ready <none> 3h5m v1.17.2
node2 Ready <none> 3h5m v1.17.2
node3 Ready <none> 3h5m v1.17.2
1.4.6.1: Verify the k8s cluster status:
root@master1:~# kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health":"true"}
1.4.6.2: Current csr certificate status:
# kubectl get csr
NAME        AGE   REQUESTOR                                CONDITION
csr-7bzrk   14m   system:bootstrap:0fpghu                  Approved,Issued
csr-jns87   20m   system:node:kubeadm-master1.magedu.net   Approved,Issued
csr-vcjdl   14m   system:bootstrap:0fpghu                  Approved,Issued
1.4.7: Add worker nodes to the k8s cluster:
Every node that is to join the k8s master cluster needs docker, kubeadm, and kubelet, so repeat the installation steps on each node: configure the apt repository, configure the docker mirror, install the packages, and start the kubelet service.
- The join command is the one returned after kubeadm init completed on the master:
kubeadm join 172.31.7.248:6443 --token 0fpghu.wt0t8adybh86jzvk \
    --discovery-token-ca-cert-hash sha256:aae2c43db9929b06c094e65a4614112676f8cafb80809c8071f2ee141edfc787
Note: the node joins the master automatically, pulls the images, and starts flannel, until the master finally shows the node in Ready state.
1.4.8: Verify the node status:
(Pulling the images can be slow; give it some time.)
If a node stays NotReady, check /var/log/syslog on it.
1.4.9: Create containers in k8s and test the network:
Create test containers and check that the network can communicate:
# kubectl run net-test1 --image=alpine --replicas=2 sleep 360000    #run the sleep command so the pods stay up
Recreate the dns pods:
DNS turned out to be broken: domain names could not be pinged from inside the containers.
root@master1:~# kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
default net-test1-5fcc69db59-fzlzg 1/1 Running 1 2d3h
default net-test1-5fcc69db59-l5w6h 1/1 Running 0 4m34s
kube-system coredns-7f9c544f75-2bljh 1/1 Running 0 2d7h
kube-system coredns-7f9c544f75-4vrmb 1/1 Running 0 2d7h
Delete them, specifying the namespace:
root@master1:~# kubectl delete pod coredns-7f9c544f75-2bljh -n kube-system
pod "coredns-7f9c544f75-2bljh" deleted
root@master1:~# kubectl delete pod coredns-7f9c544f75-4vrmb -n kube-system
pod "coredns-7f9c544f75-4vrmb" deleted
root@master1:~# kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
default net-test1-5fcc69db59-fzlzg 1/1 Running 1 2d3h
default net-test1-5fcc69db59-l5w6h 1/1 Running 0 8m25s
kube-system coredns-7f9c544f75-hncbm 1/1 Running 0 38s
kube-system coredns-7f9c544f75-m2dwv 1/1 Running 0 65s
Now it works:
root@master1:~# kubectl get pod
NAME READY STATUS RESTARTS AGE
net-test1-5fcc69db59-fzlzg 1/1 Running 1 2d3h
net-test1-5fcc69db59-l5w6h 1/1 Running 0 8m51s
root@master1:~# kubectl exec -it net-test1-5fcc69db59-l5w6h sh
/ # cat /etc/resolv.conf
nameserver 192.168.0.10
search default.svc.linux39.local svc.linux39.local linux39.local
options ndots:5
/ # ping baidu.com
PING baidu.com (39.156.69.79): 56 data bytes
64 bytes from 39.156.69.79: seq=0 ttl=127 time=20.698 ms
(Probably the network add-on was not up quickly enough, so the pod started without network access.)
1.4.10: Deploy the dashboard web service:
https://github.com/kubernetes/dashboard
1.4.10.1: Deploy dashboard 2.0.0-rc6:
root@master1:~# vim /lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --insecure-registry harbor.linux39.com
root@master1:~# systemctl daemon-reload
root@master1:~# systemctl restart docker
Add --insecure-registry harbor.linux39.com on the node machines as well.
root@master1:~# docker login harbor.linux39.com
Username: admin
Password:
root@master1:~# docker pull kubernetesui/dashboard:v2.0.0-rc6
root@master1:~# docker tag cdc71b5a8a0e harbor.linux39.com/baseimages/dashboard:v2.0.0-rc6
docker push harbor.linux39.com/baseimages/dashboard:v2.0.0-rc6
root@master1:~# docker tag docker.io/kubernetesui/metrics-scraper:v1.0.3 harbor.linux39.com/baseimages/metrics-scraper:v1.0.3
root@master1:~# docker push harbor.linux39.com/baseimages/metrics-scraper:v1.0.3
(metrics-scraper collects metrics data.)
The yml files are provided with this document:
root@master1:~# kubectl apply -f dash_board-2.0.0-rc6.yml
root@master1:~# kubectl apply -f ad_min-user.yml
[k8s] Steps to install the kubernetes dashboard:
https://www.jianshu.com/p/073577bdec98
If a pod is stuck Terminating, delete it:
kubectl delete pod kubernetes-dashboard-d498886d6-mbqn9 -n kubernetes-dashboard
If it still cannot be deleted, force it:
kubectl delete pod kubernetes-dashboard-d498886d6-mbqn9 -n kubernetes-dashboard --grace-period=0 --force
https://blog.csdn.net/zorsea/article/details/103141237?utm_medium=distribute.pc_relevant.none-task-blog-BlogCommendFromMachineLearnPai2-1.nonecase&depth_1-utm_source=distribute.pc_relevant.none-task-blog-BlogCommendFromMachineLearnPai2-1.nonecase
1.4.10.2: Get the login token:
# kubectl get secret -A | grep admin-user
kubernetes-dashboard   admin-user-token-lkwbr   kubernetes.io/service-account-token   3   3m15s
# kubectl describe secret admin-user-token-lkwbr -n kubernetes-dashboard
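Both steps can be combined into one line (a convenience sketch):
# kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')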
1.4.10.3: Verify the NodePort:
1.4.10.4: Log in to the dashboard:
(Any node IP works, because the NodePort listens on every node.)
https://172.31.3.107:30002/#/login
1.4.10.5: Successful login:
1.4.11: Upgrade the k8s cluster:
To upgrade a k8s cluster, kubeadm itself must first be upgraded to the target k8s version; kubeadm is the "ticket" for a k8s upgrade.
1.4.11.1: Upgrade the k8s master services:
Perform the upgrade on all k8s masters, upgrading the control-plane services kube-controller-manager, kube-apiserver, kube-scheduler, and kube-proxy.
1.4.11.1.1: Verify the current k8s version:
# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.2", GitCommit:"59603c6e503c87169aea6106f57b9f242f64df89", GitTreeState:"clean", BuildDate:"2020-01-18T23:27:49Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/amd64"}
1.4.11.1.2: Install the specified new kubeadm version on each master:
# apt-cache madison kubeadm #list the available k8s versions
# apt-get install kubeadm=1.17.4-00 #install the new kubeadm version
(Run on master1, master2, and master3.)
# kubeadm version #verify the kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.4", GitCommit:"8d8aa39598534325ad77120c120a22b3a990b5ea", GitTreeState:"clean", BuildDate:"2020-03-12T21:01:11Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
1.4.11.1.3: kubeadm upgrade command help:
# kubeadm upgrade --help
Upgrade your cluster smoothly to a newer version with this command
Usage:
kubeadm upgrade [flags]
kubeadm upgrade [command]
Available Commands:
apply Upgrade your Kubernetes cluster to the specified version
diff Show what differences would be applied to existing static pod manifests. See also: kubeadm upgrade apply --dry-run
node Upgrade commands for a node in the cluster
plan Check which versions are available to upgrade to and validate whether your current cluster is upgradeable. To skip the internet check, pass in the optional [version] parameter
1.4.11.1.4: Upgrade plan:
# kubeadm upgrade plan #show the upgrade plan
Before upgrading a master, it is best to take it out of the haproxy backend, so requests are not forwarded to a machine that is being upgraded.
1.4.11.1.5: Run the upgrade:
Pulling the images is slow, so they can be pre-pulled from Alibaba Cloud first:
#!/bin/bash
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.17.4
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.17.4
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.17.4
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.17.4
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.3-0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.6.5
bash k8s-1.17.4-images-download.sh
#kubeadm upgrade apply v1.17.4 #master1
#kubeadm upgrade apply v1.17.4 #master2
#kubeadm upgrade apply v1.17.4 #master3
(Do not upgrade all the masters at the same time.)
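Optionally, each master can also be drained before its upgrade and uncordoned afterwards (a sketch; taking it out of haproxy as above already covers the API traffic):
# kubectl drain master1 --ignore-daemonsets    # evict the pods from this node first
# kubeadm upgrade apply v1.17.4
# kubectl uncordon master1                     # make it schedulable again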
1.4.11.1.6: Verify the images:
1.4.11.2: Upgrade the k8s node services:
Upgrade the client-side components kubectl and kubelet:
apt install kubectl=1.17.4-00 kubelet=1.17.4-00
1.4.11.2.1: Verify the current node version info: the nodes are still on the old version:
root@master3:~# kubectl get node
NAME STATUS ROLES AGE VERSION
master1 Ready master 18h v1.17.4
master2 Ready master 18h v1.17.4
master3 Ready master 17h v1.17.4
node1 NotReady <none> 16h v1.17.2
node2 Ready <none> 16h v1.17.2
node3 Ready <none> 16h v1.17.2
1.4.15.1.7: Update the configuration file on each node:
# kubeadm upgrade node --kubelet-version 1.17.4
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade] Skipping phase. Not a control plane node.
[upgrade] Using kubelet config version 1.17.4, while kubernetes-version is v1.17.4
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.
1.4.15.8: Upgrade the kubelet binary package on each node: (the kubectl and kubelet on the masters must also be upgraded)
apt install kubelet=1.17.4-00 kubeadm=1.17.4-00 kubectl=1.17.4-00
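After the package upgrade, restart kubelet so the node reports the new version:
# systemctl daemon-reload && systemctl restart kubelet
# kubectl get node    # the node should now show v1.17.4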
Verify the final version:
1.5: Test run Nginx+Tomcat:
https://kubernetes.io/zh/docs/concepts/workloads/controllers/deployment/
Run nginx as a test; in the end it will also front tomcat for dynamic/static splitting.
The official test yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
1.5.1: Run Nginx:
root@master1:/opt/kubdadm-yaml/nginx# docker pull nginx:1.14.2
root@master1:/opt/kubdadm-yaml/nginx# docker tag nginx:1.14.2 harbor.linux39.com/baseimages/nginx:1.14.2
root@master1:/opt/kubdadm-yaml/nginx# docker push harbor.linux39.com/baseimages/nginx:1.14.2
# pwd
/opt/kubdadm-yaml
# cat nginx/nginx.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: harbor.linux39.com/baseimages/nginx:1.14.2
        ports:
        - containerPort: 80
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: magedu-nginx-service-label
  name: magedu-nginx-service
  namespace: default
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 30004
  selector:
    app: nginx
# kubectl apply -f nginx/nginx.yml
deployment.apps/nginx-deployment created
service/magedu-nginx-service created
# kubectl get pod
Exec into each pod and set up a distinct test page:
root@master1:/opt/kubdadm-yaml/nginx# kubectl get pod
NAME READY STATUS RESTARTS AGE
net-test1-5fcc69db59-fzlzg 1/1 Running 2 4d1h
net-test1-5fcc69db59-l5w6h 1/1 Running 1 45h
nginx-deployment-66fc88798-s5w4m 1/1 Running 0 3m25s
nginx-deployment-66fc88798-vdpxl 1/1 Running 0 7m14s
nginx-deployment-66fc88798-w29zq 1/1 Running 0 3m25s
root@master1:/opt/kubdadm-yaml/nginx# kubectl exec -it nginx-deployment-66fc88798-vdpxl bash
root@nginx-deployment-66fc88798-vdpxl:/# cd /usr/share/nginx/html/
root@nginx-deployment-66fc88798-vdpxl:/usr/share/nginx/html# ls
50x.html index.html
root@nginx-deployment-66fc88798-vdpxl:/usr/share/nginx/html# echo pod2 > index.html
...
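With a distinct index.html in each pod, the NodePort can be probed from outside to watch requests being spread across the pods (172.31.3.107 is one of the worker nodes, as in the haproxy backend below):
# for i in 1 2 3 4 5 6; do curl -s http://172.31.3.107:30004/; done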
Implement a highly available reverse proxy based on haproxy and keepalived, and reach the business Pods running in the kubernetes cluster.
Add another VIP, 172.31.3.249.
haproxy will load-balance across the backend nginx pods:
root@ha1:/etc/haproxy# vim haproxy.cfg
listen k8s-api-6443
    bind 172.31.3.248:6443
    mode tcp
    server master1 172.31.3.101:6443 check inter 3s fall 3 rise 5
    server master2 172.31.3.102:6443 check inter 3s fall 3 rise 5
    server master3 172.31.3.103:6443 check inter 3s fall 3 rise 5

listen k8s-web-80
    bind 172.31.3.249:80
    mode tcp
    balance roundrobin
    server 172.31.3.107 172.31.3.107:30004 check inter 2s fall 3 rise 5
    server 172.31.3.108 172.31.3.108:30004 check inter 2s fall 3 rise 5
    server 172.31.3.109 172.31.3.109:30004 check inter 2s fall 3 rise 5
root@ha1:/etc/haproxy# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
   router_id lb01
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    garp_master_delay 10
    smtp_alert
    virtual_router_id 56
    priority 150
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.31.3.248 dev ens33 label ens33:1
        172.31.3.249 dev ens33 label ens33:2
    }
}
Add a client hosts entry: 172.31.3.249 www.linux39.com
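Restart both services and test the new VIP end to end (a quick check):
root@ha1:~# systemctl restart keepalived haproxy
# curl http://172.31.3.249/    # should return one of the pod test pages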
1.5.2: Run tomcat:
root@master1:~# docker pull tomcat
root@master1:~# docker run -it --rm -p 8080:8080 tomcat
root@master1:~# docker exec -it c89dbb6c7462 bash
root@c89dbb6c7462:/usr/local/tomcat# cd webapps
root@c89dbb6c7462:/usr/local/tomcat/webapps# mkdir app
root@c89dbb6c7462:/usr/local/tomcat/webapps# echo "tomcat app" > app/index.html
root@master1:/opt/kubdadm-yaml/tomcat# vim Dockerfile
FROM tomcat
ADD ./app /usr/local/tomcat/webapps/app/
root@master1:/opt/kubdadm-yaml/tomcat# mkdir app
root@master1:/opt/kubdadm-yaml/tomcat# echo "tomcat app images page" > app/index.html
root@master1:/opt/kubdadm-yaml/tomcat# docker build -t harbor.linux39.com/linux39/tomcat:app .
root@master1:/opt/kubdadm-yaml/tomcat# docker run -it --rm -p 8080:8080 harbor.linux39.com/linux39/tomcat:app
docker push harbor.linux39.com/linux39/tomcat:app
root@master1:/opt/kubdadm-yaml/tomcat# cp ../nginx/nginx.yml ./tomcat.yml
root@master1:/opt/kubdadm-yaml/tomcat# vim ./tomcat.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat-deployment
  labels:
    app: tomcat
spec:
  replicas: 3
  selector:
    matchLabels:
      app: tomcat
  template:
    metadata:
      labels:
        app: tomcat
    spec:
      containers:
      - name: tomcat
        image: harbor.linux39.com/linux39/tomcat:app
        ports:
        - containerPort: 8080   # container port
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: magedu-tomcat-service-label
  name: magedu-tomcat-service
  namespace: default
spec:
  type: NodePort
  ports:
  - name: http
    port: 80              # service port
    protocol: TCP
    targetPort: 8080      # container port the service forwards to
    nodePort: 30005       # node port for external access
  selector:
    app: tomcat
root@master1:/opt/kubdadm-yaml/tomcat# kubectl apply -f tomcat.yml
After verifying tomcat works, close nodePort 30005 so it is no longer exposed externally;
to make testing easier, also set replicas: 1.
Comment out:
# nodePort: 30005
Then re-apply:
kubectl apply -f tomcat.yml
kubectl apply -f nginx.yml
1.5.3: Verify the Pods in the dashboard:
1.5.3.1: Verify the Pods in the web UI:
1.5.3.2: Enter a container from the dashboard:
1.5.8: Dynamic/static splitting with Nginx:
Exec into the nginx Pod (entering via the dashboard also works):
# kubectl exec -it nginx-deployment-67656986d9-f74lb bash
# cat /etc/issue
Debian GNU/Linux 9 \n \l
Update the package sources and install basic utilities:
# apt update
# apt install procps vim iputils-ping net-tools curl -y
Test the service name resolution:
/# ping magedu-tomcat-service
PING magedu-tomcat-service.default.svc.magedu.local (172.26.188.168) 56(84) bytes of data.
From inside the nginx Pod, access tomcat through its service domain name:
/# curl magedu-tomcat-service/app/index.html
Tomcat ...
Edit the Nginx configuration to implement the dynamic/static split: whenever Nginx receives a URI under /app, it forwards the request to tomcat:
# vim /etc/nginx/conf.d/default.conf
    location /app {
        proxy_pass http://magedu-tomcat-service;
    }
Test the configuration file:
/# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
Reload the configuration:
/# nginx -s reload
2020/03/22 09:10:49 [notice] 4057#4057: signal process started
Traffic path: haproxy VIP > NGINX > TOMCAT
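From a client with the hosts entry above, the whole chain can be verified (a quick check):
# curl http://www.linux39.com/app/index.html    # dynamic path, served by tomcat via the nginx pod
# curl http://www.linux39.com/index.html        # static page, served by nginx itself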
1.6: Other kubeadm commands:
1.6.1: Token management:
# kubeadm token --help
create #create a token; the default validity is 24 hours
delete #delete a token
generate #generate and print a token without creating it on the server, i.e. for use by other operations
list #list all the tokens on the server
If the token has expired and you want to add another master, create a new one.
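A new token together with the complete join command can be produced in one step:
# kubeadm token create --print-join-command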
1.6.2: The reset command:
# kubeadm reset #revert the changes kubeadm made to the host
With a kubeadm deployment, a cluster of 3 masters can only tolerate losing 1; with 5 masters it can tolerate losing 2.