

Deploying a Kubernetes cluster with kubeadm (single master + 2 nodes)

Introduction to kubeadm

Kubeadm overview

Kubeadm is a tool that provides the kubeadm init and kubeadm join commands as a best-practice fast path for creating a Kubernetes cluster.

Kubeadm performs the actions necessary to bring up a minimum viable cluster. It only cares about bootstrapping the cluster, not about everything around it, such as preparing the nodes beforehand, installing the Kubernetes Dashboard, deploying monitoring solutions, or cloud-provider-specific add-ons; those are outside kubeadm's scope.

Kubeadm commands

kubeadm init: bootstraps a Kubernetes control-plane (master) node;

kubeadm join: bootstraps a Kubernetes worker node and joins it to the cluster;

kubeadm upgrade: upgrades a Kubernetes cluster to a newer version;

kubeadm config: if you initialized the cluster with kubeadm v1.7.x or earlier, configures the cluster so that kubeadm upgrade can be used;

kubeadm token: manages the tokens used by kubeadm join;

kubeadm reset: reverts any changes kubeadm init or kubeadm join made to the host;

kubeadm version: prints the kubeadm version;

kubeadm alpha: previews a set of new features in order to gather feedback from the community.

Name    IP               Role     CPU  Memory (GB)
node7   192.168.58.150   master   2    2
node8   192.168.58.151   node1    2    2
node9   192.168.58.152   node2    2    2

This experiment uses the configuration above for now (use it at your own risk).

Environment preparation

There are 3 nodes, all running CentOS with kernel version 3.10.0-1062.el7.x86_64 (check with uname -r).

vim /etc/hosts

192.168.58.150 master

192.168.58.151 node1

192.168.58.152 node2

hostnamectl set-hostname master

hostnamectl set-hostname node1

hostnamectl set-hostname node2

Generate an SSH key for passwordless login:

ssh-keygen

for host in master node{1..2}
do
  echo ">>> ${host}"
  ssh-copy-id -i ~/.ssh/id_rsa.pub root@${host}
done

Stop the firewall:

systemctl stop firewalld.service
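The original only stops the service; to keep firewalld from coming back after a reboot (my addition, not in the original), disable it as well:

systemctl disable firewalld.service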

Disable SELinux:

getenforce
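getenforce only reports the current mode. The disabling step itself is not shown in the original; a minimal sketch, assuming the usual CentOS approach, is:

setenforce 0
# keep SELinux permissive across reboots
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config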

The bridge-related sysctls used below for IPv4 forwarding require the br_netfilter kernel module, so load it first:

modprobe br_netfilter

Create /etc/sysctl.d/k8s.conf with the following content:

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

net.ipv4.ip_forward = 1

Apply the settings:

sysctl -p /etc/sysctl.d/k8s.conf
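modprobe does not persist across reboots. One way to make br_netfilter load automatically at boot (my addition, using systemd's modules-load mechanism) is:

cat > /etc/modules-load.d/br_netfilter.conf <<EOF
br_netfilter
EOF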

Set up ipvs:

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

The script above creates /etc/sysconfig/modules/ipvs.modules so that the required modules are loaded automatically after the node reboots. Use lsmod | grep -e ip_vs -e nf_conntrack_ipv4 to check whether the required kernel modules have been loaded correctly.

Next, make sure the ipset package is installed on every node:

yum install ipset

To make it easier to inspect the ipvs proxy rules, it is also worth installing ipvsadm:

yum install ipvsadm
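Once kube-proxy is running in ipvs mode later on, you can list the virtual servers and their backends with:

ipvsadm -Ln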

Synchronize the server clocks:

yum install chrony -y

systemctl start chronyd

systemctl enable chronyd

chronyc sources

Turn off the swap partition:

swapoff -a

Edit /etc/fstab and comment out the automatic swap mount, then confirm with free -m that swap is off. Also tune swappiness by adding the following line to /etc/sysctl.d/k8s.conf:

vm.swappiness = 0
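A small sketch of the fstab and sysctl steps described above, assuming the swap entry in /etc/fstab is an uncommented line containing the word swap:

# comment out the swap mount so it stays off after a reboot
sed -ri 's/^([^#].*\sswap\s.*)$/#\1/' /etc/fstab
# re-apply the sysctl settings, including vm.swappiness
sysctl -p /etc/sysctl.d/k8s.conf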

Install Docker:

yum install -y yum-utils device-mapper-persistent-data lvm2
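yum-utils provides yum-config-manager; the original does not show adding the Docker CE repository, which the pinned docker-ce-19.03.11 install below needs. A commonly used mirror (an assumption on my part, consistent with the Aliyun mirrors used elsewhere in this setup) is:

yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo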

yum install -y docker-ce-19.03.11

vim /etc/docker/daemon.json

{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "registry-mirrors": ["https://hsg7ghdv.mirror.aliyuncs.com"]
}

systemctl start docker

systemctl enable docker
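To confirm that Docker picked up the systemd cgroup driver configured above (it should match the kubelet's cgroup driver), check:

docker info | grep -i cgroup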

Configure the kubernetes.repo:
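The repo file itself is not included in the original. A commonly used example, assuming the Aliyun mirror (consistent with the Aliyun image registry used later, but still an assumption), is:

cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF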

Install kubeadm, kubelet and kubectl:

--disableexcludes=kubernetes makes yum ignore the exclude= directive of the kubernetes repo so the kubelet/kubeadm/kubectl packages can be installed from it.

yum install -y kubelet-1.19.3 kubeadm-1.19.3 kubectl-1.19.3 --disableexcludes=kubernetes

kubeadm version

kubeadm version: &version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.3", GitCommit:"1e11e4a2108024935ecfcb2912226cedeafd99df", GitTreeState:"clean", BuildDate:"2020-10-14T12:47:53Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/amd64"}

systemctl enable --now kubelet

Initialize the cluster (run on the master):

kubeadm config print init-defaults > kubeadm.yaml

Then edit kubeadm.yaml:

[root@master ~]# vim kubeadm.yaml

apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.58.150   # the node's internal IP
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers   # changed to the Aliyun image registry
kind: ClusterConfiguration
kubernetesVersion: v1.19.0
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16   # flannel needs this subnet
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs   # kube-proxy mode

kubeadm init --config kubeadm.yaml

If the approach above feels too cumbersome, you can use the following instead (though trying the config-file approach above is recommended):

kubeadm init --kubernetes-version=1.19.0 \
  --apiserver-advertise-address=192.168.58.150 \
  --image-repository registry.aliyuncs.com/google_containers \
  --service-cidr=10.96.0.0/16 --pod-network-cidr=10.244.0.0/16

When kubeadm init finishes successfully, the master is ready. Set up kubectl access for the current user:

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

kubectl get nodes

NAME STATUS ROLES AGE VERSION

master NotReady master 85m v1.19.4

Join the other nodes; once the join succeeds, run kubectl get nodes again:

kubeadm join 192.168.58.150:6443 --token abcdef.0123456789abcdef \
  --discovery-token-ca-cert-hash sha256:822767b4ca97cd7403902266a9c94ad9a351a668cf1abed521d3b8770e649cac
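If the bootstrap token has already expired when you join a node (the config above gives it a 24h TTL), you can print a fresh join command on the master:

kubeadm token create --print-join-command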

[root@master ~]# kubectl get nodes

NAME     STATUS     ROLES    AGE   VERSION
master   NotReady   master   87m   v1.19.4
node1    NotReady   <none>   25s   v1.19.4
node2    NotReady   <none>   21s   v1.19.4

kubectl get pods -n kube-system

NAME READY STATUS RESTARTS AGE

coredns-6d56c8448f-d4srn 0/1 Pending 0 89m

coredns-6d56c8448f-qlrlv 0/1 Pending 0 89m

etcd-master 1/1 Running 0 90m

kube-apiserver-master 1/1 Running 0 90m

kube-controller-manager-master 1/1 Running 0 90m

kube-proxy-5f58g 1/1 Running 0 3m30s

kube-proxy-qz4sm 1/1 Running 0 3m35s

kube-proxy-swrrc 1/1 Running 0 89m

kube-scheduler-master 1/1 Running 0 90m

The coredns pods stay Pending because no CNI network plugin has been installed yet, which you can confirm with:

kubectl describe pod coredns-6d56c8448f-d4srn -n kube-system

Install flannel:

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

kubectl apply -f kube-flannel.yml
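After the flannel manifest is applied, the flannel pods should start and the nodes should move to Ready; you can check with:

kubectl get pods -A | grep flannel
kubectl get nodes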

Install the dashboard:

wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.4/aio/deploy/recommended.yaml

kubectl apply -f recommended.yaml
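The service listing below shows kubernetes-dashboard as a NodePort service, while recommended.yaml creates it as ClusterIP by default, so the service type was presumably changed after applying. A minimal sketch of that step (an assumption, it is not shown in the original) is:

kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'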

[root@master ~]# kubectl get pods -n kubernetes-dashboard

NAME READY STATUS RESTARTS AGE

dashboard-metrics-scraper-7b59f7d4df-968cr 1/1 Running 0 20m

kubernetes-dashboard-665f4c5ff-nsx4h 1/1 Running 0 20m

[root@master ~]# kubectl get svc -n kubernetes-dashboard

NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.109.45.147    <none>        8000/TCP        24m
kubernetes-dashboard        NodePort    10.102.141.248   <none>        443:30457/TCP   24m

[root@master ~]# kubectl get svc -A

NAMESPACE              NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
default                kubernetes                  ClusterIP   10.96.0.1        <none>        443/TCP                  4h26m
kube-system            kube-dns                    ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP,9153/TCP   4h26m
kubernetes-dashboard   dashboard-metrics-scraper   ClusterIP   10.109.45.147    <none>        8000/TCP                 35m
kubernetes-dashboard   kubernetes-dashboard        NodePort    10.102.141.248   <none>        443:30457/TCP            35m

Create a cluster-wide admin user (admin.yaml):

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: admin
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: admin
  namespace: kubernetes-dashboard
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin
  namespace: kubernetes-dashboard

Create it directly:

kubectl apply -f admin.yaml

[root@master ~]# kubectl get secret -n kubernetes-dashboard|grep admin-token

admin-token-9rpvn kubernetes.io/service-account-token 3 3m35s

[root@master ~]# kubectl describe secret admin-token-9rpvn -n kubernetes-dashboard
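The describe output contains a token field; that token can be used to log in to the dashboard at https://<any-node-ip>:30457 (the NodePort shown earlier). To extract only the token, a one-liner such as the following works (the jsonpath expression is my own addition):

kubectl -n kubernetes-dashboard get secret admin-token-9rpvn -o jsonpath='{.data.token}' | base64 -d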

Clean up the cluster

If you run into problems along the way, you can reset everything with the following commands:

kubeadm reset

ifconfig cni0 down && ip link delete cni0

ifconfig flannel.1 down && ip link delete flannel.1

rm -rf /var/lib/cni/
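kubeadm reset does not remove the kubeconfig copied to $HOME/.kube or the ipvs rules; for a completely clean slate (my own addition, not in the original) you can also run:

rm -rf $HOME/.kube
ipvsadm --clear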

Add additional master nodes

kubeadm join <control-plane-ip>:<port> \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash> \
  --control-plane --certificate-key <certificate-key>

Edit the kubeadm-config ConfigMap

vim kubeadm.yaml

kubectl edit configmap kubeadm-config -n kube-system

Add the following:

apiServer:
  certSANs:
  - master
  - master2
  - 192.168.58.150
  - 192.168.58.153
controlPlaneEndpoint: 192.168.58.153:6443

Create a new join token:

kubeadm token create --print-join-command --config kubeadm.yaml

Get the token:

kubeadm token list

Get the CA certificate hash:

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'

Get the --certificate-key:

kubeadm init phase upload-certs --upload-certs

Join the master2 node (for a proper HA setup you would also need something like keepalived plus a proxy in front of the API servers):

kubeadm join 172.30.112.14:6443 --token abcdef.0123456789abcdef \
  --discovery-token-ca-cert-hash sha256:13c72c5df001626f62b31a57a6a03cfed32addb290cfd3ed5e48b7d12dd4adc2 \
  --control-plane --certificate-key e65dd0140640a0510f30fe8bb8f49623901b9085f69a006895bd38bdc00dac89

檢查狀态

kubectl get nodes

NAME         STATUS   ROLES    AGE   VERSION
k8smaster    Ready    master   17h   v1.19.3
k8smaster2   Ready    master   11h   v1.19.3
k8snode1     Ready    <none>   17h   v1.19.3
k8snode2     Ready    <none>   17h   v1.19.3