
Deploying k8s

(一) kubeadm installation:

(1) Overview of the deployment options:

① yum install:

1. Pros: installation and configuration are simple; good for beginners to learn with.

2. Cons: the packaged version is old — only K8S 1.5.2 is available this way, and many features are unsupported.

② kind install:

1. kind lets you run Kubernetes on your local computer; it requires that you have Docker installed and configured.

2. Recommended reading: https://kind.sigs.k8s.io/docs/user/quick-start/

③ minikube deployment:

1. minikube is a tool that lets you run Kubernetes locally.

2. minikube runs a single-node Kubernetes cluster on your personal computer (Windows, macOS, or Linux) so you can try out Kubernetes or use it for day-to-day development work; it is therefore well suited to developers getting a feel for K8S.

3. Recommended reading: https://minikube.sigs.k8s.io/docs/start/

④ kubeadm:

1. You can use the kubeadm tool to create and manage Kubernetes clusters; it is suitable for production deployments.

2. The tool performs the necessary actions and brings up a usable, secure cluster in a user-friendly way.

3. Recommended reading:

https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/

https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/install-kubeadm/

https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/

4. kubectl lets you run commands against a Kubernetes cluster: deploy applications, inspect and manage cluster resources, and view logs.

5. Recommended reading: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands

6. For large-scale deployments, read:

https://kubernetes.io/zh/docs/setup/best-practices/cluster-large/

⑤ binary deployment:

1. The installation steps are fairly tedious, but you learn far more of the details; suitable for ops engineers running production environments.

⑥ building from source:

1. The hardest option — be mentally prepared for all kinds of troubleshooting. That said, it should not be too difficult for people doing secondary development on K8S.

(2) kubeadm installation and deployment:

① Preparation: {run on every node}

1. Documentation:

https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/install-kubeadm/

2. 2 GB or more of RAM per machine.

3. 2 CPU cores or more.

4. Full network connectivity between all machines in the cluster; either a public or a private network works.

② Pre-install checks: {run on every node}

1. Disable swap:

[root@k8s_1 ~]# swapoff -a && sysctl -w vm.swappiness=0

[root@k8s_1 ~]# sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab

[root@k8s_1 ~]#
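The sed expression above can be dry-run against a throwaway copy of fstab before touching the real file. A minimal sketch (the sample device entries are assumptions for illustration):

```shell
# Build a scratch fstab with one active and one already-commented swap entry.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
/dev/mapper/centos-root /    xfs  defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
# /dev/sdb1 swap swap defaults 0 0
EOF
# Same expression as above: comment out every still-active swap entry.
# Already-commented lines are left alone, so the edit is idempotent.
sed -ri '/^[^#]*swap/s@^@#@' "$tmp"
grep swap "$tmp"
rm -f "$tmp"
```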

2. No two nodes may share a hostname, MAC address, or product_uuid:

[root@k8s_1 ~]# ifconfig eth0 | grep ether | awk '{print $2}'

[root@k8s_1 ~]# cat /sys/class/dmi/id/product_uuid

564DAB00-4845-F4D0-01E1-87D371046673

[root@k8s_1 ~]#

3. Make sure all nodes can ping each other.

4. Configure host resolution:

[root@k8s_1 ~]# cat >> /etc/hosts <<'EOF'

10.1.1.41 k8s41.itter.com

10.1.1.42 k8s42.itter.com

10.1.1.43 k8s43.itter.com

10.1.1.44 k8s44.itter.com

EOF

cat /etc/hosts
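Copy-paste mistakes in /etc/hosts (the same IP or hostname pasted twice) are a common cause of join failures, so a quick awk pass over the cluster entries is worth it. A sketch — the awk logic is mine; only the file path and the itter.com domain come from the step above:

```shell
# Flag any IP (field 1) or hostname (field 2) that appears more than once
# among the cluster entries; silence here means the file looks clean.
grep 'itter.com' /etc/hosts | awk '{
  if (seen_ip[$1]++)   print "duplicate IP: " $1
  if (seen_host[$2]++) print "duplicate hostname: " $2
}'
```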

5. Let iptables see bridged traffic:

[root@k8s_1 ~]# cat <<EOF | tee /etc/modules-load.d/k8s.conf

br_netfilter

EOF

cat <<EOF | tee /etc/sysctl.d/k8s.conf

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

EOF

sysctl --system
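To confirm the settings landed, read them back from the conf file just written (and, once br_netfilter is loaded, from the live kernel). A read-only sketch; the key names and path are the ones from the step above:

```shell
conf=/etc/sysctl.d/k8s.conf
for key in net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables; do
  # The file must pin each key to exactly 1.
  if grep -Eq "^${key}[[:space:]]*=[[:space:]]*1$" "$conf"; then
    echo "$key = 1 (present in $conf)"
  else
    echo "$key missing from $conf -- rerun the step above"
  fi
done
# Live values are only visible after the br_netfilter module is loaded.
lsmod | grep -q br_netfilter && sysctl net.bridge.bridge-nf-call-iptables
```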

6. Enable kernel IP forwarding (and persist it in /etc/sysctl.conf):

[root@k8s41 ~]# sysctl -w net.ipv4.ip_forward=1

[root@k8s41 ~]# cat /etc/sysctl.conf

net.ipv4.ip_forward = 1

[root@k8s41 ~]# sysctl -p

7. Check that the required ports are free: {https://kubernetes.io/zh/docs/reference/ports-and-protocols/}

Control-plane node(s):

Protocol  Direction  Port range   Purpose                  Used by
TCP       Inbound    6443         Kubernetes API server    All
TCP       Inbound    2379-2380    etcd server client API   kube-apiserver, etcd
TCP       Inbound    10250        Kubelet API              Self, control plane
TCP       Inbound    10259        kube-scheduler           Self
TCP       Inbound    10257        kube-controller-manager  Self

Worker node(s):

Protocol  Direction  Port range   Purpose                  Used by
TCP       Inbound    10250        Kubelet API              Self, control plane
TCP       Inbound    30000-32767  NodePort Services†       All
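Any port in the table that is already taken will make kubeadm init fail, so it is worth probing them first. A minimal sketch using bash's /dev/tcp (an assumption on my part — `ss -lnt` or `netstat -lnt` work just as well):

```shell
# A refused connection means nothing is listening, i.e. the port is free.
port_free() { ! (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null; }

for port in 6443 2379 2380 10250 10257 10259; do
  if port_free "$port"; then
    echo "port $port free"
  else
    echo "port $port already in use"
  fi
done
```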

③ Configure docker: {run on every node}

1. Reference:

https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.15.md#unchanged

2. Configure the yum repos:

[root@k8s_1 ~]# curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

[root@k8s_1 ~]# curl -o /etc/yum.repos.d/docker-ce.repo https://download.docker.com/linux/centos/docker-ce.repo

[root@k8s_1 ~]# sed -i 's+download.docker.com+mirrors.tuna.tsinghua.edu.cn/docker-ce+' /etc/yum.repos.d/docker-ce.repo

[root@k8s_1 ~]#

3. Install docker:

[root@k8s_1 ~]# yum -y install docker-ce-18.09.9 docker-ce-cli-18.09.9

yum -y install bash-completion

source /usr/share/bash-completion/bash_completion

4. Tune the docker configuration:

mkdir -pv /etc/docker && cat <<EOF | sudo tee /etc/docker/daemon.json

{

"insecure-registries": ["k8s41.itter.com:5000"],

"registry-mirrors": ["https://tuv7rqqq.mirror.aliyuncs.com"],

"exec-opts": ["native.cgroupdriver=systemd"]

}

EOF
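dockerd refuses to start when daemon.json is malformed, and the error is not always obvious, so validating the file before restarting saves time. A sketch — python3 as the validator is my assumption; any JSON checker works:

```shell
# The exit status of json.tool tells us whether the file parses as JSON.
if python3 -m json.tool /etc/docker/daemon.json > /dev/null 2>&1; then
  echo "daemon.json: valid JSON"
else
  echo "daemon.json: syntax error -- fix it before restarting docker"
fi
```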

5. Enable start on boot:

[root@k8s41 ~]# systemctl enable --now docker && systemctl status docker

6. Disable the firewall:

[root@k8s41 ~]# systemctl disable --now firewalld

[root@k8s41 ~]#

7. Disable selinux:

[root@k8s41 ~]# sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config

[root@k8s41 ~]# grep ^SELINUX= /etc/selinux/config

SELINUX=disabled

[root@k8s41 ~]#

8. Install an image registry on node 41:

[root@k8s41 ~]# docker run -dp 5000:5000 --restart always --name test-registry registry:2

④ Install k8s:

1. Reading:

https://kubernetes.io/zh/docs/tasks/tools/install-kubectl-linux/

2. Configure the package repo:

[root@k8s41 ~]# cat > /etc/yum.repos.d/kubernetes.repo <<EOF

[kubernetes]

name=Kubernetes

baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/

enabled=1

gpgcheck=0

repo_gpgcheck=0

EOF

3. List the available versions:

[root@k8s41 ~]# yum -y list kubeadm --showduplicates | sort -r

4. Install: {on every k8s node}

[root@k8s41 ~]# yum -y install kubeadm-1.15.12-0 kubelet-1.15.12-0 kubectl-1.15.12-0

5. Start it: {start the kubelet service; failing to start at this point is normal — it keeps restarting because its config file is missing, and it recovers once the cluster is initialized}

[root@k8s41 ~]# systemctl enable --now kubelet && systemctl status kubelet

6. Initialize the master node with kubeadm: {run only on node 41, the master}

[root@k8s41 ~]# kubeadm init --kubernetes-version=v1.15.12 --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16 --service-cidr=10.254.0.0/16

--kubernetes-version: version of the K8S control-plane (master) components

--image-repository: image registry to pull the control-plane images from

--pod-network-cidr: CIDR of the Pod network

--service-cidr: CIDR of the Service (SVC) network
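One constraint worth checking before running init: --pod-network-cidr and --service-cidr must not overlap each other (or the node network). A pure-shell sketch of the check — the helper names are made up for illustration:

```shell
# Network address of a CIDR as a 32-bit integer, e.g. for 10.244.0.0/16.
cidr_net() {
  oldifs=$IFS; IFS='./'; set -- $1; IFS=$oldifs
  echo $(( (($1<<24) + ($2<<16) + ($3<<8) + $4) & (0xFFFFFFFF << (32 - $5)) & 0xFFFFFFFF ))
}

# Two CIDRs overlap iff they agree on the shorter of the two prefixes.
cidr_overlap() {
  p1=${1#*/}; p2=${2#*/}
  p=$(( p1 < p2 ? p1 : p2 ))
  mask=$(( (0xFFFFFFFF << (32 - p)) & 0xFFFFFFFF ))
  [ $(( $(cidr_net "$1") & mask )) -eq $(( $(cidr_net "$2") & mask )) ]
}

if cidr_overlap 10.244.0.0/16 10.254.0.0/16; then
  echo "CIDRs overlap -- pick different ranges before kubeadm init"
else
  echo "pod and service CIDRs are disjoint"
fi
```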

7. If the command succeeds you will see a success message:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.

Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.1.1.41:6443 --token n8ixpk.d5t3sg5gbidc2enl \

--discovery-token-ca-cert-hash sha256:3012aada56c007e8547333c60801603b735d381fdb30aa1f6d365811077590ec

[root@k8s41 ~]#

8. Create the kubeconfig directory and file:

[root@k8s41 ~]# mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

9. List the cluster nodes:

[root@k8s41 ~]# kubectl get nodes

NAME STATUS ROLES AGE VERSION

k8s41.itter.com NotReady master 4m28s v1.15.12

[root@k8s41 ~]#

10. Check the api server: {if you see output like the following, the master components are deployed}

[root@k8s41 ~]# cat $HOME/.kube/config

apiVersion: v1

clusters:

- cluster:

certificate-authority-data:

server: https://10.1.1.41:6443

name: kubernetes

[root@k8s41 ~]# kubectl get cs,no

NAME STATUS MESSAGE ERROR

componentstatus/scheduler Healthy ok

componentstatus/controller-manager Healthy ok

componentstatus/etcd-0 Healthy {"health":"true"}

NAME STATUS ROLES AGE VERSION

node/k8s41.itter.com NotReady master 9m27s v1.15.12

[root@k8s41 ~]#

11. Join the other nodes to the master: {the commands below are printed by the successful master init}

[root@k8s42 ~]# kubeadm join 10.1.1.41:6443 --token n8ixpk.d5t3sg5gbidc2enl \

--discovery-token-ca-cert-hash sha256:3012aada56c007e8547333c60801603b735d381fdb30aa1f6d365811077590ec

[root@k8s43 ~]# kubeadm join 10.1.1.41:6443 --token n8ixpk.d5t3sg5gbidc2enl \

--discovery-token-ca-cert-hash sha256:3012aada56c007e8547333c60801603b735d381fdb30aa1f6d365811077590ec

[root@k8s44 ~]# (node 44 is intentionally not joined for now)
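If you still have the token but lost the --discovery-token-ca-cert-hash value, it can be recomputed on the master from the cluster CA certificate. This is the recipe from the kubeadm documentation; the cert path is kubeadm's default:

```shell
# sha256 over the DER-encoded public key of the cluster CA; the last sed
# strips openssl's "(stdin)= " prefix, leaving just the 64-char hex hash.
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | sed 's/^.* //'
```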

12. Check the join status:

[root@k8s41 ~]# kubectl get cs,no

NAME STATUS MESSAGE ERROR

componentstatus/scheduler Healthy ok

componentstatus/controller-manager Healthy ok

componentstatus/etcd-0 Healthy {"health":"true"}

NAME STATUS ROLES AGE VERSION

node/k8s41.itter.com NotReady master 29m v1.15.12

node/k8s42.itter.com NotReady <none> 3m55s v1.15.12

node/k8s43.itter.com NotReady <none> 2m33s v1.15.12

[root@k8s41 ~]#

13. kubectl command completion:

[root@k8s41 flannel]# echo "source <(kubectl completion bash)" >> ~/.bashrc && source ~/.bashrc

⑤ Install the k8s network:

1. Check first: {pristine cluster state}

[root@k8s41 ~]# kubectl get pods -A

NAMESPACE NAME READY STATUS RESTARTS AGE

kube-system coredns-94d74667-2qhcw 0/1 Pending 0 21m

kube-system coredns-94d74667-sx2fq 0/1 Pending 0 21m

kube-system etcd-k8s41.itter.com 1/1 Running 0 20m

kube-system kube-apiserver-k8s41.itter.com 1/1 Running 0 20m

kube-system kube-controller-manager-k8s41.itter.com 1/1 Running 0 20m

kube-system kube-proxy-9w2nz 1/1 Running 0 21m

kube-system kube-proxy-g6pss 1/1 Running 0 8m59s

kube-system kube-proxy-g7g7k 1/1 Running 0 61s

kube-system kube-scheduler-k8s41.itter.com 1/1 Running 0 20m

[root@k8s41 ~]#

2. The official manifests: {these did not seem to work — skip this step for now}

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-legacy.yml

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml

3. Working link: {you can download the yml file and apply it locally; either of the following works}

1. Apply it directly online: {or download first — pick one}

[root@k8s41 flannel]# pwd

/root/flannel

[root@k8s41 flannel]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml

2. Download, then apply: {after this one finishes, you may still need to apply the other one mentioned above}

[root@k8s41 flannel]# wget https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml

[root@k8s41 flannel]# ls -l

drwxr-xr-x 2 root root 90 Oct 6 07:43 bad

-rw-r--r-- 1 root root 10599 Oct 6 19:14 kube-flannel.yml

[root@k8s41 flannel]# kubectl apply -f kube-flannel.yml

clusterrole.rbac.authorization.k8s.io/flannel created

clusterrolebinding.rbac.authorization.k8s.io/flannel created

serviceaccount/flannel created

configmap/kube-flannel-cfg created

daemonset.extensions/kube-flannel-ds-amd64 created

daemonset.extensions/kube-flannel-ds-arm64 created

daemonset.extensions/kube-flannel-ds-arm created

daemonset.extensions/kube-flannel-ds-ppc64le created

daemonset.extensions/kube-flannel-ds-s390x created

[root@k8s41 flannel]#
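One thing worth confirming before (or right after) applying the manifest: flannel's Network value must match the --pod-network-cidr given to kubeadm init, otherwise pods get unroutable addresses. A grep sketch over the downloaded file (the grep pattern is my assumption about the manifest's ConfigMap layout):

```shell
# Pull the configured Pod network out of the flannel ConfigMap's net-conf.json.
net=$(grep -o '"Network": *"[^"]*"' kube-flannel.yml | head -1)
echo "flannel says: $net"
echo "$net" | grep -q '10.244.0.0/16' \
  && echo "matches --pod-network-cidr" \
  || echo "MISMATCH: edit kube-flannel.yml before applying"
```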

4. Check that the flannel components were created:

[root@k8s41 flannel]# kubectl get pods -A

NAMESPACE NAME READY STATUS RESTARTS AGE

kube-system coredns-94d74667-2qhcw 0/1 ContainerCreating 0 24m

kube-system coredns-94d74667-sx2fq 0/1 ContainerCreating 0 24m

kube-system etcd-k8s41.itter.com 1/1 Running 0 23m

kube-system kube-apiserver-k8s41.itter.com 1/1 Running 0 23m

kube-system kube-controller-manager-k8s41.itter.com 1/1 Running 0 23m

kube-system kube-flannel-ds-amd64-2dwvf 1/1 Running 0 73s

kube-system kube-flannel-ds-amd64-pq89n 1/1 Running 0 73s

kube-system kube-flannel-ds-amd64-qg27w 1/1 Running 0 73s

kube-system kube-proxy-9w2nz 1/1 Running 0 24m

kube-system kube-proxy-g6pss 1/1 Running 0 11m

kube-system kube-proxy-g7g7k 1/1 Running 0 4m

kube-system kube-scheduler-k8s41.itter.com 1/1 Running 0 23m

[root@k8s41 flannel]#

5. Verify:

[root@k8s41 flannel]# kubectl get cs,no

NAME STATUS MESSAGE ERROR

componentstatus/scheduler Healthy ok

componentstatus/controller-manager Healthy ok

componentstatus/etcd-0 Healthy {"health":"true"}

NAME STATUS ROLES AGE VERSION

node/k8s41.itter.com Ready master 31m v1.15.12

node/k8s42.itter.com Ready <none> 18m v1.15.12

node/k8s43.itter.com Ready <none> 10m v1.15.12

[root@k8s41 flannel]#

[root@k8s41 flannel]# kubectl get pods -A -o wide | grep flannel

kube-system kube-flannel-ds-amd64-2dwvf 1/1 Running 0 7m35s 10.1.1.42 k8s42.itter.com <none> <none>

kube-system kube-flannel-ds-amd64-pq89n 1/1 Running 0 7m35s 10.1.1.41 k8s41.itter.com <none> <none>

kube-system kube-flannel-ds-amd64-qg27w 1/1 Running 0 7m35s 10.1.1.43 k8s43.itter.com <none> <none>

[root@k8s41 flannel]#

⑥ Troubleshooting the components:

1. Check node status:

[root@localhost ~]# kubectl get pods -A

[root@localhost ~]# kubectl get nodes

NAME STATUS ROLES AGE VERSION

k8s51.itter.com Ready master 3m36s v1.15.12

k8s52.itter.com Ready <none> 2m50s v1.15.12

k8s53.itter.com Ready <none> 2m48s v1.15.12

[root@localhost ~]#

[root@localhost ~]# kubectl get pods -o wide

2. Delete the flannel components (any of the following forms):

[root@k8s51 ~]# kubectl delete -f kube-flannel.yml

[root@k8s51 ~]# kubectl delete -f .

[root@k8s51 ~]# kubectl delete -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml

[root@k8s41 flanner]# kubectl delete pod/kube-flannel-ds-6cp9h -n kube-system --grace-period=0 --force

3. If the join command was lost: {generate a new token}

[root@k8s41 ~]# kubeadm token create --print-join-command

kubeadm join 10.1.1.41:6443 --token h3es5v.4r3f1drdg48j5knd --discovery-token-ca-cert-hash sha256:71adc9bab2c293db9e8f89c66f8b01abb78d10f284011569cdf0f001f01450ad

[root@k8s41 ~]#

4. View logs:

[root@k8s41 ~]# journalctl -f -u kubelet.service

[root@k8s41 ~]# kubectl describe pod <pod-name> -n <namespace>
