
Step-by-Step: Setting Up a Kubernetes Cluster on CentOS

Author: ChamPly

Prepare CentOS

1. Install net-tools

[root@localhost ~]# yum install -y net-tools

2. Disable firewalld and SELinux

[root@localhost ~]# systemctl stop firewalld && systemctl disable firewalld

Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.

Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.

[root@localhost ~]# setenforce 0

[root@localhost ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
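Note that `setenforce 0` only disables SELinux enforcement for the current boot; the sed edit makes the change permanent after a reboot. The substitution can be sanity-checked against a sample line (a sketch; on the real host, run `grep ^SELINUX= /etc/selinux/config` instead):

```shell
# Simulate the sed substitution on a sample config line; the real target
# file is /etc/selinux/config.
printf 'SELINUX=enforcing\n' | sed 's/SELINUX=enforcing/SELINUX=disabled/g'
# prints: SELINUX=disabled
```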

Install Docker

Docker now comes in two editions: Docker CE and Docker EE. CE is the Community Edition (free); EE is the Enterprise Edition (commercial). We will use the CE edition.

1. Install the yum repository management tools

[root@localhost ~]# yum install -y yum-utils device-mapper-persistent-data lvm2

2. Download the official docker-ce yum repository configuration

[root@localhost ~]# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

3. Disable the docker-ce-edge repository (edge is the development channel and is unstable; we install from the stable channel)

yum-config-manager --disable docker-ce-edge

4. Refresh the local yum cache

yum makecache fast

5. Install Docker CE

yum -y install docker-ce

6. Run hello-world

[root@localhost ~]# systemctl start docker

[root@localhost ~]# docker run hello-world

Unable to find image 'hello-world:latest' locally

latest: Pulling from library/hello-world

9a0669468bf7: Pull complete

Digest: sha256:0e06ef5e1945a718b02a8c319e15bae44f47039005530bc617a5d071190ed3fc

Status: Downloaded newer image for hello-world:latest

Hello from Docker!

This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:

  1. The Docker client contacted the Docker daemon.
  2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
  3. The Docker daemon created a new container from that image which runs the
     executable that produces the output you are currently reading.
  4. The Docker daemon streamed that output to the Docker client, which sent it
     to your terminal.

To try something more ambitious, you can run an Ubuntu container with:

$ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:

https://cloud.docker.com/

For more examples and ideas, visit:

https://docs.docker.com/engine/userguide/

Install the kubelet and kubeadm packages

When `kubeadm init` initializes the cluster, it pulls the dependency images kubeadm needs and sets up etcd, kube-dns, and kube-proxy. Because these images are hosted on gcr.io, which is unreachable from behind the GFW, we cannot download them directly. Instead, we first obtain the images in the list below by other means, import them on every host, and only then run `kubeadm init`.

1. Configure the DaoCloud registry mirror (this step can be skipped)

[root@localhost ~]# curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://0d236e3f.m.daocloud.io

docker version >= 1.12

{"registry-mirrors": ["http://0d236e3f.m.daocloud.io"]}

Success.

You need to restart docker to take effect: sudo systemctl restart docker

[root@localhost ~]# systemctl restart docker
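The script writes the mirror into Docker's daemon configuration, as the JSON it echoes above shows. A quick way to confirm the mirror URL is the one you expect (a sketch run against the JSON the script reported; on the real host you would grep /etc/docker/daemon.json):

```shell
# Extract the mirror URL from the JSON the script reported.
json='{"registry-mirrors": ["http://0d236e3f.m.daocloud.io"]}'
printf '%s\n' "$json" | grep -o 'http://[^"]*'
# prints: http://0d236e3f.m.daocloud.io
```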

2. Download the images. You can build them yourself from Dockerfiles and push them to Docker Hub, or pull my pre-built copies:

images=(kube-controller-manager-amd64 etcd-amd64 k8s-dns-sidecar-amd64 kube-proxy-amd64 kube-apiserver-amd64 kube-scheduler-amd64 pause-amd64 k8s-dns-dnsmasq-nanny-amd64 k8s-dns-kube-dns-amd64)

for imageName in ${images[@]} ; do

 docker pull champly/$imageName

 docker tag champly/$imageName gcr.io/google_containers/$imageName

 docker rmi champly/$imageName

done
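Before actually pulling anything, you can dry-run the loop to check the source-to-target name mapping. This sketch only echoes the mapping and needs no Docker daemon:

```shell
# Dry run: print each Docker Hub source image and the gcr.io name it
# will be re-tagged as. No docker commands are executed.
images=(kube-controller-manager-amd64 etcd-amd64 k8s-dns-sidecar-amd64 \
  kube-proxy-amd64 kube-apiserver-amd64 kube-scheduler-amd64 pause-amd64 \
  k8s-dns-dnsmasq-nanny-amd64 k8s-dns-kube-dns-amd64)
for imageName in "${images[@]}"; do
  echo "champly/$imageName -> gcr.io/google_containers/$imageName"
done
```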

3. Re-tag the images with the correct version numbers

docker tag gcr.io/google_containers/etcd-amd64 gcr.io/google_containers/etcd-amd64:3.0.17 && \

docker rmi gcr.io/google_containers/etcd-amd64 && \

docker tag gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64 gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.5 && \

docker rmi gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64 && \

docker tag gcr.io/google_containers/k8s-dns-kube-dns-amd64 gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.5 && \

docker rmi gcr.io/google_containers/k8s-dns-kube-dns-amd64 && \

docker tag gcr.io/google_containers/k8s-dns-sidecar-amd64 gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.2 && \

docker rmi gcr.io/google_containers/k8s-dns-sidecar-amd64 && \

docker tag gcr.io/google_containers/kube-apiserver-amd64 gcr.io/google_containers/kube-apiserver-amd64:v1.7.5 && \

docker rmi gcr.io/google_containers/kube-apiserver-amd64 && \

docker tag gcr.io/google_containers/kube-controller-manager-amd64 gcr.io/google_containers/kube-controller-manager-amd64:v1.7.5 && \

docker rmi gcr.io/google_containers/kube-controller-manager-amd64 && \

docker tag gcr.io/google_containers/kube-proxy-amd64 gcr.io/google_containers/kube-proxy-amd64:v1.6.0 && \

docker rmi gcr.io/google_containers/kube-proxy-amd64 && \

docker tag gcr.io/google_containers/kube-scheduler-amd64 gcr.io/google_containers/kube-scheduler-amd64:v1.7.5 && \

docker rmi gcr.io/google_containers/kube-scheduler-amd64 && \

docker tag gcr.io/google_containers/pause-amd64 gcr.io/google_containers/pause-amd64:3.0 && \

docker rmi gcr.io/google_containers/pause-amd64
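The same re-tagging can be written data-driven, which is easier to keep in sync when versions change. This sketch echoes the docker commands instead of executing them (remove the leading `echo` to run them for real); the name/version pairs are the ones used above:

```shell
# Dry run: echo the docker tag/rmi commands for each image:version pair.
while read -r name version; do
  echo docker tag "gcr.io/google_containers/$name" "gcr.io/google_containers/$name:$version"
  echo docker rmi "gcr.io/google_containers/$name"
done <<'EOF'
etcd-amd64 3.0.17
k8s-dns-dnsmasq-nanny-amd64 1.14.5
k8s-dns-kube-dns-amd64 1.14.5
k8s-dns-sidecar-amd64 1.14.2
kube-apiserver-amd64 v1.7.5
kube-controller-manager-amd64 v1.7.5
kube-proxy-amd64 v1.6.0
kube-scheduler-amd64 v1.7.5
pause-amd64 3.0
EOF
```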

4. Add the Aliyun yum repository

[root@localhost ~]#  cat >> /etc/yum.repos.d/kubernetes.repo << EOF

[kubernetes]

name=Kubernetes

baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/

enabled=1

gpgcheck=0

EOF

5. Check the available kubectl, kubelet, kubeadm, and kubernetes-cni packages

[root@localhost ~]# yum list kubectl kubelet kubeadm kubernetes-cni

Loaded plugins: fastestmirror

Loading mirror speeds from cached hostfile

 * base: mirrors.tuna.tsinghua.edu.cn
 * extras: mirrors.sohu.com
 * updates: mirrors.sohu.com

Available Packages

kubeadm.x86_64                                                     1.7.5-0                                              kubernetes

kubectl.x86_64                                                     1.7.5-0                                              kubernetes

kubelet.x86_64                                                     1.7.5-0                                              kubernetes

kubernetes-cni.x86_64                                              0.5.1-0                                              kubernetes

[root@localhost ~]#

6. Install kubectl, kubelet, kubeadm, and kubernetes-cni

[root@localhost ~]# yum install -y kubectl kubelet kubeadm kubernetes-cni

Change the cgroup driver

vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

Change KUBELET_CGROUP_ARGS=--cgroup-driver=systemd to KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs, so that kubelet's cgroup driver matches Docker's.

Change the cAdvisor monitoring port in kubelet from the default of 0 (disabled) to 4194, so that cAdvisor's monitoring web UI can be viewed in a browser:

[root@kub-master ~]# vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

Environment="KUBELET_CADVISOR_ARGS=--cadvisor-port=4194"
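Both edits to 10-kubeadm.conf can be applied non-interactively with sed instead of vi. The sketch below runs against a temporary copy with assumed original contents (the exact default lines may differ slightly on your system); to apply for real, point sed at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf as root:

```shell
# Apply both edits to a temporary copy of the kubelet drop-in (a sketch;
# the original file contents below are assumed, not taken from a live host).
conf=$(mktemp)
cat > "$conf" <<'EOF'
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
Environment="KUBELET_CADVISOR_ARGS=--cadvisor-port=0"
EOF
sed -i \
  -e 's/--cgroup-driver=systemd/--cgroup-driver=cgroupfs/' \
  -e 's/--cadvisor-port=0/--cadvisor-port=4194/' "$conf"
cat "$conf"
rm -f "$conf"
```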

Enable and start the kubelet service on all hosts

[root@master ~]# systemctl enable kubelet && systemctl start kubelet

Initialize the master (run on the master node)

[root@master ~]# kubeadm reset && kubeadm init --apiserver-advertise-address=192.168.0.100 --kubernetes-version=v1.7.5 --pod-network-cidr=10.200.0.0/16

[preflight] Running pre-flight checks

[reset] Stopping the kubelet service

[reset] Unmounting mounted directories in "/var/lib/kubelet"

[reset] Removing kubernetes-managed containers

[reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/lib/etcd]

[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]

[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]

[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.

[init] Using Kubernetes version: v1.7.5

[init] Using Authorization modes: [Node RBAC]

[preflight] WARNING: docker version is greater than the most recently validated version. Docker version: 17.09.0-ce. Max validated version: 1.12

[preflight] Starting the kubelet service

[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)

[certificates] Generated CA certificate and key.

[certificates] Generated API server certificate and key.

[certificates] API Server serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.100]

[certificates] Generated API server kubelet client certificate and key.

[certificates] Generated service account token signing key and public key.

[certificates] Generated front-proxy CA certificate and key.

[certificates] Generated front-proxy client certificate and key.

[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"

[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"

[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"

[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"

[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"

[apiclient] Created API client, waiting for the control plane to become ready

[apiclient] All control plane components are healthy after 34.002949 seconds

[token] Using token: 0696ed.7cd261f787453bd9

[apiconfig] Created RBAC rules

[addons] Applied essential addon: kube-proxy

[addons] Applied essential addon: kube-dns

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

 mkdir -p $HOME/.kube

 sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

 sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.

Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node

as root:

 kubeadm join --token 0696ed.7cd261f787453bd9 192.168.0.100:6443

[root@master ~]#

Make a note of `kubeadm join --token 0696ed.7cd261f787453bd9 192.168.0.100:6443` — it is not shown again later and is required to add nodes.

Add nodes

[root@node1 ~]# kubeadm join --token 0696ed.7cd261f787453bd9 192.168.0.100:6443

[preflight] WARNING: kubelet service is not enabled, please run 'systemctl enable kubelet.service'

[discovery] Trying to connect to API Server "192.168.0.100:6443"

[discovery] Created cluster-info discovery client, requesting info from "https://192.168.0.100:6443"

[discovery] Cluster info signature and contents are valid, will use API Server "https://192.168.0.100:6443"

[discovery] Successfully established connection with API Server "192.168.0.100:6443"

[bootstrap] Detected server version: v1.7.10

[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)

[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request

[csr] Received signed certificate from the API server, generating KubeConfig...

Node join complete:

* Certificate signing request sent to master and response received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.

Configure the kubeconfig file for kubectl on the master

[root@master ~]# mkdir -p $HOME/.kube

[root@master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

[root@master ~]# chown $(id -u):$(id -g) $HOME/.kube/config

Install flannel on the master

docker pull quay.io/coreos/flannel:v0.8.0-amd64

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.8.0/Documentation/kube-flannel.yml

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.8.0/Documentation/kube-flannel-rbac.yml

Check the cluster

[root@master ~]# kubectl get cs

NAME                 STATUS    MESSAGE              ERROR

scheduler            Healthy   ok

controller-manager   Healthy   ok

etcd-0               Healthy   {"health": "true"}

[root@master ~]# kubectl get nodes

NAME      STATUS     AGE       VERSION

master    Ready      24m       v1.7.5

node1     NotReady   45s       v1.7.5

node2     NotReady   7s        v1.7.5

[root@master ~]# kubectl get pods --all-namespaces

NAMESPACE     NAME                             READY     STATUS              RESTARTS   AGE

kube-system   etcd-master                      1/1       Running             0          24m

kube-system   kube-apiserver-master            1/1       Running             0          24m

kube-system   kube-controller-manager-master   1/1       Running             0          24m

kube-system   kube-dns-2425271678-h48rw        0/3       ImagePullBackOff    0          25m

kube-system   kube-flannel-ds-28n3w            1/2       CrashLoopBackOff    13         24m

kube-system   kube-flannel-ds-ndspr            0/2       ContainerCreating   0          41s

kube-system   kube-flannel-ds-zvx9j            0/2       ContainerCreating   0          1m

kube-system   kube-proxy-qxxzr                 0/1       ImagePullBackOff    0          41s

kube-system   kube-proxy-shkmx                 0/1       ImagePullBackOff    0          25m

kube-system   kube-proxy-vtk52                 0/1       ContainerCreating   0          1m

kube-system   kube-scheduler-master            1/1       Running             0          24m

If you encounter: The connection to the server localhost:8080 was refused - did you specify the right host or port?

Fix: so that kubectl can reach the apiserver, append the following environment variable to ~/.bash_profile and reload it to reinitialize kubectl's configuration:

export KUBECONFIG=/etc/kubernetes/admin.conf

source ~/.bash_profile

Source:

https://my.oschina.net/ChamPly/blog/1575888
