
Setting up a Kubernetes v1.11.0 cluster with kubeadm

Contents: 1. Preparation · 2. Install kubeadm on all nodes · 3. Initialize the master node with kubeadm · 4. Install the flannel network add-on · 5. Join the nodes to the cluster · References

1. Preparation

This installation calls for at least three servers or virtual machines, each with at least 2 CPU cores and 4 GB of RAM, arranged as one master node and two worker nodes. The whole process is done on Ubuntu 16.04.1 and covers installing kubeadm, bootstrapping a basic Kubernetes cluster, and setting up the flannel network. The nodes are:

Role     Hostname     Private IP      Public IP
Master   k8s-master   172.17.0.11     111.231.79.97
node     k8s-node1    172.17.0.17     118.25.99.129
node     k8s-node2    172.17.0.15     122.152.206.67

2. Install kubeadm on all nodes

  • Configure the apt sources

Use Alibaba's mirrors for both the system packages and the kubernetes packages. The GPG warnings can be ignored, or resolved by importing the repository key as shown after the update output below.

$ cat /etc/apt/sources.list
# System package sources
deb http://mirrors.aliyun.com/ubuntu/ xenial main restricted
deb http://mirrors.aliyun.com/ubuntu/ xenial-updates main restricted
deb http://mirrors.aliyun.com/ubuntu/ xenial universe
deb http://mirrors.aliyun.com/ubuntu/ xenial-updates universe
deb http://mirrors.aliyun.com/ubuntu/ xenial multiverse
deb http://mirrors.aliyun.com/ubuntu/ xenial-updates multiverse
deb http://mirrors.aliyun.com/ubuntu/ xenial-backports main restricted universe multiverse
# Sources for kubeadm and the kubernetes components
deb https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial main
           
$ sudo apt-get update
W: GPG error: https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 6A030B21BA07F4FB
W: The repository 'https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial InRelease' is not signed.
N: Data from such a repository can't be authenticated and is therefore potentially dangerous to use.
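
If you would rather not rely on --allow-unauthenticated during the install step later, you can import the repository key first. A minimal sketch, assuming Aliyun serves the key at its usual path:

$ curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
$ sudo apt-get update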
           
  • Install docker

$ sudo apt-get install docker.io
Reading package lists... Done
Building dependency tree       
Reading state information... Done
docker.io is already the newest version (17.03.2-0ubuntu2~16.04.1).
0 upgraded, 0 newly installed, 0 to remove and 222 not upgraded.
           
  • Install kubeadm, kubelet, and kubectl

$ sudo apt-get install -y kubelet kubeadm kubectl --allow-unauthenticated
Reading package lists... Done
Building dependency tree
Reading state information... Done
kubeadm is already the newest version (1.11.1-00).
kubectl is already the newest version (1.11.1-00).
kubelet is already the newest version (1.11.1-00).
0 upgraded, 0 newly installed, 0 to remove and 222 not upgraded.
           

3. Initialize the master node with kubeadm

3.1 Configure a proxy to get past the firewall

The official k8s images are all hosted on gcr (Google Container Registry), which cannot be reached without a proxy. And even if you skip the official registry and use someone's mirror hosted on GitHub, it is still worth proxying for speed, because otherwise the downloads crawl. Let's get to it.

  • Install shadowsocks

Since shadowsocks is developed in Python, install python first, then python's package manager python-pip, and finally shadowsocks itself:

$ sudo apt-get install python
$ sudo apt-get install python-pip
$ sudo pip install shadowsocks
           
  • Configure shadowsocks

Create a configuration file shadowsocks.json and set the parameters; they should all be self-explanatory:

$ cat shadowsocks.json
{
  "server": "your server",
  "server_port": your server port,
  "local_address": "127.0.0.1",
  "local_port": 1080,
  "password": "your passwd",
  "timeout": 600,
  "method": "aes-256-cfb"
}
           
  • Start shadowsocks

$ sudo sslocal -c shadowsocks.json &
           
  • Configure a system-wide proxy

After starting the shadowsocks service you will find you still cannot get through. That is because shadowsocks is a SOCKS5 proxy, so each client application has to cooperate to use it. To route the whole system through the shadowsocks tunnel, configure a global proxy, which polipo can provide.

1. Install polipo

$ sudo apt-get install polipo
           

2. Configure polipo

$ vim /etc/polipo/config
logSyslog = true
logFile = /var/log/polipo/polipo.log
proxyAddress = "0.0.0.0"
 
socksParentProxy = "127.0.0.1:1080"
socksProxyType = socks5
 
chunkHighMark = 50331648
objectHighMark = 16384
 
serverMaxSlots = 64
serverSlots = 16
serverSlots1 = 32
           

3. Restart polipo

$ /etc/init.d/polipo restart
           

4. Configure an HTTP proxy for the terminal

$ export http_proxy="http://127.0.0.1:8123/"
           

5. Then test whether you can get through; if you get a response, the global proxy is working:

$ curl www.google.com
           

Note: after the server reboots, the following two commands must be run again:

$ sudo sslocal -c shadowsocks.json &
$ export http_proxy="http://127.0.0.1:8123/"
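
Also note that export http_proxy only affects the current shell. The Docker daemon pulls images in its own process, so if docker pull still cannot reach gcr.io, you may need to hand the proxy to dockerd through a systemd drop-in. A sketch, using the standard systemd drop-in location:

$ sudo mkdir -p /etc/systemd/system/docker.service.d
$ cat <<'EOF' | sudo tee /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://127.0.0.1:8123/" "HTTPS_PROXY=http://127.0.0.1:8123/"
EOF
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker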
           

3.2 Manually pull the k8s images

A mirror built by a community expert on GitHub syncs the latest images from gcr.io on a schedule:

https://github.com/anjia0532/gcr.io_mirror/tree/master/google-containers

The name conversion rule when pulling images:

k8s.gcr.io/{image}:{tag} <==> gcr.io/google-containers/{image}:{tag} <==> anjia0532/google-containers.{image}:{tag}

  • Images required by a kubernetes v1.11.0 cluster

(As of 2018-07-29 the latest version was v1.11.1, but its images had not been pushed to the official registry and could not be pulled, so v1.11.0 is used.)

k8s.gcr.io/kube-apiserver-amd64:v1.11.0
k8s.gcr.io/kube-controller-manager-amd64:v1.11.0
k8s.gcr.io/kube-scheduler-amd64:v1.11.0
k8s.gcr.io/kube-proxy-amd64:v1.11.0
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd-amd64:3.2.18
k8s.gcr.io/coredns:1.1.3
           
  • Convert to the mirror's names, then pull

sudo docker pull anjia0532/google-containers.kube-apiserver-amd64:v1.11.0
sudo docker pull anjia0532/google-containers.kube-controller-manager-amd64:v1.11.0
sudo docker pull anjia0532/google-containers.kube-scheduler-amd64:v1.11.0
sudo docker pull anjia0532/google-containers.kube-proxy-amd64:v1.11.0
sudo docker pull anjia0532/google-containers.pause:3.1
sudo docker pull anjia0532/google-containers.etcd-amd64:3.2.18
sudo docker pull anjia0532/google-containers.coredns:1.1.3
           
  • Tag the images back to their k8s.gcr.io names

docker tag anjia0532/google-containers.kube-apiserver-amd64:v1.11.0 k8s.gcr.io/kube-apiserver-amd64:v1.11.0
docker tag anjia0532/google-containers.kube-controller-manager-amd64:v1.11.0 k8s.gcr.io/kube-controller-manager-amd64:v1.11.0
docker tag anjia0532/google-containers.kube-scheduler-amd64:v1.11.0 k8s.gcr.io/kube-scheduler-amd64:v1.11.0
docker tag anjia0532/google-containers.kube-proxy-amd64:v1.11.0 k8s.gcr.io/kube-proxy-amd64:v1.11.0
docker tag anjia0532/google-containers.pause:3.1 k8s.gcr.io/pause:3.1
docker tag anjia0532/google-containers.etcd-amd64:3.2.18 k8s.gcr.io/etcd-amd64:3.2.18
docker tag anjia0532/google-containers.coredns:1.1.3 k8s.gcr.io/coredns:1.1.3
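
The pull-and-retag steps above can also be scripted in one loop. A minimal sketch over the same image list:

for img in kube-apiserver-amd64:v1.11.0 kube-controller-manager-amd64:v1.11.0 \
           kube-scheduler-amd64:v1.11.0 kube-proxy-amd64:v1.11.0 \
           pause:3.1 etcd-amd64:3.2.18 coredns:1.1.3; do
  # pull from the mirror, then restore the k8s.gcr.io name kubeadm expects
  sudo docker pull anjia0532/google-containers.${img}
  sudo docker tag anjia0532/google-containers.${img} k8s.gcr.io/${img}
done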
           

3.3 Initialize the master with kubeadm

$ kubeadm init --kubernetes-version=1.11.0 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=172.17.0.11

[init] Using Kubernetes version: v1.11.0
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
	[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[preflight] Starting the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [ 172.17.0.11]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [ubuntu-master] and IPs [172.17.0.11]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 28.003828 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node ubuntu-master as master by adding a label and a taint
[markmaster] Master ubuntu-master tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: rw4enn.mvk547juq7qi2b5f
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 172.17.0.11:6443 --token 54f68j.arg4usxkj5qyoaca
           

--kubernetes-version pins the version to 1.11.0; otherwise kubeadm defaults to 1.11.1.

--pod-network-cidr sets the pod network to 10.244.0.0/16. Do not change this value: it must match the network in flannel's kube-flannel.yml, so if you change one, change both.

--apiserver-advertise-address is the address the apiserver advertises; use the private IP.

3.4 Configure kubectl

kubectl is the command-line tool for managing the Kubernetes cluster, and we installed it on all the nodes earlier. After the master finishes initializing, some configuration is needed before kubectl will work. It is recommended to run kubectl as a regular Linux user (root runs into a few problems), so here we configure kubectl for the ubuntu user:

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
           

For convenience, enable auto-completion for kubectl commands:

echo "source <(kubectl completion bash)" >> ~/.bashrc
           

4. Install the flannel network add-on

4.1 Pull the image

Pull the image on k8s-master (the proxy is only configured on k8s-master):

$ sudo docker pull quay.io/coreos/flannel:v0.10.0-amd64
           

4.2 Save the images and copy them to each node

The images to save are k8s.gcr.io/pause, quay.io/coreos/flannel, and k8s.gcr.io/kube-proxy-amd64. Note that saving by image ID, as below, drops the repository names, which is why each node has to re-tag the images after loading them.

# k8s-master
$ sudo docker images
$ sudo docker save da86e6ba6ca1 f0fad859c909 1d3d7afd77d1 > node.tar
$ scp node.tar ubuntu@k8s-node1:/tmp
$ scp node.tar ubuntu@k8s-node2:/tmp
           
# k8s-node1
$ sudo docker load < /tmp/node.tar
$ sudo docker tag da86e6ba6ca1 k8s.gcr.io/pause:3.1
$ sudo docker tag f0fad859c909 quay.io/coreos/flannel:v0.10.0-amd64
$ sudo docker tag 1d3d7afd77d1  k8s.gcr.io/kube-proxy-amd64:v1.11.0
           
# k8s-node2
$ sudo docker load < /tmp/node.tar
$ sudo docker tag da86e6ba6ca1 k8s.gcr.io/pause:3.1
$ sudo docker tag f0fad859c909 quay.io/coreos/flannel:v0.10.0-amd64
$ sudo docker tag 1d3d7afd77d1  k8s.gcr.io/kube-proxy-amd64:v1.11.0
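
Alternatively, docker save accepts repository:tag references, which preserves the names inside the archive and makes the re-tagging on the nodes unnecessary. A sketch:

# k8s-master
$ sudo docker save -o node.tar k8s.gcr.io/pause:3.1 quay.io/coreos/flannel:v0.10.0-amd64 k8s.gcr.io/kube-proxy-amd64:v1.11.0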
           

4.3 Create the flannel pods from kube-flannel.yml

# k8s-master
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
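
Once applied, you can watch the flannel daemonset roll out across the nodes (the pods carry the app=flannel label defined in kube-flannel.yml):

$ kubectl -n kube-system get pods -l app=flannel -o wide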
           

5. Join the nodes to the cluster

5.1 Check the token on k8s-master

$ sudo kubeadm token list
TOKEN                     TTL         EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
54f68j.arg4usxkj5qyoaca   <invalid>   2018-07-28T18:08:18+08:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token
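
The <invalid> in the TTL column means the default token, which lives for 24 hours, has already expired. If yours has expired too, a fresh token together with the full join command can be generated on the master:

$ sudo kubeadm token create --print-join-command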
           

5.2 Join the nodes to the cluster

# k8s-node1
$ kubeadm join --token 54f68j.arg4usxkj5qyoaca 172.17.0.11:6443 --discovery-token-unsafe-skip-ca-verification
# k8s-node2
$ kubeadm join --token 54f68j.arg4usxkj5qyoaca 172.17.0.11:6443 --discovery-token-unsafe-skip-ca-verification
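
--discovery-token-unsafe-skip-ca-verification skips pinning the master's CA, which is why no CA hash is needed here. For a verified join you could instead pass --discovery-token-ca-cert-hash sha256:<hash>, computing the hash from the master's CA certificate:

$ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | \
  openssl rsa -pubin -outform der 2>/dev/null | \
  openssl dgst -sha256 -hex | sed 's/^.* //'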
           

A successful join prints a confirmation message; I didn't take a screenshot at the time, so see the screenshot in the reference material.

5.3、檢視pod狀态和叢集狀态

When all the pods are in the Running state, everything is OK:

$ kubectl get pod --all-namespaces
NAMESPACE     NAME                                 READY     STATUS    RESTARTS   AGE
kube-system   coredns-78fcdf6894-jrh8p             1/1       Running   0          1d
kube-system   coredns-78fcdf6894-kh65f             1/1       Running   0          1d
kube-system   etcd-k8s-master                      1/1       Running   0          1d
kube-system   kube-apiserver-k8s-master            1/1       Running   0          1d
kube-system   kube-controller-manager-k8s-master   1/1       Running   0          1d
kube-system   kube-flannel-ds-amd64-6kwlk          1/1       Running   0          1d
kube-system   kube-flannel-ds-amd64-8thxm          1/1       Running   0          1d
kube-system   kube-flannel-ds-amd64-f2rjt          1/1       Running   0          1d
kube-system   kube-proxy-h5k4v                     1/1       Running   0          1d
kube-system   kube-proxy-hsh5b                     1/1       Running   0          1d
kube-system   kube-proxy-lp55r                     1/1       Running   0          1d
kube-system   kube-scheduler-k8s-master            1/1       Running   0          1d
           

如果狀态為Pending、ContainerCreating、ImagePullBackOff都表明 Pod 沒有就緒,可以通過pod的Event排查下原因

$ kubectl describe pod kube-flannel-ds-amd64-f2rjt -n kube-system
           

檢視所有節點的狀态(version應該是v1.11.0,可能是因為我的kubectl版本是v1.11.1導緻的,忽略)

$ kubectl get nodes
NAME         STATUS    ROLES     AGE       VERSION
k8s-master   Ready     master    1d        v1.11.1
k8s-node1    Ready     <none>    1d        v1.11.1
k8s-node2    Ready     <none>    1d        v1.11.1
           

References

https://www.kubernetes.org.cn/3895.html

https://mp.weixin.qq.com/s?__biz=MzIwMTM5MjUwMg==&mid=2653588195&idx=1&sn=ed00d0e19feb417b41f4d0d4b7af86de&chksm=8d3082faba470bec9b2be2a2c98b44a52f9b8f90be48a54f6212d2c43ef4a54593497cd12024&scene=21#wechat_redirect

https://mp.weixin.qq.com/s?__biz=MzIwMTM5MjUwMg==&mid=2653588210&idx=1&sn=b198ad2c5be463fb3f252fd375e18fff&chksm=8d3082ebba470bfd5ee1d343dddea7397f5c90b684d2510e5985ffb0efa1db420d605de49258&scene=21#wechat_redirect

http://www.3gcomet.com/?p=1980