Environment:
master: master 10.51.98.41
node1: node1 10.25.134.181
node2: node2 10.51.55.208
Preparation:
Hostname-based communication between nodes
Time synchronization
Firewall disabled
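The three preparation steps above can be applied with commands like the following (a sketch for CentOS 7; the /etc/hosts entries use the hostnames and IPs from the environment above, and chrony/firewalld are assumptions about the hosts):

```shell
# hostname-based communication: map each node's hostname to its IP
cat >> /etc/hosts <<'EOF'
10.51.98.41   master
10.25.134.181 node1
10.51.55.208  node2
EOF

# time synchronization (assuming chrony is installed, the CentOS 7 default)
systemctl enable --now chronyd

# disable the firewall
systemctl disable --now firewalld
```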
Installation steps (manual):
1. etcd cluster: master node only
2. flannel: all nodes in the cluster
3. Configure the k8s master (master node only):
kubernetes-master
Services to start:
kube-apiserver, kube-scheduler, kube-controller-manager
4. Configure each k8s Node:
kubernetes-node
Start the docker service first;
k8s services to start:
kube-proxy, kubelet
kubeadm:
1. master, nodes: install kubelet, kubeadm, docker
2. master: kubeadm init to initialize the cluster
3. nodes: kubeadm join to join the cluster
https://github.com/kubernetes/kubeadm/blob/master/docs/design/design_v1.10.md
[master node]
Package download:
Configure the yum repositories:
Download the Aliyun docker-ce repo
Download the Aliyun kubernetes repo
[kubernetes]
name=Kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
enabled=1
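The docker-ce repo mentioned above can be fetched from the Aliyun mirror with yum-config-manager (a sketch; run it in /etc/yum.repos.d on the master before installing packages):

```shell
# yum-config-manager ships in yum-utils
yum install -y yum-utils
# drop docker-ce.repo into /etc/yum.repos.d/
yum-config-manager --add-repo \
  https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
```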
[root@ll-sas01 /etc/yum.repos.d]# rpm --import rpm-package-key.gpg
[root@ll-sas01 /etc/yum.repos.d]# rpm --import yum-key.gpg
Sync the repo files to the two node nodes
yum install docker-ce kubelet kubeadm kubectl -y
Initialize docker and start it
Configure a proxy so docker can pull the k8s.gcr.io images
Edit /usr/lib/systemd/system/docker.service and add the following two lines to the [Service] section:
Environment="HTTPS_PROXY=http://www.ik8s.io:10080"
Environment="NO_PROXY=127.0.0.0/8,172.20.0.0/16"
Make sure the following two files both contain the value 1:
[root@ll-sas01 ~]# cat /proc/sys/net/bridge/bridge-nf-call-ip6tables
1
[root@ll-sas01 ~]# cat /proc/sys/net/bridge/bridge-nf-call-iptables
1
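If either value is not 1, it can be set persistently as follows (a sketch; assumes the br_netfilter kernel module, which provides these sysctls, is available):

```shell
# load the module that exposes the bridge-nf-call sysctls
modprobe br_netfilter

# persist the settings across reboots
cat > /etc/sysctl.d/k8s.conf <<'EOF'
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

# apply all sysctl config files now
sysctl --system
```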
Modify /etc/sysconfig/kubelet
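The usual edit here, matching the --ignore-preflight-errors=Swap flag passed to kubeadm init below, is to let the kubelet start even with swap enabled (a sketch of the file's contents):

```shell
# /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
```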
kubeadm init --kubernetes-version=v1.11.3 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
Check the docker images pulled during initialization:
[root@ll-sas01 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
k8s.gcr.io/kube-proxy-amd64 v1.11.3 be5a6e1ecfa6 7 days ago 97.8MB
k8s.gcr.io/kube-apiserver-amd64 v1.11.3 3de571b6587b 7 days ago 187MB
k8s.gcr.io/kube-controller-manager-amd64 v1.11.3 a710d6a92519 7 days ago 155MB
k8s.gcr.io/kube-scheduler-amd64 v1.11.3 ca1f38854f74 7 days ago 56.8MB
k8s.gcr.io/coredns 1.1.3 b3b94275d97c 3 months ago 45.6MB
k8s.gcr.io/etcd-amd64 3.2.18 b8df3b177be2 5 months ago 219MB
k8s.gcr.io/pause 3.1 da86e6ba6ca1 9 months ago 742kB
kubeadm init prints the join command to run on each node:
kubeadm join 123.57.224.60:6443 --token eza7wy.j0jonwz244k05yyi --discovery-token-ca-cert-hash sha256:e548d233d4b7674e5061923039f3d9d27c0fca20fe2a1499e49c42d9781142f0
After initialization completes, run the recommended commands:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
[node1]
[root@xjd-sas01 /etc/yum.repos.d]# rpm --import rpm-package-key.gpg
[root@xjd-sas01 /etc/yum.repos.d]# rpm --import yum-key.gpg
[root@xjd-sas01 ~]# yum install docker-ce kubelet kubeadm
[root@xjd-sas01 ~]# systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[node2]
[root@xjd-sas02 /etc/yum.repos.d]# rpm --import rpm-package-key.gpg
[root@xjd-sas02 /etc/yum.repos.d]# rpm --import yum-key.gpg
[root@xjd-sas02 ~]# yum install docker-ce kubelet kubeadm
[root@xjd-sas02 ~]# systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[master]
Cluster component status:
[root@ll-sas01 ~]# kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health": "true"}
[root@ll-sas01 ~]# kubectl get componentstatus    # cs is short for componentstatus
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health": "true"}
View node information:
[root@ll-sas01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
ll-sas01 NotReady master 26m v1.11.3
The node is NotReady because no network add-on is deployed yet, so the nodes cannot communicate.
flannel documentation:
https://github.com/coreos/flannel
To install flannel on Kubernetes 1.7+:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
Check whether the flannel image has finished downloading:
[root@ll-sas01 ~]# docker images
quay.io/coreos/flannel v0.10.0-amd64 f0fad859c909 7 months ago 44.6MB
[root@ll-sas01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
ll-sas01 Ready master 33m v1.11.3
The STATUS is now Ready.
View the namespaces:
[root@ll-sas01 ~]# kubectl get ns
NAME STATUS AGE
default Active 45m
kube-public Active 45m
kube-system Active 45m
View all pods in the kube-system namespace:
[root@ll-sas01 ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-78fcdf6894-4rrzv 1/1 Running 0 34m
coredns-78fcdf6894-94b75 1/1 Running 0 34m
etcd-ll-sas01 1/1 Running 0 33m
kube-apiserver-ll-sas01 1/1 Running 0 33m
kube-controller-manager-ll-sas01 1/1 Running 0 33m
kube-flannel-ds-amd64-fzf4r 1/1 Running 0 2m
kube-proxy-hc4ww 1/1 Running 0 34m
kube-scheduler-ll-sas01 1/1 Running 0 33m
Copy the master's config files to node1 and node2:
[root@ll-sas01 ~]# scp /usr/lib/systemd/system/docker.service xjd-sas01:/usr/lib/systemd/system/docker.service
docker.service 100% 1240 1.7MB/s 00:00
[root@ll-sas01 ~]# scp /usr/lib/systemd/system/docker.service xjd-sas02:/usr/lib/systemd/system/docker.service
docker.service 100% 1240 1.6MB/s 00:00
[root@ll-sas01 ~]# scp /etc/sysconfig/kubelet xjd-sas01:/etc/sysconfig/
kubelet 100% 42 78.8KB/s 00:00
[root@ll-sas01 ~]# scp /etc/sysconfig/kubelet xjd-sas02:/etc/sysconfig/
kubelet 100% 42 84.3KB/s 00:00
[node1/2]
[root@xjd-sas01 ~]# systemctl start docker
Warning: docker.service changed on disk. Run 'systemctl daemon-reload' to reload units.
[root@xjd-sas01 ~]# systemctl daemon-reload
[root@xjd-sas01 ~]# systemctl start docker
[root@xjd-sas01 ~]# systemctl enable docker kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.
[root@xjd-sas01 ~]# kubeadm join 123.57.224.60:6443 --token eza7wy.j0jonwz244k05yyi --discovery-token-ca-cert-hash sha256:e548d233d4b7674e5061923039f3d9d27c0fca20fe2a1499e49c42d9781142f0
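If the original token has expired when a node joins later (tokens default to a 24-hour TTL), a fresh join command can be generated on the master (a sketch; this subcommand exists in kubeadm of this era):

```shell
# prints a ready-to-run "kubeadm join ... --token ... --discovery-token-ca-cert-hash ..." line
kubeadm token create --print-join-command
```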
[master]
Check the nodes; both nodes have joined:
[root@ll-sas01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
ll-sas01 Ready master 1h v1.11.3
xjd-sas01 NotReady <none> 2m v1.11.3
xjd-sas02 NotReady <none> 8s v1.11.3
Wait for the docker images to finish syncing to the two node nodes:
[root@ll-sas01 ~]# kubectl get pods -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
coredns-78fcdf6894-4rrzv 1/1 Running 0 1h 10.244.0.2 ll-sas01 <none>
coredns-78fcdf6894-94b75 1/1 Running 0 1h 10.244.0.3 ll-sas01 <none>
etcd-ll-sas01 1/1 Running 0 1h 10.51.98.41 ll-sas01 <none>
kube-apiserver-ll-sas01 1/1 Running 0 1h 10.51.98.41 ll-sas01 <none>
kube-controller-manager-ll-sas01 1/1 Running 0 1h 10.51.98.41 ll-sas01 <none>
kube-flannel-ds-amd64-cxfsq 1/1 Running 0 12m 10.51.55.208 xjd-sas02 <none>
kube-flannel-ds-amd64-fzf4r 1/1 Running 0 40m 10.51.98.41 ll-sas01 <none>
kube-flannel-ds-amd64-gvs77 1/1 Running 0 13m 10.25.134.181 xjd-sas01 <none>
kube-proxy-hc4ww 1/1 Running 0 1h 10.51.98.41 ll-sas01 <none>
kube-proxy-r54nq 1/1 Running 0 12m 10.51.55.208 xjd-sas02 <none>
kube-proxy-skfqg 1/1 Running 0 13m 10.25.134.181 xjd-sas01 <none>
kube-scheduler-ll-sas01 1/1 Running 0 1h 10.51.98.41 ll-sas01 <none>
[root@xjd-sas01 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
k8s.gcr.io/kube-proxy-amd64 v1.11.3 be5a6e1ecfa6 7 days ago 97.8MB
quay.io/coreos/flannel v0.10.0-amd64 f0fad859c909 7 months ago 44.6MB
k8s.gcr.io/pause 3.1 da86e6ba6ca1 9 months ago 742kB