Quick overview:
By default Docker is single-host: it cannot span machines and has no built-in high availability, so production environments rarely run applications on Docker alone. Kubernetes (k8s) acts mainly as a scheduler/orchestrator on top of Docker: it lets containers communicate across nodes, and when a node running containers fails, the containers are automatically rescheduled onto other available nodes so the service keeps running. k8s is currently the most widely used choice.
k8s architecture: a cluster consists of master nodes and worker nodes. Routine operations are performed on the master, which plays the control-plane role; the worker nodes provide the compute capacity.
master:
kubectl: the command-line client; k8s is operated through kubectl commands
REST API: the interface k8s exposes to the outside (served by the apiserver); for example, the web UI and kubectl both control k8s by sending requests to the REST API
scheduler: the scheduler; when a pod is created, for example, the scheduler decides which node the pod is assigned to
controller-manager: watches the health of pods and nodes and keeps pods in their desired state; if a pod fails, the controller-manager repairs the situation automatically, for example by starting a new pod replica
kubelet: the node agent (it runs on every node, not only the master); commands the master issues to a node are relayed to the local components by kubelet
etcd: the datastore; all cluster configuration data lives in etcd
kube-proxy: runs on every node; when a Service (svc) is later created to expose pods externally, traffic arriving at the svc IP is routed to the pods by kube-proxy
node:
pod: the smallest unit k8s runs; a pod contains one or more containers (see the minimal manifest below)
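For illustration only, a minimal Pod manifest; the demo-pod/nginx names are placeholders, not part of the installation that follows:
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
  - name: demo
    image: nginx:1.17   # placeholder image
Save it as demo-pod.yml, create it with kubectl apply -f demo-pod.yml, and check it with kubectl get pods.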

Installation methods:
kubeadm (production)
yum
from source (good for learning; builds the deepest understanding)
===================================================================================
Node planning
Start the installation:
Initialize the configuration of every node:
Install common tools
yum -y install wget telnet net-tools lrzsz vim zip unzip
Set the hostnames
Rename the master to k8s-master01 and the workers to k8s-node01, k8s-node02, and so on, following that naming convention.
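For example, on each host (adjust the name per node):
hostnamectl set-hostname k8s-master01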
Disable the firewall
[root@k8s-node01 ~]# systemctl stop firewalld && systemctl disable firewalld
Permanently disable SELinux
[root@k8s-node01 ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config;cat /etc/selinux/config
Temporarily disable SELinux
[root@k8s-node01 ~]# setenforce 0
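getenforce should now report Permissive (or Disabled after a reboot with the config change above):
[root@k8s-node01 ~]# getenforce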
Configure hosts entries
[root@k8s-node01 ~]# cat >> /etc/hosts <<EOF
10.1.1.100 k8s-master01
10.1.1.110 k8s-node01
10.1.1.120 k8s-node02
EOF
Temporarily disable swap
[root@k8s-node01 ~]# swapoff -a
Permanently disable swap by commenting out the swap line in /etc/fstab
[root@k8s-node01 ~]# vim /etc/fstab
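If you prefer not to edit by hand, this sed comments out every line that mentions swap (review /etc/fstab afterwards); free -m should then show the Swap row at 0:
sed -ri 's/.*swap.*/#&/' /etc/fstab
free -m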
Configure the yum repos (the distro's own kubernetes packages are too old)
rm -rf /etc/yum.repos.d/*
cd /etc/yum.repos.d/
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
vim k8s.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
yum clean all && yum makecache
Install docker
[root@k8s-master01 ~]#yum install -y yum-utils device-mapper-persistent-data lvm2
[root@k8s-master01 ~]# yum -y install docker
由于核心不支援 overlay2是以需要更新核心或者禁用overlay2(我們選擇禁用,安裝完docker可以啟動docker測試下是否支援,啟動docker不報錯的可以忽略這一步)
vim /etc/sysconfig/docker 将 --selinux-enabled=false
Start the docker service
[root@k8s-master01 ~]# systemctl start docker && systemctl enable docker
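Optionally confirm docker is healthy and see which storage driver it ended up with (the output varies with your kernel):
[root@k8s-master01 ~]# docker info | grep -i 'storage driver'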
Set the server time zone
timedatectl set-timezone Asia/Shanghai
Set the kernel parameters k8s needs
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
[root@k8s-master01 ~]# sysctl --system    # sysctl -p alone only reads /etc/sysctl.conf; --system also loads /etc/sysctl.d/k8s.conf
[root@k8s-master01 ~]# echo "1" > /proc/sys/net/ipv4/ip_forward
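The echo only enables forwarding until the next reboot; one way to persist it is to add the setting to the same drop-in file:
echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.d/k8s.conf
sysctl --system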
Install the k8s packages. Version 1.15.1 is installed here; pin whichever version you need, provided your yum repo carries it. Use yum list kube\* to see the newest version the repo offers.
[root@k8s-master01 ~]#yum -y install kubeadm-1.15.1 kubelet-1.15.1 kubectl-1.15.1
Enable and start the service (kubelet will keep crash-looping until kubeadm init/join has run; that is expected)
[root@k8s-master01 ~]# systemctl restart kubelet && systemctl enable kubelet
====================================Run everything above on ALL nodes; any node added later must go through the same steps================================================================
Initialize the k8s cluster (run on the master):
# --apiserver-advertise-address: the address the API server advertises; use the master's IP (on multi-NIC hosts, pick the right interface)
$ kubeadm init \
  --apiserver-advertise-address=10.1.1.100 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.15.1 \
  --service-cidr=10.2.0.0/16 \
  --pod-network-cidr=10.244.0.0/16
Once initialization finishes, the kubelet service should be running normally (barring errors). Set up kubectl access:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
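kubectl should answer now, but the master will stay NotReady until a pod network add-on is installed; the flannel step below fixes that:
[root@k8s-master01 ~]# kubectl get nodes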
Install the flannel network add-on
vim kube-flannel.yml and add the following content
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: flannel
rules:
- apiGroups:
- ""
resources:
- pods
verbs:
- get
- apiGroups:
- ""
resources:
- nodes
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes/status
verbs:
- patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: flannel
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: flannel
subjects:
- kind: ServiceAccount
name: flannel
namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: flannel
namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
name: kube-flannel-cfg
namespace: kube-system
labels:
tier: node
app: flannel
data:
cni-conf.json: |
{
"name": "cbr0",
"plugins": [
{
"type": "flannel",
"delegate": {
"hairpinMode": true,
"isDefaultGateway": true
}
},
{
"type": "portmap",
"capabilities": {
"portMappings": true
}
}
]
}
net-conf.json: |
{
"Network": "10.244.0.0/16",
"Backend": {
"Type": "vxlan"
}
}
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: kube-flannel-ds-amd64
namespace: kube-system
labels:
tier: node
app: flannel
spec:
template:
metadata:
labels:
tier: node
app: flannel
spec:
hostNetwork: true
nodeSelector:
beta.kubernetes.io/arch: amd64
tolerations:
- operator: Exists
effect: NoSchedule
serviceAccountName: flannel
initContainers:
- name: install-cni
image: quay.io/coreos/flannel:v0.11.0-amd64
command:
- cp
args:
- -f
- /etc/kube-flannel/cni-conf.json
- /etc/cni/net.d/10-flannel.conflist
volumeMounts:
- name: cni
mountPath: /etc/cni/net.d
- name: flannel-cfg
mountPath: /etc/kube-flannel/
containers:
- name: kube-flannel
image: quay.io/coreos/flannel:v0.11.0-amd64
command:
- /opt/bin/flanneld
args:
- --ip-masq
- --kube-subnet-mgr
resources:
requests:
cpu: "100m"
memory: "50Mi"
limits:
cpu: "100m"
memory: "50Mi"
securityContext:
privileged: true
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- name: run
mountPath: /run
- name: flannel-cfg
mountPath: /etc/kube-flannel/
volumes:
- name: run
hostPath:
path: /run
- name: cni
hostPath:
path: /etc/cni/net.d
- name: flannel-cfg
configMap:
name: kube-flannel-cfg
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: kube-flannel-ds-arm64
namespace: kube-system
labels:
tier: node
app: flannel
spec:
template:
metadata:
labels:
tier: node
app: flannel
spec:
hostNetwork: true
nodeSelector:
beta.kubernetes.io/arch: arm64
tolerations:
- operator: Exists
effect: NoSchedule
serviceAccountName: flannel
initContainers:
- name: install-cni
image: quay.io/coreos/flannel:v0.11.0-arm64
command:
- cp
args:
- -f
- /etc/kube-flannel/cni-conf.json
- /etc/cni/net.d/10-flannel.conflist
volumeMounts:
- name: cni
mountPath: /etc/cni/net.d
- name: flannel-cfg
mountPath: /etc/kube-flannel/
containers:
- name: kube-flannel
image: quay.io/coreos/flannel:v0.11.0-arm64
command:
- /opt/bin/flanneld
args:
- --ip-masq
- --kube-subnet-mgr
resources:
requests:
cpu: "100m"
memory: "50Mi"
limits:
cpu: "100m"
memory: "50Mi"
securityContext:
privileged: true
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- name: run
mountPath: /run
- name: flannel-cfg
mountPath: /etc/kube-flannel/
volumes:
- name: run
hostPath:
path: /run
- name: cni
hostPath:
path: /etc/cni/net.d
- name: flannel-cfg
configMap:
name: kube-flannel-cfg
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: kube-flannel-ds-arm
namespace: kube-system
labels:
tier: node
app: flannel
spec:
template:
metadata:
labels:
tier: node
app: flannel
spec:
hostNetwork: true
nodeSelector:
beta.kubernetes.io/arch: arm
tolerations:
- operator: Exists
effect: NoSchedule
serviceAccountName: flannel
initContainers:
- name: install-cni
image: quay.io/coreos/flannel:v0.11.0-arm
command:
- cp
args:
- -f
- /etc/kube-flannel/cni-conf.json
- /etc/cni/net.d/10-flannel.conflist
volumeMounts:
- name: cni
mountPath: /etc/cni/net.d
- name: flannel-cfg
mountPath: /etc/kube-flannel/
containers:
- name: kube-flannel
image: quay.io/coreos/flannel:v0.11.0-arm
command:
- /opt/bin/flanneld
args:
- --ip-masq
- --kube-subnet-mgr
resources:
requests:
cpu: "100m"
memory: "50Mi"
limits:
cpu: "100m"
memory: "50Mi"
securityContext:
privileged: true
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- name: run
mountPath: /run
- name: flannel-cfg
mountPath: /etc/kube-flannel/
volumes:
- name: run
hostPath:
path: /run
- name: cni
hostPath:
path: /etc/cni/net.d
- name: flannel-cfg
configMap:
name: kube-flannel-cfg
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: kube-flannel-ds-ppc64le
namespace: kube-system
labels:
tier: node
app: flannel
spec:
template:
metadata:
labels:
tier: node
app: flannel
spec:
hostNetwork: true
nodeSelector:
beta.kubernetes.io/arch: ppc64le
tolerations:
- operator: Exists
effect: NoSchedule
serviceAccountName: flannel
initContainers:
- name: install-cni
image: quay.io/coreos/flannel:v0.11.0-ppc64le
command:
- cp
args:
- -f
- /etc/kube-flannel/cni-conf.json
- /etc/cni/net.d/10-flannel.conflist
volumeMounts:
- name: cni
mountPath: /etc/cni/net.d
- name: flannel-cfg
mountPath: /etc/kube-flannel/
containers:
- name: kube-flannel
image: quay.io/coreos/flannel:v0.11.0-ppc64le
command:
- /opt/bin/flanneld
args:
- --ip-masq
- --kube-subnet-mgr
resources:
requests:
cpu: "100m"
memory: "50Mi"
limits:
cpu: "100m"
memory: "50Mi"
securityContext:
privileged: true
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- name: run
mountPath: /run
- name: flannel-cfg
mountPath: /etc/kube-flannel/
volumes:
- name: run
hostPath:
path: /run
- name: cni
hostPath:
path: /etc/cni/net.d
- name: flannel-cfg
configMap:
name: kube-flannel-cfg
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: kube-flannel-ds-s390x
namespace: kube-system
labels:
tier: node
app: flannel
spec:
template:
metadata:
labels:
tier: node
app: flannel
spec:
hostNetwork: true
nodeSelector:
beta.kubernetes.io/arch: s390x
tolerations:
- operator: Exists
effect: NoSchedule
serviceAccountName: flannel
initContainers:
- name: install-cni
image: quay.io/coreos/flannel:v0.11.0-s390x
command:
- cp
args:
- -f
- /etc/kube-flannel/cni-conf.json
- /etc/cni/net.d/10-flannel.conflist
volumeMounts:
- name: cni
mountPath: /etc/cni/net.d
- name: flannel-cfg
mountPath: /etc/kube-flannel/
containers:
- name: kube-flannel
image: quay.io/coreos/flannel:v0.11.0-s390x
command:
- /opt/bin/flanneld
args:
- --ip-masq
- --kube-subnet-mgr
resources:
requests:
cpu: "100m"
memory: "50Mi"
limits:
cpu: "100m"
memory: "50Mi"
securityContext:
privileged: true
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- name: run
mountPath: /run
- name: flannel-cfg
mountPath: /etc/kube-flannel/
volumes:
- name: run
hostPath:
path: /run
- name: cni
hostPath:
path: /etc/cni/net.d
- name: flannel-cfg
configMap:
name: kube-flannel-cfg
kubectl apply -f kube-flannel.yml
Hit a problem: kubectl get pods -n kube-system showed the flannel pod failing to pull its image. After pulling the image manually on the master and running kubectl apply again, the flannel network came up normally.
docker pull quay.io/coreos/flannel:v0.11.0-amd64
kubectl apply -f kube-flannel.yml
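Verify: the kube-flannel pod should be Running and the master should flip to Ready:
kubectl get pods -n kube-system | grep flannel
kubectl get nodes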
=====================================================master node deployment complete==================================================================================
Join the workers to the cluster; run this command on every node that should join:
kubeadm join 10.1.1.100:6443 --token jib7mk.mybh3vpbfcf2tgw0 \
--discovery-token-ca-cert-hash sha256:e5ab611a21d0442c1f25b9f88eaf5cddc66ad906d206163f2630d6228efa733
If you lose the kubeadm join parameters, regenerate them with:
[root@k8s-master01 opt]# kubeadm token create --print-join-command
Problem 1:
After kubeadm join, kubectl get nodes on the master showed k8s-node01 in NotReady state.
Fix: check the logs on the node with tail -f /var/log/messages or journalctl -f -u kubelet; the following error showed up:
Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
The log points at the network add-on. Running docker images on the node showed the flannel image had not been pulled yet. You can simply wait, or if it still has not arrived after a while, pull it manually:
docker pull quay.io/coreos/flannel:v0.11.0-amd64
A minute or two later, verify on the master:
[root@k8s-master01 opt]# kubectl get nodes
NAME           STATUS   ROLES    AGE     VERSION
k8s-master01   Ready    master   15h     v1.15.1
k8s-node01     Ready    <none>   6m42s   v1.15.1
Deploy the k8s dashboard (the k8s web UI) on the master
vim kubernetes-dashboard.yml
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ------------------- Dashboard Secret ------------------- #
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-certs
namespace: kube-system
type: Opaque
---
# ------------------- Dashboard Service Account ------------------- #
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
---
# ------------------- Dashboard Role & Role Binding ------------------- #
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: kubernetes-dashboard-minimal
namespace: kube-system
rules:
# Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
resources: ["secrets"]
verbs: ["create"]
# Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["create"]
# Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
resources: ["secrets"]
resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
verbs: ["get", "update", "delete"]
# Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
resources: ["configmaps"]
resourceNames: ["kubernetes-dashboard-settings"]
verbs: ["get", "update"]
# Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
resources: ["services"]
resourceNames: ["heapster"]
verbs: ["proxy"]
- apiGroups: [""]
resources: ["services/proxy"]
resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: kubernetes-dashboard-minimal
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kube-system
---
# ------------------- Dashboard Deployment ------------------- #
kind: Deployment
apiVersion: apps/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: kubernetes-dashboard
template:
metadata:
labels:
k8s-app: kubernetes-dashboard
spec:
containers:
- name: kubernetes-dashboard
image: registry.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1
ports:
- containerPort: 8443
protocol: TCP
args:
- --auto-generate-certificates
# Uncomment the following line to manually specify Kubernetes API server Host
# If not specified, Dashboard will attempt to auto discover the API server and connect
# to it. Uncomment only if the default does not work.
# - --apiserver-host=http://my-address:port
volumeMounts:
- name: kubernetes-dashboard-certs
mountPath: /certs
# Create on-disk volume to store exec logs
- mountPath: /tmp
name: tmp-volume
livenessProbe:
httpGet:
scheme: HTTPS
path: /
port: 8443
initialDelaySeconds: 30
timeoutSeconds: 30
volumes:
- name: kubernetes-dashboard-certs
secret:
secretName: kubernetes-dashboard-certs
- name: tmp-volume
emptyDir: {}
serviceAccountName: kubernetes-dashboard
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
---
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
spec:
type: NodePort
ports:
- port: 443
targetPort: 8443
selector:
k8s-app: kubernetes-dashboard
Deploy it (the filename must match the one created above)
[root@k8s-master01 opt]# kubectl apply -f kubernetes-dashboard.yml
Find the NodePort the dashboard service is exposed on; for the IP, use the master's address
[root@k8s-master01 opt]# kubectl get svc kubernetes-dashboard -n kube-system
NAME                   TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort   10.103.18.237   <none>        443:31072/TCP   2m33s
Verify in a browser: https://10.1.1.100:31072/
Logging in with a token:
[root@k8s-master01 opt]# kubectl create serviceaccount dashboard-admin -n kube-system
[root@k8s-master01 opt]# kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
[root@k8s-master01 opt]# kubectl get secret -n kube-system|grep dashboard
dashboard-admin-token-pbg4s        kubernetes.io/service-account-token   3   96s
kubernetes-dashboard-certs         Opaque                                0   48m
kubernetes-dashboard-key-holder    Opaque                                2   47m
kubernetes-dashboard-token-ksd9f   kubernetes.io/service-account-token   3   48m
Inspect the token
[root@k8s-master01 opt]# kubectl describe secret dashboard-admin-token-pbg4s -n kube-system
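To print just the token, you can also use jsonpath (the secret name dashboard-admin-token-pbg4s comes from the listing above and will differ on your cluster):
[root@k8s-master01 opt]# kubectl -n kube-system get secret dashboard-admin-token-pbg4s -o jsonpath='{.data.token}' | base64 -d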
Login successful; deployment complete.