Deploying CoreDNS and the Dashboard in a Kubernetes Cluster

Author: 技術怪圈

In the previous post we finished deploying the Kubernetes cluster itself. Today we deploy the last remaining components, starting with CoreDNS.


I. Introduction to CoreDNS

CoreDNS is a graduated Cloud Native Computing Foundation project: a DNS server/forwarder, written in Go, that chains plugins. Each plugin performs a (DNS) function.

CoreDNS is a fast and flexible DNS server. The key word here is flexible: with CoreDNS you can do what you want with your DNS data by utilizing plugins. If some functionality is not provided out of the box, you can add it by writing a plugin.

CoreDNS can listen for DNS requests coming in over UDP/TCP (good old DNS), TLS (RFC 7858), also called DoT, DNS over HTTP/2 (DoH, RFC 8484), and gRPC (not a standard).

Currently, CoreDNS is able to:

  • Serve zone data from a file; both DNSSEC (NSEC only) and DNS are supported (file and auto).
  • Retrieve zone data from a primary server, i.e., act as a secondary server (AXFR only) (secondary).
  • Sign zone data on-the-fly (dnssec).
  • Load balance responses (loadbalance).
  • Allow for zone transfers, i.e., act as a primary server (file + transfer).
  • Automatically load zone files from disk (auto).
  • Cache DNS responses (cache).
  • Use etcd as a backend (replacing SkyDNS) (etcd).
  • Use k8s (kubernetes) as a backend (kubernetes).
  • Serve as a proxy to forward queries to some other (recursive) nameserver (forward).
  • Provide metrics (by using Prometheus) (prometheus).
  • Provide query (log) and error (errors) logging.
  • Integrate with cloud providers (route53).
  • Support the CH class: version.bind and friends (chaos).
  • Support the RFC 5001 DNS name server identifier (NSID) option (nsid).
  • Profiling support (pprof).
  • Rewrite queries (qtype, qclass and qname) (rewrite and template).
  • Block ANY queries (any).
  • Provide DNS64 IPv6 translation (dns64).
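As an illustration of how these plugins combine, here is a minimal, hypothetical Corefile wiring a few of them together (the zone file `db.example.org` and the upstream `10.0.0.53` are made-up examples, not part of this deployment):

```
example.org:53 {
    file db.example.org     # serve zone data for example.org from a file
    log                     # log queries
    errors                  # log errors
}
.:53 {
    forward . 10.0.0.53     # forward everything else to an upstream resolver
    cache 30                # cache responses for 30 seconds
    prometheus :9153        # expose metrics for Prometheus
}
```

Each server block binds a zone and port, and the plugins listed inside it run in CoreDNS's fixed plugin order, not the order written here.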

II. Deploying CoreDNS

1. Pull the required image

# Pull the image
~# docker pull coredns/coredns:1.8.7

# Tag the image
~# docker tag docker.io/coredns/coredns:1.8.7 harbor.host.com/base/coredns:1.8.7

# Push the image to the local registry
~# docker push harbor.host.com/base/coredns:1.8.7

2. Download the YAML manifests

Download the Kubernetes source from GitHub; it contains the corresponding YAML manifests.

root@k8s-master:/usr/src# ll
total 480856
-rw-r--r-- 1 root root    554989 Dec  2 22:58 kubernetes_1.23.5.tar.gz
-rw-r--r-- 1 root root  28757182 Dec  2 22:58 kubernetes-client-linux-amd64_1.23.5.tar.gz
-rw-r--r-- 1 root root 121635346 Dec  2 22:58 kubernetes-node-linux-amd64_1.23.5.tar.gz
-rw-r--r-- 1 root root 341423907 Dec  2 22:58 kubernetes-server-linux-amd64_1.23.5.tar.gz

After extracting, the CoreDNS manifests are under kubernetes/cluster/addons/dns/coredns; copy and modify them:

root@k8s-master:/usr/src/kubernetes_1.23.5# ll cluster/addons/dns/coredns/
total 44
drwxr-xr-x 2 root root 4096 Mar 17  2022 ./
drwxr-xr-x 5 root root 4096 Mar 17  2022 ../
-rw-r--r-- 1 root root 5060 Mar 17  2022 coredns.yaml.base
-rw-r--r-- 1 root root 5110 Mar 17  2022 coredns.yaml.in
-rw-r--r-- 1 root root 5112 Mar 17  2022 coredns.yaml.sed
-rw-r--r-- 1 root root 1075 Mar 17  2022 Makefile
-rw-r--r-- 1 root root  344 Mar 17  2022 transforms2salt.sed
-rw-r--r-- 1 root root  287 Mar 17  2022 transforms2sed.sed


#cp coredns.yaml.base /root/yaml/1202/coredns.yaml           
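The `.base` manifest ships with placeholders (`__DNS__DOMAIN__`, `__DNS__SERVER__`, `__DNS__MEMORY__LIMIT__`) that must be filled in before it can be applied. Editing by hand works, but `sed` can do it in one pass. A minimal sketch (it operates on a short made-up excerpt so it is self-contained; run the same `sed` against the real `coredns.yaml.base`, and substitute your own cluster's values):

```shell
# Demo excerpt standing in for coredns.yaml.base, so this runs anywhere.
cat > base-excerpt.yaml <<'EOF'
        kubernetes __DNS__DOMAIN__ in-addr.arpa ip6.arpa {
            memory: __DNS__MEMORY__LIMIT__
  clusterIP: __DNS__SERVER__
EOF

# These are this article's values -- replace with your own deployment's.
DNS_DOMAIN="cluster.local"    # CLUSTER_DNS_DOMAIN from the k8s deployment
DNS_SERVER_IP="10.100.0.2"    # must match /etc/resolv.conf inside pods
DNS_MEMORY_LIMIT="200Mi"

# Substitute all three placeholders in one pass.
sed -e "s/__DNS__DOMAIN__/${DNS_DOMAIN}/g" \
    -e "s/__DNS__SERVER__/${DNS_SERVER_IP}/g" \
    -e "s/__DNS__MEMORY__LIMIT__/${DNS_MEMORY_LIMIT}/g" \
    base-excerpt.yaml > coredns-demo.yaml

cat coredns-demo.yaml
```

The resulting file should contain no remaining `__DNS__` placeholders; the same substitutions produce the manifest shown in the next step.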

3. Modify the following parts (cat coredns.yaml; adjust to your environment):

root@k8s-master:~/yaml/1202# cat coredns.yaml
# __MACHINE_GENERATED_WARNING__

apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
      kubernetes.io/cluster-service: "true"
      addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
- apiGroups:
  - discovery.k8s.io
  resources:
  - endpointslices
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
      addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors # log errors to stdout
        health {  # serve a CoreDNS health report at http://localhost:8080/health
            lameduck 5s
        }
        ready   # listen on port 8181; returns 200 OK once all CoreDNS plugins are ready
        # CoreDNS answers queries for Kubernetes service names and returns the records to the client
        kubernetes cluster.local in-addr.arpa ip6.arpa { # change to the CLUSTER_DNS_DOMAIN value from the hosts file used when deploying k8s; here it is cluster.local
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        prometheus :9153   # metrics are exposed in Prometheus key-value format at http://localhost:9153/metrics
        forward . /etc/resolv.conf { # queries for names outside the cluster are forwarded to the predefined upstream servers
            max_concurrent 1000
        }
        cache 30   # enable service-resolution caching, in seconds
        loop # detect resolution loops, e.g. CoreDNS forwards to an internal DNS server that forwards back to CoreDNS; on detection the CoreDNS process aborts and Kubernetes recreates it
        reload  # watch the Corefile for changes; after the ConfigMap is edited it reloads gracefully, by default after about 2 minutes
        loadbalance # round-robin resolution when a name has multiple records
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                  - key: k8s-app
                    operator: In
                    values: ["kube-dns"]
              topologyKey: kubernetes.io/hostname
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        kubernetes.io/os: linux
      containers:
      - name: coredns
        image: harbor.host.com/k8s/coredns:1.8.7 # change to your own registry address
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 200Mi # adjust resources to your environment
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.100.0.2  # must match the DNS address in /etc/resolv.conf inside pods
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP           

4. Deploy CoreDNS to the cluster

root@k8s-master:~/1202# kubectl apply -f coredns.yaml
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created           

5. Verify

Verify that the pods are running:

root@k8s-master:~/yaml/1202# kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-754966f84c-8g75v   1/1     Running   0          35m
calico-node-bptxd                          1/1     Running   0          35m
calico-node-swz8g                          1/1     Running   0          35m
calico-node-wsl5j                          1/1     Running   0          35m
coredns-745884d567-rc5nn                   1/1     Running   0          17m
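Beyond checking that the pod is Running, it is worth confirming that resolution actually works. One common check is to resolve an in-cluster name from a throwaway pod (busybox:1.28 is chosen because later busybox builds ship a broken nslookup; the pod name `dns-test` is arbitrary):

```
root@k8s-master:~# kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 \
    -- nslookup kubernetes.default.svc.cluster.local
```

The reply should come from the kube-dns service IP (10.100.0.2 in this deployment).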

III. Deploying the built-in Kubernetes Dashboard

Deploying the Dashboard requires two images (check the CHANGELOG for the versions matching your Dashboard release):

docker pull kubernetesui/dashboard:v2.5.1
docker pull kubernetesui/metrics-scraper:v1.0.7           

Download the YAML manifest:

wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.1/aio/deploy/recommended.yaml

# modify the Service to expose it outside the cluster
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort # expose via NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30004  # exposed port
  selector:
    k8s-app: kubernetes-dashboard           

Deploy the Dashboard:

root@k8s-master:~/1202# kubectl apply -f dashboard-v2.5.1.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
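After applying the manifest, the Service can be checked to confirm the NodePort took effect:

```
root@k8s-master:~# kubectl get svc -n kubernetes-dashboard kubernetes-dashboard
```

TYPE should show NodePort with PORT(S) 443:30004/TCP; the UI is then reachable in a browser at https://<any-node-IP>:30004 (the certificate is self-signed, so the browser will warn).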

Create an admin account, admin-user.yml:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard           
kubectl apply -f admin-user.yml           

Retrieve the admin account's token and test logging in:

kubectl get secret -A | grep admin
root@k8s-master:~/yaml# kubectl describe secret -n kubernetes-dashboard admin-user-token-frzts           
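The describe output above prints the token directly; it can also be extracted non-interactively. This sketch works on 1.23 because ServiceAccount token Secrets are still auto-created; from 1.24 onward you would use `kubectl create token admin-user` instead:

```
root@k8s-master:~# kubectl -n kubernetes-dashboard get secret \
    $(kubectl -n kubernetes-dashboard get sa admin-user -o jsonpath='{.secrets[0].name}') \
    -o jsonpath='{.data.token}' | base64 -d
```

Paste the decoded token into the Dashboard login page at https://<node-IP>:30004.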
