
Deploying CoreDNS and the Dashboard in a Kubernetes Cluster

Author: 技术怪圈

In the previous post we finished deploying the Kubernetes cluster itself. Today we deploy the last remaining add-on: CoreDNS.


I. CoreDNS Overview

CoreDNS is a CNCF graduated project: a DNS server/forwarder written in Go that chains plugins, where each plugin performs a single (DNS) function.

CoreDNS is a fast and flexible DNS server. The key word here is flexible: with CoreDNS you can use plugins to do whatever you want with your DNS data, and if some functionality isn't available out of the box, you can add it by writing a plugin.

CoreDNS can listen for DNS requests coming in over UDP/TCP (good old DNS), TLS (RFC 7858, also known as DoT), DNS over HTTP/2 (DoH, RFC 8484), and gRPC (non-standard).

Currently, CoreDNS is able to:

  • Serve zone data from a file; both DNSSEC (NSEC only) and DNS are supported (file and auto).
  • Retrieve zone data from primaries, i.e., act as a secondary server (AXFR only) (secondary).
  • Sign zone data on-the-fly (dnssec).
  • Load balance responses (loadbalance).
  • Allow zone transfers, i.e., act as a primary server (file + transfer).
  • Automatically load zone files from disk (auto).
  • Cache DNS responses (cache).
  • Use etcd as a backend (replacing SkyDNS) (etcd).
  • Use k8s (kubernetes) as a backend (kubernetes).
  • Serve as a proxy to forward queries to some other (recursive) nameserver (forward).
  • Provide metrics (via Prometheus) (prometheus).
  • Provide query (log) and error (errors) logging.
  • Integrate with cloud providers (route53).
  • Support the CH class: version.bind and friends (chaos).
  • Support the RFC 5001 DNS name server identifier (NSID) option (nsid).
  • Profiling support (pprof).
  • Rewrite queries (qtype, qclass and qname) (rewrite and template).
  • Block ANY queries (any).
  • Provide DNS64 IPv6 translation (dns64).
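
As a minimal illustration of how plugins are chained in a Corefile (the zone example.org and the upstream 8.8.8.8 below are hypothetical, not part of this deployment):

# hypothetical Corefile sketch: serve one zone from a file, forward everything else
example.org:53 {
    file /etc/coredns/db.example.org   # serve zone data from a file
    errors
    log
}
.:53 {
    forward . 8.8.8.8                  # act as a proxy for all other names
    cache 60                           # cache responses for 60 seconds
}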

II. Deploying CoreDNS

1. Pull the required image

# Pull the image
~# docker pull coredns/coredns:1.8.7

# Tag the image for the local registry
~# docker tag docker.io/coredns/coredns:1.8.7 harbor.host.com/base/coredns:1.8.7

# Push the image to the local registry
~# docker push harbor.host.com/base/coredns:1.8.7           
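
As an optional sanity check (assuming the Harbor project above already exists), confirm the tag is present locally and can be pulled back from the registry:

# list the locally tagged image
~# docker images | grep coredns

# pull it back from Harbor (e.g. on another node) to confirm the push succeeded
~# docker pull harbor.host.com/base/coredns:1.8.7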

2. Download the YAML file

Download the Kubernetes source from GitHub; it contains the corresponding YAML files.

root@k8s-master:/usr/src# ll
total 480856
-rw-r--r--  1 root root    554989 Dec  2 22:58 kubernetes_1.23.5.tar.gz
-rw-r--r--  1 root root 28757182 Dec 2 22:58 kubernetes-client-linux-amd64_1.23.5.tar.gz
-rw-r--r--  1 root root 121635346 Dec 2 22:58 kubernetes-node-linux-amd64__1.23.5.tar.gz
-rw-r--r-- 1 root root 341423907 Dec 2 22:58 kubernetes-server-linux-amd64__1.23.5.tar.gz           

After extracting the archive, the CoreDNS YAML files are in kubernetes/cluster/addons/dns/coredns; this is what we will modify.

root@k8s-master:/usr/src/kubernetes_1.23.5# ll cluster/addons/dns/coredns/
total 44
drwxr-xr-x 2 root root 4096 Mar 17  2022 ./
drwxr-xr-x 5 root root 4096 Mar 17  2022 ../
-rw-r--r-- 1 root root 5060 Mar 17  2022 coredns.yaml.base
-rw-r--r-- 1 root root 5110 Mar 17  2022 coredns.yaml.in
-rw-r--r-- 1 root root 5112 Mar 17  2022 coredns.yaml.sed
-rw-r--r-- 1 root root 1075 Mar 17  2022 Makefile
-rw-r--r-- 1 root root  344 Mar 17  2022 transforms2salt.sed
-rw-r--r-- 1 root root  287 Mar 17  2022 transforms2sed.sed


#cp coredns.yaml.base /root/yaml/1202/coredns.yaml           
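
If you prefer not to edit every value by hand, the placeholders in coredns.yaml.base can be filled in with sed. The placeholder names below (__DNS__DOMAIN__, __DNS__SERVER__, __DNS__MEMORY__LIMIT__) are an assumption based on this template family; verify them against your copy of the file before running, as they change between Kubernetes releases:

# sketch: substitute the template placeholders in one pass
~# cp coredns.yaml.base /root/yaml/1202/coredns.yaml
~# sed -i \
     -e 's/__DNS__DOMAIN__/cluster.local/g' \
     -e 's/__DNS__SERVER__/10.100.0.2/g' \
     -e 's/__DNS__MEMORY__LIMIT__/200Mi/g' \
     /root/yaml/1202/coredns.yaml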

3. Edit the file as needed. The resulting coredns.yaml (adjust the commented values to your environment):

root@k8s-master:~/yaml/1202# cat coredns.yaml
# __MACHINE_GENERATED_WARNING__

apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
      kubernetes.io/cluster-service: "true"
      addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
- apiGroups:
  - discovery.k8s.io
  resources:
  - endpointslices
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
      addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors            # log errors to standard output
        health {          # expose a CoreDNS health report at http://localhost:8080/health
            lameduck 5s
        }
        ready             # listen on port 8181; returns 200 OK once all plugins are ready
        # CoreDNS answers queries for Kubernetes service names and returns the records to clients
        kubernetes cluster.local in-addr.arpa ip6.arpa { # change to the CLUSTER_DNS_DOMAIN value set when the cluster was deployed; here it is cluster.local
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        prometheus :9153   # metrics are exposed in Prometheus key-value format at http://localhost:9153/metrics
        forward . /etc/resolv.conf { # queries for names outside the cluster are forwarded to the predefined upstream servers
            max_concurrent 1000
        }
        cache 30   # cache DNS responses; the value is in seconds
        loop # detect resolution loops (e.g. CoreDNS forwards to an internal DNS server that forwards back to CoreDNS); if a loop is found, CoreDNS halts and Kubernetes recreates the pod
        reload  # watch the Corefile for changes; after the ConfigMap is edited, it is gracefully reloaded after about 2 minutes by default
        loadbalance # round-robin DNS resolution: if a name has multiple records, rotate their order
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                  - key: k8s-app
                    operator: In
                    values: ["kube-dns"]
              topologyKey: kubernetes.io/hostname
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        kubernetes.io/os: linux
      containers:
      - name: coredns
        image: harbor.host.com/base/coredns:1.8.7 # change to your own registry address (must match the image pushed above)
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 200Mi # adjust resource limits to your environment
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.100.0.2  # set to the same DNS address that appears in pods' /etc/resolv.conf
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP           
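
The clusterIP above must match the DNS server address the kubelet hands out to pods. A quick way to check (the kubelet config path below is an assumption; it depends on how your kubelet was deployed):

# check the DNS address the kubelet is configured to give to pods
~# grep -A1 clusterDNS /var/lib/kubelet/config.yaml

# or look at resolv.conf inside any running pod
~# kubectl exec -n kube-system <any-running-pod> -- cat /etc/resolv.conf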

4. Deploy CoreDNS to the cluster

root@k8s-master:~/1202# kubectl apply -f coredns.yaml
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created           

5. Verify

Check that the containers have started:

root@k8s-master:~/yaml/1202# kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-754966f84c-8g75v   1/1     Running   0          35m
calico-node-bptxd                          1/1     Running   0          35m
calico-node-swz8g                          1/1     Running   0          35m
calico-node-wsl5j                          1/1     Running   0          35m
coredns-745884d567-rc5nn                   1/1     Running   0          17m
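
Beyond the pod status, it is worth confirming that resolution actually works. A minimal sketch using a throwaway pod (busybox:1.28 is chosen because its nslookup behaves well with Kubernetes DNS):

~# kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -- \
     nslookup kubernetes.default.svc.cluster.local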

III. Deploying the Built-in Kubernetes Dashboard

Deploying the dashboard requires two images (check the dashboard changelog for the versions that match your cluster):

docker pull kubernetesui/dashboard:v2.5.1
docker pull kubernetesui/metrics-scraper:v1.0.7           
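
If the nodes pull from the local Harbor rather than Docker Hub, the two images can be tagged and pushed the same way as CoreDNS above (the project path here is illustrative; remember to update the image fields in the dashboard YAML to match):

docker tag kubernetesui/dashboard:v2.5.1 harbor.host.com/base/dashboard:v2.5.1
docker push harbor.host.com/base/dashboard:v2.5.1

docker tag kubernetesui/metrics-scraper:v1.0.7 harbor.host.com/base/metrics-scraper:v1.0.7
docker push harbor.host.com/base/metrics-scraper:v1.0.7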

Download the YAML file:

wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.1/aio/deploy/recommended.yaml

# modify the Service to expose the dashboard via NodePort
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort   # expose via NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30004  # exposed port on every node
  selector:
    k8s-app: kubernetes-dashboard           
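
With this change, once the dashboard is deployed it will be reachable over HTTPS on port 30004 of any node, e.g. https://<node-ip>:30004 (the browser will warn about the self-signed certificate).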

Deploy the dashboard:

root@k8s-master:~/1202# kubectl apply -f dashboard-v2.5.1.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
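
Before moving on, a quick check that the dashboard pods are up and the NodePort took effect:

~# kubectl get pods -n kubernetes-dashboard
~# kubectl get svc -n kubernetes-dashboard kubernetes-dashboard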

Create an admin account, admin-user.yml:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard           
kubectl apply -f admin-user.yml           

Retrieve the admin account's token and test logging in:

kubectl get secret -A | grep admin
root@k8s-master:~/yaml# kubectl describe secret -n kubernetes-dashboard admin-user-token-frzts           
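
The token can also be printed directly instead of being copied out of kubectl describe. This sketch assumes Kubernetes 1.23-style behaviour, where a token Secret is still created automatically for the ServiceAccount:

~# kubectl -n kubernetes-dashboard get secret \
     $(kubectl -n kubernetes-dashboard get sa admin-user -o jsonpath='{.secrets[0].name}') \
     -o jsonpath='{.data.token}' | base64 -d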
