
Kubernetes: A First Look at the Ingress Controller and Its Deployment

What Is Ingress?

Shortcomings of NodePort:

• A port can be used by only one Service, so ports must be planned in advance

• Only Layer 4 load balancing is supported (iptables/IPVS)

As the number of applications grows, managing all those ports becomes tedious. Could we instead expose a single port on the cluster as a unified entry point for every project?

NodePort only forwards packets at Layer 4, so it cannot split traffic by domain name or match requests by URI.

Is there something that exposes just one port on the cluster and, at the same time, acts as a global load balancer providing unified access for all projects?

Ingress: an Ingress exposes a collection of rules for routing HTTP and HTTPS traffic from outside the cluster to Services inside it; the actual traffic routing is carried out by an Ingress Controller.

Ingress

• Ingress: an abstract resource in K8s that gives administrators a way to define an entry point for exposing applications (a minimal example is sketched after this list)

• Ingress Controller: generates concrete routing rules from Ingress resources and load-balances traffic across the Pods
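
To make the division of labor concrete, here is a minimal Ingress sketch. The host example.foo.com and the backend Service web on port 80 are hypothetical placeholders; the apiVersion matches the cluster generation this nginx-0.30.0 manifest targets.

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: web-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"   # handled by the NGINX controller
spec:
  rules:
    - host: example.foo.com                # hypothetical domain
      http:
        paths:
          - path: /
            backend:
              serviceName: web             # hypothetical Service
              servicePort: 80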


Ingress Controller

There are many Ingress Controller implementations; here we use the officially maintained NGINX controller.

Project address:

https://github.com/kubernetes/ingress-nginx

Deploy: kubectl apply -f

https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/mandatory.yaml

Notes:

Change the image address to a domestic (China-accessible) mirror: lizhenliang/nginx-ingress-controller:0.30.0

Expose the Ingress Controller, typically via the host network (hostNetwork: true) or via a NodePort (a NodePort sketch follows)
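
If you choose NodePort instead of the host network, a Service along these lines would work. This is a sketch; the nodePort values 30080/30443 are arbitrary picks from the default 30000-32767 range.

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 30080    # arbitrary choice within the NodePort range
    - name: https
      port: 443
      targetPort: 443
      nodePort: 30443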

Other controllers:

https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/

[root@k8s-master ~]# wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/mandatory.yaml      

The image here is quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0. You can pull it manually with docker pull; if the pull fails, use this address instead: lizhenliang/nginx-ingress-controller:0.30.0
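
For example, you could pre-pull the mirror image on each node and rewrite the manifest in one step (the sed pattern assumes the quay.io image reference appears verbatim in mandatory.yaml):

docker pull lizhenliang/nginx-ingress-controller:0.30.0
sed -i 's#quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0#lizhenliang/nginx-ingress-controller:0.30.0#' mandatory.yaml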

Points to note in the Ingress Controller deployment YAML

(1) Several startup arguments for the program; the variables they reference are defined via env (a quick verification follows the snippet)

args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io

 env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace      
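
These env entries use the Kubernetes downward API: each Pod gets its own name and namespace injected at start-up, which the args above then expand. One way to check, assuming the image ships a standard env binary (the Pod name is taken from the listing further below; substitute your own):

kubectl -n ingress-nginx exec nginx-ingress-controller-q6gp7 -- env | grep POD_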

(2) Can it be exposed outside the cluster by some means other than creating a Service?

[root@k8s-master ~]# kubectl get svc -n ingress-nginx 
No resources found in ingress-nginx namespace.      

Add this field at the Pod spec level (as the output above shows, no Service was generated for us, so we would otherwise have to expose the controller with a NodePort; here we do not use NodePort):

spec:
      hostNetwork: true      

With this, the Pod is not allocated its own network; that is, the infra (pause) container no longer sets up a separate network namespace for it. The Pod simply uses the network of whichever node it runs on.

[root@k8s-master ~]# kubectl get pod -n ingress-nginx -o wide
NAME                             READY   STATUS    RESTARTS   AGE    IP                NODE        NOMINATED NODE   READINESS GATES
nginx-ingress-controller-q6gp7   1/1     Running   0          122m   192.168.179.104   k8s-node2   <none>           <none>
nginx-ingress-controller-xq7fl   1/1     Running   0          122m   192.168.179.103   k8s-node1   <none>           <none>      

(3) Ports 80 and 443 are exposed as the unified traffic entry point

ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP      
[root@k8s-node1 ~]# netstat -tpln | grep -wE "80|443"
tcp        0      0 0.0.0.0:443             0.0.0.0:*               LISTEN      32669/nginx: master 
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      32669/nginx: master 
tcp6       0      0 :::443                  :::*                    LISTEN      32669/nginx: master 
tcp6       0      0 :::80                   :::*                    LISTEN      32669/nginx: master 

[root@k8s-node2 ~]# netstat -tpln | grep -wE "80|443"
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      27102/nginx: master 
tcp        0      0 0.0.0.0:443             0.0.0.0:*               LISTEN      27102/nginx: master 
tcp6       0      0 :::80                   :::*                    LISTEN      27102/nginx: master 
tcp6       0      0 :::443                  :::*                    LISTEN      27102/nginx: master       

(4) Health checks; the probes use the URI path /healthz

livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10      
[root@k8s-master ~]# curl -I 192.168.179.103:80/healthz
HTTP/1.1 200 OK
Server: nginx/1.17.8
Date: Tue, 29 Dec 2020 16:41:56 GMT
Content-Type: text/html
Content-Length: 0
Connection: keep-alive      
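
The probes themselves target port 10254 rather than 80, so the same endpoint can also be checked there directly:

curl -I 192.168.179.103:10254/healthz    # should likewise return HTTP/1.1 200 OK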

(5) A lifecycle hook performs cleanup before the program shuts down (demonstrated after the snippet)

lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown      
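
Together with terminationGracePeriodSeconds: 300 in the full manifest below, /wait-shutdown lets nginx drain in-flight connections before the container receives SIGTERM. One rough way to observe the delayed shutdown (Pod name from the listing above):

kubectl -n ingress-nginx delete pod nginx-ingress-controller-q6gp7
kubectl -n ingress-nginx get pod -w    # the old Pod lingers in Terminating while it drains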

(6) Resource constraints (a check follows the snippet)

apiVersion: v1
kind: LimitRange
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  limits:
    - min:
        memory: 90Mi
        cpu: 100m
      type: Container      
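
Note that this is a LimitRange setting per-container minimums rather than a hard cap. After deployment you can inspect it with:

kubectl -n ingress-nginx describe limitrange ingress-nginx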

(7) I modified the YAML here to use a DaemonSet (a quick check follows the snippet)

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx      
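
A DaemonSet schedules one controller Pod on every eligible node, which is why the earlier listing showed one Pod on each of the two worker nodes. To confirm after deployment:

kubectl -n ingress-nginx get daemonset nginx-ingress-controller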
[root@k8s-master ~]# cat ingress-controller.yaml 
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses/status
    verbs:
      - update

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      hostNetwork: true
      # wait up to five minutes for the drain of connections
      terminationGracePeriodSeconds: 300
      serviceAccountName: nginx-ingress-serviceaccount
      nodeSelector:
        kubernetes.io/os: linux
      containers:
        - name: nginx-ingress-controller
          image: lizhenliang/nginx-ingress-controller:0.30.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 101
            runAsUser: 101
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown      
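
Finally, apply the modified manifest and confirm that a controller Pod is running on each node, as in the listing earlier:

kubectl apply -f ingress-controller.yaml
kubectl get pod -n ingress-nginx -o wide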
