
Mr. Cappuccino's 44th Cup of Coffee: Kubernetes Service

Kubernetes Service

      • The Concept of a Service
      • Service Types
      • Service Demo
        • Environment Preparation
        • ClusterIP (Cluster-Internal Access)
          • Iptables
          • IPVS
          • Endpoint
        • NodePort (Exposing the Application Externally)
        • LoadBalancer (Exposing the Application Externally, for Public Clouds)
      • Ingress
        • Setting Up the Ingress Environment
        • Demo Environment Preparation
        • Configuring the HTTP Proxy

The Concept of a Service

Kubernetes Pods have a lifecycle: they can be created and destroyed, and once destroyed they are gone for good. Each Pod gets its own IP address, but when a Pod is destroyed and recreated, its IP address changes. This is why we access a group of Pods through a Kubernetes Service: as long as the Service itself is not deleted, the entry IP stays fixed no matter how the Pods behind it change.

The Service resource gives a set of Pod objects a fixed, unified access entry point with load balancing, and, combined with the service-discovery capability of the cluster DNS system, solves the problem of clients discovering and reaching containerized applications.

A Service in Kubernetes is a logical concept. It defines a logical set of Pods and a policy for accessing them; the association between a Service and its Pods is, as usual, made through Labels. A Service acts as a bridge: it gives clients a fixed access IP address and redirects their requests to the appropriate backend Pods.


Service Types

  1. ClusterIP: the default. K8S automatically assigns the Service a virtual IP that is reachable only from inside the cluster. One Service may map to several Endpoints (Pods); clients hit the ClusterIP, and iptables rules forward the traffic to a real server, which provides load balancing.
  2. NodePort: exposes the Service on a port of every Node, so it can be reached from outside the cluster via any NodeIP:NodePort.
  3. LoadBalancer: builds on NodePort and asks the Cloud Provider to create an external load balancer that forwards requests to the NodePorts; this mode only works on cloud platforms.
  4. ExternalName: forwards the service to a specified domain name via a DNS CNAME record (set with spec.externalName); see the sketch after this list.
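As a quick illustration of the ExternalName type (the name svc-externalname and the target domain below are placeholders for illustration, not part of this article's demo), such a Service could look like:

apiVersion: v1
kind: Service
metadata:
  name: svc-externalname
  namespace: bubble-dev
spec:
  type: ExternalName
  externalName: www.example.com

Resolving svc-externalname inside the cluster then simply returns a CNAME pointing at www.example.com; no proxying or load balancing is involved.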

Service Demo

Environment Preparation

Start three Pods in the same Namespace:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: pc-deployment
  namespace: bubble-dev
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-pod
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers: 
      - name: nginx
        image: nginx:1.17.9
        ports:
        - containerPort: 80
           
kubectl delete ns bubble-dev
kubectl create ns bubble-dev
vi pc-deployment.yaml
cat pc-deployment.yaml
kubectl create -f pc-deployment.yaml
           

Modify the Nginx container content in each Pod.


To make the test results easy to tell apart, change the index.html page of each Nginx instance to show the corresponding Pod IP address:

kubectl exec -it pc-deployment-97b5dbd78-66k25 -n bubble-dev -- /bin/sh
echo "node1 172.17.0.3" > /usr/share/nginx/html/index.html
exit
kubectl exec -it pc-deployment-97b5dbd78-lwws7 -n bubble-dev -- /bin/sh
echo "node1 172.17.0.2" > /usr/share/nginx/html/index.html
exit
kubectl exec -it pc-deployment-97b5dbd78-sm8q5 -n bubble-dev -- /bin/sh
echo "node2 172.17.0.4" > /usr/share/nginx/html/index.html
exit
           

ClusterIP (Cluster-Internal Access)

apiVersion: v1
kind: Service
metadata:
  name: svc-clusterip
  namespace: bubble-dev
spec:
  selector:
    app: nginx-pod
  clusterIP: # the Service's IP address; if omitted, a random cluster IP is assigned
  type: ClusterIP
  ports:
  - port: 80 # Service port
    targetPort: 80 # Pod port
           
vi svc-clusterip.yaml
cat svc-clusterip.yaml
kubectl create -f svc-clusterip.yaml
kubectl describe svc svc-clusterip -n bubble-dev
           

Run the following on node1:

curl 10.96.26.149
           

Access the Nginx Service's cluster IP every second.
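The screenshot shows the three Pod pages being returned in turn. A minimal loop to reproduce this (assuming the ClusterIP 10.96.26.149 assigned above) could be:

# hit the Service once per second; the response rotates among the three Pods
while true; do curl 10.96.26.149; sleep 1; done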


Distribution policy: by default requests are spread randomly or round-robin. It can be changed to client-IP session affinity, so that all requests coming from the same client are always forwarded to the same Pod:

apiVersion: v1
kind: Service
metadata:
  name: svc-clusterip
  namespace: bubble-dev
spec:
  selector:
    app: nginx-pod
  clusterIP: # the Service's IP address; if omitted, a random cluster IP is assigned
  type: ClusterIP
  ports:
  - port: 80 # Service port
    targetPort: 80 # Pod port
  sessionAffinity: ClientIP
           
kubectl delete -f svc-clusterip.yaml
rm -rf svc-clusterip.yaml
vi svc-clusterip.yaml
cat svc-clusterip.yaml
kubectl create -f svc-clusterip.yaml
kubectl describe svc svc-clusterip -n bubble-dev
           

Access the Nginx Service's cluster IP every second again (same loop as above); with session affinity enabled, every request from node1 now lands on the same Pod.

Iptables

View the iptables rules:

iptables -t nat -nL |grep 80
           

iptables uses NAT and related techniques to redirect traffic for the virtual IP to the endpoints; the port exposed inside the containers is 80. When kube-proxy handles Services in iptables mode, it has to install a large number of iptables rules on the host; if the host runs many Pods, constantly refreshing those rules consumes a lot of CPU.
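To confirm which proxy mode kube-proxy is actually running in (assuming a kubeadm-installed cluster, where the kube-proxy configuration lives in a ConfigMap and the kube-proxy Pods carry the k8s-app=kube-proxy label), something like the following can be used:

# "mode" is empty or "iptables" for iptables mode, "ipvs" for IPVS mode
kubectl get configmap kube-proxy -n kube-system -o yaml | grep "mode:"
# kube-proxy also logs which proxier it started, e.g. "Using iptables Proxier"
kubectl logs -n kube-system -l k8s-app=kube-proxy --tail=200 | grep -i proxier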

IPVS

IPVS-mode Services let a K8S cluster support a much larger number of Pods. Pod-to-Service traffic is handled by either iptables or ipvs; ipvs cannot fully replace iptables, because ipvs only does load balancing and cannot perform NAT translation.
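If kube-proxy has been switched to ipvs mode, the virtual-server rules it creates can be inspected with ipvsadm (assuming the ipvsadm tool is installed on the node):

# list all IPVS virtual servers and their real servers (the Pod endpoints)
ipvsadm -Ln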

Endpoint

Endpoints is a Kubernetes resource object, stored in etcd, that records the access addresses of all Pods backing a Service. A Service fronts a group of Pods, and those Pods are exposed through its Endpoints; the Endpoints object is the set of endpoints that actually implement the service.

kubectl get endpoints -n bubble-dev -o wide
           

NodePort (Exposing the Application Externally)

A port is opened on every node to expose the Service, so it can be reached from outside the cluster via NodeIP:NodePort.

apiVersion: v1
kind: Service
metadata:
  name: svc-nodeport
  namespace: bubble-dev
spec:
  selector:
    app: nginx-pod
  type: NodePort # Service type
  ports:
  - port: 80
    nodePort: 30008 # node port to bind (default range: 30000-32767); if omitted, one is assigned automatically
    targetPort: 80
           
kubectl delete -f svc-clusterip.yaml
vi svc-nodeport.yaml
cat svc-nodeport.yaml
kubectl create -f svc-nodeport.yaml
kubectl describe svc svc-nodeport -n bubble-dev
           

Access from outside the cluster via NodeIP:NodePort: http://192.168.102.160:30008/


LoadBalancer (Exposing the Application Externally, for Public Clouds)

Similar to NodePort, a port is opened on every node to expose the Service. In addition, K8S asks the underlying cloud platform for an external load balancer and adds each NodeIP:NodePort as a backend. This mode only works on cloud platforms.
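This article does not run a LoadBalancer example (it requires a cloud provider), but for reference the manifest differs from the ClusterIP one above only in the type field. A minimal sketch, with svc-loadbalancer as a placeholder name:

apiVersion: v1
kind: Service
metadata:
  name: svc-loadbalancer
  namespace: bubble-dev
spec:
  selector:
    app: nginx-pod
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80

On a public cloud, the cloud controller then fills in status.loadBalancer.ingress with the external address allocated by the provider.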

Ingress

When the number of Pods grows large, NodePort performance drops sharply, and if a K8S cluster runs hundreds of services, managing hundreds of NodePorts becomes very unfriendly. Like Service and Deployment, Ingress is a K8S resource type; it lets you reach applications inside the cluster by domain name, and Ingress resources are namespace-scoped.

Ingress documentation

Setting Up the Ingress Environment

mkdir -p /home/ingress-controller
cd /home/ingress-controller
           

Download the files:

wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/mandatory.yaml
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/provider/baremetal/service-nodeport.yaml
           

If the downloads fail, fetch the files through a proxy or simply use the copies pasted below.


mandatory.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses/status
    verbs:
      - update

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      # wait up to five minutes for the drain of connections
      terminationGracePeriodSeconds: 300
      serviceAccountName: nginx-ingress-serviceaccount
      nodeSelector:
        kubernetes.io/os: linux
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 101
            runAsUser: 101
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown

---

apiVersion: v1
kind: LimitRange
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  limits:
  - min:
      memory: 90Mi
      cpu: 100m
    type: Container
           

service-nodeport.yaml

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---

           

Apply the YAML files.
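The original shows the commands only in a screenshot; assuming the two files downloaded above, they can be applied with (kubectl create -f is used elsewhere in this article; kubectl apply -f works equally well):

kubectl create -f mandatory.yaml
kubectl create -f service-nodeport.yaml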


If something goes wrong during the apply, delete the namespace and start over:

kubectl delete ns ingress-nginx
           

Check the results:

kubectl get pod -n ingress-nginx
kubectl get svc -n ingress-nginx
           

Here 30755 is the HTTP port and 30176 is the HTTPS port.

Demo Environment Preparation

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: bubble-dev
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nx-pod
  template:
    metadata:
      labels:
        app: nx-pod
    spec:
      containers: 
      - name: nginx
        image: nginx:1.17.9
        ports:
        - containerPort: 80

---

apiVersion: apps/v1 
kind: Deployment
metadata:
  name: tomcat-deployment
  namespace: bubble-dev
spec:
  replicas: 3
  selector:
    matchLabels:
      app: tc-pod
  template:
    metadata:
      labels:
        app: tc-pod
    spec:
      containers:
      - name: tomcat
        image: tomcat:9
        ports:
        - containerPort: 8080
        
---

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: bubble-dev
spec:
  selector:
    app: nx-pod
  clusterIP: None
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 80
    
---

apiVersion: v1
kind: Service
metadata:
  name: tomcat-service
  namespace: bubble-dev
spec:
  selector:
    app: tc-pod
  clusterIP: None
  type: ClusterIP
  ports:
  - port: 8080
    targetPort: 8080
           
kubectl create ns bubble-dev
vi ingress.yaml
cat ingress.yaml
kubectl create -f ingress.yaml
           
kubectl get svc -n bubble-dev
           

Configuring the HTTP Proxy

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-http
  namespace: bubble-dev
spec:
  rules:
  - host: nginx.bubble.com
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx-service
          servicePort: 80
  - host: tomcat.bubble.com
    http:
      paths:
      - path: /
        backend:
          serviceName: tomcat-service
          servicePort: 8080
           
vi ingress-http.yaml
cat ingress-http.yaml
kubectl create -f ingress-http.yaml
           
kubectl describe ing ingress-http -n bubble-dev
           

To see the result in a browser, add entries for the two domains to the hosts file on your machine:

C:\Windows\System32\drivers\etc

192.168.102.160 nginx.bubble.com
192.168.102.160 tomcat.bubble.com
           
kubectl get svc -n ingress-nginx
           

Visit each address:

http://nginx.bubble.com:30755/


http://tomcat.bubble.com:30755/

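The routing can also be verified from the command line, without editing the hosts file, by sending the Host header explicitly (30755 is the HTTP NodePort found above):

curl -H "Host: nginx.bubble.com" http://192.168.102.160:30755/
curl -H "Host: tomcat.bubble.com" http://192.168.102.160:30755/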