Preface
In Kubernetes, the IP addresses of Services and Pods are only usable inside the cluster network and are invisible to applications outside it. To let external applications reach services inside the cluster, Kubernetes currently offers the following options:
* NodePort
* LoadBalancer
* Ingress
NodePort was covered in a previous post. In short, a Service resource provides a unified access endpoint for its backend pods, and that endpoint is mapped onto a port of every cluster node, so a client can reach the service provided by the backend pods through the port mapped on any node.
This approach has a drawback: every newly created pod-backed service needs its own Service mapped to a node port, so as the number of running pods grows, the number of node ports exposed to clients grows with it, and the risk to the whole Kubernetes cluster increases. When the cluster was first set up, the official documentation explicitly required disabling firewalld and flushing the iptables rules; exposing that many ports to clients on top of that, the security posture is easy to imagine.
I. Introduction to Ingress-nginx
1. Components of Ingress-nginx
* ingress-nginx-controller: based on the Ingress rules written by the user (the YAML files of the Ingress objects they create), it dynamically rewrites the nginx configuration file and reloads it so the changes take effect (this is automated, implemented with Lua scripts);
* Ingress resource object: abstracts the nginx configuration into an Ingress object; for each newly added Service you only need to write one new Ingress rule YAML file (or modify an existing one).
2. What problems does Ingress-nginx solve?
1)動态配置服務
如果按照傳統方式, 當新增加一個服務時, 我們可能需要在流量入口加一個反向代理指
向我們新的k8s服務. 而如果用了Ingress-nginx, 隻需要配置好這個服務, 當服務啟動
時, 會自動注冊到Ingress的中, 不需要而外的操作。
2) Fewer unnecessary port mappings
Anyone who has set up Kubernetes knows the first step is disabling the firewall, mainly because many Kubernetes services are exposed via NodePort, which punches a lot of holes in the hosts; that is neither safe nor elegant. Ingress avoids this problem: apart from the Ingress service itself, which may need to be exposed, no other service has to use NodePort.
3. How Ingress-nginx works
1) The ingress controller talks to the Kubernetes API and dynamically watches for changes to the Ingress rules in the cluster.
2) It then reads the rules; each rule states which domain name maps to which Service, and from them it generates a piece of nginx configuration.
3) It writes that configuration into the nginx-ingress-controller pod. This pod runs an nginx server, and the controller writes the generated configuration into its /etc/nginx/nginx.conf file,
4) then reloads nginx to apply it. This is how per-domain configuration and dynamic updates are achieved.
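The result of this pipeline can be observed directly, since the generated nginx configuration lives inside the controller pod. A minimal sketch, assuming the controller is already deployed; the pod name below is an example and will differ in your cluster:

```shell
# Find the controller pod, then grep the server blocks that nginx
# generated from the Ingress rules. Substitute your own pod name.
kubectl get pods -n ingress-nginx
kubectl exec -n ingress-nginx nginx-ingress-controller-86cdd68cf-hndtw -- \
  grep "server_name" /etc/nginx/nginx.conf
```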
II. Configuring Ingress-nginx
1. Set up a private registry
The only purpose of the private registry is to pull images faster; if your network is stable, this step can be skipped!
//Run the registry container
[[email protected] ~]# docker run -tid --name registry -p 5000:5000 --restart always registry
[[email protected] ~]# vim /usr/lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd --insecure-registry 192.168.45.129:5000
//Save and exit after making the change
//Copy the modified file to the other nodes in the cluster
[[email protected] ~]# scp /usr/lib/systemd/system/docker.service [email protected]:/usr/lib/systemd/system/
docker.service 100% 1628 1.6KB/s 00:00
[[email protected] ~]# scp /usr/lib/systemd/system/docker.service [email protected]:/usr/lib/systemd/system/
docker.service 100% 1628 1.6KB/s 00:00
[[email protected] ~]# systemctl daemon-reload
[[email protected] ~]# systemctl restart docker.service //restart the docker service
[[email protected] ~]# docker pull httpd //pull the required images
[[email protected] ~]# docker pull tomcat:8.5.45
[[email protected] ~]# docker tag httpd:latest 192.168.45.129:5000/httpd:v1
[[email protected] ~]# docker tag tomcat:8.5.45 192.168.45.129:5000/tomcat:v1
[[email protected] ~]# docker push 192.168.45.129:5000/httpd:v1
[[email protected] ~]# docker push 192.168.45.129:5000/tomcat:v1
//Push the downloaded images to the private registry
2. Create a namespace (this step can also be skipped and the default namespace used instead, but then the custom-namespace fields must be removed from all of the YAML files below)
[[email protected] ~]# kubectl create ns test-ns //create the namespace test-ns
[[email protected] ~]# kubectl get ns //confirm it was created
3. Create the Deployment and Service objects
1) Create the httpd service and the Service associated with it
[[email protected] test]# vim httpd.yaml //write the manifest for the httpd service
kind: Deployment
apiVersion: apps/v1
metadata:
name: web01
namespace: test-ns
spec:
replicas: 3
selector:
matchLabels:
app: httpd01
template:
metadata:
labels:
app: httpd01
spec:
containers:
- name: httpd
image: httpd:latest
---
apiVersion: v1
kind: Service
metadata:
name: httpd-svc
namespace: test-ns
spec:
selector:
app: httpd01
ports:
- protocol: TCP
port: 80
targetPort: 80
//The Service object is associated with the Deployment via label selector
[[email protected] test]# kubectl apply -f httpd.yaml //apply the manifest
2) Create the tomcat service and its Service
[[email protected] test]# vim tomcat.yaml //write the manifest as follows
kind: Deployment
apiVersion: apps/v1
metadata:
name: web02
namespace: test-ns
spec:
replicas: 3
selector:
matchLabels:
app: tomcat01
template:
metadata:
labels:
app: tomcat01
spec:
containers:
- name: tomcat
image: tomcat:8.5.45
---
apiVersion: v1
kind: Service
metadata:
name: tomcat-svc
namespace: test-ns
spec:
selector:
app: tomcat01
ports:
- protocol: TCP
port: 8080
targetPort: 8080
[[email protected] test]# kubectl apply -f tomcat.yaml //apply the manifest
3)確定以上資源對象成功建立
[[email protected] ~]# kubectl get pod -n test-ns //confirm the pods are running
NAME READY STATUS RESTARTS AGE
web01-85674fbdd7-7h6cc 1/1 Running 0 9m3s
web01-85674fbdd7-9l2zm 1/1 Running 0 9m3s
web01-85674fbdd7-p9hfx 1/1 Running 0 9m3s
web02-7f8f755bc7-4qk9b 1/1 Running 0 7m22s
web02-7f8f755bc7-qhbnm 1/1 Running 0 7m22s
web02-7f8f755bc7-zr56g 1/1 Running 0 7m22s
[[email protected] ~]# kubectl get svc -n test-ns //confirm the Services were created
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
httpd-svc ClusterIP 10.102.57.12 <none> 80/TCP 10m
tomcat-svc ClusterIP 10.110.25.145 <none> 8080/TCP 8m23s
[[email protected] ~]# curl -I 10.102.57.12:80 //test httpd
HTTP/1.1 200 OK
Date: Sun, 23 Aug 2020 07:02:06 GMT
Server: Apache/2.4.46 (Unix)
Last-Modified: Mon, 11 Jun 2007 18:53:14 GMT
ETag: "2d-432a5e4a73a80"
Accept-Ranges: bytes
Content-Length: 45
Content-Type: text/html
[[email protected] ~]# curl -I 10.110.25.145:8080 //test tomcat
HTTP/1.1 200
Content-Type: text/html;charset=UTF-8
Transfer-Encoding: chunked
Date: Sun, 23 Aug 2020 09:08:18 GMT
//OK, internal access works.
//If a test above fails to reach the corresponding pods, run "kubectl describe svc" and
check whether the Endpoints column of the Service lists any backend pods.
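A quick sketch of that check, using the Service and namespace names from this example:

```shell
# An empty Endpoints list means the Service selector matches no pod labels.
kubectl describe svc httpd-svc -n test-ns | grep Endpoints
kubectl get pods -n test-ns --show-labels   # compare the labels with the selector
```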
4. Create the Ingress-nginx resources
Download the mandatory.yaml file; if you cannot download it, copy the file below:
apiVersion: v1
kind: Namespace
metadata:
name: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
name: nginx-configuration
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tcp-services
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
name: udp-services
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: nginx-ingress-serviceaccount
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: nginx-ingress-clusterrole
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
rules:
- apiGroups:
- ""
resources:
- configmaps
- endpoints
- nodes
- pods
- secrets
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- apiGroups:
- ""
resources:
- services
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- apiGroups:
- "extensions"
- "networking.k8s.io"
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- "extensions"
- "networking.k8s.io"
resources:
- ingresses/status
verbs:
- update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
name: nginx-ingress-role
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
rules:
- apiGroups:
- ""
resources:
- configmaps
- pods
- secrets
- namespaces
verbs:
- get
- apiGroups:
- ""
resources:
- configmaps
resourceNames:
# Defaults to "<election-id>-<ingress-class>"
# Here: "<ingress-controller-leader>-<nginx>"
# This has to be adapted if you change either parameter
# when launching the nginx-ingress-controller.
- "ingress-controller-leader-nginx"
verbs:
- get
- update
- apiGroups:
- ""
resources:
- configmaps
verbs:
- create
- apiGroups:
- ""
resources:
- endpoints
verbs:
- get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
name: nginx-ingress-role-nisa-binding
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: nginx-ingress-role
subjects:
- kind: ServiceAccount
name: nginx-ingress-serviceaccount
namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: nginx-ingress-clusterrole-nisa-binding
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: nginx-ingress-clusterrole
subjects:
- kind: ServiceAccount
name: nginx-ingress-serviceaccount
namespace: ingress-nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-ingress-controller
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
template:
metadata:
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
annotations:
prometheus.io/port: "10254"
prometheus.io/scrape: "true"
spec:
# wait up to five minutes for the drain of connections
hostNetwork: true
terminationGracePeriodSeconds: 300
serviceAccountName: nginx-ingress-serviceaccount
nodeSelector:
Ingress: nginx
containers:
- name: nginx-ingress-controller
image: registry.aliyuncs.com/google_containers/nginx-ingress-controller:0.29.0
args:
- /nginx-ingress-controller
- --configmap=$(POD_NAMESPACE)/nginx-configuration
- --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
- --udp-services-configmap=$(POD_NAMESPACE)/udp-services
- --publish-service=$(POD_NAMESPACE)/ingress-nginx
- --annotations-prefix=nginx.ingress.kubernetes.io
securityContext:
allowPrivilegeEscalation: true
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
# www-data -> 101
runAsUser: 101
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
ports:
- name: http
containerPort: 80
protocol: TCP
- name: https
containerPort: 443
protocol: TCP
livenessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 10
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 10
lifecycle:
preStop:
exec:
command:
- /wait-shutdown
---
apiVersion: v1
kind: LimitRange
metadata:
name: ingress-nginx
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
spec:
limits:
- min:
memory: 90Mi
cpu: 100m
type: Container
[[email protected] ~]# vim mandatory.yaml
//Make a few small changes to the downloaded file
spec: //locate line 212 of the file, i.e. this line
hostNetwork: true //add this line to use the host network
# wait up to five minutes for the drain of connections
terminationGracePeriodSeconds: 300
serviceAccountName: nginx-ingress-serviceaccount
nodeSelector:
Ingress: nginx //node label selector specifying which node it runs on
containers:
- name: nginx-ingress-controller
image: registry.aliyuncs.com/google_containers/nginx-ingress-controller:0.29.0
[[email protected] ~]# kubectl label nodes k8s-node01 Ingress=nginx
#label node01 so that Ingress-nginx is scheduled onto it
[[email protected] ~]# kubectl get nodes k8s-node01 --show-labels
//check that the label is present on node01
[[email protected] ~]# kubectl apply -f mandatory.yaml
5. Define the Ingress rule (write the Ingress YAML file)
[[email protected]~]# vim ingress.yaml //write the manifest as follows
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: test-ingress
namespace: test-ns
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- host: www.test01.com
http:
paths:
- path: /
backend:
serviceName: httpd-svc
servicePort: 80
- path: /tomcat
backend:
serviceName: tomcat-svc
servicePort: 8080
[[email protected]~]# kubectl apply -f ingress.yaml //apply the Ingress rule
[[email protected]~]# kubectl get ingresses -n test-ns //check the Ingress object
NAME HOSTS ADDRESS PORTS AGE
test-ingress www.test01.com 80 28s
Note: at this point the desired functionality is actually already in place. You can now reach the service provided by the backend httpd containers via www.test01.com and the backend tomcat service via www.test01.com/tomcat, provided, of course, that you configure DNS resolution yourself or simply edit the client's hosts file. The access pages follow (note: you must solve domain resolution yourself; if you do not know which IP the domain maps to, skip the two screenshots and read the text explanation below):
Solving the name-resolution problem:
[[email protected] ~]# kubectl get pod -n ingress-nginx -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-ingress-controller-86cdd68cf-hndtw 1/1 Running 0 89m 192.168.45.141 node01 <none> <none>
- In the access test above, the service was reached, but there is a drawback: when configuring DNS resolution, you can only point the domain at the IP of the node where the Ingress-nginx container runs. Pointing it at any other node in the cluster (including the master) will not work. If that node goes down, the Ingress-nginx container gets moved to another node (setting aside the node-label issue: with the default labels in the Ingress-nginx manifest, every node carries the required label), and you then have to change the DNS record by hand to the IP of the new node (which "kubectl get pod -n ingress-nginx -o wide" shows). That is quite a hassle.
- Is there a simpler way? Yes: create an additional Service of type NodePort for the Ingress-nginx rules. Then, when configuring DNS, www.test01.com can be bound to the IP of any node, including the master, which is far more flexible.
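For a quick test without a DNS server, name resolution can be faked on the client; the node IP below is the controller node shown above and is specific to this example cluster:

```shell
# Option 1: add a hosts entry on the client (controller node IP from above).
echo "192.168.45.141 www.test01.com" >> /etc/hosts
curl -I http://www.test01.com/

# Option 2: let curl do the resolution itself, leaving /etc/hosts untouched.
curl -I --resolve www.test01.com:80:192.168.45.141 http://www.test01.com/tomcat
```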
6. Create a Service for the Ingress rules
[[email protected] ~]# vim service-nodeport.yaml
apiVersion: v1
kind: Service
metadata:
name: ingress-nginx
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
spec:
type: NodePort
ports:
- name: http
port: 80
targetPort: 80
protocol: TCP
- name: https
port: 443
targetPort: 443
protocol: TCP
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
//Save and exit when done
[[email protected] ~]# kubectl apply -f service-nodeport.yaml //apply the manifest
[[email protected] ~]# kubectl get svc -n ingress-nginx //check the running Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx NodePort 10.108.48.248 <none> 80:32529/TCP,443:30534/TCP 11s
//You can see the Service maps ports 80 and 443 to node ports 32529 and 30534 (randomly assigned; you can also set fixed ports in the YAML file)
Note: from now on, the domain www.test01.com can be bound to port 32529/30534 of any node in the cluster.
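If fixed ports are preferred over random ones, the Service can also be patched in place; port 30080 is only an example value and must lie within the cluster's NodePort range (30000-32767 by default):

```shell
# Pin the HTTP node port; index 0 is the "http" entry in this Service's ports list.
kubectl patch svc ingress-nginx -n ingress-nginx --type='json' \
  -p='[{"op":"replace","path":"/spec/ports/0/nodePort","value":30080}]'
```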
Test it (the IP the domain resolves to can be any node IP in the cluster):
This fulfils the original requirement!
7. Create an Ingress rule based on virtual hosts
Now suppose a different requirement: both www.test01.com and www.test02.com should map to the service provided by the backend httpd containers. How should this be configured?
[[email protected] test]# vim ingress.yaml #modify the Ingress manifest as follows
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: test-ingress
namespace: test-ns
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- host: www.test02.com //add this host block
http:
paths:
- path: /
backend:
serviceName: httpd-svc //bind the same Service name as www.test01
servicePort: 80
- host: www.test01.com
http:
paths:
- path: /
backend:
serviceName: httpd-svc
servicePort: 80
- path: /tomcat
backend:
serviceName: tomcat-svc
servicePort: 8080
//Save and exit after adding the host block above
[[email protected] test]# kubectl apply -f ingress.yaml //re-apply the manifest
Now both www.test01.com and www.test02.com reach the page served by the backend httpd (solve name resolution yourself; configuring the client's hosts file is enough), as follows:
To summarise how a client ends up reaching the pods in the examples above, step by step:
backend pods ===> Service ===> Ingress rule ===> written into the Ingress-nginx-controller configuration and automatically reloaded ===> Service created for Ingress-nginx ===> the client can reach the backend pods through the IP + port of any Kubernetes node
III. Configuring HTTPS
The steps above used ingress-nginx to give all backend pods a single entry point. A serious question remains: how do we configure a CA certificate for our pods to enable HTTPS access? Configure the CA in each pod directly? How much repetitive work would that be? Besides, a pod can be killed and recreated by the kubelet at any time. There are several solutions, such as baking the CA into the image, but that again requires many certificates.
There is a simpler way. Take the setup above: there are multiple backend pods; the pods are associated with a Service; the Service is discovered by the Ingress rule and written dynamically into the ingress-nginx-controller container; and a Service for the ingress-nginx-controller maps it onto node ports for clients to access.
The key point in that chain is the Ingress rule: we only need to configure the certificate for the domain in the Ingress YAML file. As long as the domain can be reached over HTTPS, how the domain is routed to the pods serving it behind the scenes is internal cluster communication, and using plain HTTP there does no harm.
The configuration is as follows:
What follows is largely independent of the setup above, except that the Ingress-nginx-controller container is already running, so there is no need to run it again. Only the pod, Service, and Ingress rule need to be configured.
//Create the certificate (self-signed; in a test environment, generate your own)
[[email protected] https]# openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=nginxsvc/O=nginxsvc"
//Two files are generated in the current directory:
[[email protected] https]# ls #make sure these two files are present
tls.crt tls.key
//Store the certificate in etcd as a Secret
[[email protected] https]# kubectl create secret tls tls-secret --key=tls.key --cert tls.crt
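The Secret can be checked before the Ingress references it; kubectl prints the key names and sizes without exposing the data. Note that an Ingress can only reference Secrets in its own namespace, so the Secret must be created in the same namespace as the Ingress rule that uses it:

```shell
# A "kubernetes.io/tls" Secret must contain tls.crt and tls.key.
kubectl get secret tls-secret -o jsonpath='{.type}'; echo
kubectl describe secret tls-secret
```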
//Create the Deployment, Service, and Ingress objects
[[email protected] https]# vim httpd03.yaml //write the manifest
kind: Deployment
apiVersion: apps/v1
metadata:
name: web03
namespace: test-ns
spec:
replicas: 2
selector:
matchLabels:
app: httpd03
template:
metadata:
labels:
app: httpd03
spec:
containers:
- name: httpd3
image: httpd:latest
---
apiVersion: v1
kind: Service
metadata:
name: httpd-svc3
namespace: test-ns
spec:
selector:
app: httpd03
ports:
- protocol: TCP
port: 80
targetPort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: test-ingress3
namespace: test-ns
spec:
tls:
- hosts:
- www.test03.com
secretName: tls-secret
rules:
- host: www.test03.com
http:
paths:
- path: /
backend:
serviceName: httpd-svc3
servicePort: 80
[[email protected] https]# kubectl apply -f httpd03.yaml //apply the manifest
Confirm that the created objects are running normally:
[[email protected] https]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
httpd-svc3 ClusterIP 10.96.4.68 <none> 80/TCP 9s
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 12d
[[email protected] https]# kubectl get pod
NAME READY STATUS RESTARTS AGE
web03-b955f886b-gjbnr 1/1 Running 0 7m23s
web03-b955f886b-w8cdn 1/1 Running 0 7m23s
[[email protected] https]# kubectl describe ingresses //check the Ingress rule
Name: test-ingress3
Namespace: default
Address:
Default backend: default-http-backend:80 (<none>)
TLS:
tls-secret terminates www.test03.com
Rules:
Host Path Backends
---- ---- --------
www.test03.com
/ httpd-svc3:80 (10.244.1.5:80,10.244.2.5:80)
//Confirm the rule is associated with the Service and the backend pods
Note: access the service at https://www.test03.com (solve domain resolution yourself)
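A final client-side check, again with curl doing the resolution itself; -k skips certificate verification because the certificate is self-signed, and the node IP is an example from this cluster:

```shell
# HTTPS test; substitute the IP of the node running the ingress controller.
curl -k --resolve www.test03.com:443:192.168.45.141 https://www.test03.com/
```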