
Docker (18) -- Docker k8s -- services (microservices)

services

  • 1. k8s network communication
  • 2. services
    • 2.1 Overview
    • 2.2 Services in IPVS mode
      • 2.2.1 Checking IPVS before ipvs mode is enabled
      • 2.2.2 Deploying IPVS mode
      • 2.2.3 Testing (watch the load balancing change dynamically)
    • 2.3 The DNS service plugin provided by k8s
    • 2.4 Headless Service
    • 2.5 Three ways to access a service from outside
      • 2.5.1 NodePort
      • 2.5.2 LoadBalancer
        • 2.5.2.1 Without MetalLB
        • 2.5.2.2 Simulating a cloud environment with MetalLB
        • 2.5.2.3 Ingress combined with MetalLB (optional)
      • 2.5.3 ExternalName (accessing external services from inside the cluster)
        • 2.5.3.1 ExternalName
        • 2.5.3.2 Assigning a public IP directly
  • 3. Pod-to-pod communication
    • 3.1 Communication on the same node
    • 3.2 Communication between pods on different nodes requires a network plugin (details)
      • 3.2.1 How Flannel vxlan mode communicates across hosts
      • 3.2.2 vxlan mode (default)
      • 3.2.3 host-gw mode
      • 3.2.4 Directrouting
  • 4. Ingress
    • 4.1 Deploying the ingress service
    • 4.2 Ingress configuration
      • 4.2.1 Basic test manifests
        • 4.2.1.1 One host
        • 4.2.1.2 Two hosts
      • 4.2.2 TLS encryption
      • 4.2.3 TLS encryption with user authentication
      • 4.2.4 Simple redirect
      • 4.2.5 URL rewriting (more complex redirects)
  • 5. Supplement

1. k8s network communication

- k8s plugs other plugins in through the CNI interface to implement network communication. Currently popular plugins include flannel, calico, etc.
  CNI plugin config location: # cat  /etc/cni/net.d/10-flannel.conflist 
  
  The solutions the plugins use are:
	Virtual bridge + virtual NIC: multiple containers share one virtual NIC to communicate.
	Multiplexing: MacVLAN, multiple containers share one physical NIC to communicate.
	Hardware switching: SR-IOV, one physical NIC is virtualized into multiple interfaces; this gives the best performance.
	
  Container-to-container communication: containers inside the same pod communicate over lo;
	
  Pod-to-pod communication:
	Pods on the same node forward packets through the cni bridge (check with brctl show).
	Pods on different nodes need a network plugin.
	
  Pod-to-service communication: implemented with iptables or ipvs; ipvs cannot fully replace iptables, because ipvs only does load balancing and cannot do NAT.
	
  Pod-to-Internet communication: iptables MASQUERADE.
	
  Service to clients outside the cluster: ingress, NodePort, LoadBalancer.
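
A quick way to inspect these pieces on a node is sketched below (flannel is assumed, matching this cluster; device names may differ in other setups):

cat /etc/cni/net.d/10-flannel.conflist    ## the CNI plugin chain used by kubelet
brctl show cni0                           ## one veth port per pod on this node
ip -d link show flannel.1                 ## the VXLAN VTEP device created by flannel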
           

2. services

2.1 Overview

- A Service can be seen as the external access point of a group of Pods that provide the same service. With a Service, applications can easily implement service discovery and load balancing.
	By default a service only provides layer-4 load balancing; there is no layer-7 capability (that can be achieved with Ingress).
	
	Service types (NodePort and LoadBalancer expose internal resources to clients outside the cluster):
		ClusterIP: the default; a virtual IP that k8s assigns to the service automatically, reachable only inside the cluster.
		NodePort: exposes the Service on a port of the Nodes; accessing any NodeIP:nodePort is routed to the ClusterIP.
		LoadBalancer: built on top of NodePort; a cloud provider creates an external load balancer and forwards requests to <NodeIP>:NodePort. This mode can only be used on cloud servers.
		ExternalName: maps the service to a specified domain name via a DNS CNAME record (set with spec.externalName). [Access from inside the cluster to the outside: internal callers reach an external resource.]
           

2.2 Services in IPVS mode

- A Service is implemented jointly by the kube-proxy component and iptables.

	When kube-proxy handles Services through iptables, it has to maintain a large number of iptables rules on the host. If the host has a large number of Pods, constantly refreshing the iptables rules consumes a lot of CPU.

	Services in IPVS mode allow a k8s cluster to support a much larger number of Pods.
           

2.2.1 Checking IPVS before ipvs mode is enabled

[[email protected] ~]# lsmod | grep ip    ## check whether the ipvs modules are actually in use, or whether iptables is still being used
ip6_udp_tunnel         12755  1 vxlan
ip_vs_sh               12688  0 
ip_vs_wrr              12697  0 
ip_vs_rr               12600  0 
ip_vs                 145497  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr

           

2.2.2 Deploying IPVS mode

[[email protected] ~]# yum install -y ipvsadm    ## install ipvsadm; every node needs it, same operation on each node
[[email protected] ~]# yum install -y ipvsadm
[[email protected] ~]# yum install -y ipvsadm


[[email protected] ~]# kubectl get pod -n kube-system | grep kube-proxy   ## check before the change
[[email protected] ~]# kubectl edit cm kube-proxy -n kube-system  ## edit and set mode to ipvs
[[email protected] ~]# kubectl get pod -n kube-system |grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'     ## recreate the kube-proxy pods
[[email protected] ~]# kubectl get pod -n kube-system | grep kube-proxy  ## check again after the change


#In IPVS mode, once a service is created kube-proxy adds a virtual interface on the host, kube-ipvs0, and assigns the service IPs to it.
[[email protected] ~]# ip addr   ## check the IPs
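
For reference, the part of the kube-proxy ConfigMap that the edit above changes typically looks like this excerpt (a sketch; everything else in the ConfigMap stays untouched):

    mode: "ipvs"      # an empty string "" means the default iptables mode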
           

2.2.3 Testing (watch the load balancing change dynamically)

[[email protected] ~]# vim damo.yml   ## edit the test file
[[email protected] ~]# cat damo.yml
---
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  selector:
    app: myapp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo2
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:v2

[[email protected] ~]# kubectl apply -f damo.yml    ## create the service and pods
service/myservice created
deployment.apps/demo2 created
[[email protected] ~]# kubectl get svc    ## check the services
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP   2d18h
myservice    ClusterIP   10.100.154.215   <none>        80/TCP    10s
[[email protected] ~]# ip addr | grep kube-ipvs0     ## check the corresponding IPs
9: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default 
    inet 10.96.0.1/32 scope global kube-ipvs0
    inet 10.96.0.10/32 scope global kube-ipvs0
    inet 10.100.154.215/32 scope global kube-ipvs0

[[email protected] ~]# ipvsadm -ln   ## check the corresponding load-balancing table


   
[[email protected] ~]# vim damo.yml       ## change replicas to 6
[[email protected] ~]# cat damo.yml     
---
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  selector:
    app: myapp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo2
spec:
  replicas: 6
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:v2

[[email protected] ~]# kubectl apply -f damo.yml    ## update the pods
service/myservice unchanged
deployment.apps/demo2 configured
[[email protected] ~]# ipvsadm -ln    ## check the load-balancing table again

[[email protected] ~]# kubectl get pod    ## check the corresponding pod info
[[email protected] ~]# curl 10.100.154.215   ## check whether requests are load balanced
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[[email protected] ~]# curl 10.100.154.215/hostname.html
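
To see the round-robin scheduling more clearly, the request can be repeated in a loop (a sketch; the cluster IP is the one assigned to myservice above):

for i in $(seq 6); do curl -s 10.100.154.215/hostname.html; done   ## each backend pod name should show up in turn
ipvsadm -ln | grep -A 6 10.100.154.215                             ## the rr entry and its real servers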

           

2.3 The DNS service plugin provided by k8s

[[email protected] ~]# kubectl get services kube-dns --namespace=kube-system
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   2d19h

[[email protected] ~]# kubectl attach demo -it    ## if the demo pod already exists, just attach to it
[[email protected] ~]# kubectl  run demo --image=busyboxplus -it  ## otherwise create the demo pod

[[email protected] ~]#  yum install bind-utils -y   ## install the dig tool
[[email protected] ~]# dig myservice.default.svc.cluster.local. @10.96.0.10  ## test with dig
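
Inside the busyboxplus pod the same record can also be resolved with nslookup (a sketch; the short name works because of the search domains in the pod's /etc/resolv.conf):

	/ # nslookup myservice.default.svc.cluster.local
	/ # nslookup myservice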

           

2.4 Headless Service

- 	A Headless Service is not assigned a VIP; instead, DNS records resolve directly to the IP addresses of the proxied Pods.
	Domain name format: $(servicename).$(namespace).svc.cluster.local
           
[[email protected] ~]# kubectl delete -f damo.yml   ## clean up the environment
[[email protected] ~]# vim damo.yml 
[[email protected] ~]# cat damo.yml     ## headless service example
---
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  selector:
    app: myapp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  clusterIP: None           ## headless service
---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo2
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:v2

[[email protected] ~]# kubectl apply  -f damo.yml    ## apply
service/myservice created

[[email protected] ~]# kubectl  get svc    ## no CLUSTER-IP is assigned
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   2d19h
myservice    ClusterIP   None         <none>        80/TCP    24s
[[email protected] ~]# ip addr | grep kube-ipvs0   ## no IP has been added to kube-ipvs0 for this service
9: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default 
    inet 10.96.0.1/32 scope global kube-ipvs0
    inet 10.96.0.10/32 scope global kube-ipvs0
[[email protected] ~]# kubectl describe svc myservice   ## the Endpoints are still listed
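
Since there is no VIP, the service name resolves straight to the pod IPs, which can be checked against the cluster DNS (a sketch; same record format as in 2.3, one A record per pod):

dig myservice.default.svc.cluster.local. @10.96.0.10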

           

2.5 Three ways to access a service from outside

2.5.1 NodePort

[[email protected] ~]# kubectl delete -f damo.yml     ## delete the old resources
[[email protected] ~]# cat damo.yml 
---
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  selector:
    app: myapp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  #clusterIP: None
  type: NodePort     ## changed to NodePort
  #type: LoadBalancer
  #externalIPs:
  #  - 172.25.0.100

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo2
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:v2
[[email protected] ~]# kubectl apply -f damo.yml  ## create the pods

[[email protected] ~]# kubectl get svc    ## the port needed for external access is 31957
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        2d20h
myservice    NodePort    10.104.243.136   <none>        80:31957/TCP   74s
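
From outside the cluster, any NodeIP together with that NodePort now reaches the service (a sketch; 172.25.13.2 stands in for one of the node IPs in this environment, substitute your own):

curl 172.25.13.2:31957
curl 172.25.13.2:31957/hostname.html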

           

2.5.2 LoadBalancer

2.5.2.1 Without MetalLB

- The second way to access a Service from outside is meant for Kubernetes services running on a public cloud. In that case you can declare a Service of type LoadBalancer.

- After the service is submitted, Kubernetes calls the CloudProvider to create a load balancer for you on the public cloud and configures the IP addresses of the proxied Pods as its backends.
           
[[email protected] ~]# cat damo.yml 
---
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  selector:
    app: myapp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  #clusterIP: None
  #type: NodePort
  type: LoadBalancer
  #externalIPs:
  #  - 172.25.0.100

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo2
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:v2

[[email protected] ~]# kubectl apply -f damo.yml   ## apply the change
service/myservice configured
deployment.apps/demo2 unchanged
[[email protected] ~]# kubectl  get svc   ## check the service (EXTERNAL-IP stays <pending> without a cloud provider)
NAME         TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP      10.96.0.1      <none>        443/TCP        2d20h
myservice    LoadBalancer   10.97.125.97   <pending>     80:31148/TCP   2s


           

2.5.2.2 Simulating a cloud environment with MetalLB


MetalLB official site

  • Analysis of layer2 mode

    MetalLB assigns an external-ip (VIP) to LoadBalancer-type services from the configured address pool.

    Once MetalLB has assigned an external IP address to a service, it has to make the network outside the cluster aware that this VIP "exists" in the cluster. MetalLB uses standard protocols for this: ARP, NDP or BGP.

    In layer2 mode, one of MetalLB's speakers (chosen by the MetalLB controller) answers ARP requests for the service VIP (or NDP requests for IPv6), so from the local network's point of view the machine running that speaker simply has several IP addresses, one of which is the VIP.

## 1. MetalLB configuration
[[email protected] ingress]# kubectl edit configmap -n kube-system kube-proxy   ## edit the kube-proxy config
[[email protected] ~]# kubectl get pod -n kube-system |grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'		// recreate the kube-proxy pods
[[email protected] ~]# mkdir metallb    ## create a working directory for the experiment
[[email protected] ~]# cd metallb/

[[email protected] metallb]# wget https://raw.githubusercontent.com/metallb/metallb/v0.9.5/manifests/namespace.yaml
[[email protected] metallb]# wget https://raw.githubusercontent.com/metallb/metallb/v0.9.5/manifests/metallb.yaml   ## download the two MetalLB manifests
[[email protected] metallb]# ls
metallb.yaml  namespace.yaml

## Pull the images and push them to your own private registry (run on the registry host)
[[email protected] harbor]# docker pull  metallb/speaker:v0.9.5    ## pull the images referenced in metallb.yaml
[[email protected] harbor]# docker pull metallb/controller:v0.9.5

## Apply the manifests
[[email protected] metallb]# kubectl apply -f namespace.yaml   ## create the namespace
[[email protected] metallb]# vim metallb.yaml 
[[email protected] metallb]# kubectl get ns    ## check the created namespaces
NAME              STATUS   AGE
default           Active   4d18h
ingress-nginx     Active   41h
kube-node-lease   Active   4d18h
kube-public       Active   4d18h
kube-system       Active   4d18h
metallb-system    Active   4m54s  
[[email protected] metallb]# kubectl  apply -f metallb.yaml    ## deploy MetalLB

[[email protected] metallb]# kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"          ## create the secret (only needed the first time; no need to run it again later)
[[email protected] metallb]# kubectl -n metallb-system get secrets    ## check the secret

[[email protected] metallb]# kubectl -n metallb-system get pod   ## all pods Running means the configuration succeeded
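
The kube-proxy edit at the start of this block is needed because MetalLB's layer2 mode together with kube-proxy in IPVS mode requires strict ARP; the relevant excerpt of the kube-proxy ConfigMap looks roughly like this (a sketch of the commonly documented setting, not copied from this cluster):

    ipvs:
      strictARP: true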


           

1. MetalLB configuration


2. Testing

[[email protected] metallb]# vim config.yml
[[email protected] metallb]# cat config.yml 
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.25.13.100-172.25.13.200      ## test IP range .100-.200
[[email protected] metallb]# kubectl apply  -f config.yml    ## apply the config file; IPs are then assigned
configmap/config created


## Test 1
[[email protected] metallb]# vim nginx-svc.yml     ## test 1
[[email protected] metallb]# cat nginx-svc.yml 
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: myapp:v1


[[email protected] metallb]# kubectl  apply -f nginx-svc.yml    ##
service/nginx-svc unchanged
deployment.apps/deployment unchanged
[[email protected] metallb]# kubectl get svc   ## 172.25.13.100 assigned successfully
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
kubernetes   ClusterIP      10.96.0.1       <none>          443/TCP        4d18h
nginx-svc    LoadBalancer   10.98.143.200   172.25.13.100   80:31262/TCP   39h

## Test 2
[[email protected] metallb]# cat demo-svc.yml 
---
apiVersion: v1
kind: Service
metadata:
  name: demo-svc
spec:
  selector:
    app: demo
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: demo
        image: myapp:v2

[[email protected] metallb]# kubectl  apply -f demo-svc.yml 
[[email protected] metallb]# kubectl get pod       ## check that the pods are Running
[[email protected] metallb]# kubectl get svc       ## check the resulting services


## Test from the physical host
[[email protected] Desktop]# curl 172.25.13.100
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[[email protected] Desktop]# curl 172.25.13.100/hostname.html
deployment-6456d7c676-gkvbc
[[email protected] Desktop]# curl 172.25.13.100/hostname.html
deployment-6456d7c676-fmgtj

[[email protected] Desktop]# curl 172.25.13.101/hostname.html
demo-7f6949cddf-fkvbp
[[email protected] Desktop]# curl 172.25.13.101/hostname.html
demo-7f6949cddf-mcj94



           

2.5.2.3 Ingress combined with MetalLB (optional)

[[email protected] ~]# cd ingress/
[[email protected] ingress]# vim deploy.yaml    ## the modified parts of the file are shown below

273 # Source: ingress-nginx/templates/controller-service.yaml
274 apiVersion: v1
275 kind: Service
276 metadata:
277   labels:
278     helm.sh/chart: ingress-nginx-2.4.0
279     app.kubernetes.io/name: ingress-nginx
280     app.kubernetes.io/instance: ingress-nginx
281     app.kubernetes.io/version: 0.33.0
282     app.kubernetes.io/managed-by: Helm
283     app.kubernetes.io/component: controller
284   name: ingress-nginx-controller
285   namespace: ingress-nginx
286 spec:
287   type: LoadBalancer
288   ports:
289     - name: http
290       port: 80
291       protocol: TCP
292       targetPort: http
293     - name: https
294       port: 443
295       protocol: TCP
296       targetPort: https
297   selector:
298     app.kubernetes.io/name: ingress-nginx
299     app.kubernetes.io/instance: ingress-nginx
300     app.kubernetes.io/component: controller



329     spec:
330       #hostNetwork: true
331       #nodeSelector:
332       #  kubernetes.io/hostname: server4
333       dnsPolicy: ClusterFirst
334       containers:

## Test manifest
[[email protected] ingress]# cat demo.yml 
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-demo
spec:
  tls:
    - hosts:
      - www1.westos.org
      secretName: tls-secret
  rules:
  - host: www1.westos.org
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx-svc   ## make sure this service is running; if not, apply it first: kubectl apply -f nginx-svc.yml 
          servicePort: 80


[[email protected] ingress]# kubectl delete -f nginx.yml
## Regenerate the certificate files
[[email protected] ingress]# ls
auth  demo.yml  deploy.yaml  nginx-svc.yml  nginx.yml  tls.crt  tls.key
[[email protected] ingress]# rm -fr tls.*
[[email protected] ingress]# openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=www1.westos.org/O=www1.westos.org" ## regenerate the certificate
 
[[email protected] ingress]# kubectl get secrets 
[[email protected] ingress]# kubectl delete secrets tls-secret   ## delete the old certificate secret
[[email protected] ingress]# kubectl create secret tls tls-secret --key tls.key --cert tls.crt  ## store the new one
[[email protected] ingress]# kubectl get secrets
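
With the ingress-nginx-controller Service switched to type LoadBalancer, MetalLB hands it an external IP from the pool, and the ingress can be tested from outside (a sketch; check the actual EXTERNAL-IP first and substitute it below):

kubectl -n ingress-nginx get svc ingress-nginx-controller      ## note the EXTERNAL-IP assigned by MetalLB
curl -k --resolve www1.westos.org:443:<EXTERNAL-IP> https://www1.westos.org/hostname.html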
           

2.5.3 ExternalName (accessing external services from inside the cluster)

The third way is called ExternalName.

2.5.3.1 ExternalName

[[email protected] ~]# vim exsvc.yml
[[email protected] ~]# cat exsvc.yml 
apiVersion: v1
kind: Service
metadata:
  name: exsvc
spec:
  type:  ExternalName
  externalName: www.baidu.com
[[email protected] ~]# kubectl apply -f exsvc.yml 
service/exsvc created
[[email protected] ~]# kubectl get svc
NAME         TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)   AGE
exsvc        ExternalName   <none>         www.baidu.com   <none>    7s
kubernetes   ClusterIP      10.96.0.1      <none>          443/TCP   2d20h
myservice    ClusterIP      10.97.125.97   172.25.13.100   80/TCP    13m
[[email protected] ~]# kubectl attach demo -it    ##
	/ # nslookup exsvc
	
[[email protected] ~]# dig -t A exsvc.default.svc.cluster.local. @10.96.0.10  ## for this to resolve, make sure the node has Internet access
           

2.5.3.2 Assigning a public IP directly

[[email protected] ~]# vim damo.yml 
[[email protected] ~]# cat damo.yml 
---
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  selector:
    app: myapp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  #clusterIP: None
  #type: NodePort
  #type: LoadBalancer
  externalIPs:     ## the assigned public IP
    - 172.25.13.100   

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo2
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:v2


[[email protected] ~]# kubectl apply -f damo.yml 
service/myservice configured
deployment.apps/demo2 unchanged
[[email protected] ~]# kubectl  get svc    ## the assigned external IP is 172.25.13.100
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP    PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1      <none>         443/TCP   2d20h
myservice    ClusterIP   10.97.125.97   172.25.13.100   80/TCP    5m7s

[[email protected] Desktop]# curl 172.25.13.100/hostname.html   ## access 172.25.13.100 from the physical host
           

3. Pod-to-pod communication

3.1 Communication on the same node

- Pods on the same node forward packets to each other through the cni bridge (check with brctl show).
           

3.2 Communication between pods on different nodes requires a network plugin (details)

3.2.1 How Flannel vxlan mode communicates across hosts

- The flannel network	
	VXLAN, Virtual Extensible LAN, is a network virtualization technology supported natively by Linux. VXLAN performs encapsulation and decapsulation entirely in kernel space and uses this "tunnel" mechanism to build an overlay network.
	VTEP: VXLAN Tunnel End Point. In Flannel the default VNI is 1, which is why the VTEP device on every host is named flannel.1.
	cni0: a bridge device. Every pod that is created gets a veth pair; one end is eth0 inside the pod, the other end is a port (NIC) on the cni0 bridge.
	flannel.1: the VTEP device (a virtual NIC) that handles VXLAN packets (encapsulation and decapsulation). Pod traffic between different nodes is sent to the peer through this overlay device as a tunnel.
	flanneld: flannel runs flanneld as an agent on every host. It obtains a small subnet for its host from the cluster's network address space, and all container IPs on that host are allocated from it. flanneld also watches the k8s cluster datastore and provides flannel.1 with the MAC, IP and other network data needed to encapsulate packets.

- How the flannel network works
	When a container sends an IP packet, it travels through the veth pair to the cni bridge and is then routed to the local flannel.1 device for processing.
	VTEP devices communicate with each other using layer-2 frames: the source VTEP takes the original IP packet, adds a destination MAC address, wraps it into an inner frame and sends it to the destination VTEP.
	The inner frame cannot be transmitted on the host's layer-2 network as-is; the Linux kernel has to further encapsulate it into an ordinary host frame, which carries the inner frame out through the host's eth0.
	Linux prepends a VXLAN header to the inner frame. The VXLAN header contains an important field, the VNI, which is how a VTEP decides whether a given frame should be handled by it.
	The flannel.1 device only knows the MAC address of the remote flannel.1 device, not the address of the host behind it. In the Linux kernel, forwarding decisions for network devices come from the FDB (forwarding database); the FDB entries for flannel.1 are maintained by the flanneld process.
	The kernel then prepends another layer-2 frame header carrying the destination node's MAC address, which it obtains from the host's ARP table.
	At this point flannel.1 can send the frame out through eth0, across the host network, to the destination node's eth0. The destination kernel notices that the frame carries a VXLAN header with VNI 1, unwraps it, hands the inner frame to the local flannel.1 device based on the VNI, flannel.1 unwraps it again, routes it to the cni bridge according to the routing table, and it finally reaches the target container.


- flannel supports several backends:
		1. Vxlan
			vxlan			// packet encapsulation, the default
			Directrouting		// direct routing: vxlan across subnets, host-gw within the same subnet (see the net-conf.json sketch below)
		2. host-gw:		// host gateway; good performance, but only works inside one layer-2 network and cannot cross subnets; with huge numbers of Pods it easily causes broadcast storms, not recommended
		3. UDP:			// poor performance, not recommended
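
The backend is selected in the net-conf.json section of the kube-flannel-cfg ConfigMap, which is exactly what sections 3.2.3 and 3.2.4 below edit. A sketch of the relevant excerpt (10.244.0.0/16 is flannel's default network and matches the pod IPs seen in this cluster; treat the values as assumptions):

  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }

## for 3.2.3 change "Type" to "host-gw"
## for 3.2.4 keep "Type": "vxlan" and add "Directrouting": true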
           

3.2.2 vxlan mode (default)

[[email protected] ~]# vim damo.yml 
[[email protected] ~]# cat damo.yml 
---
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  selector:
    app: myapp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  #clusterIP: None
  #type: NodePort
  #type: LoadBalancer
  externalIPs:
    - 172.25.13.100

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo2
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:v2

[[email protected] ~]# kubectl apply -f damo.yml 
[[email protected] ~]# kubectl  get pod -o wide   ## view the detailed pod info
##The commands below can be run on any node
[[email protected] ~]# cat /run/flannel/subnet.env 
[[email protected] ~]# ip n 
[[email protected] ~]# ip addr   ## check the MAC address of flannel.1
[[email protected] ~]# bridge fdb


[[email protected] ~]# kubectl attach demo -it 
	/ # ping 10.244.2.39

[[email protected] ~]# tcpdump -i eth0 -nn host 172.25.13.4
           

3.2.3 host-gw mode

[[email protected] ~]# kubectl -n kube-system edit  cm kube-flannel-cfg  ## change the backend Type to host-gw (see the net-conf.json sketch in 3.2.1)

[[email protected] ~]# kubectl get pod -n kube-system |grep flannel | awk '{system("kubectl delete pod "$1" -n kube-system")}'   ## as before, delete the pods so they are recreated and the change takes effect

[[email protected] ~]# ip route   ## every node can check its routes with this command, or with route -n


           

3.2.4 Directrouting

[[email protected] ~]# kubectl -n kube-system edit  cm kube-flannel-cfg  ## change the mode (add Directrouting, see the sketch in 3.2.1)
[[email protected] ~]# kubectl get pod -n kube-system |grep flannel | awk '{system("kubectl delete pod "$1" -n kube-system")}'    ## recreate the pods so the change takes effect

           

4. Ingress

Official site

Official download address

- A global load-balancing service set up to proxy different backend Services: that is what the Ingress service is in Kubernetes.

	Ingress consists of two parts: the Ingress controller and the Ingress resources.
	
	The Ingress Controller provides the corresponding proxying capability according to the Ingress objects you define. The reverse-proxy projects commonly used in the industry, such as Nginx, HAProxy, Envoy and Traefik, all maintain dedicated Ingress Controllers for Kubernetes.
           

4.1 Deploying the ingress service

- Deploy the ingress-controller to a specific node with a DaemonSet plus a nodeSelector, and use hostNetwork to connect the pod directly to that node's network, so the service is reachable directly on the host's ports 80/443.
	The advantage is the simplest possible request path, with better performance than the NodePort mode.
	The drawback is that, because the host node's network and ports are used directly, only one ingress-controller pod can be deployed per node.
	This is well suited to high-concurrency production environments.
           
[[email protected] ~]# mkdir ingress     ## create a working directory for the experiment
[[email protected] ~]# cd ingress/
[[email protected] ingress]# pwd
/root/ingress
[[email protected] ingress]# ll
total 20
-rwxr-xr-x 1 root root 17728 Feb 22 15:59 deploy.yaml    ## my local copy of the file; it can also be fetched from the official site with wget


[[email protected] ~]# docker load -i ingress-nginx.tar    ## load the two images, quay.io/kubernetes-ingress-controller/nginx-ingress-controller and jettech/kube-webhook-certgen; they can be pulled over the network or found on the official site, here I use local copies

[[email protected] ~]# docker tag quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.33.0 reg.westos.org/library/nginx-ingress-controller:0.33.0  
[[email protected] ~]# docker tag jettech/kube-webhook-certgen:v1.2.0  reg.westos.org/library/kube-webhook-certgen:v1.2.0
[[email protected] ~]# docker push reg.westos.org/library/nginx-ingress-controller:0.33.0   ## push to the harbor registry for the experiment
[[email protected] ~]# docker push reg.westos.org/library/kube-webhook-certgen:v1.2.0

## Clean up the environment
[[email protected] ~]# kubectl delete -f damo.yml   ## first delete the svc with 172.25.13.100 added in the earlier experiment, then continue
[[email protected] ~]# vim damo.yml 
[[email protected] ~]# cat damo.yml 
---
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  selector:
    app: myapp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: ClusterIP       ## the type needs to be changed back here
  #clusterIP: None
  #type: NodePort
  #type: LoadBalancer
  #externalIPs:
  #  - 172.25.13.100

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo2
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:v2
[[email protected] ~]# kubectl apply -f damo.yml 


## The actual deployment
[[email protected] ingress]# cat deploy.yaml     ## file contents; slightly different from the official one, tuned to use a DaemonSet controller and hostNetwork
 
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx

---
# Source: ingress-nginx/templates/controller-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    helm.sh/chart: ingress-nginx-2.4.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.33.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    helm.sh/chart: ingress-nginx-2.4.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.33.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
---
# Source: ingress-nginx/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    helm.sh/chart: ingress-nginx-2.4.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.33.0
    app.kubernetes.io/managed-by: Helm
  name: ingress-nginx
  namespace: ingress-nginx
rules:
  - apiGroups:
      - ''
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ''
    resources:
      - services
    verbs:
      - get
      - list
      - update
      - watch
  - apiGroups:
      - extensions
      - networking.k8s.io   # k8s 1.14+
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - extensions
      - networking.k8s.io   # k8s 1.14+
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - networking.k8s.io   # k8s 1.14+
    resources:
      - ingressclasses
    verbs:
      - get
      - list
      - watch
---
# Source: ingress-nginx/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    helm.sh/chart: ingress-nginx-2.4.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.33.0
    app.kubernetes.io/managed-by: Helm
  name: ingress-nginx
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx
subjects:
  - kind: ServiceAccount
    name: ingress-nginx
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    helm.sh/chart: ingress-nginx-2.4.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.33.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
rules:
  - apiGroups:
      - ''
    resources:
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ''
    resources:
      - configmaps
      - pods
      - secrets
      - endpoints
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - services
    verbs:
      - get
      - list
      - update
      - watch
  - apiGroups:
      - extensions
      - networking.k8s.io   # k8s 1.14+
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
      - networking.k8s.io   # k8s 1.14+
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - networking.k8s.io   # k8s 1.14+
    resources:
      - ingressclasses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - configmaps
    resourceNames:
      - ingress-controller-leader-nginx
    verbs:
      - get
      - update
  - apiGroups:
      - ''
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ''
    resources:
      - endpoints
    verbs:
      - create
      - get
      - update
  - apiGroups:
      - ''
    resources:
      - events
    verbs:
      - create
      - patch
---
# Source: ingress-nginx/templates/controller-rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    helm.sh/chart: ingress-nginx-2.4.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.33.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx
subjects:
  - kind: ServiceAccount
    name: ingress-nginx
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-service-webhook.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    helm.sh/chart: ingress-nginx-2.4.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.33.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller-admission
  namespace: ingress-nginx
spec:
  type: ClusterIP
  ports:
    - name: https-webhook
      port: 443
      targetPort: webhook
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
---
# Source: ingress-nginx/templates/controller-service.yaml
#apiVersion: v1
#kind: Service
#metadata:
#  labels:
#    helm.sh/chart: ingress-nginx-2.4.0
#    app.kubernetes.io/name: ingress-nginx
#    app.kubernetes.io/instance: ingress-nginx
#    app.kubernetes.io/version: 0.33.0
#    app.kubernetes.io/managed-by: Helm
#    app.kubernetes.io/component: controller
#  name: ingress-nginx-controller
#  namespace: ingress-nginx
#spec:
#  type: NodePort
#  ports:
#    - name: http
#      port: 80
#      protocol: TCP
#      targetPort: http
#    - name: https
#      port: 443
#      protocol: TCP
#      targetPort: https
#  selector:
#    app.kubernetes.io/name: ingress-nginx
#    app.kubernetes.io/instance: ingress-nginx
#    app.kubernetes.io/component: controller
---
# Source: ingress-nginx/templates/controller-deployment.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    helm.sh/chart: ingress-nginx-2.4.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.33.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/instance: ingress-nginx
      app.kubernetes.io/component: controller
  revisionHistoryLimit: 10
  minReadySeconds: 0
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/component: controller
    spec:
      hostNetwork: true
      nodeSelector:
        kubernetes.io/hostname: server4
      dnsPolicy: ClusterFirst
      containers:
        - name: controller
          image: nginx-ingress-controller:0.33.0
          imagePullPolicy: IfNotPresent
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown
          args:
            - /nginx-ingress-controller
            - --election-id=ingress-controller-leader
            - --ingress-class=nginx
            - --configmap=ingress-nginx/ingress-nginx-controller
            - --validating-webhook=:8443
            - --validating-webhook-certificate=/usr/local/certificates/cert
            - --validating-webhook-key=/usr/local/certificates/key
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            runAsUser: 101
            allowPrivilegeEscalation: true
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          livenessProbe:
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            timeoutSeconds: 1
            successThreshold: 1
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            timeoutSeconds: 1
            successThreshold: 1
            failureThreshold: 3
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
            - name: webhook
              containerPort: 8443
              protocol: TCP
          volumeMounts:
            - name: webhook-cert
              mountPath: /usr/local/certificates/
              readOnly: true
          resources:
            requests:
              cpu: 100m
              memory: 90Mi
      serviceAccountName: ingress-nginx
      terminationGracePeriodSeconds: 300
      volumes:
        - name: webhook-cert
          secret:
            secretName: ingress-nginx-admission
---
# Source: ingress-nginx/templates/admission-webhooks/validating-webhook.yaml
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
  labels:
    helm.sh/chart: ingress-nginx-2.4.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.33.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  name: ingress-nginx-admission
  namespace: ingress-nginx
webhooks:
  - name: validate.nginx.ingress.kubernetes.io
    rules:
      - apiGroups:
          - extensions
          - networking.k8s.io
        apiVersions:
          - v1beta1
        operations:
          - CREATE
          - UPDATE
        resources:
          - ingresses
    failurePolicy: Fail
    clientConfig:
      service:
        namespace: ingress-nginx
        name: ingress-nginx-controller-admission
        path: /extensions/v1beta1/ingresses
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-2.4.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.33.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  namespace: ingress-nginx
rules:
  - apiGroups:
      - admissionregistration.k8s.io
    resources:
      - validatingwebhookconfigurations
    verbs:
      - get
      - update
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-2.4.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.33.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx-admission
subjects:
  - kind: ServiceAccount
    name: ingress-nginx-admission
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-createSecret.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: ingress-nginx-admission-create
  annotations:
    helm.sh/hook: pre-install,pre-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-2.4.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.33.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  namespace: ingress-nginx
spec:
  template:
    metadata:
      name: ingress-nginx-admission-create
      labels:
        helm.sh/chart: ingress-nginx-2.4.0
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 0.33.0
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: admission-webhook
    spec:
      containers:
        - name: create
          image: kube-webhook-certgen:v1.2.0
          imagePullPolicy: IfNotPresent
          args:
            - create
            - --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.ingress-nginx.svc
            - --namespace=ingress-nginx
            - --secret-name=ingress-nginx-admission
      restartPolicy: OnFailure
      serviceAccountName: ingress-nginx-admission
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-patchWebhook.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: ingress-nginx-admission-patch
  annotations:
    helm.sh/hook: post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-2.4.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.33.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  namespace: ingress-nginx
spec:
  template:
    metadata:
      name: ingress-nginx-admission-patch
      labels:
        helm.sh/chart: ingress-nginx-2.4.0
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 0.33.0
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: admission-webhook
    spec:
      containers:
        - name: patch
          image: kube-webhook-certgen:v1.2.0
          imagePullPolicy: IfNotPresent
          args:
            - patch
            - --webhook-name=ingress-nginx-admission
            - --namespace=ingress-nginx
            - --patch-mutating=false
            - --secret-name=ingress-nginx-admission
            - --patch-failure-policy=Fail
      restartPolicy: OnFailure
      serviceAccountName: ingress-nginx-admission
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-2.4.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.33.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  namespace: ingress-nginx
rules:
  - apiGroups:
      - ''
    resources:
      - secrets
    verbs:
      - get
      - create
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-2.4.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.33.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx-admission
subjects:
  - kind: ServiceAccount
    name: ingress-nginx-admission
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-2.4.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.33.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  namespace: ingress-nginx


[[email protected] ingress]# kubectl apply -f deploy.yaml 
[[email protected] ingress]# kubectl get ns     ## after a successful install a new namespace appears: ingress-nginx
[[email protected] ingress]# kubectl get all -n ingress-nginx     ## check that the services were installed
[[email protected] ingress]# kubectl get all -o wide -n ingress-nginx   ## view details of the new namespace
[[email protected] ingress]# kubectl -n ingress-nginx get pod    ## check that it is running; READY must be 1


           

4.2 Ingress configuration

4.2.1 Basic test manifests

4.2.1.1 One host

[[email protected] ~]# cd ingress/  
[[email protected] ingress]# vim nginx.yml
[[email protected] ingress]# cat nginx.yml 
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-demo
spec:
  rules:
  - host: www1.westos.org
    http:
      paths:
      - path: /
        backend:
          serviceName: myservice
          servicePort: 80
[[email protected] ingress]# kubectl apply -f nginx.yml    ## create the ingress

[[email protected] ingress]# kubectl get ingress    ## check the created ingress
NAME           CLASS    HOSTS             ADDRESS   PORTS   AGE
ingress-demo   <none>   www1.westos.org             80      20s
[[email protected] ~]# vim /etc/hosts     ## the physical host needs a hosts entry for name resolution


[[email protected] ingress]# cat nginx-svc.yml     ## create another service, to tell things apart in later experiments
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: myapp:v1

[[email protected] ingress]# kubectl apply  -f nginx-svc.yml 
[[email protected] ingress]# kubectl  get svc     ## check the newly added services
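
With the hosts entry in place, the ingress can be tested from the physical machine (a sketch; 172.25.13.4 is the node running the ingress-controller in this setup, as the ADDRESS column of the ingress output later shows, and www2.westos.org is added the same way in the next step):

172.25.13.4 www1.westos.org www2.westos.org     ## the /etc/hosts entry on the physical host
curl www1.westos.org                            ## answered by myservice (myapp:v2)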

           

4.2.1.2 Two hosts

[[email protected] ingress]# vim nginx.yml 
[[email protected] ingress]# cat nginx.yml 
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-demo
spec:
  rules:
  - host: www1.westos.org
    http:
      paths:
      - path: /
        backend:
          serviceName: myservice
          servicePort: 80
  - host: www2.westos.org
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx-svc
          servicePort: 80
[[email protected] ingress]# kubectl apply  -f nginx.yml     ##
[[email protected] ingress]# kubectl  get ingress     ## check the ingress
           

4.2.2 TLS encryption

[[email protected] ingress]# openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=nginxsvc/O=nginxsvc"   ## generate the certificate and key
[[email protected] ingress]# kubectl create secret tls tls-secret --key tls.key --cert tls.crt ## store the key and certificate in a secret named tls-secret
[[email protected] ingress]# kubectl get secrets      ## check the generated secret
[[email protected] ingress]# kubectl describe  secrets tls-secret  ## view the secret details



[[email protected] ingress]# vim nginx.yml 
[[email protected] ingress]# cat nginx.yml 
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-demo
spec:
  tls:
    - hosts:
      - www1.westos.org
      secretName: tls-secret           ## attach the certificate to www1.westos.org
  rules:
  - host: www1.westos.org
    http:
      paths:
      - path: /
        backend:
          serviceName: myservice
          servicePort: 80
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-demo2
spec:
  rules:
  - host: www2.westos.org
    http:
      paths:
      - path: /  
        backend:
          serviceName: nginx-svc
          servicePort: 80
[[email protected] ingress]# kubectl apply -f nginx.yml    ## apply
[[email protected] ingress]# kubectl  get ingress     ## note that port 443 now appears
NAME            CLASS    HOSTS             ADDRESS       PORTS     AGE
ingress-demo    <none>   www1.westos.org   172.25.13.4   80, 443   109m
ingress-demo2   <none>   www2.westos.org   172.25.13.4   80        2m23s
[[email protected] ingress]# kubectl describe ingress ingress-demo2

[[email protected] ~]# curl www1.westos.org -I     ## access from the physical host
[[email protected] ~]# curl -k https://www1.westos.org/   ## access the https URL that the redirect points to, directly from the physical host
[[email protected] ~]# curl www2.westos.org -I

           

4.2.3 TLS encryption with user authentication

[[email protected] ingress]# yum provides */htpasswd    ## find which package provides htpasswd
[[email protected] ingress]# yum install httpd-tools-2.4.6-88.el7.x86_64 -y  ## install it
[[email protected] ingress]# htpasswd -c auth zhy    ## create an authenticated user
[[email protected] ingress]# htpasswd auth admin     ## create another one
[[email protected] ingress]# cat auth                ## check the authenticated users
zhy:$apr1$Q1LeQyt7$CWuCeqN2fo8/0cvE6nf.A.
admin:$apr1$AetqdllY$QAYkZ7Vc0W304Y3OpsUON.

[[email protected] ingress]# kubectl create secret generic basic-auth --from-file=auth  ## store the user auth file in a secret
[[email protected] ingress]# kubectl get secrets    ## stored successfully
NAME                  TYPE                                  DATA   AGE
basic-auth            Opaque                                1      19s
default-token-mdfz9   kubernetes.io/service-account-token   3      3d4h
tls-secret            kubernetes.io/tls                     2      48m
[[email protected] ingress]# kubectl get secret basic-auth -o yaml   ## view the user auth info as yaml


[[email protected] ingress]# cat nginx.yml 
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-demo
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required - wxh'
spec:
  tls:
    - hosts:
      - www1.westos.org
      secretName: tls-secret
  rules:
  - host: www1.westos.org
    http:
      paths:
      - path: /
        backend:
          serviceName: myservice
          servicePort: 80
---

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-demo2
spec:
  rules:
  - host: www2.westos.org
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx-svc
          servicePort: 80

[[email protected] ingress]# kubectl apply -f nginx.yml   ## apply 

#Then visiting www1.westos.org in a browser prompts for user authentication; just log in.
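
The same check can be done with curl (a sketch; use whichever password was set with htpasswd above):

curl -k https://www1.westos.org -I                     ## 401 without credentials
curl -k -u zhy:<password> https://www1.westos.org -I   ## 200 once authenticated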
           

4.2.4 Simple redirect

[[email protected] ingress]# vim nginx.yml 
[[email protected] ingress]# cat nginx.yml 
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-demo
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required - wxh'
spec:
  tls:
    - hosts:
      - www1.westos.org
      secretName: tls-secret
  rules:
  - host: www1.westos.org
    http:
      paths:
      - path: /
        backend:
          serviceName: myservice
          servicePort: 80
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-demo2
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /hostname.html   ## rewrite target: every request to www2.westos.org is rewritten to /hostname.html
spec:
  rules:
  - host: www2.westos.org
    http:
      paths:
      - backend:
          serviceName: nginx-svc
          servicePort: 80
        path: /          ## the matched path (root)
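
A quick check from the physical host (a sketch): with this annotation any path under www2.westos.org returns the pod hostname page.

curl www2.westos.org              ## rewritten to /hostname.html
curl www2.westos.org/anything     ## also rewritten to /hostname.html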

           

4.2.5 URL rewriting (more complex redirects)

[[email protected] ingress]# vim nginx.yml 
[[email protected] ingress]# cat nginx.yml 
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-demo
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required - wxh'
spec:
  tls:
    - hosts:
      - www1.westos.org
      secretName: tls-secret
  rules:
  - host: www1.westos.org
    http:
      paths:
      - path: /
        backend:
          serviceName: myservice
          servicePort: 80
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-demo2
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2     ## rewrite to $2, i.e. everything captured after the keyword
spec:
  rules:
  - host: www2.westos.org
    http:
      paths:
      - backend:
          serviceName: nginx-svc
          servicePort: 80
        path: /westos(/|$)(.*)      ## requests must include the /westos path (i.e. the keyword right after the domain, which is then rewritten away); the keyword itself is arbitrary

[[email protected] ingress]# kubectl apply -f nginx.yml
[[email protected] ingress]# kubectl describe ingress ingress-demo2  ## check that it took effect
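
To verify the capture-group rewrite (a sketch): whatever follows /westos/ is what actually reaches the backend.

curl www2.westos.org/westos/hostname.html    ## rewritten to /hostname.html
curl www2.westos.org/westos/                 ## rewritten to /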
           

5. Supplement
