
Setting up a complete Kubernetes, Istio, Prometheus, Grafana, and Knative stack on Windows 10

Contents

  • 1. Operating system
    • 1.1. Download the Ubuntu installation image
    • 1.2. Configure the Hyper-V virtualization engine on the Windows 10 desktop
    • 1.3. Install and configure the OS for the k8s master node
    • 1.4. Install and configure the OS for the k8s worker node
  • 2. Kubernetes
    • 2.1. Install containerd (this setup does not use Docker)
    • 2.2. Install cni
    • 2.3. Install kubelet/kubeadm/kubectl
    • 2.4. Initialize the kubernetes cluster
    • 2.5. Install the calico network plugin
    • 2.6. Create and join the k8s worker node
  • 3. Install the kubernetes dashboard
  • 4. Install prometheus/grafana/alertmanager
  • 5. Install Istio
  • 6. Install Knative
  • 7. Miscellaneous

1. Operating system

1.1. Download the Ubuntu installation image

Download Ubuntu Server 22.04 LTS.

1.2. Configure the Hyper-V virtualization engine on the Windows 10 desktop

  • Enable the virtualization engine that ships with Windows: open Start -> Settings -> Apps -> Optional features -> More Windows features, and check Hyper-V. To install it on a Windows Home edition instead, save the following script as a .cmd file, right-click it, and choose Run as administrator:
    pushd "%~dp0"
    dir /b %SystemRoot%\servicing\Packages\*Hyper-V*.mum >hyper-v.txt
    for /f %%i in ('findstr /i . hyper-v.txt 2^>nul') do dism /online /norestart /add-package:"%SystemRoot%\servicing\Packages\%%i"
    del hyper-v.txt
    Dism /online /enable-feature /featurename:Microsoft-Hyper-V-All /LimitAccess /ALL
               
  • Create a virtual switch with external network access (an internal virtual switch, Default Switch, already exists by default): open Start -> Windows Administrative Tools -> Hyper-V Manager -> Virtual Switch Manager -> New virtual network switch, select External, click Create Virtual Switch, and name it External Switch

  • Give the Windows host a static IP on its Default Switch NIC (an equivalent netsh one-liner is sketched after this list): open Start -> Settings -> Network & Internet -> Change adapter options, double-click vEthernet (Default Switch), open Properties -> Internet Protocol Version 4 (TCP/IPv4), select Use the following IP address, set IP address to 172.28.176.1 and Subnet mask to 255.255.240.0
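
The same static-IP assignment can be done from an elevated command prompt instead of the adapter dialog; a minimal sketch, assuming the adapter is named vEthernet (Default Switch) as above:

    rem Assumes the adapter name shown in "Change adapter options"; run as administrator
    netsh interface ipv4 set address name="vEthernet (Default Switch)" static 172.28.176.1 255.255.240.0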

1.3. Install and configure the OS for the k8s master node

  • Create the virtual machine on the Windows host
    • Open Start -> Windows Administrative Tools -> Hyper-V Manager, then choose the menu Action -> New -> Virtual Machine
    • Name: k-master-1, Next
    • Select Generation 1, Next
    • Memory: 4096 MB, Next
    • Connection: External Switch, Next
    • Next
    • Select Install an operating system from a bootable CD/DVD-ROM -> Image file (.iso), browse to the previously downloaded ubuntu-22.04-live-server-amd64.iso, Next
    • Finish

  • Add an internal NIC to the VM and increase its CPU count
    • In Hyper-V Manager, select k-master-1, click Settings -> Add Hardware -> Network Adapter -> Add, set Virtual switch to Default Switch, OK
    • Select Processor, change Number of virtual processors to 2, OK

  • Install the VM's OS
    • In Hyper-V Manager, select k-master-1 and click Connect
    • In the VM window, Start; at the GNU GRUB screen press Enter and wait
    • Enter
    • Enter
    • Select Ubuntu Server (minimized), select Done, Enter
    • Select eth1 -> Edit IPv4: set IPv4 Method to Manual, Subnet to 172.28.176.0/20, Address to 172.28.176.4, and both Gateway and Name servers to 172.28.176.1; select Save, Enter
    • Select Done, Enter
    • Select Done, Enter
    • Mirror address: enter http://mirrors.aliyun.com/ubuntu/, select Done, Enter
    • Select update to the new installer, Enter
    • Wait, then select Done, Enter
    • Select Done, Enter
    • Select Continue, Enter
    • Your name: myname; Your server's name: k-master-1; Pick a username: user; enter a password; select Done, Enter
    • Check Install OpenSSH server, select Done, Enter
    • Select Done again, Enter
    • Wait for the installation to finish, then select Reboot Now, Enter
    • Wait, then press Enter at the prompt
    • Close the VM window
  • Enable root login
    • Using an SSH client (e.g. PuTTY), log in to 172.28.176.4 as user with its password
    • Run the following commands (a non-interactive sed alternative is sketched after this list)
      # Set the root password
      $ sudo passwd root
      # Edit the ssh config: find PermitRootLogin and set it to yes
      $ sudo vi /etc/ssh/sshd_config
      # Restart the ssh service
      $ sudo service ssh restart
                 
    • Log out of the user session; log back in as root with the SSH client (e.g. PuTTY)
  • Set the environment variable for the k8s admin config file
    $ export KUBECONFIG=/etc/kubernetes/admin.conf
               
    Open the .bashrc file and append the line above (just the command itself) to the end of the file
    $ vi .bashrc
               
  • Disable swap
    $ swapoff -a
    $ vi /etc/fstab
               
    Comment out the following line
    /swap.img      none    swap    sw      0       0
               
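
For the PermitRootLogin edit above, here is a non-interactive alternative to editing sshd_config by hand; a minimal sketch, assuming the stock Ubuntu 22.04 file where the line is still commented out:

    # Assumes the default "#PermitRootLogin prohibit-password" line; adjust the pattern if yours differs
    $ sudo sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
    $ sudo service ssh restart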

1.4. Install and configure the OS for the k8s worker node

Repeat the steps in 1.3 to install and configure the worker node's OS. Everything is identical except for the following points

  • The VM name is k-node-1
  • Your server's name is k-node-1
  • The eth1 static IP Address is 172.28.176.5
  • Do not run the following command, and do not add it to .bashrc
    $ export KUBECONFIG=/etc/kubernetes/admin.conf
               

2. Kubernetes

2.1. Install containerd (this setup does not use Docker)

In the k-master-1 shell session

$ cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

$ modprobe overlay
$ modprobe br_netfilter

# Set the required sysctl parameters; they persist across reboots
$ cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
# Apply the sysctl parameters without rebooting
$ sysctl --system

$ wget https://github.com/containerd/containerd/releases/download/v1.6.6/containerd-1.6.6-linux-amd64.tar.gz
$ tar Cxzvf /usr/local containerd-1.6.6-linux-amd64.tar.gz
$ mkdir -p /usr/local/lib/systemd/system/
$ wget https://raw.githubusercontent.com/containerd/containerd/main/containerd.service
$ mv containerd.service /usr/local/lib/systemd/system/
$ systemctl daemon-reload
$ systemctl enable --now containerd

$ wget https://github.com/opencontainers/runc/releases/download/v1.1.3/runc.amd64
$ install -m 755 runc.amd64 /usr/local/sbin/runc

$ wget https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz
$ mkdir -p /opt/cni/bin
$ tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.1.1.tgz

$ mkdir /etc/containerd/
$ containerd config default>/etc/containerd/config.toml
$ vi /etc/containerd/config.toml
#将SystemdCgroup改為true,sandbox_image改為"k8s.gcr.io/pause:3.7"

$ systemctl daemon-reload
$ systemctl restart containerd
           
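
The two config.toml changes above can also be scripted instead of edited by hand; a minimal sketch, assuming the default layout produced by containerd config default (run these before the systemctl restart above):

    $ sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
    $ sed -i 's|sandbox_image = ".*"|sandbox_image = "k8s.gcr.io/pause:3.7"|' /etc/containerd/config.toml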

If your environment cannot reach Google directly, you must first pull every image in the list below, one by one (search for the domestic mirror that corresponds to each image); a pull-and-retag sketch follows the list

docker.io/calico/apiserver:v3.23.2
docker.io/calico/cni:v3.23.2
docker.io/calico/kube-controllers:v3.23.2
docker.io/calico/node:v3.23.2
docker.io/calico/pod2daemon-flexvol:v3.23.2
docker.io/calico/typha:v3.23.2
docker.io/grafana/grafana:9.0.2
docker.io/istio/examples-bookinfo-productpage-v1:1.16.4
docker.io/istio/examples-bookinfo-ratings-v1:1.16.4
docker.io/istio/examples-bookinfo-reviews-v2:1.16.4
docker.io/istio/examples-bookinfo-reviews-v3:1.16.4
docker.io/istio/pilot:1.14.1
docker.io/istio/proxyv2:1.14.1
docker.io/jimmidyson/configmap-reload:v0.5.0
docker.io/kubernetesui/dashboard:v2.6.0
docker.io/kubernetesui/metrics-scraper:v1.0.8
gcr.io/knative-releases/knative.dev/eventing/cmd/in_memory/channel_controller@sha256:97d7db62ea35f7f9199787722c352091987e8816d549c3193ee5683424fef8d0
gcr.io/knative-releases/knative.dev/eventing/cmd/in_memory/channel_dispatcher@sha256:3163f0a3b3ba5b81c36357df3dd2bff834056f2943c5b395adb497fb97476d20
gcr.io/knative-releases/knative.dev/net-istio/cmd/…@sha256:148b0ffa1f4bc9402daaac55f233581f7999b1562fa3079a8ab1d1a8213fd909
gcr.io/knative-releases/knative.dev/serving/cmd/activator@sha256:08315309da4b219ec74bb2017f569a98a7cfecee5e1285b03dfddc2410feb7d7
gcr.io/knative-releases/knative.dev/serving/cmd/autoscaler@sha256:105bdd14ecaabad79d9bbcb8359bf2c317bd72382f80a7c4a335adfea53844f2
gcr.io/knative-releases/knative.dev/serving/cmd/controller@sha256:bac158dfb0c73d13ed42266ba287f1a86192c0ba581e23fbe012d30a1c34837c
gcr.io/knative-releases/knative.dev/serving/cmd/domain-mapping-webhook@sha256:15f1ce7f35b4765cc3b1c073423ab8d8bf2c8c2630eea3995c610f520fb68ca0
gcr.io/knative-releases/knative.dev/serving/cmd/queue@sha256:e384a295069b9e10e509fc3986cce4fe7be4ff5c73413d1c2234a813b1f4f99b
gcr.io/knative-releases/knative.dev/serving/cmd/webhook@sha256:1282a399cbb94f3b9de4f199239b39e795b87108efe7d8ba0380147160a97abb
k8s.gcr.io/coredns/coredns:v1.8.6
k8s.gcr.io/etcd:3.5.3-0
k8s.gcr.io/kube-apiserver:v1.24.3
k8s.gcr.io/kube-controller-manager:v1.24.3
k8s.gcr.io/kube-proxy:v1.24.3
k8s.gcr.io/kube-scheduler:v1.24.3
k8s.gcr.io/kube-state-metrics/kube-state-metrics:v2.5.0
k8s.gcr.io/metrics-server/metrics-server:v0.6.1
k8s.gcr.io/pause:3.7
k8s.gcr.io/prometheus-adapter/prometheus-adapter:v0.9.1
quay.io/brancz/kube-rbac-proxy:v0.13.0
quay.io/prometheus-operator/prometheus-config-reloader:v0.57.0
quay.io/prometheus-operator/prometheus-operator:v0.57.0
quay.io/prometheus/alertmanager:v0.24.0
quay.io/prometheus/blackbox-exporter:v0.21.1
quay.io/prometheus/node-exporter:v1.3.1
quay.io/prometheus/prometheus:v2.36.2
quay.io/tigera/operator:v1.27.7
registry.k8s.io/autoscaling/addon-resizer:1.8.14
registry.k8s.io/metrics-server/metrics-server:v0.5.2
           
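
A sketch of the pull-and-retag step for one image, assuming registry.example.com stands in for whatever domestic mirror you find (it is a placeholder, not a real mirror address):

    # Pull from the mirror into the k8s.io namespace that containerd/kubelet uses,
    # then tag the image back to the original name that the manifests reference
    $ ctr -n k8s.io images pull registry.example.com/pause:3.7
    $ ctr -n k8s.io images tag registry.example.com/pause:3.7 k8s.gcr.io/pause:3.7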

2.2. Install cni

$ mkdir -p /root/go/src/github.com/containerd/
$ cd /root/go/src/github.com/containerd/
$ git clone https://github.com/containerd/containerd.git
$ apt install golang-go
$ cd containerd/script/setup
$ vi install-cni # Replace "$GOPATH" with /root/go (or use the sed one-liner after this block)
$ ./install-cni
           
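
The same edit non-interactively; a minimal sketch, assuming the file contains the literal string "$GOPATH" as the step above describes:

    $ sed -i 's|"$GOPATH"|/root/go|g' install-cni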

2.3. Install kubelet/kubeadm/kubectl

$ cd ~
$ apt-get update
$ apt-get install -y apt-transport-https ca-certificates curl

$ curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg # If you cannot reach Google, find a domestic mirror of this file
$ echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | tee /etc/apt/sources.list.d/kubernetes.list

$ apt-get update
$ apt-get install -y kubelet kubeadm kubectl
$ apt-mark hold kubelet kubeadm kubectl
           

2.4. Initialize the kubernetes cluster

$ kubeadm init --pod-network-cidr=10.244.0.0/16
$ kubectl taint nodes --all node-role.kubernetes.io/control-plane- node-role.kubernetes.io/master-
           

From this point on, keep a separate shell session open, logged in to 172.28.176.4 as root, running the following command so you can watch the cluster's pods in real time (healthy pods should all reach the Running state)

$ watch kubectl get pods -A -o wide 
           

2.5. Install the calico network plugin

$ kubectl create -f https://projectcalico.docs.tigera.io/manifests/tigera-operator.yaml

$ wget https://projectcalico.docs.tigera.io/manifests/custom-resources.yaml
$ vi custom-resources.yaml # Change cidr to 10.244.0.0/16; the edited block should look like the sketch below

$ kubectl apply -f custom-resources.yaml
           
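
For reference, after the edit the Installation resource in custom-resources.yaml should look roughly like this (a sketch; apart from the cidr, the values are the upstream defaults of the calico v3.23 manifest):

    apiVersion: operator.tigera.io/v1
    kind: Installation
    metadata:
      name: default
    spec:
      calicoNetwork:
        ipPools:
        - blockSize: 26
          cidr: 10.244.0.0/16    # was 192.168.0.0/16; must match --pod-network-cidr from kubeadm init
          encapsulation: VXLANCrossSubnet
          natOutgoing: Enabled
          nodeSelector: all()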

2.6. Create and join the k8s worker node

  • In the k-node-1 shell session, repeat 2.1, 2.2, and 2.3
  • In the k-master-1 shell session, generate a token and the CA certificate hash (a one-command alternative is sketched after this list)
    $ kubeadm token create
    $ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
               
  • In the k-node-1 shell session
    # token is the output of the first command above; the value after sha256: is the output of the second
    $ kubeadm join 172.28.176.4:6443 --token y0xyix.ree0ap7zll8vicai --discovery-token-ca-cert-hash sha256:3a2d98d658f5bef44a629dc25bb96168ebd01ec203ac88d3854fd3985b3befec 
               
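
Alternatively, kubeadm can emit the token, hash, and join command in a single step on the master; run the printed command on k-node-1 as-is:

    $ kubeadm token create --print-join-command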

3. Install the kubernetes dashboard

In the k-master-1 shell session

  • Install
    $ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.6.0/aio/deploy/recommended.yaml
    $ kubectl edit service kubernetes-dashboard -n kubernetes-dashboard # Change type from ClusterIP to NodePort (a kubectl patch alternative is sketched after this list)
    $ kubectl get service kubernetes-dashboard -n kubernetes-dashboard # Note the node port mapped to 443, e.g. 30341
               
  • Create a user
    $ vi dashboard-adminuser.yaml
    # Enter the following content and save
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: admin-user
      namespace: kubernetes-dashboard
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: admin-user
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
    - kind: ServiceAccount
      name: admin-user
      namespace: kubernetes-dashboard
               
    $ kubectl apply -f dashboard-adminuser.yaml
    $ kubectl -n kubernetes-dashboard create token admin-user # Record the token that is printed
               
  • In a browser on the Windows host, open https://172.28.176.4:30341 (the port obtained two steps above) and enter the token generated in the previous step; you should see the dashboard UI
    [screenshot: Kubernetes Dashboard]
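
The service-type change above can also be done without an interactive editor; a minimal sketch using kubectl patch:

    $ kubectl -n kubernetes-dashboard patch service kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'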

4. Install prometheus/grafana/alertmanager

$ go install -a github.com/jsonnet-bundler/jsonnet-bundler/cmd/jb@latest
$ ln -s /root/go/bin/jb /usr/local/bin/jb

$ go install github.com/google/go-jsonnet/cmd/jsonnet@latest
$ ln -s /root/go/bin/jsonnet /usr/local/bin/jsonnet

$ go install github.com/brancz/gojsontoyaml@latest
$ ln -s /root/go/bin/gojsontoyaml /usr/local/bin/gojsontoyaml

$ mkdir my-kube-prometheus
$ cd my-kube-prometheus/

$ jb init
$ jb install github.com/prometheus-operator/kube-prometheus/jsonnet/kube-prometheus@main

$ wget https://raw.githubusercontent.com/prometheus-operator/kube-prometheus/main/example.jsonnet -O example.jsonnet
$ wget https://raw.githubusercontent.com/prometheus-operator/kube-prometheus/main/build.sh -O build.sh
$ chmod +x build.sh

$ jb update
$ vi example.jsonnet # Uncomment this line: (import 'kube-prometheus/addons/node-ports.libsonnet') +

$ ./build.sh example.jsonnet

$ kubectl apply --server-side -f manifests/setup
$ kubectl apply -f manifests/

$ cd ~

$ kubectl -n monitoring delete networkpolicies.networking.k8s.io --all

$ kubectl get service -n monitoring # Note the node ports for alertmanager-main, grafana, and prometheus-k8s, e.g. 30903, 30902, 30900
           

In a browser on the Windows host, open http://172.28.176.4:30903, http://172.28.176.4:30902, and http://172.28.176.4:30900 (the ports obtained in the previous step); you should see the Alertmanager, Grafana, and Prometheus web UIs respectively.

[screenshots: Alertmanager, Grafana, and Prometheus web UIs]

5. Install Istio

$ wget https://github.com/istio/istio/releases/download/1.14.1/istioctl-1.14.1-linux-amd64.tar.gz
$ watch kubectl get pods -A -o wide
$ tar Cxzvf . istioctl-1.14.1-linux-amd64.tar.gz
$ mv istioctl /usr/local/bin/

$ vi /etc/kubernetes/manifests/kube-apiserver.yaml # After - --enable-admission-plugins=NodeRestriction append: ,MutatingAdmissionWebhook,ValidatingAdmissionWebhook
$ wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
$ vi components.yaml # After - --metric-resolution=15s add a line: - --kubelet-insecure-tls
$ kubectl apply -f components.yaml
$ kubectl top pods -A # Confirm you get a resource-usage list for the pods

$ curl -L https://istio.io/downloadIstio | sh -
$ cd istio-1.14.1/
$ istioctl install --set profile=default -y

$ kubectl label namespace default istio-injection=enabled

$ kubectl edit service istio-ingressgateway -n istio-system # Before the line externalTrafficPolicy: Cluster, add the following two lines:
  externalIPs:
  - 172.28.176.4
$ kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml # If this fails to deploy, switch containerd's proxy with the next command; once it succeeds, restore the proxy with the command after that (a sketch of such a drop-in follows this block)
$ cp ~/http_proxy.conf.ml /etc/systemd/system/containerd.service.d/http_proxy.conf
$ cp ~/http_proxy.conf.www /etc/systemd/system/containerd.service.d/http_proxy.conf

$ cd ~

$ kubectl exec "$(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}')" -c ratings -- curl -sS productpage:9080/productpage | grep -o "<title>.*</title>" # Should produce the following output
<title>Simple Bookstore App</title>
           
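
The http_proxy.conf.* files referenced above are the author's own proxy drop-ins and their contents are not shown; a containerd proxy drop-in of this kind typically looks like the following sketch (proxy.example.com:8080 is a placeholder, not a value from the original):

    # /etc/systemd/system/containerd.service.d/http_proxy.conf (hypothetical values)
    [Service]
    Environment="HTTP_PROXY=http://proxy.example.com:8080"
    Environment="HTTPS_PROXY=http://proxy.example.com:8080"
    Environment="NO_PROXY=localhost,127.0.0.1,10.244.0.0/16,172.28.176.0/20"

After swapping the file, run systemctl daemon-reload && systemctl restart containerd for the change to take effect.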

6. Install Knative

$ wget https://github.com/knative/client/releases/download/knative-v1.6.0/kn-linux-amd64
$ chmod +x kn-linux-amd64
$ mv kn-linux-amd64 /usr/local/bin/kn

$ kubectl apply -f https://github.com/knative/operator/releases/download/knative-v1.6.0/operator.yaml
           
$ vi knative-serving.yaml # Write the following content and save

apiVersion: v1
kind: Namespace
metadata:
  name: knative-serving
---
apiVersion: operator.knative.dev/v1beta1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
           
$ kubectl apply -f knative-serving.yaml
           
$ vi knative-eventing.yaml # Write the following content and save

apiVersion: v1
kind: Namespace
metadata:
  name: knative-eventing
---
apiVersion: operator.knative.dev/v1beta1
kind: KnativeEventing
metadata:
  name: knative-eventing
  namespace: knative-eventing
           
$ kubectl apply -f knative-eventing.yaml
           
$ vi hello.yaml # Write the following content and save

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          ports:
            - containerPort: 8080
          env:
            - name: TARGET
              value: "World"
           
$ kubectl apply -f hello.yaml
           

On the Windows host, edit C:\Windows\System32\drivers\etc\hosts and add:

172.28.176.4 hello.default.example.com

Open the following URL in a browser; you should get a hello world response, and in the pod list you should see the hello pod start and stop as requests arrive and idle out (a curl test that skips the hosts edit is sketched below):

http://hello.default.example.com
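
From the master's shell, the same route can be exercised without touching the hosts file by supplying the Host header directly; a minimal sketch (the ingress gateway's external IP was set to 172.28.176.4 above):

    $ curl -H "Host: hello.default.example.com" http://172.28.176.4
    Hello World!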

7. Miscellaneous

  • These steps are only guaranteed to work with the component versions below (for other components' versions, see the image list in section 2.1); if a component version changes, adjust the steps as errors come up
    $ kubeadm version -o short
    v1.24.3
    
    $ kubectl version --short
    Flag --short has been deprecated, and will be removed in the future. The --short output will become the default.
    Client Version: v1.24.3
    Kustomize Version: v4.5.4
    Server Version: v1.24.3
    
    $ ctr version
    Client:
      Version:  v1.6.6
      Revision: 10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1
      Go version: go1.17.11
    
    Server:
      Version:  v1.6.6
      Revision: 10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1
      UUID: 272b3b50-f95d-42aa-9d36-6558b245afb0
    
    $ istioctl version
    client version: 1.14.1
    control plane version: 1.14.1
    data plane version: 1.14.1 (7 proxies)
    
    $ kn version
    Version:      v20220713-local-bfdc0a21
    Build Date:   2022-07-13 09:04:48
    Git Revision: bfdc0a21
    Supported APIs:
    * Serving
      - serving.knative.dev/v1 (knative-serving v0.33.0)
    * Eventing
      - sources.knative.dev/v1 (knative-eventing v0.33.0)
      - eventing.knative.dev/v1 (knative-eventing v0.33.0)
               
  • The pod list after everything is installed:
    $ watch kubectl get pods -A -o wide # The output below is the final pod list once everything is installed
    NAMESPACE              NAME                                        READY   STATUS    RESTARTS       AGE   IP               NODE         NOMINATED NODE   READINESS GATES
    calico-apiserver       calico-apiserver-bddc5fc5c-2hlct            1/1     Running   3 (24h ago)    38h   10.244.150.133   k-master-1   <none>           <none>
    calico-apiserver       calico-apiserver-bddc5fc5c-q6kls            1/1     Running   3 (24h ago)    38h   10.244.150.132   k-master-1   <none>           <none>
    calico-system          calico-kube-controllers-86dff98c45-64r9z    1/1     Running   0              38h   10.244.150.131   k-master-1   <none>           <none>
    calico-system          calico-node-6n4wm                           1/1     Running   5 (15h ago)    25h   172.28.176.5     k-node-1     <none>           <none>
    calico-system          calico-node-zx8nn                           1/1     Running   0              38h   172.28.176.4     k-master-1   <none>           <none>
    calico-system          calico-typha-77d7886577-lccxl               1/1     Running   0              38h   172.28.176.4     k-master-1   <none>           <none>
    default                details-v1-b48c969c5-2m8c2                  2/2     Running   0              15h   10.244.161.197   k-node-1     <none>           <none>
    default                knative-operator-85547459f-m98hf            1/1     Running   0              15h   10.244.161.194   k-node-1     <none>           <none>
    default                productpage-v1-74fdfbd7c7-6vw7n             2/2     Running   0              21h   10.244.150.165   k-master-1   <none>           <none>
    default                ratings-v1-b74b895c5-krqv4                  2/2     Running   0              21h   10.244.150.164   k-master-1   <none>           <none>
    default                reviews-v1-68b4dcbdb9-vtgzb                 2/2     Running   0              15h   10.244.161.198   k-node-1     <none>           <none>
    default                reviews-v2-565bcd7987-47jzb                 2/2     Running   0              21h   10.244.150.162   k-master-1   <none>           <none>
    default                reviews-v3-d88774f9c-whkcf                  2/2     Running   0              21h   10.244.150.163   k-master-1   <none>           <none>
    istio-system           istio-ingressgateway-545d46d996-cf7xw       1/1     Running   0              23h   10.244.150.153   k-master-1   <none>           <none>
    istio-system           istiod-59876b79cd-wqz64                     1/1     Running   0              15h   10.244.161.252   k-node-1     <none>           <none>
    knative-eventing       eventing-controller-5c8967885c-4r78h        1/1     Running   0              15h   10.244.161.248   k-node-1     <none>           <none>
    knative-eventing       eventing-webhook-7f9b5f7d9-mwxrc            1/1     Running   0              15h   10.244.161.246   k-node-1     <none>           <none>
    knative-eventing       imc-controller-7d9b5756cb-fzr5x             1/1     Running   0              15h   10.244.150.174   k-master-1   <none>           <none>
    knative-eventing       imc-dispatcher-76665c67df-cvz77             1/1     Running   0              15h   10.244.150.173   k-master-1   <none>           <none>
    knative-eventing       mt-broker-controller-b74d7487c-gc2rb        1/1     Running   0              15h   10.244.161.251   k-node-1     <none>           <none>
    knative-eventing       mt-broker-filter-545d9f864f-jjrf2           1/1     Running   0              15h   10.244.161.249   k-node-1     <none>           <none>
    knative-eventing       mt-broker-ingress-7655d545f5-hn86f          1/1     Running   0              15h   10.244.161.245   k-node-1     <none>           <none>
    knative-serving        activator-c7d578d94-2jlwd                   1/1     Running   0              20h   10.244.150.166   k-master-1   <none>           <none>
    knative-serving        autoscaler-6488988457-gth42                 1/1     Running   0              20h   10.244.150.167   k-master-1   <none>           <none>
    knative-serving        autoscaler-hpa-7d89d96584-pdzj8             1/1     Running   0              15h   10.244.161.250   k-node-1     <none>           <none>
    knative-serving        controller-768d64b7b4-ftdhk                 1/1     Running   0              15h   10.244.161.253   k-node-1     <none>           <none>
    knative-serving        domain-mapping-7598c5f659-f6vbh             1/1     Running   0              20h   10.244.150.169   k-master-1   <none>           <none>
    knative-serving        domainmapping-webhook-8c4c9fdc4-46cjq       1/1     Running   0              15h   10.244.161.254   k-node-1     <none>           <none>
    knative-serving        net-istio-controller-7d44658469-n5lg7       1/1     Running   0              15h   10.244.161.247   k-node-1     <none>           <none>
    knative-serving        net-istio-webhook-f5ccb98cf-jggpk           1/1     Running   0              20h   10.244.150.172   k-master-1   <none>           <none>
    knative-serving        webhook-df8844f6-47zh5                      1/1     Running   0              20h   10.244.150.171   k-master-1   <none>           <none>
    kube-system            coredns-6d4b75cb6d-2tp79                    1/1     Running   0              38h   10.244.150.129   k-master-1   <none>           <none>
    kube-system            coredns-6d4b75cb6d-bddjm                    1/1     Running   0              38h   10.244.150.130   k-master-1   <none>           <none>
    kube-system            etcd-k-master-1                             1/1     Running   0              38h   172.28.176.4     k-master-1   <none>           <none>
    kube-system            kube-apiserver-k-master-1                   1/1     Running   0              24h   172.28.176.4     k-master-1   <none>           <none>
    kube-system            kube-controller-manager-k-master-1          1/1     Running   1 (24h ago)    38h   172.28.176.4     k-master-1   <none>           <none>
    kube-system            kube-proxy-g8n8v                            1/1     Running   5 (15h ago)    25h   172.28.176.5     k-node-1     <none>           <none>
    kube-system            kube-proxy-zg5l8                            1/1     Running   0              38h   172.28.176.4     k-master-1   <none>           <none>
    kube-system            kube-scheduler-k-master-1                   1/1     Running   1 (24h ago)    38h   172.28.176.4     k-master-1   <none>           <none>
    kube-system            metrics-server-658867cdb7-999wn             1/1     Running   0              15h   10.244.150.175   k-master-1   <none>           <none>
    kubernetes-dashboard   dashboard-metrics-scraper-8c47d4b5d-8jmlv   1/1     Running   0              38h   10.244.150.137   k-master-1   <none>           <none>
    kubernetes-dashboard   kubernetes-dashboard-5676d8b865-lgtgn       1/1     Running   0              38h   10.244.150.136   k-master-1   <none>           <none>
    monitoring             alertmanager-main-0                         2/2     Running   0              25h   10.244.150.140   k-master-1   <none>           <none>
    monitoring             alertmanager-main-1                         2/2     Running   0              25h   10.244.150.141   k-master-1   <none>           <none>
    monitoring             alertmanager-main-2                         2/2     Running   0              25h   10.244.150.139   k-master-1   <none>           <none>
    monitoring             blackbox-exporter-5fb779998c-q7dkx          3/3     Running   0              25h   10.244.150.142   k-master-1   <none>           <none>
    monitoring             grafana-66bbbd4f57-v4wtp                    1/1     Running   0              25h   10.244.150.144   k-master-1   <none>           <none>
    monitoring             kube-state-metrics-98bdf47b9-h28kd          3/3     Running   0              25h   10.244.150.143   k-master-1   <none>           <none>
    monitoring             node-exporter-cw2mc                         2/2     Running   0              25h   172.28.176.4     k-master-1   <none>           <none>
    monitoring             node-exporter-nmhf5                         2/2     Running   10 (15h ago)   25h   172.28.176.5     k-node-1     <none>           <none>
    monitoring             prometheus-adapter-5f68766c85-clhh7         1/1     Running   0              25h   10.244.150.145   k-master-1   <none>           <none>
    monitoring             prometheus-adapter-5f68766c85-hh5d2         1/1     Running   0              25h   10.244.150.146   k-master-1   <none>           <none>
    monitoring             prometheus-k8s-0                            2/2     Running   0              25h   10.244.150.148   k-master-1   <none>           <none>
    monitoring             prometheus-k8s-1                            2/2     Running   0              25h   10.244.150.147   k-master-1   <none>           <none>
    monitoring             prometheus-operator-6486d45dc7-jjzw9        2/2     Running   0              26h   10.244.150.138   k-master-1   <none>           <none>
    tigera-operator        tigera-operator-5dc8b759d9-lwmb7            1/1     Running   1 (24h ago)    38h   172.28.176.4     k-master-1   <none>           <none>
               
