# Nginmesh Installation Guide: An Open-Source Service Mesh Component

This blog is mirrored on CSDN from http://zhaohuabing.com. To leave comments, please visit http://zhaohuabing.com/2018/01/02/nginmesh-install/

## Preface

Nginmesh is NGINX's open-source Service Mesh project, serving as a data-plane proxy in the Istio service mesh platform. It aims to provide layer-7 load balancing and service routing, is deployed as a sidecar integrated with Istio, and promises to make service-to-service communication easier "in a standard, reliable and secure way". Nginmesh released versions 0.2 and 0.3 in quick succession at the end of this year, adding service discovery, request forwarding, routing rules, and performance metrics collection.

> Note: This guide is based on Ubuntu 16.04; on CentOS, some of the installation commands may need minor adjustments.

## Installing the Kubernetes Cluster

A Kubernetes cluster consists of multiple components such as etcd, the API server, the scheduler, and the controller manager, and the configuration among these components is fairly involved. Installing and configuring each component by hand requires knowledge of Kubernetes, the operating system, and networking, which places high demands on the installer. kubeadm offers a simple and fast way to install a Kubernetes cluster while still providing good flexibility through an installation configuration file, so we use kubeadm to install the cluster.

First, following the kubeadm documentation, install docker, kubeadm, kubelet, and kubectl on every node that will be part of the Kubernetes cluster.

### Install Docker

```bash
apt-get update
apt-get install -y docker.io
```

### Install kubelet, kubeadm, and kubectl from Google's package repository

```bash
apt-get update && apt-get install -y apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
```

### Create the Kubernetes cluster with kubeadm

Nginmesh uses the Kubernetes Initializer mechanism to implement automatic sidecar injection. Initializers are currently an Alpha feature of Kubernetes and are disabled by default; they must be enabled through API server flags. We therefore first create a kubeadm-conf configuration file to set the API server's startup parameters:

```yaml
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
apiServerExtraArgs:
  admission-control: Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ValidatingAdmissionWebhook,ResourceQuota,DefaultTolerationSeconds,MutatingAdmissionWebhook
  runtime-config: admissionregistration.k8s.io/v1alpha1
```

Create the Kubernetes master node with the kubeadm init command.

You can first validate the configuration file with the --dry-run flag. If everything is fine, kubeadm reports: Finished dry-running successfully. Above are the resources that would be created.

Then run the actual creation command.
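A minimal sketch of both steps, assuming the configuration above was saved as kubeadm-conf.yaml (the file name is illustrative, not from the original post):

```bash
# Validate the configuration without changing the node (file name is an assumption)
kubeadm init --config kubeadm-conf.yaml --dry-run

# If the dry run succeeds, create the master for real
kubeadm init --config kubeadm-conf.yaml
```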

kubeadm takes a little while to pull the docker images. When the command completes, it prints how to join a worker node to the cluster, as shown below:

```
kubeadm join --token fffbf6bcb3563428cf23 <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:ad08b4cd9f02e522334979deaf09e3fae80507afde63acf88892c8b72f143f
```
> Note: kubeadm currently only supports installing the master on a single node; a highly available installation will be supported in a later release. The workaround suggested by the Kubernetes project is to back up the etcd data periodically; see [kubeadm limitations](https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#limitations).
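A minimal sketch of such a periodic backup, assuming etcdctl v3 is installed on the master and kubeadm's etcd is listening on its default local endpoint (both are assumptions, not part of the original post):

```bash
# Snapshot etcd to a dated file (endpoint and target path are assumptions)
ETCDCTL_API=3 etcdctl --endpoints=http://127.0.0.1:2379 \
  snapshot save /var/backups/etcd-$(date +%F).db
```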

kubeadm does not install a Pod network, so we need to install one manually; here we use Calico:

```bash
kubectl apply -f https://docs.projectcalico.org/v2.6/getting-started/kubernetes/installation/hosted/kubeadm/1.6/calico.yaml
```

Check the result of the master node installation with kubectl:

```
$ kubectl get all
NAME             TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
svc/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   12m
```

Run the kubeadm join command above on each worker node to add it to the cluster, then inspect the cluster's nodes with kubectl:

```
$ kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
kube-1   Ready    master   21m   v1.9.0
kube-2   Ready    <none>   47s   v1.9.0
```

## Installing the Istio Control Plane and Bookinfo

Install the Istio control plane and the Bookinfo sample application by following the [Nginmesh documentation](https://github.com/nginmesh/nginmesh). The steps there are clear and straightforward, so they are not repeated here.

Note that the Nginmesh documentation suggests accessing the Bookinfo application through the Ingress External IP. However, [a LoadBalancer only takes effect in supported cloud environments](https://kubernetes.io/docs/concepts/services-networking/service/#type-loadbalancer), and even there some configuration is required. For example, for the cluster I created on OpenStack, OpenStack has to be configured according to [this document](https://docs.openstack.org/magnum/ocata/dev/kubernetes-load-balancer.html) before it can back Kubernetes LoadBalancer services. Without that configuration, the Ingress External IP stays in the pending state, as shown below:

```
NAME            TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                                                            AGE
istio-ingress   LoadBalancer   10.111.158.10   <pending>     80:32765/TCP,443:31969/TCP                                         11m
istio-mixer     ClusterIP      10.107.135.31   <none>        9091/TCP,15004/TCP,9093/TCP,9094/TCP,9102/TCP,9125/UDP,42422/TCP   11m
istio-pilot     ClusterIP      10.111.110.65   <none>        15003/TCP,443/TCP                                                  11m
```

If the cloud environment cannot be configured to provide the LoadBalancer feature, we can access the Bookinfo application directly through NodeIP:NodePort on one of the cluster nodes:

```
http://10.12.5.31:32765/productpage
```
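The NodePort (32765 above) can be read off the istio-ingress service; a quick sketch, assuming Istio was installed into the istio-system namespace:

```bash
# Print the NodePort that maps to the ingress HTTP port 80
kubectl -n istio-system get svc istio-ingress \
  -o jsonpath='{.spec.ports[?(@.port==80)].nodePort}'
```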

For more on accessing applications in a cluster from the outside, see [How to Access Applications in a Kubernetes Cluster from Outside?](http://zhaohuabing.com/2017/11/28/access-application-from-outside/)

## Examining the Auto-Injected Sidecar
Inspect the Pod of the Bookinfo reviews service with the `kubectl get pod reviews-v3-5fff595d9b-zsb2q -o yaml` command.

```yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    sidecar.istio.io/status: injected-version-0.2.12
  creationTimestamp: 2018-01-02T02:33:36Z
  generateName: reviews-v3-5fff595d9b-
  labels:
    app: reviews
    pod-template-hash: "1999151856"
    version: v3
  name: reviews-v3-5fff595d9b-zsb2q
  namespace: default
  ownerReferences:
  - apiVersion: extensions/v1beta1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: reviews-v3-5fff595d9b
    uid: 5599688c-ef65-11e7-8be6-fa163e160c7d
  resourceVersion: "3757"
  selfLink: /api/v1/namespaces/default/pods/reviews-v3-5fff595d9b-zsb2q
  uid: 559d8c6f-ef65-11e7-8be6-fa163e160c7d
spec:
  containers:
  - image: istio/examples-bookinfo-reviews-v3:0.2.3
    imagePullPolicy: IfNotPresent
    name: reviews
    ports:
    - containerPort: 9080
      protocol: TCP
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-48vxx
      readOnly: true
  - args:
    - proxy
    - sidecar
    - -v
    - "2"
    - --configPath
    - /etc/istio/proxy
    - --binaryPath
    - /usr/local/bin/envoy
    - --serviceCluster
    - reviews
    - --drainDuration
    - 45s
    - --parentShutdownDuration
    - 1m0s
    - --discoveryAddress
    - istio-pilot.istio-system:15003
    - --discoveryRefreshDelay
    - 1s
    - --zipkinAddress
    - zipkin.istio-system:9411
    - --connectTimeout
    - 10s
    - --statsdUdpAddress
    - istio-mixer.istio-system:9125
    - --proxyAdminPort
    - "15000"
    - --controlPlaneAuthPolicy
    - NONE
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.namespace
    - name: INSTANCE_IP
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: status.podIP
    image: nginmesh/proxy_debug:0.2.12
    imagePullPolicy: Always
    name: istio-proxy
    resources: {}
    securityContext:
      privileged: true
      readOnlyRootFilesystem: false
      runAsUser: 1337
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /etc/istio/proxy
      name: istio-envoy
    - mountPath: /etc/certs/
      name: istio-certs
      readOnly: true
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-48vxx
      readOnly: true
  dnsPolicy: ClusterFirst
  initContainers:
  - args:
    - -p
    - "15001"
    - -u
    - "1337"
    image: nginmesh/proxy_init:0.2.12
    imagePullPolicy: Always
    name: istio-init
    resources: {}
    securityContext:
      capabilities:
        add:
        - NET_ADMIN
      privileged: true
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-48vxx
      readOnly: true
  nodeName: kube-2
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - emptyDir:
      medium: Memory
    name: istio-envoy
  - name: istio-certs
    secret:
      defaultMode: 420
      optional: true
      secretName: istio.default
  - name: default-token-48vxx
    secret:
      defaultMode: 420
      secretName: default-token-48vxx
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: 2018-01-02T02:33:54Z
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: 2018-01-02T02:36:06Z
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: 2018-01-02T02:33:36Z
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://5d0c189b9dde8e14af4c8065ee5cf007508c0bb2b3c9535598d99dc49f531370
    image: nginmesh/proxy_debug:0.2.12
    imageID: docker-pullable://nginmesh/proxy_debug@sha256:6275934ea3a1ce5592e728717c4973ac704237b06b78966a1d50de3bc9319c71
    lastState: {}
    name: istio-proxy
    ready: true
    restartCount: 0
    state:
      running:
        startedAt: 2018-01-02T02:36:05Z
  - containerID: docker://aba3e114ac1aa87c75e969dcc1b0725696de78d3407c5341691d9db579429f28
    image: istio/examples-bookinfo-reviews-v3:0.2.3
    imageID: docker-pullable://istio/examples-bookinfo-reviews-v3@sha256:6e100e4805a8c10c47040ea7b66f10ad619c7e0068696032546ad3e35ad46570
    lastState: {}
    name: reviews
    ready: true
    restartCount: 0
    state:
      running:
        startedAt: 2018-01-02T02:35:47Z
  hostIP: 10.12.5.31
  initContainerStatuses:
  - containerID: docker://b55108625832a3205a265e8b45e5487df10276d5ae35af572ea4f30583933c1f
    image: nginmesh/proxy_init:0.2.12
    imageID: docker-pullable://nginmesh/proxy_init@sha256:f73b68839f6ac1596d6286ca498e4478b8fcfa834e4884418d23f9f625cbe5f5
    lastState: {}
    name: istio-init
    ready: true
    restartCount: 0
    state:
      terminated:
        containerID: docker://b55108625832a3205a265e8b45e5487df10276d5ae35af572ea4f30583933c1f
        exitCode: 0
        finishedAt: 2018-01-02T02:33:53Z
        reason: Completed
        startedAt: 2018-01-02T02:33:53Z
  phase: Running
  podIP: 192.168.79.138
  qosClass: BestEffort
  startTime: 2018-01-02T02:33:39Z
```

The output of this command is quite long. We can see that a nginmesh/proxy_debug container has been injected into the Pod, along with an initContainer, nginmesh/proxy_init. Both containers were injected into the Pod automatically by the Kubernetes Initializer. What do these two containers do? Let's look at the explanation in the Nginmesh source code:

  • proxy_debug, which comes with the agent and NGINX.
  • proxy_init, which is used for configuring iptables rules for transparently injecting an NGINX proxy from the proxy_debug image into an application pod.

proxy_debug is the sidecar proxy itself, while proxy_init is used to configure the iptables rules that redirect the application's traffic into the sidecar proxy.

Looking at the Dockerfile of proxy_init, we can see that proxy_init simply invokes the prepare_proxy.sh script to create the iptables rules.

proxy_init Dockerfile

```dockerfile
FROM debian:stretch-slim
RUN apt-get update && apt-get install -y iptables
ADD prepare_proxy.sh /
ENTRYPOINT ["/prepare_proxy.sh"]
```

prepare_proxy.sh (excerpt)

```sh
# ... omitted for brevity

# Create a new chain for redirecting inbound and outbound traffic to
# the common Envoy port.
iptables -t nat -N ISTIO_REDIRECT                                             -m comment --comment "istio/redirect-common-chain"
iptables -t nat -A ISTIO_REDIRECT -p tcp -j REDIRECT --to-port ${ENVOY_PORT}  -m comment --comment "istio/redirect-to-envoy-port"

# Redirect all inbound traffic to Envoy.
iptables -t nat -A PREROUTING -j ISTIO_REDIRECT                               -m comment --comment "istio/install-istio-prerouting"

# Create a new chain for selectively redirecting outbound packets to
# Envoy.
iptables -t nat -N ISTIO_OUTPUT                                               -m comment --comment "istio/common-output-chain"

# ... omitted for brevity
```
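To see the rules these commands produce in a running Pod, one could exec into the privileged istio-proxy container; a sketch, reusing the Pod name from the example above:

```bash
# List the NAT rules that proxy_init created in the pod's network namespace
kubectl exec reviews-v3-5fff595d9b-zsb2q -c istio-proxy -- iptables -t nat -L -n
```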

## Related Reading

Notes on installing and trying out Istio and the Bookinfo sample application

## References

  • Service Mesh with Istio and NGINX
  • Using kubeadm to Create a Cluster
  • Kubernetes Reference Documentation-Dynamic Admission Control
