

Tekton CD in Practice (Part 2): From GitHub to a Kubernetes Cluster

(If you are interested in cloud-native technology, you are welcome to follow my WeChat official account, 雲原生手記.)

CI/CD flow diagram


CD Overview

The CD I discuss here means deploying a user's application into a Kubernetes cluster. Almost everyone runs Kubernetes these days; it makes managing applications convenient, and deploying them even more so: a single "kubectl apply -f <filename>" does the job. I will cover two ways of implementing CD:

  • one implemented with the kubectl command;
  • one implemented with client-go. The principle is the same; only the implementation differs.

Of course, existing CD tools such as Argo CD are also excellent and quite mature. I have not used Argo CD in my own projects yet (I haven't learned it, hahaha...). Besides, I need to collect information about every deployment and upload it to a centralized pipeline platform for statistics, so for now I use client-go for CD; it is easier to build that on top of my own code.

The kubectl approach

First we need a kubectl image. When using it, the user must specify two paths:

  • the path to the application's YAML files;
  • the path to the kubeconfig file.

The kubeconfig is therefore best kept inside the user's project, so that the CD stage can deploy the application to the specified cluster based on the kubeconfig path the user provides. Naturally, the environment Tekton runs in must have network connectivity to the cluster the kubeconfig points at.
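
With both paths known, the deploy step conceptually reduces to a single kubectl invocation; a sketch with hypothetical in-container paths:

# hypothetical paths inside the task container
kubectl --kubeconfig /workspace/my-project/deploy/kubeconfig \
    apply -f /workspace/my-project/deploy/yaml/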

Building the kubectl image

There are already kubectl images on Docker Hub that you can use directly. If you need to modify the Dockerfile, see the GitHub repository (https://github.com/bitnami/bitnami-docker-kubectl), which contains the Dockerfiles used to build kubectl images for each Kubernetes version. My cluster is v1.17, so I use the v1.17 kubectl Dockerfile as a reference:

FROM docker.io/bitnami/minideb:buster
LABEL maintainer "Bitnami <[email protected]>"

COPY prebuildfs /
# Install required system packages and dependencies
RUN install_packages ca-certificates curl gzip procps tar wget
RUN wget -nc -P /tmp/bitnami/pkg/cache/ https://downloads.bitnami.com/files/stacksmith/kubectl-1.17.4-1-linux-amd64-debian-10.tar.gz && \
    echo "5991e0bc3746abb6a5432ea4421e26def1a295a40f19969bb7b8d7f67ab6ea7d  /tmp/bitnami/pkg/cache/kubectl-1.17.4-1-linux-amd64-debian-10.tar.gz" | sha256sum -c - && \
    tar -zxf /tmp/bitnami/pkg/cache/kubectl-1.17.4-1-linux-amd64-debian-10.tar.gz -P --transform 's|^[^/]*/files|/opt/bitnami|' --wildcards '*/files' && \
    rm -rf /tmp/bitnami/pkg/cache/kubectl-1.17.4-1-linux-amd64-debian-10.tar.gz
RUN apt-get update && apt-get upgrade -y && \
    rm -r /var/lib/apt/lists /var/cache/apt/archives

ENV BITNAMI_APP_NAME="kubectl" \
    BITNAMI_IMAGE_VERSION="1.17.4-debian-10-r97" \
    HOME="/" \
    OS_ARCH="amd64" \
    OS_FLAVOUR="debian-10" \
    OS_NAME="linux" \
    PATH="/opt/bitnami/kubectl/bin:$PATH"

USER 1001
ENTRYPOINT [ "kubectl" ]
CMD [ "--help" ]
           

This kubectl Dockerfile does two things:

  • 1. installs kubectl;
  • 2. runs the kubectl --help command when the container starts.

What I need instead is:

  • 1. install kubectl;
  • 2. run the kubectl apply -f . command (I plan to implement this as a shell script in the task's step).

My Dockerfile is the same as the one above, except that I deleted the following lines, because I want to run my own command instead:

ENTRYPOINT [ "kubectl" ]
CMD [ "--help" ]
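
With those two lines gone, the rest of the image is unchanged. A minimal sketch of the resulting Dockerfile (the install steps are exactly the ones shown above, elided here):

FROM docker.io/bitnami/minideb:buster

# ... same package installation and kubectl download steps as in the bitnami Dockerfile above ...

ENV PATH="/opt/bitnami/kubectl/bin:$PATH"
USER 1001
# no ENTRYPOINT or CMD: the command comes from the shell script in the task step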
           

Specifying the YAML and kubeconfig paths

I pass the YAML path into the container as a Tekton parameter, and I mount the kubeconfig file into the pod at /tekton/home/.kube/ via a ConfigMap.

The configmap.yaml file:

apiVersion: v1
kind: ConfigMap
metadata:
  name: kubectl-config
  namespace: nanjun
data:
  config: |
    apiVersion: v1
    clusters:
      - cluster:
          certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJd01EUXhPREV5TkRNMU0xb1hEVE13TURReE5qRXlORE0xTTFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBS1lzCnR5OVBmREt1S01WZ0NXRURHVlhqakcxSm8vNGhibk15UWowbzRjS3FyQ0F4VVpkNXNEUVd2dHJmR1BJUXJsemQKMEpPOXo1SnhvZ01Lb2grMGdmYUcxTDEyRzVzWWYxenNUcmNtUGFReG5qNk1LajdScVBua2ZTdW1xUDZZRklORApCM3ZnMnZCUCtoQkhNZCtGTjZBcVFGZlQwaHNTNEtCd2dkMWx3aXlyQXIzZVRsWlVCUjFZWmxKZkFyRUVOb01BCjdrYVdwa3I2N2ZxTGdMVFlCRXNIREx3VmE2RExSMGJjSlZGS0VMWXV2VkZ5eTlxSnFMQmIrVWR2ZVU4L3dZeGQKT1pHVGNydzQ5WE5LZ3EyT1J3bHBxRi9DSXl2OThKcVBIbnlLQnpVN1ZrOVFxQWE5SFBIYXpUbHYvMmd0ZmgxSAp1aFpqYjRTOTluMHJUYWNyUVNVQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFLR3lNQVJHSnFNT1BkNkt0anhCaGs1SUNBTVIKenN0Q1RnVGQzT2ZjRzloVlBwVndGQmhSejd0ZFpSNWljRHpyU2JOV29KdzdpRVhlbURpTkhQdGxVM1dWa2dOQgpNWkNkb1hYOEhBVEFyV1Y1aXpZTnU3Nmd2bW5MUnVPRkhGOWNhRExOcnFUdEs4WmlwSXFIdlI1SGtpblZkWW9kClRZdU01QjdEQTZRRWE0K1lsMWFHQkNPMUlzd2RWaEpqVzhVd2IyK1Nibm9lWi95NE5XVWRzYTI1K2M1dWlVMUMKZ3JNNmdxQ1dteS9KQUxKSlBVRUsvYlppZ0Y2eUZOR290V2hhVlBJbG5VdXFFbW9SNWRnV0hDYkJQZXNrYm5yYQpNRTBvNVFWVzdXaUxzS25IS0ZneFhsN1EveWdlQm1WY3A1bXBoZTJiS21LcFh5dGpNc1U4bno1LzYzYz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
          server: https://10.142.113.40:6443
        name: kubernetes
    contexts:
      - context:
          cluster: kubernetes
          user: coderaction
        name: coderaction
      - context:
          cluster: kubernetes
          user: kubernetes-admin
        name: kubernetes-admin@kubernetes
    current-context: kubernetes-admin@kubernetes
    kind: Config
    preferences: {}
    users:
      - name: coderaction
        user:
          token: eyJhbGciOiJSUzI1NiIsImtpZCI6Im5CUjhEOGxtTU9naUhKQlFiX1VRUGFZV3AyTmNlWWl4Rjd5OUZ3RUg5c1EifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJja2EiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoiY2thLTA0MjgtdG9rZW4tOTlwazIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiY2thLTA0MjgiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJiOTQ2NTRjYy01ZDA3LTRmZDMtODcxNS1mMzkzYmQzYTExN2YiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6Y2thOmNrYS0wNDI4In0.LnhZxoNnftaUYc0xO2uTvn9TccvPgAnDelV1aIfWC7d0CVC_SHOS_AfSH3P1Yyq-Ny0yP8l1MtdbMfF_p3t_suG3ouleZMoIx9Lmt7WIq53adEINTh3fiaZ0zfEvZowNAxVJyodkwNiUiUprMlN9oh0bJNjwkOkjogx-9eEItSGVHq3eZ3XBlJ-g3GAJ27tEClMWg0raCGhmZKCT4m0rzoBNTf32QsNXOJGC97KN4Gx7Am2RWBTrpztCw5E39lJgYwYiDn7uHBf6fBQagj4bV6vTB4YJg-w7pDxUIwrRAgqNLPqwAcAvmeurEBBVuHdT43WXB1kLdz-vZwl4cob_Nw
      - name: kubernetes-admin
        user:
          client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM4akNDQWRxZ0F3SUJBZ0lJRTdOODA3Wm9lbHN3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TURBME1UZ3hNalF6TlROYUZ3MHlNVEEwTVRneE1qUXpOVGRhTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXRRSkRkSUovWmZkMmpJTXMKSFh1SmlpcC8xUWpxMFUzakZrVDlDL3cxcEZ2VlRqaFhqL0VYYkF5NStzcTJycG1sSlZMRlQ1R1U5b0MrNnN0MgpCVXpqS1NlT3NsNHFlNTc0WXlXM285M1VrNVVtUXVuMnp4RzFBNDR0WXJURXJZdkpYNGhFcHBrLzBCbVpiOXdqClJaWDFTTnZzQjFseC9CTDlCVmhaOHM1Wm1QSmdLcm95dm0vSm5ycmFvcnV5S1RreENZVmxwbHVBblFGb0tUMjcKRFA0YVZsZkZKWTExUnhzZXhSayt4L2hBVWt6VE9jVXFNVHU4Y25aVkliRFYrZ1Y5blNPUVBMelRDOVNNbi9DaApGdHA2Nm5QWFpkQWIxUWpCWW5RV2FTZTkwcTQzWTVYT0grcmhSREFsZEZLMER5OEhteUhjTDd4NVEyVUpRQ2hoCnpmaE1tUUlEQVFBQm95Y3dKVEFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFEdklVMWZJQmw2ZC8veTd1VzZnby9FRFZBYlpTckt5eGNOVgprMUdqdVRBR0dpNldja2FkQ2IwdWJIUTkxWS9GbVRyRm5nb25rckpuSWF2R2R1bUh5KzZCKzZpSUliSE1PeUdaCjduSTJMZWExNkl2MEJkOGE3amhzbTJuaWMwWlVXOVhQMElobjhQT0hMK0MyZ05lcDBIQ3pCRVFjQVA3bjMvR2oKZm9CTjhjTi9vNnhXQUR0U2U0UjR0YWJuYWtGZTVxU1Y0K2pBTmRyTXVYWDFDdjNUNlN6VGxPbVVZL0Z6QXlKTApnell2YUd1QnhxZlpmMkxoUXcxVElsZW82ajdkV0xWeGhySk15MlJKWG4wV1VlU2VRb3lFUVNzdDFkaVhyNmtkCkRrcTNsaE1EN1IzQ2Y0NXBlcU1CYmRTWlg0S3p2Yzd1SE1MTjVkeXpwV25Hb2FhbFhzST0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
          client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb2dJQkFBS0NBUUVBdFFKRGRJSi9aZmQyaklNc0hYdUppaXAvMVFqcTBVM2pGa1Q5Qy93MXBGdlZUamhYCmovRVhiQXk1K3NxMnJwbWxKVkxGVDVHVTlvQys2c3QyQlV6aktTZU9zbDRxZTU3NFl5VzNvOTNVazVVbVF1bjIKenhHMUE0NHRZclRFcll2Slg0aEVwcGsvMEJtWmI5d2pSWlgxU052c0IxbHgvQkw5QlZoWjhzNVptUEpnS3JveQp2bS9KbnJyYW9ydXlLVGt4Q1lWbHBsdUFuUUZvS1QyN0RQNGFWbGZGSlkxMVJ4c2V4UmsreC9oQVVrelRPY1VxCk1UdThjblpWSWJEVitnVjluU09RUEx6VEM5U01uL0NoRnRwNjZuUFhaZEFiMVFqQlluUVdhU2U5MHE0M1k1WE8KSCtyaFJEQWxkRkswRHk4SG15SGNMN3g1UTJVSlFDaGh6ZmhNbVFJREFRQUJBb0lCQUYxU3JtNmFmWTZYMktJMwpXdjVVWENSRkp5VXg5TWMyN2Zia1dNYmVJTlg5bHVzK056NzZZVVlQQmJBYzViVDllRnpXNE8zV05FUW5Pc2VaCllONzR0a0hZcUVTa01pa01YQ25hSDJVNEVNcUtZbkNyYWRsMjJxbmJtdURDTElrQmdqQmo5R2trcC9ibHkrc1YKUjRZdis0ZTJBMm9DbnJjRkh6aXJSYXplNE9qdVpyZS8wYzM1SGFzSGtiMUhuRGRDbVNLcmdyQloybURpZTUvWgpPdWJCem82UjI1UG1Yb0xUczFUamZzbDB1bnE5UWYrR2sxUDg1U0txb1Qza2dGRDJZc1VraHZjalRRdmNuWkIyCmlYOVBBMnRUNlhxaGJxMnFmRHRLei8wM1k1MGxqQnZ6c0NmSWp4bHViSHdSWHhwNGh5SXI0V2xFQ0xwakNtY1AKc0ovVTY1RUNnWUVBN2hFblVqNUZlVEljRWZxNC9jYVMzY29hWHNTMXBHMXJBcEY5TjVXdjNjbDdJMGxwd0lHYwo5L1E0TVYzWFM3VVo0eDhKZyt4MGRJbGl2RjVnT0IzcFZXR3luMnhUeHIxbDlGMzllYzFwcU9VNllQSStoM2svCm1NRGlpR1NyekRnOUtVUlFlM3dVOTNjbzNudGdSZ3doaVcrVnVzandITlRDaUV0MnAzVDIrcDBDZ1lFQXdxVFAKTGw5dG1tZ3pqbXpNVVRmcmlHckY2VGlEMzlFbCtWSzFoNVNEMUc2ZEZINCt4QnJMVVAxTkVIK2djZEdQeWY4aQpveE1nM3dwejF6L1gwdk93cWkvTFFEdVhrdE1jN3NtVkVkVGplL0VtNk5RbFpxSlFReXhNM1YyQkhMZDVLUFl2CmN1ZUlmWm9hY0V4VG9sQXUvY0pzbjQ0NjhHbHNoK1Vqbm9QM2l5MENnWUFjejFDZDRGRlNBR0ZyUDVkQmh0VmgKSjhNWE11RDBmQlZXSXpzdkRkdFJrTDlwSHNwQWRLOEZSclhDSzZRUlVtSkduUXZ1dmgrOXRwNlBRekNMdWZyeAp6VGZybVJWdVdKOU0razdoZloxS3hpclJicDlvajZERm9Kb0pmWDFZNG5sc1ZBc1ZWb2ZIQnRHWVV2L3NtaTA0Cno1c2tGb3NRUWlNa2tWVlRvSkQrOVFLQmdFQ1ZFSTB4YXB0dDhaVlRNaVBNcXlEVFZLR0NkL2NlWFR3eG5qdkQKSWs2cytQK2d0OUMzbHpoakkxdlREUGhXOFIrendObGM4bTR1K0txMTZ6VjZWK2JQL3Q5c0ptbTRGSVNDYkN6RApkMHRiZzI2RFhYbUZaNTR5SjdyWFdJeWZyOXJRZklQaW9ONFQ4S3ZNRjMvbW5RRGpyc2p1RjA1SG5KUW1pa0FCClIzUnRBb0dBYitQTGVUVWFHMDc4M3pSckJuWENyVi9SaXBjT3U2czBjdkxiQ2x6dGw5Ymc1QStITEFHUzB5SmYKWWUwUS9JSGZ0N1Vjb0UwRytOaWVFUmE5c2Y3SDFJNk1ZcFUvYzdKdHlNNmdWbS9vMGZibHNjL3JHdVl3MDVtSwo1WE9hUlFhNkM4eWllVVVtZ3dnaXhjcnZ3YlJ2UmZPaHMwQlhRMW44a3VxWDBuS2REY0U9Ci0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
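
Rather than pasting the kubeconfig in by hand, you can generate this ConfigMap from an existing kubeconfig file. A one-liner, assuming your kubeconfig sits at $HOME/.kube/config (the key must be named config, since that becomes the filename the volume mount produces):

kubectl create configmap kubectl-config -n nanjun \
    --from-file=config=$HOME/.kube/config --dry-run -o yaml > configmap.yaml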
           

Adding the deployment step on top of CI

The main changes are in task.yaml: add a parameter for the YAML file path, add the deployment step, and mount the kubeconfig file.

task.yaml:

apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: build-and-push
  namespace: nanjun
spec:
  inputs:
    resources:
      - name: golang-resource
        type: git
    params:
      - name: dockerfile-path
        type: string
        default: $(inputs.resources.golang-resource.path)/
        description: dockerfile path
      - name: yaml-path # path of the YAML files to deploy
        type: string
        default: $(inputs.resources.golang-resource.path)/yaml
        description: yaml path
  outputs:
    resources:
      - name: builtImage
        type: image
  steps:
    - name: build-with-golangenv
      image: golang
      script: |
        #!/usr/bin/env sh
        cd $(inputs.resources.golang-resource.path)/
        export GO111MODULE=on
        export GOPROXY=https://goproxy.io
        go mod download
        go build -o main .
    - name: image-build-and-push
      image: docker:stable
      script: |
        #!/usr/bin/env sh
        docker login registry.nanjun
        docker build --rm --label buildNumber=1 -t $(outputs.resources.builtImage.url) $(inputs.params.dockerfile-path)
        docker push  $(outputs.resources.builtImage.url)
      volumeMounts:
      - name: docker-sock
        mountPath: /var/run/docker.sock
      - name: hosts
        mountPath: /etc/hosts
    - name: deploy-to-k8s # the new deployment step
      image: registry.nanjun/tekton/kubectl:v1.0
      script: |
        #!/usr/bin/env sh
        path=$(inputs.params.yaml-path)
        ls $path
        ls $HOME/.kube/
        cat $HOME/.kube/config
        files=$(ls $path)
        for filename in $files
         do
           kubectl apply -f $path"/"$filename
           if [ $? -ne 0 ]; then
              echo "failed"
              exit 1
           else
              echo "succeed"
           fi
        done
      volumeMounts:
      - name: kubeconfig
        mountPath: /tekton/home/.kube/
  volumes:
    - name: docker-sock
      hostPath:
        path: /var/run/docker.sock
    - name: hosts
      hostPath:
        path: /etc/hosts
    - name: kubeconfig # mount the kubeconfig file
      configMap:
        name: kubectl-config
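
To actually execute this Task, you still need a TaskRun that binds the input and output resources. A minimal v1alpha1 sketch, where golang-git and app-image are hypothetical PipelineResource names; yaml-path is omitted here and falls back to its default:

apiVersion: tekton.dev/v1alpha1
kind: TaskRun
metadata:
  name: build-and-push-run
  namespace: nanjun
spec:
  taskRef:
    name: build-and-push
  inputs:
    resources:
      - name: golang-resource
        resourceRef:
          name: golang-git # hypothetical git PipelineResource
  outputs:
    resources:
      - name: builtImage
        resourceRef:
          name: app-image # hypothetical image PipelineResource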
           

YAML files for kubectl-based deployment

For the concrete files, see the GitHub repository below, under the kubectl2apply directory.

GitHub repository: https://github.com/fishingfly/githubCICD.git

For the CI part, see the previous post, "Tekton CI in Practice (Part 1): From GitHub to the Harbor Registry"; here we only add the kubectl deployment step to the task.

Implementing CD with client-go

The reason for using client-go for deployment is that I need to upload some CI/CD deployment results to a statistics platform, and writing code is more convenient and flexible than shelling out to kubectl. The deployment logic here is blunt: delete the resource objects described by the YAML first, then recreate them. You could of course add code later to implement a canary-release flow (I may cover that in a future post). Finally, I package the client-go program into an image; at deploy time you specify the paths, inside the project, of the YAML files and the kubeconfig file:

flag.StringVar(&kubeconfigPath, "kubeconfig_path", "./test-project/deploy/develop/kubeconfig", "kubeconfig_path")
flag.StringVar(&yamlPath, "yaml_path", "./test-project/deploy/develop/yaml", "yaml_path")
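
Before the deployment logic can run, the program has to turn these two flags into a dynamic client, a RESTMapper, and unstructured objects. A minimal sketch of that setup, assuming client-go v0.17-era signatures; the helper names buildClients and decodeYAML are mine, and error handling is kept short:

package main

import (
	"bytes"
	"io/ioutil"

	"k8s.io/apimachinery/pkg/api/meta"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	k8syaml "k8s.io/apimachinery/pkg/util/yaml"
	"k8s.io/client-go/discovery"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/restmapper"
	"k8s.io/client-go/tools/clientcmd"
)

// buildClients turns the kubeconfig path from the flag above into the dynamic
// client and RESTMapper that the deployment logic below relies on.
func buildClients(kubeconfigPath string) (dynamic.Interface, meta.RESTMapper, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
	if err != nil {
		return nil, nil, err
	}
	dclient, err := dynamic.NewForConfig(cfg)
	if err != nil {
		return nil, nil, err
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		return nil, nil, err
	}
	groupResources, err := restmapper.GetAPIGroupResources(dc)
	if err != nil {
		return nil, nil, err
	}
	return dclient, restmapper.NewDiscoveryRESTMapper(groupResources), nil
}

// decodeYAML reads one YAML file into an unstructured object and resolves its
// GroupVersionKind to the RESTMapping used with the dynamic client.
func decodeYAML(path string, mapper meta.RESTMapper) (unstructured.Unstructured, *meta.RESTMapping, error) {
	var unstruct unstructured.Unstructured
	data, err := ioutil.ReadFile(path)
	if err != nil {
		return unstruct, nil, err
	}
	if err := k8syaml.NewYAMLOrJSONDecoder(bytes.NewReader(data), 4096).Decode(&unstruct); err != nil {
		return unstruct, nil, err
	}
	gvk := unstruct.GroupVersionKind()
	mapping, err := mapper.RESTMapping(gvk.GroupKind(), gvk.Version)
	return unstruct, mapping, err
}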
           

I use client-go's dynamic client together with the unstructured package, which can handle every resource type in Kubernetes 1.17. Below is a snippet of the deployment logic:

// Logic: try to create the resource first; if it already exists, delete it,
// poll every 2 seconds until the deletion completes, then create it again.

_, err = dclient.Resource(mapping.Resource).Namespace(namespace).Create(&unstruct, metav1.CreateOptions{})
if err != nil {
	glog.Errorf("create resource error, error is %s\n", err.Error())
	if k8sErrors.IsAlreadyExists(err) || k8sErrors.IsInvalid(err) { // already exists, or invalid (e.g. a port is already taken): delete and recreate
		if err := dclient.Resource(mapping.Resource).Namespace(namespace).Delete(unstruct.GetName(), &metav1.DeleteOptions{}); err != nil {
			glog.Errorf("delete resource error, error is %s\n", err.Error())
			return err
		}
		// after a delete the resource is not cleaned up right away; creating
		// immediately would fail with an "is being deleted" error
		if err = pollGetResource(mapping, dclient, unstruct, namespace); err != nil { // poll until the resource is fully gone
			glog.Errorf("yaml file is %s, delete resource error, error is %s\n", v, err.Error())
			return err
		}
		_, err = dclient.Resource(mapping.Resource).Namespace(namespace).Create(&unstruct, metav1.CreateOptions{}) // create the resource
		if err != nil {
			glog.Errorf("create resource error, error is %s\n", err.Error())
			return err
		}
	} else {
		return err
	}
}
           

Below is the polling function: it checks every 2 seconds, with a 10-minute timeout; hitting the timeout counts as failure:

func pollGetResource(mapping *meta.RESTMapping, dclient dynamic.Interface, unstruct unstructured.Unstructured, namespace string) error {
	// Create the timeout channel once, outside the loop: calling time.After
	// inside the select would return a fresh channel on every iteration, so
	// the timeout would never fire.
	timeout := time.After(time.Minute * 10)
	for {
		select {
		case <-timeout: // still not deleted after 10 minutes: treat as failure
			glog.Error("pollGetResource time out, something wrong with k8s resource")
			return errors.New("ErrWaitTimeout")
		default:
			// fetch the object
			resource, err := dclient.Resource(mapping.Resource).Namespace(namespace).Get(unstruct.GetName(), metav1.GetOptions{})
			if err != nil {
				if k8sErrors.IsNotFound(err) {
					glog.Warningf("kind is %s, namespace is %s, name is %s, this app has been deleted.", unstruct.GetKind(), namespace, unstruct.GetName())
					return nil
				}
				glog.Errorf("get resource failed, error is %s", err.Error())
				return err
			}
			if resource != nil && resource.GetName() == unstruct.GetName() { // the resource still exists; keep waiting
				glog.Errorf("kind: %s, namespace: %s, name %s still exists", unstruct.GetKind(), namespace, unstruct.GetName())
			}
		}
		time.Sleep(time.Second * 2) // poll every 2 seconds
	}
}
           

Below is the run result:

(screenshot of the run result)

Summary

Given my current skill level, I implement CD in these plain ways; later I will try Argo CD for automated deployment. Both of my CD approaches carry the risk of exposing the kubeconfig. This is especially true of the client-go approach, where the kubeconfig file sits directly in the project, which is a security concern. To summarize when to use each approach:

  • the kubectl approach suits simple deployments;
  • the client-go approach suits scenarios where you need to collect statistics across the whole CI/CD flow.

The code and YAML files used in this post are in these two repositories:

  • https://github.com/fishingfly/githubCICD.git
  • https://github.com/fishingfly/deployYamlByClient-go.git

What's next

I will soon publish a series analyzing the Kubernetes scheduler source code, followed by a hands-on client-go series. Stay tuned!

Finally, if you feel what I write helps you, consider leaving a tip! It will keep me motivated! Hahaha...

I wish you smooth work, a happy life, and that none of your efforts go unrewarded!
