Preface
A quick rant
When I joined the company on June 2, 2020, my first impression was that the cluster setup was a huge mess: one environment on the intranet, another on the public network. The intranet ran single-node Kubernetes; the public side ran an X-node Kubernetes cluster managed by Aliyun (some nodes with only 2 cores...). The intranet Kubernetes environment was almost unused (perhaps a backend developer touched it occasionally). The public X-node cluster could be called either the production cluster or the test cluster, thanks to the "genius" idea of separating environments by namespace!
The topic
Developers push code to the GitLab repository deployed on the public-network Kubernetes cluster, and CI/CD is handled by a gitlab-runner deployed on a Hong Kong server. Container images are also stored on that GitLab, i.e. on the public cluster. In short, a cluster rebuild was inevitable.
Bluntly, the rebuild is a rebuild of the CI/CD pipeline: put the intranet environment to use, add a staging environment, and remove the test environment from the public Kubernetes cluster.
Architecture diagram
Enterprise cluster architecture diagram
As the diagram shows, there are three environments: the development environment (intranet); the staging environment (in the same VPC as production); and the production environment (the original public environment, with upgraded specs).
Development environment
Developers: day-to-day development work happens here; most foundational development takes place in this environment.
Testers: mainly used to verify whether the current feature work meets expectations.
Staging environment
Effectively a replica of the real production environment: almost all configuration is identical, including but not limited to the OS version and software configuration, language runtimes, and database settings. It is the final verification environment before release; apart from not being open to users, its data is consistent with production. Product, operations, test, and development staff can all run a last pre-launch check here and catch problems early.
Environment comparison
Environment | Use case | Users | When used | Notes |
---|---|---|---|---|
Development | Daily development and test verification | Development and test engineers | When feature development is complete | |
Staging | Final pre-release verification | Development, test, product, operations, etc. | Before going live | |
gitlab-ci architecture diagram
As shown above, after a developer pushes code to GitLab, a CI script is triggered on a GitLab Runner. By writing CI scripts you can implement many useful steps: compiling, testing, building Docker images, pushing to the Aliyun image registry, and so on.
- 🔲 How does a GitLab instance deployed on the public network drive a Kubernetes cluster deployed on the intranet?
- 🔲 How does a GitLab Runner deployed on Kubernetes implement caching?
Deploying the environments
Deploying Kubernetes
The public network uses an Alibaba Cloud managed ACK Kubernetes cluster directly.
- 🔲 ingress nginx LB domain ???
The intranet is deployed as follows:
Clean up any old Kubernetes environment
rm -rf ~/.kube/
rm -rf /etc/kubernetes/
rm -rf /etc/systemd/system/kubelet.service.d
rm -rf /etc/systemd/system/kubelet.service
rm -rf /usr/bin/kube*
rm -rf /etc/cni
rm -rf /opt/cni
rm -rf /var/lib/etcd
rm -rf /var/etcd
Deploy single-node Kubernetes
docker info
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum list kubelet kubeadm kubectl --showduplicates | sort -r
yum install -y kubelet-1.17.5 kubeadm-1.17.5 kubectl-1.17.5
systemctl enable kubelet && systemctl start kubelet
systemctl status kubelet
kubeadm init --apiserver-advertise-address=10.17.1.44 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=1.17.5 --pod-network-cidr=10.244.0.0/16
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get cs
cd linkun/
cd zujian/
kubectl apply -f calico.yaml
kubectl get pods -A
kubectl apply -f metrice-server.yaml
kubectl apply -f nginx-ingress.yaml
kubectl get pods -A
Deploy GitLab
On a 4C8G ECS server:
yum install vim gcc gcc-c++ wget net-tools lrzsz iotop lsof iotop bash-completion -y
yum install curl policycoreutils openssh-server openssh-clients postfix -y
systemctl disable firewalld
sed -i '/SELINUX/s/enforcing/disabled/' /etc/sysconfig/selinux
wget https://mirrors.tuna.tsinghua.edu.cn/gitlab-ce/yum/el7/gitlab-ce-13.1.4-ce.0.el7.x86_64.rpm
yum localinstall gitlab-ce-13.1.4-ce.0.el7.x86_64.rpm
cp /etc/gitlab/gitlab.rb{,.bak}
vim /etc/gitlab/gitlab.rb
gitlab-ctl reconfigure
ss -lntup
After reconfiguring, disable user self-registration in the GitLab admin settings.
Deploy gitlab-runner
GitLab CI is the continuous integration system GitLab provides to its users; since GitLab 8.0 it has been integrated and enabled by default. GitLab Runner works together with GitLab CI: each project in GitLab can define a CI script with one or more stages such as build, compile, lint, test, and deploy. When code is pushed, GitLab automatically triggers GitLab CI, which finds a pre-registered GitLab Runner and notifies it to execute the predefined script.
Traditionally GitLab Runner runs on one or a few dedicated machines, installed via Docker or from source, which brings several pain points: a single point of failure takes out every runner on that machine; each runner machine has a different environment so it can serve different kinds of stages, and that divergence makes management painful; resources are unevenly allocated, with some runners congested while others sit idle; and idle runners never release their resources, so capacity is wasted. To address these pain points, we can run GitLab Runner inside a Kubernetes cluster and execute GitLab CI jobs dynamically. The overall flow is shown in the diagram below:
The benefits of this approach:
- High availability: when a node fails, Kubernetes automatically creates a new GitLab Runner container mounting the same runner configuration.
- Dynamic scaling with sensible resource use: for each job, GitLab Runner automatically creates one or more temporary runner pods; when the job finishes, the temporary runner deregisters itself and its container is deleted, releasing the resources. Kubernetes also places temporary runners on idle nodes based on per-node resource usage, reducing queueing behind busy nodes.
- Good scalability: when cluster resources are so tight that temporary runners queue up, simply add a Kubernetes node to the cluster to scale out.
Deploying with Helm
📎gitlab-runner.tar.gz
https://www.cnblogs.com/bolingcavalry/p/13200977.html
wget https://get.helm.sh/helm-v3.1.2-linux-amd64.tar.gz
tar xf helm-v3.1.2-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin/
chmod +x /usr/local/bin/helm
helm version
helm repo add gitlab https://charts.gitlab.io
helm fetch gitlab/gitlab-runner
helm install --name-template gitlab-runner -f values.yaml . --namespace gitlab-runner
helm uninstall gitlab-runner --namespace gitlab-runner
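The chart is driven by the values.yaml passed to `helm install`. A minimal sketch of the values involved (the URL and token below are placeholders, not values from this cluster):

```yaml
# values.yaml — minimal sketch for the gitlab/gitlab-runner chart
gitlabUrl: http://gitlab.example.com/           # placeholder GitLab address
runnerRegistrationToken: "REGISTRATION_TOKEN"   # placeholder; copied from the GitLab admin runners page
rbac:
  create: true          # let the chart create its own ServiceAccount/Role
runners:
  privileged: true      # required for docker-in-docker image builds
```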
Deploying with a StatefulSet
Namespace
# cat gitlab-runner-namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: gitlab-runner
# kubectl apply -f gitlab-runner-namespace.yaml
namespace/gitlab-runner created
The token used to register, run, and unregister the GitLab CI runner:
# echo "WyJEPLXYJ3uuLxbs62NT" | base64
V3lKRVBMWFlKM3V1THhiczYyTlQK
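Note that `echo` appends a trailing newline, which gets encoded into the secret (the final `K` above decodes to `\n`). If whatever consumes the secret is sensitive to whitespace, encode with `echo -n` instead; a quick comparison:

```shell
token="WyJEPLXYJ3uuLxbs62NT"   # the example token from above
echo "$token" | base64         # V3lKRVBMWFlKM3V1THhiczYyTlQK  (newline encoded in)
echo -n "$token" | base64      # V3lKRVBMWFlKM3V1THhiczYyTlQ=  (no trailing newline)
```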
# cat gitlab-runner-token-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: gitlab-ci-token
  namespace: gitlab-runner
  labels:
    app: gitlab-ci-token
data:
  GITLAB_CI_TOKEN: V3lKRVBMWFlKM3V1THhiczYyTlQK
# kubectl apply -f gitlab-runner-token-secret.yaml
secret/gitlab-ci-token created
Store a small script in a Kubernetes ConfigMap that registers, runs, and unregisters the GitLab CI runner:
# cat gitlab-runner-scripts-cm.yaml
apiVersion: v1
data:
  run.sh: |
    #!/bin/bash
    unregister() {
        kill %1
        echo "Unregistering runner ${RUNNER_NAME} ..."
        /usr/bin/gitlab-ci-multi-runner unregister -t "$(/usr/bin/gitlab-ci-multi-runner list 2>&1 | tail -n1 | awk '{print $4}' | cut -d'=' -f2)" -n ${RUNNER_NAME}
        exit $?
    }
    trap 'unregister' EXIT HUP INT QUIT PIPE TERM
    echo "Registering runner ${RUNNER_NAME} ..."
    /usr/bin/gitlab-ci-multi-runner register -r ${GITLAB_CI_TOKEN}
    sed -i 's/^concurrent.*/concurrent = '"${RUNNER_REQUEST_CONCURRENCY}"'/' /home/gitlab-runner/.gitlab-runner/config.toml
    echo "Starting runner ${RUNNER_NAME} ..."
    /usr/bin/gitlab-ci-multi-runner run -n ${RUNNER_NAME} &
    wait
kind: ConfigMap
metadata:
  labels:
    app: gitlab-ci-runner
  name: gitlab-ci-runner-scripts
  namespace: gitlab-runner
# kubectl apply -f gitlab-runner-scripts-cm.yaml
configmap/gitlab-ci-runner-scripts created
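The `sed` line in run.sh rewrites the `concurrent` setting that `register` writes into config.toml. A standalone sketch of the same edit against a sample file (the path and values are illustrative):

```shell
# simulate the config.toml that `gitlab-ci-multi-runner register` produces
cat > /tmp/config.toml <<'EOF'
concurrent = 1
check_interval = 0
EOF

RUNNER_REQUEST_CONCURRENCY=4
# same substitution as run.sh: replace the whole `concurrent` line
sed -i 's/^concurrent.*/concurrent = '"${RUNNER_REQUEST_CONCURRENCY}"'/' /tmp/config.toml
grep '^concurrent' /tmp/config.toml    # concurrent = 4
```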
Use a ConfigMap to pass the environment variables the runner image needs:
# cat gitlab-runner-cm.yaml
apiVersion: v1
data:
  REGISTER_NON_INTERACTIVE: "true"
  REGISTER_LOCKED: "false"
  METRICS_SERVER: "0.0.0.0:9100"
  CI_SERVER_URL: "http://gitlab.st.zisefeizhubox.com/ci"
  RUNNER_REQUEST_CONCURRENCY: "4"
  RUNNER_EXECUTOR: "kubernetes"
  KUBERNETES_NAMESPACE: "gitlab-runner"
  KUBERNETES_PRIVILEGED: "true"
  KUBERNETES_CPU_LIMIT: "1"
  KUBERNETES_CPU_REQUEST: "500m"
  KUBERNETES_MEMORY_LIMIT: "1Gi"
  KUBERNETES_SERVICE_CPU_LIMIT: "1"
  KUBERNETES_SERVICE_MEMORY_LIMIT: "1Gi"
  KUBERNETES_HELPER_CPU_LIMIT: "500m"
  KUBERNETES_HELPER_MEMORY_LIMIT: "100Mi"
  KUBERNETES_PULL_POLICY: "if-not-present"
  KUBERNETES_TERMINATIONGRACEPERIODSECONDS: "10"
  KUBERNETES_POLL_INTERVAL: "5"
  KUBERNETES_POLL_TIMEOUT: "360"
kind: ConfigMap
metadata:
  labels:
    app: gitlab-ci-runner
  name: gitlab-ci-runner-cm
  namespace: gitlab-runner
# kubectl apply -f gitlab-runner-cm.yaml
configmap/gitlab-ci-runner-cm created
The RBAC manifests needed for Kubernetes access control:
# cat gitlab-runner-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: gitlab-ci
  namespace: gitlab-runner
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: gitlab-ci
  namespace: gitlab-runner
rules:
  - apiGroups: [""]
    resources: ["*"]
    verbs: ["*"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: gitlab-ci
  namespace: gitlab-runner
subjects:
  - kind: ServiceAccount
    name: gitlab-ci
    namespace: gitlab-runner
roleRef:
  kind: Role
  name: gitlab-ci
  apiGroup: rbac.authorization.k8s.io
# kubectl apply -f gitlab-runner-rbac.yaml
serviceaccount/gitlab-ci created
role.rbac.authorization.k8s.io/gitlab-ci created
rolebinding.rbac.authorization.k8s.io/gitlab-ci created
Create the gitlab-runner StatefulSet in the Kubernetes cluster:
# cat gitlab-runner-statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: gitlab-ci-runner
  namespace: gitlab-runner
  labels:
    app: gitlab-ci-runner
spec:
  updateStrategy:
    type: RollingUpdate
  replicas: 2
  selector:
    matchLabels:
      app: gitlab-ci-runner
  serviceName: gitlab-ci-runner
  template:
    metadata:
      labels:
        app: gitlab-ci-runner
    spec:
      volumes:
        - name: gitlab-ci-runner-scripts
          projected:
            sources:
              - configMap:
                  name: gitlab-ci-runner-scripts
                  items:
                    - key: run.sh
                      path: run.sh
                      mode: 0755
      serviceAccountName: gitlab-ci
      securityContext:
        runAsNonRoot: true
        runAsUser: 999
        supplementalGroups: [999]
      containers:
        - image: gitlab/gitlab-runner:latest
          name: gitlab-ci-runner
          command:
            - /scripts/run.sh
          envFrom:
            - configMapRef:
                name: gitlab-ci-runner-cm
            - secretRef:
                name: gitlab-ci-token
          env:
            - name: RUNNER_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          ports:
            - containerPort: 9100
              name: http-metrics
              protocol: TCP
          volumeMounts:
            - name: gitlab-ci-runner-scripts
              mountPath: "/scripts"
              readOnly: true
      restartPolicy: Always
# kubectl apply -f gitlab-runner-statefulset.yaml
statefulset.apps/gitlab-ci-runner created
zisefeizhu:gitlab-runner root# kubectl get pods -n gitlab-runner -w
NAME READY STATUS RESTARTS AGE
gitlab-ci-runner-0 0/1 ContainerCreating 0 10s
gitlab-ci-runner-0 1/1 Running 0 26s
The test environment gets the same configuration.
Test environment
Staging environment
# tar cvf gitlab-runner.tar.gz ./*
a ./gitlab-runner-cm.yaml
a ./gitlab-runner-namespace.yaml
a ./gitlab-runner-rbac.yaml
a ./gitlab-runner-scripts-cm.yaml
a ./gitlab-runner-statefulset.yaml
a ./gitlab-runner-token-secret.yaml
[root@localhost gitlab-runner]# tar xf gitlab-runner.tar.gz
[root@localhost gitlab-runner]# ll
total 36
-rw-r--r--. 1 root wheel  831 Jul 20 23:27 gitlab-runner-cm.yaml
-rw-r--r--. 1 root wheel   63 Jul 20 23:05 gitlab-runner-namespace.yaml
-rw-r--r--. 1 root wheel  547 Jul 20 23:31 gitlab-runner-rbac.yaml
-rw-r--r--. 1 root wheel  853 Jul 20 23:17 gitlab-runner-scripts-cm.yaml
-rw-r--r--. 1 root wheel 1315 Jul 20 23:54 gitlab-runner-statefulset.yaml
-rw-r--r--. 1 root root  9728 Jul 21 03:24 gitlab-runner.tar.gz
-rw-r--r--. 1 root wheel  178 Jul 20 23:14 gitlab-runner-token-secret.yaml
[root@localhost gitlab-runner]# kubectl apply -f .
namespace/gitlab-runner created
serviceaccount/gitlab-ci created
role.rbac.authorization.k8s.io/gitlab-ci created
rolebinding.rbac.authorization.k8s.io/gitlab-ci created
configmap/gitlab-ci-runner-scripts created
statefulset.apps/gitlab-ci-runner created
secret/gitlab-ci-token created
configmap/gitlab-ci-runner-cm created
[root@localhost gitlab-runner]# kubectl get pods -n gitlab-runner -w
NAME READY STATUS RESTARTS AGE
gitlab-ci-runner-0 1/1 Running 0 36s
gitlab-ci-runner-1 1/1 Running 0 1s
Integrating gitlab-ci with the Aliyun image registry & different Kubernetes clusters
gitlab-ci reference material
1. Implementing CI/CD builds with GitLab (basics): https://www.jianshu.com/p/b69304279c5f
2. Configuring GitLab pipelines with gitlab-ci.yml for continuous integration (parameter reference): https://www.jianshu.com/p/617f035f01b8
3. Continuous integration with gitlab-ci.yml (commands and examples): https://segmentfault.com/a/1190000019540360
4. Official GitLab documentation: https://docs.gitlab.com/ee/ci/yaml/README.html
Integrating gitlab-ci with the Aliyun image registry
Note: docker-in-docker issues come up here.
# cat gitlab-ci
# default image for jobs
image: busybox:latest

before_script:
  - export REGISTRY_IMAGE="${ALI_IMAGE_REGISTRY}"/gitlab-test/"${CI_PROJECT_NAME}":"${CI_COMMIT_REF_NAME//\.}"-"${CI_PIPELINE_ID}"

stages:
  - build
  - deploy

docker_build_job:
  stage: build
  tags:
    - advance
  image: docker:latest
  script:
    - docker login -u "${ALI_IMAGE_USER}" -p "${ALI_IMAGE_PASSWORD}" "${ALI_IMAGE_REGISTRY}"
    - docker build . -t $REGISTRY_IMAGE --build-arg CI_COMMIT_SHORT_SHA=$CI_COMMIT_SHORT_SHA --build-arg NODE_DEPLOY_ENV=$NODE_DEPLOY_ENV --build-arg DEPLOY_VERSION=$DEPLOY_VERSION --build-arg API_SERVER_HOST=$API_SERVER_HOST --build-arg COOKIE_DOMAIN=$COOKIE_DOMAIN --build-arg LOGIN_HOST=$LOGIN_HOST
    - docker push "${REGISTRY_IMAGE}"
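The tag in `REGISTRY_IMAGE` relies on bash pattern substitution: `${CI_COMMIT_REF_NAME//\.}` deletes every dot from the branch name. A quick illustration with made-up values:

```shell
CI_COMMIT_REF_NAME="v1.2.3"    # hypothetical branch name
CI_PIPELINE_ID="1024"          # hypothetical pipeline id
# ${VAR//\.} replaces every "." with nothing
echo "${CI_COMMIT_REF_NAME//\.}-${CI_PIPELINE_ID}"   # v123-1024
```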
Integrating gitlab-ci with the Kubernetes clusters
Note: the test environment lives on the intranet and its Kubernetes cluster cannot be reached from the public network. A gitlab-runner on an external bare-metal host therefore cannot talk to the test cluster; and if the gitlab-runner runs inside a Kubernetes cluster, how do we make each runner job target the cluster matching its branch?
My approach is to build three kubectl images, one per cluster, and reference them in the gitlab-ci scripts.
Building the kubectl image, using the test environment as the example, on the Hong Kong server:
# pwd
/root/linkun/gitlab-runner/test
# ll
total 20
drwxr-xr-x 2 root root 4096 Jul 23 16:38 ./
drwxr-xr-x 4 root root 4096 Jul 23 15:55 ../
-rw-r--r-- 1 root root 5451 Jul 23 16:31 config
-rw-r--r-- 1 root root 545 Jul 23 16:38 Dockerfile
The config file here is the target cluster's /root/.kube/config; for confidentiality its contents are not shown.
# cat Dockerfile
FROM alpine:3.8
MAINTAINER cnych <[email protected]>
ENV KUBE_LATEST_VERSION="v1.17.5"
RUN apk add --update ca-certificates \
&& apk add --update -t deps curl \
&& apk add --update gettext \
&& apk add --update git \
&& curl -L https://storage.googleapis.com/kubernetes-release/release/${KUBE_LATEST_VERSION}/bin/linux/amd64/kubectl -o /usr/local/bin/kubectl \
&& chmod +x /usr/local/bin/kubectl \
&& apk del --purge deps \
&& rm /var/cache/apk/*
RUN mkdir /root/.kube
COPY config /root/.kube/
ENTRYPOINT ["kubectl"]
CMD ["--help"]
# docker build -t registry.cn-shenzhen.aliyuncs.com/zisefeizhu/kubectl:test .
# docker push registry.cn-shenzhen.aliyuncs.com/zisefeizhu/kubectl:test
Aliyun image registry
The gitlab-ci script:
stages:
  - kubectl

kubernetes_kuectl:
  tags:
    - test
  stage: kubectl
  image: "${KUBECTL_ADVANCE}"
  script:
    - kubectl get node
    - kubectl get pods -A
    # - sleep 500
    - echo "success"
Watch the job
Watch the kubectl job
This confirms that gitlab-ci works with the Aliyun image registry and with the different Kubernetes clusters 👌
Environment variables
Built-in environment variables
Default .gitlab-ci.yml environment variables (system level):
Name | Description |
---|---|
$CI_PROJECT_NAME | Project name |
$CI_PROJECT_NAMESPACE | Group name |
$CI_PROJECT_PATH | Project relative path |
$CI_PROJECT_URL | Project URL |
$GITLAB_USER_NAME | User name |
$GITLAB_USER_EMAIL | User email |
$CI_PROJECT_DIR | Project absolute path |
$CI_PIPELINE_ID | Pipeline ID |
$CI_COMMIT_REF_NAME | Current branch |
Example
Custom environment variables
stages:
  - neizhi
  - release
  - kubectl

neizhibianliang:
  tags:
    - advance
  stage: neizhi
  script:
    - echo "$CI_PROJECT_NAME $CI_PROJECT_NAMESPACE $CI_PROJECT_PATH $CI_PROJECT_URL $GITLAB_USER_NAME $GITLAB_USER_EMAIL $CI_PROJECT_DIR $CI_PIPELINE_ID $CI_COMMIT_REF_NAME" > test.txt
    - cat test.txt

image_build:
  tags:
    - advance
  stage: release
  image: docker:latest
  variables:
    DOCKER_DRIVER: overlay
    DOCKER_HOST: tcp://localhost:2375
  services:
    - name: docker:18.03-dind
      command: ["--insecure-registry=registry.cn-shenzhen.aliyuncs.com"]
  script:
    - docker info
    - docker login -u $ALI_IMAGE_USER -p $ALI_IMAGE_PASSWORD $ALI_IMAGE_REGISTRY
    - docker pull alpine:latest
    - docker tag alpine:latest ${ALI_IMAGE_REGISTRY}/gitlab-test/test:${CI_COMMIT_REF_NAME//\.}-$CI_PIPELINE_ID
    - docker image ls

kubernetes_kuectl:
  tags:
    - advance
  stage: kubectl
  image: $KUBECTL_ADVANCE
  script:
    - kubectl get node
    - kubectl get pods -A
    - echo "success"
Using custom CI/CD variables keeps sensitive information from leaking into the repository!
gitlab-ci command reference
Advanced GitLab CI techniques: https://www.jianshu.com/p/3c0cbb6c2936
Example project
Project: 📎go-daemon.tar.gz
gitlab-ci.yml
image:
  name: golang:1.13.2-stretch
  entrypoint: ["/bin/sh", "-c"]

# The problem is that to be able to use go get, one needs to put
# the repository in the $GOPATH. So for example if your gitlab domain
# is mydomainperso.com, and that your repository is repos/projectname, and
# the default GOPATH being /go, then you'd need to have your
# repository in /go/src/mydomainperso.com/repos/projectname
# Thus, making a symbolic link corrects this.
before_script:
  - pwd
  - ls -al
  - mkdir -p "/go/src/git.qikqiak.com/${CI_PROJECT_NAMESPACE}"
  - ls -al "/go/src/git.qikqiak.com/"
  - ln -sf "${CI_PROJECT_DIR}" "/go/src/git.qikqiak.com/${CI_PROJECT_PATH}"
  - ls -la "${CI_PROJECT_DIR}"
  - pwd
  - cd "/go/src/git.qikqiak.com/${CI_PROJECT_PATH}/"
  - pwd
  - export REGISTRY_IMAGE="${ALI_IMAGE_REGISTRY}"/gitlab-test/"${CI_PROJECT_NAME}":"${CI_COMMIT_REF_NAME//\.}"-"${CI_PIPELINE_ID}"

variables:
  NAMESPACE: gitlab-runner
  PORT: 8000

stages:
  - test
  - build
  - release
  - review

test:
  tags:
    - advance
  stage: test
  script:
    - make test

test2:
  tags:
    - advance
  stage: test
  script:
    - echo "We did it! Something else runs in parallel!"

compile:
  tags:
    - advance
  stage: build
  script:
    # Add here all the dependencies, or use glide/govendor/...
    # to get them automatically.
    - make build
  artifacts:
    paths:
      - app

image_build:
  tags:
    - advance
  stage: release
  image: docker:latest
  script:
    - docker info
    - docker login -u "${ALI_IMAGE_USER}" -p "${ALI_IMAGE_PASSWORD}" "${ALI_IMAGE_REGISTRY}"
    - docker build -t "${REGISTRY_IMAGE}" .
    - docker push "${REGISTRY_IMAGE}"

deploy_review:
  tags:
    - advance
  image: "${KUBECTL_ADVANCE}"
  stage: review
  environment:
    name: stage-studio
    url: http://studio.advance.zisefeizhubox.com/api-ews/
  script:
    - kubectl version
    - cd manifests/
    - sed -i "s/__ALI_IMAGE_REGISTRY__/${ALI_IMAGE_REGISTRY}/" secret-namespace-advance.sh
    - sed -i "s/__ALI_IMAGE_USER__/${ALI_IMAGE_USER}/" secret-namespace-advance.sh
    - sed -i "s/__ALI_IMAGE_PASSWORD__/${ALI_IMAGE_PASSWORD}/" secret-namespace-advance.sh
    - sed -i "s/__NAMESPACE__/${NAMESPACE}/g" secret-namespace-advance.sh deployment.yaml svc.yaml ingress.yaml zisefeizhubox-namespace.yaml
    - sed -i "s/__CI_PROJECT_NAME__/${CI_PROJECT_NAME}/g" deployment.yaml svc.yaml ingress.yaml
    - sed -i "s/__VERSION__/"${CI_COMMIT_REF_NAME//\.}"-"${CI_PIPELINE_ID}"/" deployment.yaml
    - sed -i "s/__PORT__/${PORT}/g" deployment.yaml svc.yaml ingress.yaml
    - cat secret-namespace-advance.sh
    - cat deployment.yaml
    - cat svc.yaml
    - cat ingress.yaml
    - |
      if kubectl apply -f zisefeizhubox-namespace.yaml | grep -q unchanged; then
          echo "=> The namespace already exists."
      else
          echo "=> The namespace is created"
      fi
    - |
      if sh -x secret-namespace-advance.sh || echo $? != 0; then
          echo "=> The secret already exists."
      else
          echo "=> The secret is created"
      fi
    - |
      if kubectl apply -f deployment.yaml | grep -q unchanged; then
          echo "=> Patching deployment to force image update."
          kubectl patch -f deployment.yaml -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"ci-last-updated\":\"$(date +'%s')\"}}}}}"
      else
          echo "=> Deployment apply has changed the object, no need to force image update."
      fi
    - kubectl apply -f svc.yaml
    - kubectl apply -f ingress.yaml
    - kubectl rollout status -f deployment.yaml
    - kubectl get all -l name=${CI_PROJECT_NAME} -n ${NAMESPACE}
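The `__PLACEHOLDER__` substitution pattern used in deploy_review is easy to verify in isolation; a tiny sketch against a one-line sample manifest (the file name and values are illustrative):

```shell
NAMESPACE=gitlab-runner
PORT=8000
echo 'port: __PORT__, namespace: __NAMESPACE__' > /tmp/demo.yaml
# same sed calls as deploy_review, collapsed into one expression
sed -i "s/__NAMESPACE__/${NAMESPACE}/g; s/__PORT__/${PORT}/g" /tmp/demo.yaml
cat /tmp/demo.yaml    # port: 8000, namespace: gitlab-runner
```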
Project pipeline
Viewing project resources
Accessing the project UI
👌!
Mastery is like climbing a mountain: each step reveals a higher sky.