
[K8S] Building a Continuous Integration and Delivery Environment with Docker + K8S + GitLab/SVN + Jenkins + Harbor (Environment Setup)

Preface

I recently set up a DevOps environment on a K8S 1.18.2 cluster and ran into all kinds of pitfalls along the way. All of those pitfalls have since been resolved, so I am documenting the process here and sharing it with you!

Server Planning

IP Hostname Role Operating System
192.168.175.101 binghe101 K8S Master CentOS 8.0.1905
192.168.175.102 binghe102 K8S Worker CentOS 8.0.1905
192.168.175.103 binghe103 K8S Worker CentOS 8.0.1905

Software Versions

Software Version Description
Docker 19.03.8 Provides the container runtime
docker-compose 1.25.5 Defines and runs applications composed of multiple containers
K8S 1.18.2 An open-source system for managing containerized applications across multiple hosts in a cloud platform; Kubernetes aims to make deploying containerized applications simple and powerful, and provides mechanisms for application deployment, planning, updating, and maintenance
GitLab 12.1.6 Code repository (install either GitLab or SVN)
Harbor 1.10.2 Private image registry
Jenkins 2.89.3 Continuous integration and delivery
SVN 1.10.2 Code repository (install either SVN or GitLab)
JDK 1.8.0_212 Base Java runtime environment
maven 3.6.3 Basic tool for building the project

Passwordless SSH Between Servers

Run the following commands on each server.

ssh-keygen -t rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys 
           

Copy the id_rsa.pub files from the binghe102 and binghe103 servers to the binghe101 server.

[root@binghe102 ~]# scp .ssh/id_rsa.pub binghe101:/root/.ssh/102
[root@binghe103 ~]# scp .ssh/id_rsa.pub binghe101:/root/.ssh/103
           

Run the following commands on the binghe101 server.

cat ~/.ssh/102 >> ~/.ssh/authorized_keys
cat ~/.ssh/103 >> ~/.ssh/authorized_keys
           

Then copy the authorized_keys file to the binghe102 and binghe103 servers.

[root@binghe101 ~]# scp .ssh/authorized_keys binghe102:/root/.ssh/authorized_keys
[root@binghe101 ~]# scp .ssh/authorized_keys binghe103:/root/.ssh/authorized_keys
           

Delete the 102 and 103 files under ~/.ssh on the binghe101 node.

rm ~/.ssh/102
rm ~/.ssh/103
           
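To verify that passwordless login now works, a quick check like the following can be run from binghe101 (assuming the hostnames resolve, as in the scp commands above; otherwise use the IP addresses):

ssh binghe102 hostname
ssh binghe103 hostname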

Installing the JDK

The JDK needs to be installed on every server. Download the JDK from the Oracle website; the version used here is 1.8.0_212. After downloading, extract it and configure the system environment variables.

tar -zxvf jdk1.8.0_212.tar.gz
mv jdk1.8.0_212 /usr/local
           

Next, configure the system environment variables.

vim /etc/profile
           

The configuration items are as follows.

JAVA_HOME=/usr/local/jdk1.8.0_212
CLASS_PATH=.:$JAVA_HOME/lib
PATH=$JAVA_HOME/bin:$PATH
export JAVA_HOME CLASS_PATH PATH
           

Then run the following command to make the environment variables take effect.

source /etc/profile
           
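To confirm that the variables took effect, the JDK version can be checked (a quick sanity check, not part of the original steps):

java -version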

Installing Maven

Download Maven from the Apache website; the version used here is 3.6.3. After downloading, simply extract it and configure the system environment variables.

tar -zxvf apache-maven-3.6.3-bin.tar.gz
mv apache-maven-3.6.3 /usr/local
           

Next, configure the system environment variables.

vim /etc/profile
           

The configuration items are as follows.

JAVA_HOME=/usr/local/jdk1.8.0_212
MAVEN_HOME=/usr/local/apache-maven-3.6.3
CLASS_PATH=.:$JAVA_HOME/lib
PATH=$MAVEN_HOME/bin:$JAVA_HOME/bin:$PATH
export JAVA_HOME CLASS_PATH MAVEN_HOME PATH
           

Then run the following command to make the environment variables take effect.

source /etc/profile
           
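Similarly, a quick check that Maven is now on the PATH (not part of the original steps):

mvn -version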

Next, modify Maven's configuration file (conf/settings.xml under the Maven installation directory), as shown below.

<localRepository>/home/repository</localRepository>
           

This stores the JAR packages that Maven downloads in the /home/repository directory, for example as sketched below.

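A minimal sketch of this change, assuming Maven was extracted to /usr/local/apache-maven-3.6.3 as above: create the repository directory and set localRepository inside the <settings> element of conf/settings.xml.

mkdir -p /home/repository
vim /usr/local/apache-maven-3.6.3/conf/settings.xml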
Installing Docker

This guide sets up the Docker environment based on Docker 19.03.8.

Create an install_docker.sh script on all servers with the following content.

export REGISTRY_MIRROR=https://registry.cn-hangzhou.aliyuncs.com
dnf install -y yum*
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
dnf install -y https://mirrors.aliyun.com/docker-ce/linux/centos/7/x86_64/stable/Packages/containerd.io-1.2.13-3.1.el7.x86_64.rpm
yum install -y docker-ce-19.03.8 docker-ce-cli-19.03.8
systemctl enable docker.service
systemctl start docker.service
docker version
           

Grant the install_docker.sh script execute permission on each server and then run it, as shown below.

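For example, assuming the script was saved as install_docker.sh in the current directory:

chmod +x install_docker.sh
./install_docker.sh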
Installing docker-compose

Note: install docker-compose on every server.

1. Download the docker-compose binary

curl -L https://github.com/docker/compose/releases/download/1.25.5/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose 
           

2. Grant the docker-compose binary execute permission

chmod a+x /usr/local/bin/docker-compose
           

3. Check the docker-compose version

[root@binghe ~]# docker-compose version
docker-compose version 1.25.5, build 8a1c60f6
docker-py version: 4.1.0
CPython version: 3.7.5
OpenSSL version: OpenSSL 1.1.0l  10 Sep 2019
           

Installing the K8S Cluster

This guide builds the K8S cluster based on K8S 1.18.2.

Installing the K8S base environment

Create an install_k8s.sh script file on all servers with the following content.

# Configure the Aliyun registry mirror
mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://zz3sblpi.mirror.aliyuncs.com"]
}
EOF
systemctl daemon-reload
systemctl restart docker

# Install nfs-utils
yum install -y nfs-utils
yum install -y wget

# Start nfs-server
systemctl start nfs-server
systemctl enable nfs-server

# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld

# Disable SELinux
setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

# Disable swap
swapoff -a
yes | cp /etc/fstab /etc/fstab_bak
cat /etc/fstab_bak |grep -v swap > /etc/fstab

# Modify /etc/sysctl.conf
# If a setting already exists, modify it in place
sed -i "s#^net.ipv4.ip_forward.*#net.ipv4.ip_forward=1#g"  /etc/sysctl.conf
sed -i "s#^net.bridge.bridge-nf-call-ip6tables.*#net.bridge.bridge-nf-call-ip6tables=1#g"  /etc/sysctl.conf
sed -i "s#^net.bridge.bridge-nf-call-iptables.*#net.bridge.bridge-nf-call-iptables=1#g"  /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.all.disable_ipv6.*#net.ipv6.conf.all.disable_ipv6=1#g"  /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.default.disable_ipv6.*#net.ipv6.conf.default.disable_ipv6=1#g"  /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.lo.disable_ipv6.*#net.ipv6.conf.lo.disable_ipv6=1#g"  /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.all.forwarding.*#net.ipv6.conf.all.forwarding=1#g"  /etc/sysctl.conf
# Settings that may not exist yet are appended
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.default.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.lo.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.forwarding = 1"  >> /etc/sysctl.conf
# Apply the settings
sysctl -p

# Configure the K8S yum repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Remove old versions of K8S
yum remove -y kubelet kubeadm kubectl

# Install kubelet, kubeadm and kubectl; version 1.18.2 is installed here, but 1.17.2 would also work
yum install -y kubelet-1.18.2 kubeadm-1.18.2 kubectl-1.18.2

# Change the docker Cgroup Driver to systemd
# # In the /usr/lib/systemd/system/docker.service file, change the line ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
# # to ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd
# Without this change, you may hit the following error when adding worker nodes
# [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". 
# Please follow the guide at https://kubernetes.io/docs/setup/cri/
sed -i "s#^ExecStart=/usr/bin/dockerd.*#ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd#g" /usr/lib/systemd/system/docker.service

# Configure a docker registry mirror to improve image download speed and stability
# If access to https://hub.docker.io is already fast and stable, this step can be skipped
# curl -sSL https://kuboard.cn/install-script/set_mirror.sh | sh -s ${REGISTRY_MIRROR}

# Restart docker and start kubelet
systemctl daemon-reload
systemctl restart docker
systemctl enable kubelet && systemctl start kubelet

docker version
           

Grant the install_k8s.sh script execute permission on each server and then run it, as shown below.

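For example, assuming the script was saved as install_k8s.sh in the current directory:

chmod +x install_k8s.sh
./install_k8s.sh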
Initializing the Master Node

The following operations are performed only on the binghe101 server.

1. Initialize the Master node's network environment

Note: the following commands need to be run manually at the command line.

# Run only on the master node
# The export commands only take effect in the current shell session; if you open a new shell window and want to continue the installation, re-run these export commands
export MASTER_IP=192.168.175.101
# Replace k8s.master with the dnsName you want to use
export APISERVER_NAME=k8s.master
# The subnet used by Kubernetes pods; it is created by Kubernetes after installation and does not exist on the physical network beforehand
export POD_SUBNET=172.18.0.1/16
echo "${MASTER_IP}    ${APISERVER_NAME}" >> /etc/hosts
           

2. Initialize the Master node

Create an init_master.sh script file on the binghe101 server with the following content.

#!/bin/bash
# Abort the script if any command fails
set -e

if [ ${#POD_SUBNET} -eq 0 ] || [ ${#APISERVER_NAME} -eq 0 ]; then
  echo -e "\033[31;1mPlease make sure the environment variables POD_SUBNET and APISERVER_NAME are set\033[0m"
  echo "Current POD_SUBNET=$POD_SUBNET"
  echo "Current APISERVER_NAME=$APISERVER_NAME"
  exit 1
fi


# See https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta2 for the full list of configuration options
rm -f ./kubeadm-config.yaml
cat <<EOF > ./kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.2
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
controlPlaneEndpoint: "${APISERVER_NAME}:6443"
networking:
  serviceSubnet: "10.96.0.0/16"
  podSubnet: "${POD_SUBNET}"
  dnsDomain: "cluster.local"
EOF

# kubeadm init
# Depending on your network speed, this may take 3 - 10 minutes
kubeadm init --config=kubeadm-config.yaml --upload-certs

# Configure kubectl
rm -rf /root/.kube/
mkdir /root/.kube/
cp -i /etc/kubernetes/admin.conf /root/.kube/config

# Install the calico network plugin
# Reference: https://docs.projectcalico.org/v3.13/getting-started/kubernetes/self-managed-onprem/onpremises
echo "Installing calico-3.13.1"
rm -f calico-3.13.1.yaml
wget https://kuboard.cn/install-script/calico/calico-3.13.1.yaml
kubectl apply -f calico-3.13.1.yaml
           

Grant the init_master.sh script file execute permission and run it, as shown below.

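For example, assuming the script was saved as init_master.sh in the current directory and the export commands from the previous step were run in the same shell session:

chmod +x init_master.sh
./init_master.sh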
3. Check the Master node initialization result

(1) Make sure all pods are in the Running state

# Run the following command and wait 3 - 10 minutes until all pods are in the Running state
watch kubectl get pod -n kube-system -o wide
           

The actual run looks like this.

[root@binghe101 ~]# watch kubectl get pod -n kube-system -o wide
Every 2.0s: kubectl get pod -n kube-system -o wide                                                                                                                          binghe101: Sun May 10 11:01:32 2020

NAME                                       READY   STATUS    RESTARTS   AGE    IP                NODE        NOMINATED NODE   READINESS GATES          
calico-kube-controllers-5b8b769fcd-5dtlp   1/1     Running   0          118s   172.18.203.66     binghe101   <none>           <none>          
calico-node-fnv8g                          1/1     Running   0          118s   192.168.175.101   binghe101   <none>           <none>          
coredns-546565776c-27t7h                   1/1     Running   0          2m1s   172.18.203.67     binghe101   <none>           <none>          
coredns-546565776c-hjb8z                   1/1     Running   0          2m1s   172.18.203.65     binghe101   <none>           <none>          
etcd-binghe101                             1/1     Running   0          2m7s   192.168.175.101   binghe101   <none>           <none>          
kube-apiserver-binghe101                   1/1     Running   0          2m7s   192.168.175.101   binghe101   <none>           <none>          
kube-controller-manager-binghe101          1/1     Running   0          2m7s   192.168.175.101   binghe101   <none>           <none>          
kube-proxy-dvgsr                           1/1     Running   0          2m1s   192.168.175.101   binghe101   <none>           <none>          
kube-scheduler-binghe101                   1/1     Running   0          2m7s   192.168.175.101   binghe101   <none>           <none>
           

(2) Check the Master node initialization result

kubectl get nodes -o wide
           

The actual run looks like this.

[root@binghe101 ~]# kubectl get nodes -o wide
NAME        STATUS   ROLES    AGE     VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION         CONTAINER-RUNTIME
binghe101   Ready    master   3m28s   v1.18.2   192.168.175.101   <none>        CentOS Linux 8 (Core)   4.18.0-80.el8.x86_64   docker://19.3.8
           

Initializing the Worker Nodes

1. Get the join command parameters

Run the following command on the Master node (the binghe101 server) to get the join command parameters.

kubeadm token create --print-join-command
           

The actual run looks like this.

[root@binghe101 ~]# kubeadm token create --print-join-command
W0510 11:04:34.828126   56132 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
kubeadm join k8s.master:6443 --token 8nblts.62xytoqufwsqzko2     --discovery-token-ca-cert-hash sha256:1717cc3e34f6a56b642b5751796530e367aa73f4113d09994ac3455e33047c0d 
           

The output contains the following line.

kubeadm join k8s.master:6443 --token 8nblts.62xytoqufwsqzko2     --discovery-token-ca-cert-hash sha256:1717cc3e34f6a56b642b5751796530e367aa73f4113d09994ac3455e33047c0d 
           

This line is the join command that we need.

Note: the token in the join command is valid for 2 hours; within those 2 hours it can be used to initialize any number of worker nodes.

2. Initialize the Worker nodes

Perform this on all worker nodes, which here means on the binghe102 and binghe103 servers.

Run the following commands manually at the command line.

# Run only on the worker nodes
# 192.168.175.101 is the internal IP of the master node
export MASTER_IP=192.168.175.101
# Replace k8s.master with the APISERVER_NAME used when initializing the master node
export APISERVER_NAME=k8s.master
echo "${MASTER_IP}    ${APISERVER_NAME}" >> /etc/hosts

# Replace with the join command output by the kubeadm token create command on the master node
kubeadm join k8s.master:6443 --token 8nblts.62xytoqufwsqzko2     --discovery-token-ca-cert-hash sha256:1717cc3e34f6a56b642b5751796530e367aa73f4113d09994ac3455e33047c0d 
           

The actual run looks like this.

[root@binghe102 ~]# export MASTER_IP=192.168.175.101
[root@binghe102 ~]# export APISERVER_NAME=k8s.master
[root@binghe102 ~]# echo "${MASTER_IP}    ${APISERVER_NAME}" >> /etc/hosts
[root@binghe102 ~]# kubeadm join k8s.master:6443 --token 8nblts.62xytoqufwsqzko2     --discovery-token-ca-cert-hash sha256:1717cc3e34f6a56b642b5751796530e367aa73f4113d09994ac3455e33047c0d 
W0510 11:08:27.709263   42795 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
        [WARNING FileExisting-tc]: tc not found in system path
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
           

The output shows that the Worker node has joined the K8S cluster.

Note: kubeadm join... is the join command output by the kubeadm token create command on the master node.

3. Check the result

Run the following command on the Master node (the binghe101 server) to check the result.

kubectl get nodes -o wide
           

The actual run looks like this.

[root@binghe101 ~]# kubectl get nodes
NAME        STATUS   ROLES    AGE     VERSION
binghe101   Ready    master   20m     v1.18.2
binghe102   Ready    <none>   2m46s   v1.18.2
binghe103   Ready    <none>   2m46s   v1.18.2
           
Note: adding the -o wide option to the kubectl get nodes command prints more information.

Problems Caused by Restarting the K8S Cluster

1. Worker nodes fail to start

The Master node's IP address changed, so the worker nodes could not start. The K8S cluster has to be reinstalled, and you must make sure that all nodes have fixed internal IP addresses.

2. Pods crash or cannot be accessed

After rebooting the servers, check the running state of the Pods with the following command.

kubectl get pods --all-namespaces
           

Many Pods were found not to be in the Running state; in that case, delete the abnormal Pods with the following command.

kubectl delete pod <pod-name> -n <pod-namespace>
           
Note: if a Pod was created by a controller such as a Deployment or StatefulSet, K8S will create a new Pod to replace it, and the restarted Pod usually works normally.

Installing ingress-nginx on K8S

Note: run this on the Master node (the binghe101 server).

1. Create the ingress-nginx namespace

Create the ingress-nginx-namespace.yaml file with the following content.

apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    name: ingress-nginx
           

Run the following command to create the ingress-nginx namespace.

kubectl apply -f ingress-nginx-namespace.yaml
           

2. Install the ingress controller

Create the ingress-nginx-mandatory.yaml file with the following content.

apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    app.kubernetes.io/name: default-http-backend
    app.kubernetes.io/part-of: ingress-nginx
  namespace: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: default-http-backend
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: default-http-backend
        app.kubernetes.io/part-of: ingress-nginx
    spec:
      terminationGracePeriodSeconds: 60
      containers:
        - name: default-http-backend
          # Any image is permissible as long as:
          # 1. It serves a 404 page at /
          # 2. It serves 200 on a /healthz endpoint
          image: registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/defaultbackend-amd64:1.5
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 30
            timeoutSeconds: 5
          ports:
            - containerPort: 8080
          resources:
            limits:
              cpu: 10m
              memory: 20Mi
            requests:
              cpu: 10m
              memory: 20Mi

---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: default-http-backend
    app.kubernetes.io/part-of: ingress-nginx
spec:
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app.kubernetes.io/name: default-http-backend
    app.kubernetes.io/part-of: ingress-nginx

---

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---

kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---

kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses/status
    verbs:
      - update

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          image: registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/nginx-ingress-controller:0.20.0
          args:
            - /nginx-ingress-controller
            - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 33
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1

---
           

Run the following command to install the ingress controller.

kubectl apply -f ingress-nginx-mandatory.yaml
           

3. Install the K8S Service: ingress-nginx

This Service is mainly used to expose the pod nginx-ingress-controller.

Create the service-nodeport.yaml file with the following content.

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
      nodePort: 30080
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP
      nodePort: 30443
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
           

Run the following command to install it.

kubectl apply -f service-nodeport.yaml
           

4. Access the K8S Service: ingress-nginx

Check the deployments in the ingress-nginx namespace, as shown below.

[root@binghe101 k8s]# kubectl get pod -n ingress-nginx
NAME                                        READY   STATUS    RESTARTS   AGE
default-http-backend-796ddcd9b-vfmgn        1/1     Running   1          10h
nginx-ingress-controller-58985cc996-87754   1/1     Running   2          10h
           

Enter the following command on the server to check the port mapping of ingress-nginx.

kubectl get svc -n ingress-nginx 
           

The output is shown below.

[root@binghe101 k8s]# kubectl get svc -n ingress-nginx 
NAME                   TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)                      AGE
default-http-backend   ClusterIP   10.96.247.2   <none>        80/TCP                       7m3s
ingress-nginx          NodePort    10.96.40.6    <none>        80:30080/TCP,443:30443/TCP   4m35s
           

So ingress-nginx can be accessed through the IP address of the Master node (the binghe101 server) and port 30080, as shown below.

[root@binghe101 k8s]# curl 192.168.175.101:30080       
default backend - 404
           

You can also open http://192.168.175.101:30080 in a browser to access ingress-nginx.


Installing the GitLab Code Repository on K8S

Note: run this on the Master node (the binghe101 server).

1. Create the k8s-ops namespace

Create the k8s-ops-namespace.yaml file with the following content.

apiVersion: v1
kind: Namespace
metadata:
  name: k8s-ops
  labels:
    name: k8s-ops
           

Run the following command to create the namespace.

kubectl apply -f k8s-ops-namespace.yaml 
           

2. Install gitlab-redis

Create the gitlab-redis.yaml file with the following content.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  namespace: k8s-ops
  labels:
    name: redis
spec:
  selector:
    matchLabels:
      name: redis
  template:
    metadata:
      name: redis
      labels:
        name: redis
    spec:
      containers:
      - name: redis
        image: sameersbn/redis
        imagePullPolicy: IfNotPresent
        ports:
        - name: redis
          containerPort: 6379
        volumeMounts:
        - mountPath: /var/lib/redis
          name: data
        livenessProbe:
          exec:
            command:
            - redis-cli
            - ping
          initialDelaySeconds: 30
          timeoutSeconds: 5
        readinessProbe:
          exec:
            command:
            - redis-cli
            - ping
          initialDelaySeconds: 10
          timeoutSeconds: 5
      volumes:
      - name: data
        hostPath:
          path: /data1/docker/xinsrv/redis

---
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: k8s-ops
  labels:
    name: redis
spec:
  ports:
    - name: redis
      port: 6379
      targetPort: redis
  selector:
    name: redis
           

First, run the following command at the command line to create the /data1/docker/xinsrv/redis directory.

mkdir -p /data1/docker/xinsrv/redis
           

Run the following command to install gitlab-redis.

kubectl apply -f gitlab-redis.yaml 
           

3. Install gitlab-postgresql

Create gitlab-postgresql.yaml with the following content.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgresql
  namespace: k8s-ops
  labels:
    name: postgresql
spec:
  selector:
    matchLabels:
      name: postgresql
  template:
    metadata:
      name: postgresql
      labels:
        name: postgresql
    spec:
      containers:
      - name: postgresql
        image: sameersbn/postgresql
        imagePullPolicy: IfNotPresent
        env:
        - name: DB_USER
          value: gitlab
        - name: DB_PASS
          value: passw0rd
        - name: DB_NAME
          value: gitlab_production
        - name: DB_EXTENSION
          value: pg_trgm
        ports:
        - name: postgres
          containerPort: 5432
        volumeMounts:
        - mountPath: /var/lib/postgresql
          name: data
        livenessProbe:
          exec:
            command:
            - pg_isready
            - -h
            - localhost
            - -U
            - postgres
          initialDelaySeconds: 30
          timeoutSeconds: 5
        readinessProbe:
          exec:
            command:
            - pg_isready
            - -h
            - localhost
            - -U
            - postgres
          initialDelaySeconds: 5
          timeoutSeconds: 1
      volumes:
      - name: data
        hostPath:
          path: /data1/docker/xinsrv/postgresql
---
apiVersion: v1
kind: Service
metadata:
  name: postgresql
  namespace: k8s-ops
  labels:
    name: postgresql
spec:
  ports:
    - name: postgres
      port: 5432
      targetPort: postgres
  selector:
    name: postgresql
           

First, run the following command to create the /data1/docker/xinsrv/postgresql directory.

mkdir -p /data1/docker/xinsrv/postgresql
           

Next, install gitlab-postgresql, as shown below.

kubectl apply -f gitlab-postgresql.yaml
           

4. Install GitLab

(1) Configure the username and password

First, base64-encode the username and password at the command line. In this example, the username is admin and the password is admin.1231.

The encoding looks like this.

[root@binghe101 k8s]# echo -n 'admin' | base64 
YWRtaW4=
[root@binghe101 k8s]# echo -n 'admin.1231' | base64 
YWRtaW4uMTIzMQ==
           

The encoded username is YWRtaW4= and the encoded password is YWRtaW4uMTIzMQ==.

You can also decode a base64-encoded string; for example, decoding the password string looks like this.

[root@binghe101 k8s]# echo 'YWRtaW4uMTIzMQ==' | base64 --decode 
admin.1231
           

Next, create the secret-gitlab.yaml file, which is used to configure GitLab's username and password; its content is shown below.

apiVersion: v1
kind: Secret
metadata:
  namespace: k8s-ops
  name: git-user-pass
type: Opaque
data:
  username: YWRtaW4=
  password: YWRtaW4uMTIzMQ==
           

Apply the configuration file, as shown below.

kubectl create -f ./secret-gitlab.yaml
           

(2) Install GitLab

Create the gitlab.yaml file with the following content.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: gitlab
  namespace: k8s-ops
  labels:
    name: gitlab
spec:
  selector:
    matchLabels:
      name: gitlab
  template:
    metadata:
      name: gitlab
      labels:
        name: gitlab
    spec:
      containers:
      - name: gitlab
        image: sameersbn/gitlab:12.1.6
        imagePullPolicy: IfNotPresent
        env:
        - name: TZ
          value: Asia/Shanghai
        - name: GITLAB_TIMEZONE
          value: Beijing
        - name: GITLAB_SECRETS_DB_KEY_BASE
          value: long-and-random-alpha-numeric-string
        - name: GITLAB_SECRETS_SECRET_KEY_BASE
          value: long-and-random-alpha-numeric-string
        - name: GITLAB_SECRETS_OTP_KEY_BASE
          value: long-and-random-alpha-numeric-string
        - name: GITLAB_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: git-user-pass
              key: password
        - name: GITLAB_ROOT_EMAIL
          value: [email protected]
        - name: GITLAB_HOST
          value: gitlab.binghe.com
        - name: GITLAB_PORT
          value: "80"
        - name: GITLAB_SSH_PORT
          value: "30022"
        - name: GITLAB_NOTIFY_ON_BROKEN_BUILDS
          value: "true"
        - name: GITLAB_NOTIFY_PUSHER
          value: "false"
        - name: GITLAB_BACKUP_SCHEDULE
          value: daily
        - name: GITLAB_BACKUP_TIME
          value: 01:00
        - name: DB_TYPE
          value: postgres
        - name: DB_HOST
          value: postgresql
        - name: DB_PORT
          value: "5432"
        - name: DB_USER
          value: gitlab
        - name: DB_PASS
          value: passw0rd
        - name: DB_NAME
          value: gitlab_production
        - name: REDIS_HOST
          value: redis
        - name: REDIS_PORT
          value: "6379"
        ports:
        - name: http
          containerPort: 80
        - name: ssh
          containerPort: 22
        volumeMounts:
        - mountPath: /home/git/data
          name: data
        livenessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 180
          timeoutSeconds: 5
        readinessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5
          timeoutSeconds: 1
      volumes:
      - name: data
        hostPath:
          path: /data1/docker/xinsrv/gitlab
---
apiVersion: v1
kind: Service
metadata:
  name: gitlab
  namespace: k8s-ops
  labels:
    name: gitlab
spec:
  ports:
    - name: http
      port: 80
      nodePort: 30088
    - name: ssh
      port: 22
      targetPort: ssh
      nodePort: 30022
  type: NodePort
  selector:
    name: gitlab

---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gitlab
  namespace: k8s-ops
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: gitlab.binghe.com
    http:
      paths:
      - backend:
          serviceName: gitlab
          servicePort: http
           

Note: when configuring GitLab, the listening host cannot be an IP address; a hostname or domain name must be used. In the configuration above, the hostname gitlab.binghe.com is used.

Run the following command at the command line to create the /data1/docker/xinsrv/gitlab directory.

mkdir -p /data1/docker/xinsrv/gitlab
           

Install GitLab, as shown below.

kubectl apply -f gitlab.yaml
           

5. Installation complete

Check the deployments in the k8s-ops namespace, as shown below.

[root@binghe101 k8s]# kubectl get pod -n k8s-ops
NAME                          READY   STATUS    RESTARTS   AGE
gitlab-7b459db47c-5vk6t       0/1     Running   0          11s
postgresql-79567459d7-x52vx   1/1     Running   0          30m
redis-67f4cdc96c-h5ckz        1/1     Running   1          10h
           

You can also check with the following command.

[root@binghe101 k8s]# kubectl get pod --namespace=k8s-ops
NAME                          READY   STATUS    RESTARTS   AGE
gitlab-7b459db47c-5vk6t       0/1     Running   0          36s
postgresql-79567459d7-x52vx   1/1     Running   0          30m
redis-67f4cdc96c-h5ckz        1/1     Running   1          10h
           

The two commands have the same effect.

Next, check GitLab's port mapping, as shown below.

[root@binghe101 k8s]# kubectl get svc -n k8s-ops
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                     AGE
gitlab       NodePort    10.96.153.100   <none>        80:30088/TCP,22:30022/TCP   2m42s
postgresql   ClusterIP   10.96.203.119   <none>        5432/TCP                    32m
redis        ClusterIP   10.96.107.150   <none>        6379/TCP                    10h
           

As you can see, GitLab can now be accessed through the hostname of the Master node (binghe101), gitlab.binghe.com, and port 30088. Since I am using virtual machines for this environment, to access gitlab.binghe.com (mapped to the virtual machine) from the local machine, the local hosts file needs the following entry added.

192.168.175.101 gitlab.binghe.com
           

Note: on Windows, the hosts file is located in the following directory.

C:\Windows\System32\drivers\etc
           

Next, GitLab can be accessed in a browser at http://gitlab.binghe.com:30088, as shown below.


You can now log in to GitLab with the username root and the password admin.1231.

Note: the username here is root rather than admin, because root is GitLab's default superuser.


The page after logging in looks like this.


With that, the installation of GitLab on K8S is complete.

Installing the Harbor Private Registry

Note: Harbor is installed here on the Master node (the binghe101 server); in a real production environment it is recommended to install it on a separate server.

1. Download the Harbor offline installer

wget https://github.com/goharbor/harbor/releases/download/v1.10.2/harbor-offline-installer-v1.10.2.tgz
           

2. Extract the Harbor installation package

tar -zxvf harbor-offline-installer-v1.10.2.tgz
           

After successful extraction, a harbor directory is created in the current directory on the server.

3. Configure Harbor

Note: Harbor's port is changed to 1180 here; if you don't change it, the default port is 80.

(1) Modify the harbor.yml file

cd harbor
vim harbor.yml
           

The modified configuration items are shown below.

hostname: 192.168.175.101
http:
  port: 1180
harbor_admin_password: binghe123
### Also comment out https, otherwise the installation fails with: ERROR:root:Error: The protocol is https but attribute ssl_cert is not set
#https:
  #port: 443
  #certificate: /your/certificate/path
  #private_key: /your/private/key/path
           

(2) Modify the daemon.json file

Modify the /etc/docker/daemon.json file (create it if it does not exist) and add the following content to it.

[root@binghe~]# cat /etc/docker/daemon.json
{
  "registry-mirrors": ["https://zz3sblpi.mirror.aliyuncs.com"],
  "insecure-registries":["192.168.175.101:1180"]
}
           

You can also run the ip addr command on the server to list all of the machine's IP ranges and add them to the /etc/docker/daemon.json file. Here, my configured file looks like this.

{
    "registry-mirrors": ["https://zz3sblpi.mirror.aliyuncs.com"],
    "insecure-registries":["192.168.175.0/16","172.17.0.0/16", "172.18.0.0/16", "172.16.29.0/16", "192.168.175.101:1180"]
}
           
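Note that the insecure-registries setting only takes effect after the Docker daemon is restarted, so after editing daemon.json run the same restart commands used later in this article:

systemctl daemon-reload
systemctl restart docker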

4. Install and start Harbor

Once the configuration is done, run the following command to install and start Harbor.

[root@binghe harbor]# ./install.sh 
           

5. Log in to Harbor and add an account

After the installation succeeds, open http://192.168.175.101:1180 in a browser, as shown below.


Log in to the system with the username admin and the password binghe123, as shown below.


Next, go to user management and add an administrator account, in preparation for building and pushing Docker images later. The steps for adding the account are as follows.


The password entered here is Binghe123.

After clicking OK, the result looks like this.


At this point the binghe account is not yet an administrator; select the binghe account and click "Set as administrator".


The binghe account is now set as an administrator, and the Harbor installation is complete.

6. Changing the Harbor port

If you need to change Harbor's port after installation, follow the steps below; here, changing port 80 to 1180 is used as an example.

(1) Modify the harbor.yml file

cd harbor
vim harbor.yml
           

The modified configuration items are shown below.

hostname: 192.168.175.101
http:
  port: 1180
harbor_admin_password: binghe123
### Also comment out https, otherwise the installation fails with: ERROR:root:Error: The protocol is https but attribute ssl_cert is not set
#https:
  #port: 443
  #certificate: /your/certificate/path
  #private_key: /your/private/key/path
           

(2) Modify the docker-compose.yml file

vim docker-compose.yml
           

The modified configuration item is shown below.

ports:
      - 1180:80
           

(3) Modify the config.yml file

cd common/config/registry
vim config.yml
           

The modified configuration item is shown below.

realm: http://192.168.175.101:1180/service/token
           

(4) Restart Docker

systemctl daemon-reload
systemctl restart docker.service
           

(5) Restart Harbor

[root@binghe harbor]# docker-compose down
Stopping harbor-log ... done
Removing nginx             ... done
Removing harbor-portal     ... done
Removing harbor-jobservice ... done
Removing harbor-core       ... done
Removing redis             ... done
Removing registry          ... done
Removing registryctl       ... done
Removing harbor-db         ... done
Removing harbor-log        ... done
Removing network harbor_harbor
 
[root@binghe harbor]# ./prepare
prepare base dir is set to /mnt/harbor
Clearing the configuration file: /config/log/logrotate.conf
Clearing the configuration file: /config/nginx/nginx.conf
Clearing the configuration file: /config/core/env
Clearing the configuration file: /config/core/app.conf
Clearing the configuration file: /config/registry/root.crt
Clearing the configuration file: /config/registry/config.yml
Clearing the configuration file: /config/registryctl/env
Clearing the configuration file: /config/registryctl/config.yml
Clearing the configuration file: /config/db/env
Clearing the configuration file: /config/jobservice/env
Clearing the configuration file: /config/jobservice/config.yml
Generated configuration file: /config/log/logrotate.conf
Generated configuration file: /config/nginx/nginx.conf
Generated configuration file: /config/core/env
Generated configuration file: /config/core/app.conf
Generated configuration file: /config/registry/config.yml
Generated configuration file: /config/registryctl/env
Generated configuration file: /config/db/env
Generated configuration file: /config/jobservice/env
Generated configuration file: /config/jobservice/config.yml
loaded secret from file: /secret/keys/secretkey
Generated configuration file: /compose_location/docker-compose.yml
Clean up the input dir
 
[root@binghe harbor]# docker-compose up -d
Creating network "harbor_harbor" with the default driver
Creating harbor-log ... done
Creating harbor-db   ... done
Creating redis       ... done
Creating registry    ... done
Creating registryctl ... done
Creating harbor-core ... done
Creating harbor-jobservice ... done
Creating harbor-portal     ... done
Creating nginx             ... done
 
[root@binghe harbor]# docker ps -a
CONTAINER ID        IMAGE                                               COMMAND                  CREATED             STATUS                             PORTS
           

Installing Jenkins (the common approach)

1. Install nfs (skip this step if it was installed earlier)

The biggest problem with nfs is write permissions. You can use Kubernetes' securityContext/runAsUser to specify the uid of the user that runs Jenkins inside the Jenkins container, and set the permissions of the nfs directory accordingly so that the Jenkins container can write to it; alternatively, you can simply allow all users to write. For simplicity, all users are allowed to write here.

If nfs has already been installed, this step can be skipped. Pick a host and install nfs on it; here, nfs is installed on the Master node (the binghe101 server) as an example.

Enter the following commands at the command line to install and start nfs.

yum install nfs-utils -y
systemctl start nfs-server
systemctl enable nfs-server
           

2. Create the nfs shared directory

Create the /opt/nfs/jenkins-data directory on the Master node (the binghe101 server) as the nfs shared directory, as shown below.

mkdir -p /opt/nfs/jenkins-data
           

Next, edit the /etc/exports file, as shown below.

vim /etc/exports
           

Add the following line to the /etc/exports file.

/opt/nfs/jenkins-data 192.168.175.0/24(rw,all_squash)
           

The IP range here is the IP range of the kubernetes node machines. The all_squash option that follows maps every accessing user to the nfsnobody user: no matter which user accesses the share, it is squashed to nfsnobody, so as long as the owner of /opt/nfs/jenkins-data is changed to nfsnobody, any accessing user has write permission.

This option is very useful when processes on different machines are started by users with inconsistent uids but all need write access to the same shared directory.

Next, set the ownership of the /opt/nfs/jenkins-data directory and reload nfs, as shown below.

chown -R 1000 /opt/nfs/jenkins-data/
systemctl reload nfs-server
           

Verify on any node in the K8S cluster with the following command:

showmount -e NFS_IP
           

If /opt/nfs/jenkins-data is listed, everything is fine.

The actual output is shown below.

[root@binghe101 ~]# showmount -e 192.168.175.101
Export list for 192.168.175.101:
/opt/nfs/jenkins-data 192.168.175.0/24

[root@binghe102 ~]# showmount -e 192.168.175.101
Export list for 192.168.175.101:
/opt/nfs/jenkins-data 192.168.175.0/24
           

3. Create the PV

Jenkins only needs to mount the corresponding directory to read its previous data, but since a deployment cannot define storage volumes, we can only use a StatefulSet.

First, create the pv. The pv is used by the StatefulSet: every time the StatefulSet starts, it creates a pvc from the volumeClaimTemplates template, so a pv must exist for the pvc to bind to.

Create the jenkins-pv.yaml file with the following content.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins
spec:
  nfs:
    path: /opt/nfs/jenkins-data
    server: 192.168.175.101
  accessModes: ["ReadWriteOnce"]
  capacity:
    storage: 1Ti
           

1Ti of storage is allocated here; adjust it according to your actual setup.

Run the following command to create the pv.

kubectl apply -f jenkins-pv.yaml 
           

4. Create the ServiceAccount

Create a service account; because Jenkins later needs to be able to create slaves dynamically, it must have certain permissions.

Create the jenkins-service-account.yaml file with the following content.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins

---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: jenkins
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["create", "delete", "get", "list", "patch", "update", "watch"]
  - apiGroups: [""]
    resources: ["pods/exec"]
    verbs: ["create", "delete", "get", "list", "patch", "update", "watch"]
  - apiGroups: [""]
    resources: ["pods/log"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: jenkins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins
subjects:
  - kind: ServiceAccount
    name: jenkins
           

The configuration above creates a RoleBinding and a ServiceAccount, and the RoleBinding's permissions are bound to this user. Therefore, the Jenkins container must run with this ServiceAccount, otherwise it will not have the permissions granted by the RoleBinding.

The RoleBinding's permissions are easy to understand: Jenkins needs them because it creates and deletes slaves. As for the secrets permission, that is for the https certificates.

Run the following command to create the ServiceAccount.

kubectl apply -f jenkins-service-account.yaml 
           

5. Install Jenkins

Create the jenkins-statefulset.yaml file with the following content.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: jenkins
  labels:
    name: jenkins
spec:
  selector:
    matchLabels:
      name: jenkins
  serviceName: jenkins
  replicas: 1
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      name: jenkins
      labels:
        name: jenkins
    spec:
      terminationGracePeriodSeconds: 10
      serviceAccountName: jenkins
      containers:
        - name: jenkins
          image: docker.io/jenkins/jenkins:lts
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
            - containerPort: 32100
          resources:
            limits:
              cpu: 4
              memory: 4Gi
            requests:
              cpu: 4
              memory: 4Gi
          env:
            - name: LIMITS_MEMORY
              valueFrom:
                resourceFieldRef:
                  resource: limits.memory
                  divisor: 1Mi
            - name: JAVA_OPTS
              # value: -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=1 -XshowSettings:vm -Dhudson.slaves.NodeProvisioner.initialDelay=0 -Dhudson.slaves.NodeProvisioner.MARGIN=50 -Dhudson.slaves.NodeProvisioner.MARGIN0=0.85
              value: -Xmx$(LIMITS_MEMORY)m -XshowSettings:vm -Dhudson.slaves.NodeProvisioner.initialDelay=0 -Dhudson.slaves.NodeProvisioner.MARGIN=50 -Dhudson.slaves.NodeProvisioner.MARGIN0=0.85
          volumeMounts:
            - name: jenkins-home
              mountPath: /var/jenkins_home
          livenessProbe:
            httpGet:
              path: /login
              port: 8080
            initialDelaySeconds: 60
            timeoutSeconds: 5
            failureThreshold: 12 # ~2 minutes
          readinessProbe:
            httpGet:
              path: /login
              port: 8080
            initialDelaySeconds: 60
            timeoutSeconds: 5
            failureThreshold: 12 # ~2 minutes
  # pvc template, corresponding to the pv created earlier
  volumeClaimTemplates:
    - metadata:
        name: jenkins-home
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Ti
           

When deploying Jenkins, pay attention to its replica count: you need as many pvs as there are replicas, and storage consumption is multiplied accordingly. Only one replica is used here, so only one pv was created earlier.

Install Jenkins with the following command.

kubectl apply -f jenkins-statefulset.yaml 
           

6. Create the Service

Create the jenkins-service.yaml file with the following content.

apiVersion: v1
kind: Service
metadata:
  name: jenkins
spec:
  # type: LoadBalancer
  selector:
    name: jenkins
  # ensure the client ip is propagated to avoid the invalid crumb issue when using LoadBalancer (k8s >=1.7)
  #externalTrafficPolicy: Local
  ports:
    - name: http
      port: 80
      nodePort: 31888
      targetPort: 8080
      protocol: TCP
    - name: jenkins-agent
      port: 32100
      nodePort: 32100
      targetPort: 32100
      protocol: TCP
  type: NodePort
           

Install the Service with the following command.

kubectl apply -f jenkins-service.yaml 
           

7. Install the ingress

Jenkins' web UI needs to be accessible from outside the cluster, and an ingress is used for that here. Create the jenkins-ingress.yaml file with the following content.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: jenkins
spec:
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: jenkins
              servicePort: 31888
      host: jekins.binghe.com
           

Note that the host must be configured as a domain name or hostname, otherwise an error like the following is reported.

The Ingress "jenkins" is invalid: spec.rules[0].host: Invalid value: "192.168.175.101": must be a DNS name, not an IP address
           

Install the ingress with the following command.

kubectl apply -f jenkins-ingress.yaml 
           

Finally, since I am using virtual machines for this environment, to access jekins.binghe.com (mapped to the virtual machine) from the local machine, the local hosts file needs the following entry added.

192.168.175.101 jekins.binghe.com
           

Note: on Windows, the hosts file is located in the following directory.

C:\Windows\System32\drivers\etc
           

Jenkins can then be accessed in a browser at http://jekins.binghe.com:31888.

Installing SVN on the Physical Machine

Here, installing SVN on the Master node (the binghe101 server) is used as an example.

1. Install SVN with yum

Run the following command at the command line to install SVN.

yum -y install subversion 
           

2. Create the SVN repository

Run the following commands in order.

# Create /data/svn
mkdir -p /data/svn 
# Initialize svn
svnserve -d -r /data/svn
# Create the code repository
svnadmin create /data/svn/test
           

3. Configure SVN

mkdir /data/svn/conf
cp /data/svn/test/conf/* /data/svn/conf/
cd /data/svn/conf/
[root@binghe101 conf]# ll
total 20
-rw-r--r-- 1 root root 1080 May 12 02:17 authz
-rw-r--r-- 1 root root  885 May 12 02:17 hooks-env.tmpl
-rw-r--r-- 1 root root  309 May 12 02:17 passwd
-rw-r--r-- 1 root root 4375 May 12 02:17 svnserve.conf
           
  • Configure the authz file:
vim authz
           

The configured content is shown below.

[aliases]
# joe = /C=XZ/ST=Dessert/L=Snake City/O=Snake Oil, Ltd./OU=Research Institute/CN=Joe Average

[groups]
# harry_and_sally = harry,sally
# harry_sally_and_joe = harry,sally,&joe
SuperAdmin = admin
binghe = admin,binghe

# [/foo/bar]
# harry = rw
# &joe = r
# * =

# [repository:/baz/fuz]
# @harry_and_sally = rw
# * = r

[test:/]
@SuperAdmin=rw
@binghe=rw
           
  • Configure the passwd file:
vim passwd
           

The configured content is shown below.

[users]
# harry = harryssecret
# sally = sallyssecret
admin = admin123
binghe = binghe123
           
  • Configure svnserve.conf:
vim svnserve.conf
           

The configured file is shown below.

### This file controls the configuration of the svnserve daemon, if you
### use it to allow access to this repository.  (If you only allow
### access through http: and/or file: URLs, then this file is
### irrelevant.)

### Visit http://subversion.apache.org/ for more information.

[general]
### The anon-access and auth-access options control access to the
### repository for unauthenticated (a.k.a. anonymous) users and
### authenticated users, respectively.
### Valid values are "write", "read", and "none".
### Setting the value to "none" prohibits both reading and writing;
### "read" allows read-only access, and "write" allows complete 
### read/write access to the repository.
### The sample settings below are the defaults and specify that anonymous
### users have read-only access to the repository, while authenticated
### users have read and write access to the repository.
anon-access = none
auth-access = write
### The password-db option controls the location of the password
### database file.  Unless you specify a path starting with a /,
### the file's location is relative to the directory containing
### this configuration file.
### If SASL is enabled (see below), this file will NOT be used.
### Uncomment the line below to use the default password file.
password-db = /data/svn/conf/passwd
### The authz-db option controls the location of the authorization
### rules for path-based access control.  Unless you specify a path
### starting with a /, the file's location is relative to the
### directory containing this file.  The specified path may be a
### repository relative URL (^/) or an absolute file:// URL to a text
### file in a Subversion repository.  If you don't specify an authz-db,
### no path-based access control is done.
### Uncomment the line below to use the default authorization file.
authz-db = /data/svn/conf/authz
### The groups-db option controls the location of the file with the
### group definitions and allows maintaining groups separately from the
### authorization rules.  The groups-db file is of the same format as the
### authz-db file and should contain a single [groups] section with the
### group definitions.  If the option is enabled, the authz-db file cannot
### contain a [groups] section.  Unless you specify a path starting with
### a /, the file's location is relative to the directory containing this
### file.  The specified path may be a repository relative URL (^/) or an
### absolute file:// URL to a text file in a Subversion repository.
### This option is not being used by default.
# groups-db = groups
### This option specifies the authentication realm of the repository.
### If two repositories have the same authentication realm, they should
### have the same password database, and vice versa.  The default realm
### is repository's uuid.
realm = svn
### The force-username-case option causes svnserve to case-normalize
### usernames before comparing them against the authorization rules in the
### authz-db file configured above.  Valid values are "upper" (to upper-
### case the usernames), "lower" (to lowercase the usernames), and
### "none" (to compare usernames as-is without case conversion, which
### is the default behavior).
# force-username-case = none
### The hooks-env options specifies a path to the hook script environment 
### configuration file. This option overrides the per-repository default
### and can be used to configure the hook script environment for multiple 
### repositories in a single file, if an absolute path is specified.
### Unless you specify an absolute path, the file's location is relative
### to the directory containing this file.
# hooks-env = hooks-env

[sasl]
### This option specifies whether you want to use the Cyrus SASL
### library for authentication. Default is false.
### Enabling this option requires svnserve to have been built with Cyrus
### SASL support; to check, run 'svnserve --version' and look for a line
### reading 'Cyrus SASL authentication is available.'
# use-sasl = true
### These options specify the desired strength of the security layer
### that you want SASL to provide. 0 means no encryption, 1 means
### integrity-checking only, values larger than 1 are correlated
### to the effective key length for encryption (e.g. 128 means 128-bit
### encryption). The values below are the defaults.
# min-encryption = 0
# max-encryption = 256
           

接下來,将/data/svn/conf目錄下的svnserve.conf檔案複制到/data/svn/test/conf/目錄下。如下所示。

[root@binghe101 conf]# cp /data/svn/conf/svnserve.conf /data/svn/test/conf/
cp: overwrite '/data/svn/test/conf/svnserve.conf'? y
           

4. Start the SVN service

(1) Create the svnserve.service service

Create the svnserve.service file.

vim /usr/lib/systemd/system/svnserve.service
           

The file content is shown below.

[Unit]
Description=Subversion protocol daemon
After=syslog.target network.target
Documentation=man:svnserve(8)

[Service]
Type=forking
EnvironmentFile=/etc/sysconfig/svnserve
#ExecStart=/usr/bin/svnserve --daemon --pid-file=/run/svnserve/svnserve.pid $OPTIONS
ExecStart=/usr/bin/svnserve --daemon $OPTIONS
PrivateTmp=yes

[Install]
WantedBy=multi-user.target
           

Then run the following command to make the configuration take effect.

systemctl daemon-reload
           

After the command succeeds, modify the /etc/sysconfig/svnserve file.

vim /etc/sysconfig/svnserve 
           

The modified file content is shown below.

# OPTIONS is used to pass command-line arguments to svnserve.
# 
# Specify the repository location in -r parameter:
OPTIONS="-r /data/svn"
           

(2) Start SVN

First, check the SVN status, as shown below.

[root@itence10 conf]# systemctl status svnserve.service
● svnserve.service - Subversion protocol daemon
   Loaded: loaded (/usr/lib/systemd/system/svnserve.service; disabled; vendor preset: disabled)
   Active: inactive (dead)
     Docs: man:svnserve(8)
           

As you can see, SVN has not been started yet; next, start it.

systemctl start svnserve.service
           

Set the SVN service to start on boot.

systemctl enable svnserve.service
           

Next, you can download and install TortoiseSVN, enter the URL svn://192.168.175.101/test, and connect to SVN with the username binghe and the password binghe123.

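The repository can also be checked out from the command line as a quick test (a hedged example, assuming svnserve is running on the Master node binghe101 as configured above):

svn checkout svn://192.168.175.101/test --username binghe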
Installing Jenkins on the Physical Machine

Note: the JDK and Maven must be installed before installing Jenkins; here, Jenkins is likewise installed on the Master node (the binghe101 server).

1. Enable the Jenkins repository

Run the following commands to download the repo file and import the GPG key:

wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins-ci.org/redhat-stable/jenkins.repo
rpm --import https://jenkins-ci.org/redhat/jenkins-ci.org.key
           

2. Install Jenkins

Run the following command to install Jenkins.

yum install jenkins
           

Next, change Jenkins' default port, as shown below.

vim /etc/sysconfig/jenkins
           

The two modified configuration items are shown below.

JENKINS_JAVA_CMD="/usr/local/jdk1.8.0_212/bin/java"
JENKINS_PORT="18080"
           

此時,已經将Jenkins的端口由8080修改為18080

3.啟動Jenkins

在指令行輸入如下指令啟動Jenkins。

systemctl start jenkins
           

Configure Jenkins to start on boot.

systemctl enable jenkins
           

Check Jenkins' running status.

[root@itence10 ~]# systemctl status jenkins
● jenkins.service - LSB: Jenkins Automation Server
   Loaded: loaded (/etc/rc.d/init.d/jenkins; generated)
   Active: active (running) since Tue 2020-05-12 04:33:40 EDT; 28s ago
     Docs: man:systemd-sysv-generator(8)
    Tasks: 71 (limit: 26213)
   Memory: 550.8M
           

This shows that Jenkins started successfully.

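You can also confirm that Jenkins is listening on the new port (a quick check, assuming the port was changed to 18080 as above):

ss -lntp | grep 18080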
Configuring the Jenkins Runtime Environment

1. Log in to Jenkins

After the first installation, Jenkins' runtime environment needs to be configured. First, open the Jenkins page by visiting http://192.168.175.101:18080 in a browser.


Following the prompt, use the command below on the server to find the password value, as shown below.

[root@binghe101 ~]# cat /var/lib/jenkins/secrets/initialAdminPassword
71af861c2ab948a1b6efc9f7dde90776
           

将密碼71af861c2ab948a1b6efc9f7dde90776複制到文本框,點選繼續。會跳轉到自定義Jenkins頁面,如下所示。


Here you can simply choose "Install suggested plugins". You will then be taken to a plugin installation page, as shown below.


Some downloads may fail during this step; this can simply be ignored.

2. Install plugins

Plugins that need to be installed:

  • Kubernetes Cli Plugin: allows kubernetes command-line operations to be run directly in Jenkins.
  • Kubernetes plugin: required in order to use kubernetes.
  • Kubernetes Continuous Deploy Plugin: a kubernetes deployment plugin; use it as needed.

There are more plugins to choose from; they can be managed and added via Manage Jenkins -> Manage Plugins, where the corresponding Docker, SSH, and Maven plugins should also be installed. Other plugins can be installed as needed, as shown below.


3. Configure Jenkins

(1) Configure the JDK and Maven

Configure the JDK and Maven in Global Tool Configuration; open the Global Tool Configuration page as shown below.


Now the JDK and Maven can be configured.

Since Maven is installed in the /usr/local/apache-maven-3.6.3 directory on the server, it needs to be configured in "Maven Configuration", as shown below.


Next, configure the JDK, as shown below.


Note: do not check "Install automatically".

Next, configure Maven, as shown below.


Note: do not check "Install automatically".

(2) Configure SSH

Go to Jenkins' Configure System page to configure SSH, as shown below.


Find SSH remote hosts and configure it.


After the configuration is done, click the Check connection button and "Successfull connection" is displayed, as shown below.


With that, the basic configuration of Jenkins is complete.

Final Words

If you find this article helpful, search WeChat for and follow the official account "冰河技術" to learn all kinds of programming techniques with Binghe.

Finally, here is a link to a comprehensive K8S knowledge map:

https://www.processon.com/view/link/5ac64532e4b00dc8a02f05eb?spm=a2c4e.10696291.0.0.6ec019a4bYSFIw#map

I hope it helps you take fewer detours when learning K8S.