I. Cluster Environment Planning and Configuration
Do not use a one-master, multi-worker topology in production; use multiple masters with multiple workers. Here we test with three hosts: one Master (172.16.20.111) and two Nodes (172.16.20.112 and 172.16.20.113).
1. Set the hostnames
After installing CentOS 7, configure a static IP. Apply the same settings on all three hosts:
vi /etc/sysconfig/network-scripts/ifcfg-ens33
#at the bottom, change ONBOOT to yes and add a static address IPADDR (172.16.20.111, 172.16.20.112, 172.16.20.113 respectively)
ONBOOT=yes
IPADDR=172.16.20.111
Once the IPs are set on all three hosts, edit the hosts file and assign the hostnames:
#run on the master machine
hostnamectl set-hostname master
#run on node1
hostnamectl set-hostname node1
#run on node2
hostnamectl set-hostname node2
vi /etc/hosts
172.16.20.111 master
172.16.20.112 node1
172.16.20.113 node2
2. Time synchronization
Start the chronyd service:
systemctl start chronyd
Enable it at boot:
systemctl enable chronyd
Verify:
date
3. Disable firewalld and iptables (test environments only)
systemctl stop firewalld
systemctl disable firewalld
systemctl stop iptables
systemctl disable iptables
4. Disable SELinux (the config change takes effect after a reboot; setenforce 0 disables it for the current session)
vi /etc/selinux/config
SELINUX=disabled
5. Disable the swap partition
Comment out the /dev/mapper/centos-swap swap line:
vi /etc/fstab
# comment out
# /dev/mapper/centos-swap swap
The fstab change takes effect after a reboot; swapoff -a disables swap immediately for the current session.
6. Adjust the Linux kernel parameters
vi /etc/sysctl.d/kubernetes.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
#reload the configuration (plain sysctl -p only reads /etc/sysctl.conf, so pass the new file explicitly)
sysctl -p /etc/sysctl.d/kubernetes.conf
#load the bridge netfilter module
modprobe br_netfilter
#check that the bridge netfilter module is loaded
lsmod | grep br_netfilter
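modprobe only loads the module until the next reboot. As a sketch (the file path is an assumption based on systemd's modules-load mechanism on CentOS 7), the module can be loaded automatically at every boot:

```shell
# Load br_netfilter on each boot so the bridge sysctls above keep applying after a reboot
cat <<EOF > /etc/modules-load.d/br_netfilter.conf
br_netfilter
EOF
```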
7. Configure IPVS
Install ipset and ipvsadm:
yum install ipset ipvsadm -y
Add the modules that need to be loaded (run the whole block):
cat <<EOF> /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
Grant execute permission:
chmod +x /etc/sysconfig/modules/ipvs.modules
Run the script:
/bin/bash /etc/sysconfig/modules/ipvs.modules
Check that the modules loaded (note: each -e takes the pattern itself):
lsmod | grep -e ip_vs -e nf_conntrack_ipv4
After completing all of the settings above, be sure to reboot so they take effect:
reboot
II. Docker Installation and Configuration
1. Install dependencies
Docker relies on a few necessary system tools:
yum install -y yum-utils device-mapper-persistent-data lvm2
2. Add the package repository
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum clean all
yum makecache fast
3. Install docker-ce
#list the docker versions available to install
yum list docker-ce --showduplicates
#pick the version you need; to install the latest directly, run yum -y install docker-ce
yum install --setopt=obsoletes=0 docker-ce-19.03.13-3.el7 -y
4. Start the service
#start the service via systemctl
systemctl start docker
#enable start at boot via systemctl
systemctl enable docker
5. Check the installed version
After starting the service, check the current version with docker version:
docker version
6. Configure a registry mirror
Speed up image pulls by editing the daemon config file /etc/docker/daemon.json. If you use k8s, be sure to set "exec-opts": ["native.cgroupdriver=systemd"]. The "insecure-registries" : ["172.16.20.175"] entry lets us pull from our Harbor registry over plain HTTP.
vi /etc/docker/daemon.json
{
    "exec-opts": ["native.cgroupdriver=systemd"],
    "log-driver": "json-file",
    "log-opts": {
        "max-size": "100m"
    },
    "registry-mirrors": ["https://eiov0s1n.mirror.aliyuncs.com"],
    "insecure-registries" : ["172.16.20.175"]
}
sudo systemctl daemon-reload && sudo systemctl restart docker
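As a quick sanity check (a sketch; it assumes the Docker daemon is up and has re-read daemon.json), you can confirm the cgroup driver actually switched:

```shell
docker info --format '{{.CgroupDriver}}'
```

If the daemon picked up the new configuration, this prints systemd rather than the default cgroupfs.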
7. Install docker-compose
If the network is too slow, download the matching release from https://github.com/docker/compose/releases instead, and upload it to the server's /usr/local/bin/ directory.
sudo curl -L "https://github.com/docker/compose/releases/download/v2.0.1/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
Note (optional): enabling remote access to Docker. This is not required and must not be enabled in production; once enabled, a development machine can connect to the Docker daemon directly.
vi /lib/systemd/system/docker.service
Modify ExecStart by appending -H tcp://0.0.0.0:2375
ExecStart=/usr/bin/dockerd -H fd:// -H tcp://0.0.0.0:2375 --containerd=/run/containerd/containerd.sock
After the change, run:
systemctl daemon-reload && service docker restart
Test whether it is reachable:
curl http://localhost:2375/version
III. Harbor Private Registry Installation and Configuration (set up a separate server, 172.16.20.175; do not put it on the K8S master or node servers)
Following the earlier steps, Docker must first be installed on that machine before Harbor can be installed.
1. Pick a suitable release to download from:
https://github.com/goharbor/harbor/releases
2. Extract
tar -zxf harbor-offline-installer-v2.2.4.tgz
3. Configure
cd harbor
mv harbor.yml.tmpl harbor.yml
vi harbor.yml
4. Change hostname to this server's address and comment out the https section.
......
# The IP address or hostname to access admin UI and registry service.
# DO NOT use localhost or 127.0.0.1, because Harbor needs to be accessed by external clients.
hostname: 172.16.20.175
# http related config
http:
  # port for http, default is 80. If https enabled, this port will redirect to https port
  port: 80

# https related config
# https:
#   # https port for harbor, default is 443
#   port: 443
#   # The path of cert and key files for nginx
#   certificate: /your/certificate/path
#   private_key: /your/private/key/path
......
5. Run the installer
mkdir /var/log/harbor/
./install.sh
6. Check that the installation succeeded
[root@localhost harbor]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
de1b702759e7 goharbor/harbor-jobservice:v2.2.4 "/harbor/entrypoint.…" 13 seconds ago Up 9 seconds (health: starting) harbor-jobservice
55b465d07157 goharbor/nginx-photon:v2.2.4 "nginx -g 'daemon of…" 13 seconds ago Up 9 seconds (health: starting) 0.0.0.0:80->8080/tcp, :::80->8080/tcp nginx
d52f5557fa73 goharbor/harbor-core:v2.2.4 "/harbor/entrypoint.…" 13 seconds ago Up 10 seconds (health: starting) harbor-core
4ba09aded494 goharbor/harbor-db:v2.2.4 "/docker-entrypoint.…" 13 seconds ago Up 11 seconds (health: starting) harbor-db
647f6f46e029 goharbor/harbor-portal:v2.2.4 "nginx -g 'daemon of…" 13 seconds ago Up 11 seconds (health: starting) harbor-portal
70251c4e234f goharbor/redis-photon:v2.2.4 "redis-server /etc/r…" 13 seconds ago Up 11 seconds (health: starting) redis
21a5c408afff goharbor/harbor-registryctl:v2.2.4 "/home/harbor/start.…" 13 seconds ago Up 11 seconds (health: starting) registryctl
b0937800f88b goharbor/registry-photon:v2.2.4 "/home/harbor/entryp…" 13 seconds ago Up 11 seconds (health: starting) registry
d899e377e02b goharbor/harbor-log:v2.2.4 "/bin/sh -c /usr/loc…" 13 seconds ago Up 12 seconds (health: starting) 127.0.0.1:1514->10514/tcp harbor-log
7. Harbor start and stop commands
docker-compose down     #stop
docker-compose up -d    #start
8. Open the Harbor admin console at the hostname configured above, http://172.16.20.175 (default username/password: admin/Harbor12345):
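To verify end to end that the registry accepts pushes, a hedged sketch (the project name gitegg and the tag are assumptions; the target project must already exist in Harbor, and the client's Docker daemon must list 172.16.20.175 under insecure-registries):

```shell
# Log in with the default admin account, tag a local image, and push it
docker login 172.16.20.175 -u admin -p Harbor12345
docker pull nginx:latest
docker tag nginx:latest 172.16.20.175/gitegg/nginx:test
docker push 172.16.20.175/gitegg/nginx:test
```

After the push, the image should be visible under the project in the Harbor UI.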
IV. Kubernetes Installation and Configuration
1. Switch the package mirror
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
2. Install kubeadm, kubelet and kubectl
yum install -y kubelet kubeadm kubectl
3. Configure the kubelet cgroup driver
vi /etc/sysconfig/kubelet
KUBELET_CGROUP_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE="ipvs"
4. Start kubelet and enable it at boot
systemctl start kubelet && systemctl enable kubelet
5. Initialize the k8s cluster (Master only)
Initialize:
kubeadm init --kubernetes-version=v1.22.3 \
--apiserver-advertise-address=172.16.20.111 \
--image-repository registry.aliyuncs.com/google_containers \
--service-cidr=10.20.0.0/16 --pod-network-cidr=10.222.0.0/16
Create the required files:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
6. Join the cluster (Node machines only)
On the Node machines (172.16.20.112 and 172.16.20.113), run the join command printed after the successful init in the previous step:
kubeadm join 172.16.20.111:6443 --token fgf380.einr7if1eb838mpe \
--discovery-token-ca-cert-hash sha256:fa5a6a2ff8996b09effbf599aac70505b49f35c5bca610d6b5511886383878f7
Check the cluster status on the Master:
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master NotReady control-plane,master 2m54s v1.22.3
node1 NotReady <none> 68s v1.22.3
node2 NotReady <none> 30s v1.22.3
7. Install the network plugin (Master only)
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Mirror acceleration: edit kube-flannel.yml and change quay.io/coreos/flannel:v0.15.0 to quay.mirrors.ustc.edu.cn/coreos/flannel:v0.15.0
Run the install:
kubectl apply -f kube-flannel.yml
再次檢視叢集狀态,(需要等待一段時間大概1-2分鐘)發現STATUS都是Ready。
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready control-plane,master 42m v1.22.3
node1 Ready <none> 40m v1.22.3
node2 Ready <none> 39m v1.22.3
8. Test the cluster
Deploy an nginx service with kubectl:
kubectl create deployment nginx --image=nginx --replicas=1
kubectl expose deploy nginx --port=80 --target-port=80 --type=NodePort
Check the services:
[root@master ~]# kubectl get pod,svc
NAME READY STATUS RESTARTS AGE
pod/nginx-6799fc88d8-z5tm8 1/1 Running 0 26s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.20.0.1 <none> 443/TCP 68m
service/nginx NodePort 10.20.17.199 <none> 80:32605/TCP 9s
The service/nginx PORT(S) column shows 80:32605/TCP, so open port 32605 on the master and node addresses in a browser to check that nginx is running:
http://172.16.20.111:32605/
http://172.16.20.112:32605/
http://172.16.20.113:32605/
On success, the nginx welcome page is displayed.
9. Install the Kubernetes Dashboard management UI
Kubernetes can be operated entirely through the kubectl command line, but it also provides a convenient management UI: with Kubernetes Dashboard, users can deploy containerized applications, monitor application state, troubleshoot, and manage the various Kubernetes resources.
1. Download the install manifest recommended.yaml; note the Kubernetes / Kubernetes Dashboard version compatibility listed at https://github.com/kubernetes/dashboard/releases.
# download
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml
2. Edit the configuration: under the Service, add type: NodePort and nodePort: 30010
vi recommended.yaml
......
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  # added
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      # added
      nodePort: 30010
......
Comment out the following lines, otherwise it cannot be installed onto the master server:
# Comment the following tolerations if Dashboard must not be deployed on master
#tolerations:
#  - key: node-role.kubernetes.io/master
#    effect: NoSchedule
Add nodeName: master so it is deployed onto the master server:
......
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      nodeName: master
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.4.0
          imagePullPolicy: Always
......
3. Run the install and deploy command
kubectl apply -f recommended.yaml
4. Check the running state; service/kubernetes-dashboard is up, exposed on port 30010
[root@master ~]# kubectl get pod,svc -n kubernetes-dashboard
NAME READY STATUS RESTARTS AGE
pod/dashboard-metrics-scraper-c45b7869d-6k87n 0/1 ContainerCreating 0 10s
pod/kubernetes-dashboard-576cb95f94-zfvc9 0/1 ContainerCreating 0 10s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/dashboard-metrics-scraper ClusterIP 10.20.222.83 <none> 8000/TCP 10s
service/kubernetes-dashboard NodePort 10.20.201.182 <none> 443:30010/TCP 10s
5. Create an account for accessing the Kubernetes Dashboard
kubectl create serviceaccount dashboard-admin -n kubernetes-dashboard
kubectl create clusterrolebinding dashboard-admin-rb --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:dashboard-admin
6. Look up the Kubernetes Dashboard access token
[root@master ~]# kubectl get secrets -n kubernetes-dashboard | grep dashboard-admin
dashboard-admin-token-84gg6 kubernetes.io/service-account-token 3 64s
[root@master ~]# kubectl describe secrets dashboard-admin-token-84gg6 -n kubernetes-dashboard
Name: dashboard-admin-token-84gg6
Namespace: kubernetes-dashboard
Labels: <none>
Annotations: kubernetes.io/service-account.name: dashboard-admin
kubernetes.io/service-account.uid: 2d93a589-6b0b-4ed6-adc3-9a2eeb5d1311
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1099 bytes
namespace: 20 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6ImRmbVVfRy15QzdfUUF4ZmFuREZMc3dvd0IxQ3ItZm5SdHVZRVhXV3JpZGcifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tODRnZzYiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMmQ5M2E1ODktNmIwYi00ZWQ2LWFkYzMtOWEyZWViNWQxMzExIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmRhc2hib2FyZC1hZG1pbiJ9.xsDBLeZdn7IO0Btpb4LlCD1RQ2VYsXXPa-bir91VXIqRrL1BewYAyFfZtxU-8peU8KebaJiRIaUeF813x6WbGG9QKynL1fTARN5XoH-arkBTVlcjHQ5GBziLDE-KU255veVqORF7J5XtB38Ke2n2pi8tnnUUS_bIJpMTF1s-hV0aLlqUzt3PauPmDshtoerz4iafWK0u9oWBASQDPPoE8IWYU1KmSkUNtoGzf0c9vpdlUw4j0UZE4-zSoMF_XkrfQDLD32LrG56Wgpr6E8SeipKRfgXvx7ExD54b8Lq9DyAltr_nQVvRicIEiQGdbeCu9dwzGyhg-cDucULTx7TUgA
7. Open the Kubernetes Dashboard in a browser. Note that it must be HTTPS: https://172.16.20.111:30010. After entering the token and logging in, you land in the management UI, where the earlier command-line operations can now be performed.
V. GitLab Installation and Configuration
GitLab is a Git repository service that can be deployed on-premises. This section shows how to install and use it: during development we push code to this local repository, and Jenkins pulls code from it to build and deploy.
1. Download the package you need from https://packages.gitlab.com/gitlab/gitlab-ce/ . We download the latest gitlab-ce-14.4.1-ce.0.el7.x86_64.rpm here; for real projects, pick a stable release that matches your needs.
2. Clicking the version to install shows the installation commands; run them as prompted:
curl -s https://packages.gitlab.com/install/repositories/gitlab/gitlab-ce/script.rpm.sh | sudo bash
sudo yum install gitlab-ce-14.4.1-ce.0.el7.x86_64
3. Configure and start GitLab
gitlab-ctl reconfigure
4. Check GitLab status
gitlab-ctl status
5. Set the initial login password
cd /opt/gitlab/bin
sudo ./gitlab-rails console
# once inside the console, run
u=User.where(id:1).first
u.password='root1234'
u.password_confirmation='root1234'
u.save!
quit
6. Open the server address in a browser; GitLab uses port 80 by default, so browse to it directly and log in with root and the password set above (root/root1234).
7. Set the UI language
User Settings ----> Preferences ----> Language ----> 簡體中文 ----> refresh the page
8. Common GitLab commands
gitlab-ctl stop
gitlab-ctl start
gitlab-ctl restart
VI. Installing Jenkins + Sonar (code quality checks) with Docker
In real projects, set up a dedicated ops server for the Spring Cloud work instead of installing onto the Kubernetes servers. Install docker and docker-compose on it by the same steps as above, then build Jenkins and Sonar with docker-compose.
1. Create the host mount directories and grant permissions
mkdir -p /data/docker/ci/nexus /data/docker/ci/jenkins/lib /data/docker/ci/jenkins/home /data/docker/ci/sonarqube /data/docker/ci/postgresql
chmod -R 777 /data/docker/ci/nexus /data/docker/ci/jenkins/lib /data/docker/ci/jenkins/home /data/docker/ci/sonarqube /data/docker/ci/postgresql
2. Create the Jenkins+Sonar compose file jenkins-compose.yml. Jenkins here uses jenkinsci/blueocean, the image recommended by Docker; in practice it can download plugins even without switching the plugin mirror, which is why it is the recommended image.
version: '3'

networks:
  prodnetwork:
    driver: bridge

services:
  sonardb:
    image: postgres:12.2
    restart: always
    ports:
      - "5433:5432"
    networks:
      - prodnetwork
    volumes:
      - /data/docker/ci/postgresql:/var/lib/postgresql
    environment:
      - POSTGRES_USER=sonar
      - POSTGRES_PASSWORD=sonar
  sonar:
    image: sonarqube:8.2-community
    restart: always
    ports:
      - "19000:9000"
      - "19092:9092"
    networks:
      - prodnetwork
    depends_on:
      - sonardb
    volumes:
      - /data/docker/ci/sonarqube/conf:/opt/sonarqube/conf
      - /data/docker/ci/sonarqube/data:/opt/sonarqube/data
      - /data/docker/ci/sonarqube/logs:/opt/sonarqube/logs
      - /data/docker/ci/sonarqube/extension:/opt/sonarqube/extensions
      - /data/docker/ci/sonarqube/bundled-plugins:/opt/sonarqube/lib/bundled-plugins
    environment:
      - TZ=Asia/Shanghai
      - SONARQUBE_JDBC_URL=jdbc:postgresql://sonardb:5432/sonar
      - SONARQUBE_JDBC_USERNAME=sonar
      - SONARQUBE_JDBC_PASSWORD=sonar
  nexus:
    image: sonatype/nexus3
    restart: always
    ports:
      - "18081:8081"
    networks:
      - prodnetwork
    volumes:
      - /data/docker/ci/nexus:/nexus-data
  jenkins:
    image: jenkinsci/blueocean
    user: root
    restart: always
    ports:
      - "18080:8080"
    networks:
      - prodnetwork
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /etc/localtime:/etc/localtime:ro
      - $HOME/.ssh:/root/.ssh
      - /data/docker/ci/jenkins/lib:/var/lib/jenkins/
      - /usr/bin/docker:/usr/bin/docker
      - /data/docker/ci/jenkins/home:/var/jenkins_home
    depends_on:
      - nexus
      - sonar
    environment:
      - NEXUS_PORT=8081
      - SONAR_PORT=9000
      - SONAR_DB_PORT=5432
    cap_add:
      - ALL
3. In the directory containing jenkins-compose.yml, run the install and start command
docker-compose -f jenkins-compose.yml up -d
On success, the following is shown:
[+] Running 5/5
⠿ Network root_prodnetwork Created 0.0s
⠿ Container root-sonardb-1 Started 1.0s
⠿ Container root-nexus-1 Started 1.0s
⠿ Container root-sonar-1 Started 2.1s
⠿ Container root-jenkins-1 Started 4.2s
4. Check whether the services started
[root@localhost ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
52779025a83e jenkins/jenkins:lts "/sbin/tini -- /usr/…" 4 minutes ago Up 3 minutes 50000/tcp, 0.0.0.0:18080->8080/tcp, :::18080->8080/tcp root-jenkins-1
2f5fbc25de58 sonarqube:8.2-community "./bin/run.sh" 4 minutes ago Restarting (0) 21 seconds ago root-sonar-1
4248a8ba71d8 sonatype/nexus3 "sh -c ${SONATYPE_DI…" 4 minutes ago Up 4 minutes 0.0.0.0:18081->8081/tcp, :::18081->8081/tcp root-nexus-1
719623c4206b postgres:12.2 "docker-entrypoint.s…" 4 minutes ago Up 4 minutes 0.0.0.0:5433->5432/tcp, :::5433->5432/tcp root-sonardb-1
2b6852a57cc2 goharbor/harbor-jobservice:v2.2.4 "/harbor/entrypoint.…" 5 days ago Up 29 seconds (health: starting) harbor-jobservice
ebf2dea994fb goharbor/nginx-photon:v2.2.4 "nginx -g 'daemon of…" 5 days ago Restarting (1) 46 seconds ago nginx
adfaa287f23b goharbor/harbor-registryctl:v2.2.4 "/home/harbor/start.…" 5 days ago Up 7 minutes (healthy) registryctl
8e5bcca3aaa1 goharbor/harbor-db:v2.2.4 "/docker-entrypoint.…" 5 days ago Up 7 minutes (healthy) harbor-db
ebe845e020dc goharbor/harbor-portal:v2.2.4 "nginx -g 'daemon of…" 5 days ago Up 7 minutes (healthy) harbor-portal
68263dea2cfc goharbor/harbor-log:v2.2.4 "/bin/sh -c /usr/loc…" 5 days ago Up 7 minutes (healthy) 127.0.0.1:1514->10514/tcp harbor-log
We can see that Jenkins is mapped to port 18080, but sonarqube did not start. Checking its logs shows the sonarqube directories cannot be accessed; the log blames permissions on the container-side paths, but it is actually the host directories that lack permissions, so grant them on the host:
chmod 777 /data/docker/ci/sonarqube/logs
chmod 777 /data/docker/ci/sonarqube/bundled-plugins
chmod 777 /data/docker/ci/sonarqube/conf
chmod 777 /data/docker/ci/sonarqube/data
chmod 777 /data/docker/ci/sonarqube/extension
Then restart:
docker-compose -f jenkins-compose.yml restart
Run the status command again: Jenkins is now mapped to 18080 and sonarqube to 19000, and both admin UIs can be opened in a browser.
5. Jenkins login initialization
The Jenkins login page says the initial password is at /var/jenkins_home/secrets/initialAdminPassword. That is the path inside the Docker container; on the host it corresponds to /data/docker/ci/jenkins/home/secrets/initialAdminPassword. Open that file and enter the password to reach the Jenkins management UI.
6. Choose to install the suggested plugins; once installation finishes, follow the prompts until the management console appears.
Notes:
- sonarqube default username/password: admin/admin
- uninstall command: docker-compose -f jenkins-compose.yml down -v
VII. Jenkins Automated Build and Deploy Configuration
There are many ways to deploy a project: from running a jar directly on a JDK host, to running the jar inside a Docker container, to the now-popular approach of running the jar and Docker inside a Kubernetes pod. Each new method improves on the previous one. Rather than weigh the pros and cons of each, we simply note why Kubernetes: it mainly provides autoscaling, service discovery, self-healing, version rollback, load balancing, storage orchestration, and more.
The basic day-to-day workflow is:
- push code to the gitlab repository
- gitlab triggers a Jenkins code-quality build via webhook
- Jenkins is triggered manually to pull the code, compile, package, build a Docker image, publish it to the private Harbor registry, and run kubectl to pull the image from Harbor and deploy it to k8s
1. Install the Kubernetes plugin, the Git Parameter plugin (for parameterized pipeline builds), the Extended Choice Parameter plugin (to pick which microservices to build when there are several), the Pipeline Utility Steps plugin (to read a Maven project's .yaml, pom.xml, and so on), and Kubernetes Continuous Deploy (must be version 1.0; download it from the plugin site and upload it manually). Jenkins --> Manage Jenkins --> Plugin Manager --> Available --> select Kubernetes plugin / Git Parameter / Extended Choice Parameter, then click Install without restart.
Blue Ocean does not yet support the Git Parameter and Extended Choice Parameter plugins; Git Parameter reads branch information through the Git plugin. We use Pipeline script rather than Pipeline script from SCM because we do not want build configuration stored in the code; this keeps development and deployment separate.
2. Configure the Kubernetes plugin: Jenkins --> Manage Jenkins --> Manage Nodes --> Configure Clouds --> Add a new cloud --> Kubernetes
3. Add the kubernetes certificate
cat ~/.kube/config
# the steps below are not used for now; replace certificate-authority-data, client-certificate-data and client-key-data with the actual values from ~/.kube/config
#echo certificate-authority-data | base64 -d > ca.crt
#echo client-certificate-data | base64 -d > client.crt
#echo client-key-data | base64 -d > client.key
# run the following command and set your own password
#openssl pkcs12 -export -out cert.pfx -inkey client.key -in client.crt -certfile ca.crt
Manage Jenkins --> Credentials --> System --> Global credentials
4. Add the credential for accessing Kubernetes: fill in the token created above for the Kubernetes Dashboard login. After adding it, select the new credential and run the connection test; if it reports success, Jenkins can connect to Kubernetes.
5. Configure JDK, Git and Maven globally in Jenkins
The jenkinsci/blueocean image ships with JDK and Git; log into the container to find their paths, then fill them into the configuration.
Enter the jenkins container and check JAVA_HOME and the git path:
[root@localhost ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0520ebb9cc5d jenkinsci/blueocean "/sbin/tini -- /usr/…" 2 days ago Up 30 hours 50000/tcp, 0.0.0.0:18080->8080/tcp, :::18080->8080/tcp root-jenkins-1
[root@localhost ~]# docker exec -it 0520ebb9cc5d /bin/bash
bash-5.1# echo $JAVA_HOME
/opt/java/openjdk
bash-5.1# which git
/usr/bin/git
The queries show JAVA_HOME=/opt/java/openjdk and GIT=/usr/bin/git; configure these in Jenkins Global Tool Configuration.
Maven can be installed in the host directory mapped to /data/docker/ci/jenkins/home; when configuring it in Jenkins, use the container path, i.e. the Maven install path under /var/jenkins_home.
Set MAVEN_HOME in the system configuration so the Pipeline script can reference it; if running the script reports a permission error, run chmod 777 * in the bin directory of the host Maven installation.
6. Create a harbor-key secret for k8s, used to pull images from the private registry; it is referenced in the project's k8s-deployment.yml.
kubectl create secret docker-registry harbor-key --docker-server=172.16.20.175 --docker-username='robot$gitegg' --docker-password='Jqazyv7vvZiL6TXuNcv7TrZeRdL8U9n3'
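For reference, a minimal sketch of how that secret is consumed in a deployment manifest (the app name, image path and tag are placeholders, not taken from the project):

```yaml
# Hypothetical deployment fragment: pull from the private Harbor registry using harbor-key
spec:
  template:
    spec:
      imagePullSecrets:
        - name: harbor-key
      containers:
        - name: demo-app
          image: 172.16.20.175/gitegg/demo-app:1.0.0
```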
7. Create a pipeline job
8. Configure the job's parameters
9. Configure the pipeline deploy script
Under the Pipeline section, choose Pipeline script:
pipeline {
  agent any
  parameters {
    gitParameter branchFilter: 'origin/(.*)', defaultValue: 'master', name: 'Branch', type: 'PT_BRANCH', description: 'Select the code branch to build'
    choice(name: 'BaseImage', choices: ['openjdk:8-jdk-alpine'], description: 'Select the base runtime image')
    choice(name: 'Environment', choices: ['dev','test','prod'], description: 'Select the target environment: dev, test or prod')
    extendedChoice(
      defaultValue: 'gitegg-gateway,gitegg-oauth,gitegg-plugin/gitegg-code-generator,gitegg-service/gitegg-service-base,gitegg-service/gitegg-service-extension,gitegg-service/gitegg-service-system',
      description: 'Select the microservices to build',
      multiSelectDelimiter: ',',
      name: 'ServicesBuild',
      quoteValue: false,
      saveJSONParameterToFile: false,
      type: 'PT_CHECKBOX',
      value: 'gitegg-gateway,gitegg-oauth,gitegg-plugin/gitegg-code-generator,gitegg-service/gitegg-service-base,gitegg-service/gitegg-service-extension,gitegg-service/gitegg-service-system',
      visibleItemCount: 6)
    string(name: 'BuildParameter', defaultValue: 'none', description: 'Enter extra build parameters')
  }
  environment {
    PRO_NAME = "gitegg"
    BuildParameter = "${params.BuildParameter}"
    ENV = "${params.Environment}"
    BRANCH = "${params.Branch}"
    ServicesBuild = "${params.ServicesBuild}"
    BaseImage = "${params.BaseImage}"
    k8s_token = "7696144b-3b77-4588-beb0-db4d585f5c04"
  }
  stages {
    stage('Clean workspace') {
      steps {
        deleteDir()
      }
    }
    stage('Process parameters') {
      steps {
        script {
          if ("${params.ServicesBuild}".trim() != "") {
            def ServicesBuildString = "${params.ServicesBuild}"
            ServicesBuild = ServicesBuildString.split(",")
            for (service in ServicesBuild) {
              println "now got ${service}"
            }
          }
          if ("${params.BuildParameter}".trim() != "" && "${params.BuildParameter}".trim() != "none") {
            BuildParameter = "${params.BuildParameter}"
          } else {
            BuildParameter = ""
          }
        }
      }
    }
    stage('Pull SourceCode Platform') {
      steps {
        echo "${BRANCH}"
        git branch: "${Branch}", credentialsId: 'gitlabTest', url: 'http://172.16.20.188:2080/root/gitegg-platform.git'
      }
    }
    stage('Install Platform') {
      steps {
        echo "==============Start Platform Build=========="
        sh "${MAVEN_HOME}/bin/mvn -DskipTests=true clean install ${BuildParameter}"
        echo "==============End Platform Build=========="
      }
    }
    stage('Pull SourceCode') {
      steps {
        echo "${BRANCH}"
        git branch: "${Branch}", credentialsId: 'gitlabTest', url: 'http://172.16.20.188:2080/root/gitegg-cloud.git'
      }
    }
    stage('Build') {
      steps {
        script {
          echo "==============Start Cloud Parent Install=========="
          sh "${MAVEN_HOME}/bin/mvn -DskipTests=true clean install -P${params.Environment} ${BuildParameter}"
          echo "==============End Cloud Parent Install=========="
          def workspace = pwd()
          for (service in ServicesBuild) {
            stage("buildCloud${service}") {
              echo "==============Start Cloud Build ${service}=========="
              sh "cd ${workspace}/${service} && ${MAVEN_HOME}/bin/mvn -DskipTests=true clean package -P${params.Environment} ${BuildParameter} jib:build -Djib.httpTimeout=200000 -DsendCredentialsOverHttp=true -f pom.xml"
              echo "==============End Cloud Build ${service}============"
            }
          }
        }
      }
    }
    stage('Sync to k8s') {
      steps {
        script {
          echo "==============Start Sync to k8s=========="
          def workspace = pwd()
          mainpom = readMavenPom file: 'pom.xml'
          profiles = mainpom.getProfiles()
          def version = mainpom.getVersion()
          def nacosAddr = ""
          def nacosConfigPrefix = ""
          def nacosConfigGroup = ""
          def dockerHarborAddr = ""
          def dockerHarborProject = ""
          def dockerHarborUsername = ""
          def dockerHarborPassword = ""
          def serverPort = ""
          def commonDeployment = "${workspace}/k8s-deployment.yaml"
          for (profile in profiles) {
            // pick up the properties of the selected environment's profile
            if (profile.getId() == "${params.Environment}") {
              nacosAddr = profile.getProperties().getProperty("nacos.addr")
              nacosConfigPrefix = profile.getProperties().getProperty("nacos.config.prefix")
              nacosConfigGroup = profile.getProperties().getProperty("nacos.config.group")
              dockerHarborAddr = profile.getProperties().getProperty("docker.harbor.addr")
              dockerHarborProject = profile.getProperties().getProperty("docker.harbor.project")
              dockerHarborUsername = profile.getProperties().getProperty("docker.harbor.username")
              dockerHarborPassword = profile.getProperties().getProperty("docker.harbor.password")
            }
          }
          for (service in ServicesBuild) {
            stage("Sync${service}ToK8s") {
              echo "==============Start Sync ${service} to k8s=========="
              dir("${workspace}/${service}") {
                pom = readMavenPom file: 'pom.xml'
                echo "group: artifactId: ${pom.artifactId}"
                def deployYaml = "k8s-deployment-${pom.artifactId}.yaml"
                yaml = readYaml file: './src/main/resources/bootstrap.yml'
                serverPort = "${yaml.server.port}"
                if (fileExists("${workspace}/${service}/k8s-deployment.yaml")) {
                  commonDeployment = "${workspace}/${service}/k8s-deployment.yaml"
                } else {
                  commonDeployment = "${workspace}/k8s-deployment.yaml"
                }
                script {
                  sh "sed 's#{APP_NAME}#${pom.artifactId}#g;s#{IMAGE_URL}#${dockerHarborAddr}#g;s#{IMAGE_PROGECT}#${PRO_NAME}#g;s#{IMAGE_TAG}#${version}#g;s#{APP_PORT}#${serverPort}#g;s#{SPRING_PROFILE}#${params.Environment}#g' ${commonDeployment} > ${deployYaml}"
                  kubernetesDeploy configs: "${deployYaml}", kubeconfigId: "${k8s_token}"
                }
              }
              echo "==============End Sync ${service} to k8s=========="
            }
          }
          echo "==============End Sync to k8s=========="
        }
      }
    }
  }
}
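The sed templating in the Sync to k8s stage can be tried stand-alone. A minimal sketch (the template and substituted values here are simplified placeholders, not the project's real k8s-deployment.yaml):

```shell
# Create a simplified deployment template with the same placeholder tokens
cat > k8s-deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {APP_NAME}
spec:
  template:
    spec:
      containers:
        - name: {APP_NAME}
          image: {IMAGE_URL}/{IMAGE_PROGECT}/{APP_NAME}:{IMAGE_TAG}
          ports:
            - containerPort: {APP_PORT}
EOF
# Substitute the tokens exactly as the pipeline's sed call does
sed 's#{APP_NAME}#gitegg-oauth#g;s#{IMAGE_URL}#172.16.20.175#g;s#{IMAGE_PROGECT}#gitegg#g;s#{IMAGE_TAG}#1.0.0#g;s#{APP_PORT}#8080#g' \
  k8s-deployment.yaml > k8s-deployment-gitegg-oauth.yaml
cat k8s-deployment-gitegg-oauth.yaml
```

The generated k8s-deployment-gitegg-oauth.yaml is the per-service manifest that the kubernetesDeploy step then applies.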
Common problems:
1. The first run of Pipeline Utility Steps fails with "Scripts not permitted to use method" or "Scripts not permitted to use staticMethod org.codehaus.groovy.runtime.DefaultGroovyMethods getProperties java.lang.Object"
Fix: Manage Jenkins --> In-process Script Approval --> click Approve
2. Use NFS so that all containers' logs are stored centrally on the NFS server
3. Kubernetes Continuous Deploy: use version 1.0.0, otherwise it fails as incompatible
4. Keeping services from registering the docker0 interface address:
spring:
  cloud:
    inetutils:
      ignored-interfaces: docker0
5. Configure ipvs mode: kube-proxy watches Pods and creates the corresponding ipvs rules; ipvs forwards more efficiently than iptables and supports more load-balancing algorithms.
kubectl edit cm kube-proxy -n kube-system
Change mode: "ipvs"
Reload the kube-proxy configuration:
kubectl delete pod -l k8s-app=kube-proxy -n kube-system
Inspect the ipvs rules:
ipvsadm -Ln
6. Accessing services outside the cluster (nacos, redis, etc.) from inside k8s
- a. Host-network mode: set hostNetwork: true on the deployed workload
spec:
  hostNetwork: true
- b. Endpoints mode
kind: Endpoints
apiVersion: v1
metadata:
  name: nacos
  namespace: default
subsets:
  - addresses:
      - ip: 172.16.20.188
    ports:
      - port: 8848
---
apiVersion: v1
kind: Service
metadata:
  name: nacos
  namespace: default
spec:
  type: ClusterIP
  ports:
    - port: 8848
      targetPort: 8848
      protocol: TCP
- c. Service type: ExternalName mode. "ExternalName" uses a CNAME redirect, so port remapping is not possible; it is used with domain names.
For the Endpoints and type: ExternalName approaches, create the YAML above for the external services rather than using in-cluster ones; these need to be configured when the environment is set up.
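As a sketch of option c (the external domain name here is a placeholder):

```yaml
# Hypothetical ExternalName service: in-cluster clients resolving 'nacos' get a CNAME to the external host
apiVersion: v1
kind: Service
metadata:
  name: nacos
  namespace: default
spec:
  type: ExternalName
  externalName: nacos.example.com
```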
7. Common k8s commands:
View pods: kubectl get pods
View services: kubectl get svc
View endpoints: kubectl get endpoints
Apply: kubectl apply -f xxx.yaml
Delete: kubectl delete -f xxx.yaml
Delete a pod: kubectl delete pod podName
Delete a service: kubectl delete service serviceName
Enter a container: kubectl exec -it podsNamexxxxxx -n default -- /bin/sh