This walkthrough is based on CentOS 7; for other CentOS versions, see:
https://wiki.centos.org/SpecialInterestGroup/Storage/gluster-Quickstart
I. Preparation
1. Prepare four CentOS 7 systems
I used CentOS 7.6. Every node except k8s-master has an extra disk attached; do not partition or format that disk.
Host IP | Hostname | Role | Disks
192.168.11.100 | k8s-master | k8s control-plane node | sda (system disk)
192.168.11.101 | k8s-node1 | k8s worker node, GlusterFS node | sda (system disk), sdb (GlusterFS data disk)
192.168.11.102 | k8s-node2 | k8s worker node, GlusterFS node | sda (system disk), sdb (GlusterFS data disk)
192.168.11.103 | k8s-node3 | k8s worker node, GlusterFS node | sda (system disk), sdb (GlusterFS data disk)
2. Configuration
The rest of the initial setup is the same as for a kubeadm-based Kubernetes install: set the hostname on each node, configure passwordless SSH between all nodes, disable swap, disable SELinux, stop the firewall and iptables, load the net.bridge kernel module and enable net.ipv4.ip_forward, configure the Aliyun CentOS repo as well as the Aliyun Docker and Kubernetes repos, set up time synchronization, and enable IPVS.
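The per-node initialization above can be condensed into a sketch like the following. This is host configuration for CentOS 7, run as root; the hostname is an example and should be changed per node.

```shell
# Set the hostname (example for the first worker):
hostnamectl set-hostname k8s-node1

# Disable swap now and on reboot:
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab

# Put SELinux in permissive mode now and on reboot:
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config

# Stop the firewall:
systemctl disable --now firewalld

# Load the bridge module and enable forwarding:
modprobe br_netfilter
cat <<'EOF' >/etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system
```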
Aliyun CentOS repo: https://developer.aliyun.com/mirror/centos?spm=a2c6h.13651102.0.0.54201b11G7DwgW
Aliyun Docker repo: https://developer.aliyun.com/mirror/docker-ce?spm=a2c6h.13651102.0.0.76231b11bCt2K4
Aliyun Kubernetes repo: https://developer.aliyun.com/mirror/kubernetes?spm=a2c6h.13651102.0.0.76231b11bCt2K4
3. Configure the GlusterFS yum repo on the GlusterFS nodes
yum install centos-release-gluster
4. Disk layout on the GlusterFS nodes
Run on every GlusterFS node; the first node is shown as an example.
[root@k8s-node1 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 100G 0 disk
├─sda1 8:1 0 300M 0 part /boot
├─sda2 8:2 0 5.9G 0 part
└─sda3 8:3 0 93.9G 0 part /
sdb 8:16 0 100G 0 disk
sr0 11:0 1 1024M 0 rom
II. Install the GlusterFS cluster service
1. Format the newly attached sdb disk
Run on every GlusterFS node; the first node is shown as an example.
[root@k8s-node1 ~]# mkfs.xfs -i size=512 /dev/sdb
meta-data=/dev/sdb isize=512 agcount=4, agsize=6553600 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=0, sparse=0
data = bsize=4096 blocks=26214400, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal log bsize=4096 blocks=12800, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
2. Install glusterfs-server
Run on every GlusterFS node.
yum install glusterfs-server
3. Load the kernel modules
modprobe dm_snapshot
modprobe dm_mirror
modprobe dm_thin_pool
lsmod | grep dm_snapshot
lsmod | grep dm_mirror
lsmod | grep dm_thin_pool
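modprobe does not survive a reboot, so it can help to also list these device-mapper modules in a modules-load file. This is a host config fragment; the file name glusterfs.conf is arbitrary, and the path is the standard systemd location on CentOS 7.

```shell
# Load dm_snapshot, dm_mirror, and dm_thin_pool again automatically at boot.
cat <<'EOF' >/etc/modules-load.d/glusterfs.conf
dm_snapshot
dm_mirror
dm_thin_pool
EOF
```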
4. Install device-mapper
yum install -y device-mapper*
5. Start the glusterd service and enable it at boot
systemctl enable glusterd
systemctl start glusterd
systemctl status glusterd
6. Build the GlusterFS trusted pool from the first node, k8s-node1
gluster peer probe k8s-node2
gluster peer probe k8s-node3
7. Check the cluster status
[root@k8s-node1 ~]# gluster peer status
Number of Peers: 2
Hostname: 192.168.11.102
Uuid: b2b3c79b-463f-409f-9e2e-18b0ff1ae977
State: Peer in Cluster (Connected)
Other names:
k8s-node2
Hostname: 192.168.11.103
Uuid: ccb63074-f19a-4cf7-84e3-36b96725b3dc
State: Peer in Cluster (Connected)
Other names:
k8s-node3
III. Install the Heketi service
GlusterFS itself does not expose an API, so you can install Heketi, which manages the lifecycle of GlusterFS volumes through a RESTful API that Kubernetes can call. This lets your Kubernetes cluster provision GlusterFS volumes dynamically.
1. Download Heketi on the k8s-node1 node
This guide uses version 10.4.0:
https://github.com/heketi/heketi/releases/download/v10.4.0/heketi-v10.4.0-release-10.linux.amd64.tar.gz
2. Unpack Heketi
[root@k8s-node1 heketi]# tar -zxvf heketi-v10.4.0-release-10.linux.amd64.tar.gz
heketi/
heketi/heketi
heketi/heketi-cli
heketi/heketi.json
[root@k8s-node1 heketi]# cd heketi/
[root@k8s-node1 heketi]# pwd
/root/heketi/heketi
[root@k8s-node1 heketi]# ls
heketi heketi-cli heketi.json
[root@k8s-node1 heketi]# cp heketi /usr/bin
[root@k8s-node1 heketi]# cp heketi-cli /usr/bin
3. Create the Heketi service unit file
vi /usr/lib/systemd/system/heketi.service
[Unit]
Description=Heketi Server
[Service]
Type=simple
WorkingDirectory=/var/lib/heketi
ExecStart=/usr/bin/heketi --config=/etc/heketi/heketi.json
Restart=on-failure
StandardOutput=syslog
StandardError=syslog
[Install]
WantedBy=multi-user.target
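systemd only picks up a newly created unit file after a reload, so run this once before starting the service later on:

```shell
# Make systemd re-read unit files so it sees the new heketi.service.
systemctl daemon-reload
```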
4. Create the Heketi directories
mkdir -p /var/lib/heketi
mkdir -p /etc/heketi
5. Create the heketi.json file
vi /etc/heketi/heketi.json
{
  "_port_comment": "Heketi Server Port Number",
  "port": "8080",
  "_use_auth": "Enable JWT authorization. Please enable for deployment",
  "use_auth": false,
  "_jwt": "Private keys for access",
  "jwt": {
    "_admin": "Admin has access to all APIs",
    "admin": {
      "key": "123456"
    },
    "_user": "User only has access to /volumes endpoint",
    "user": {
      "key": "123456"
    }
  },
  "_glusterfs_comment": "GlusterFS Configuration",
  "glusterfs": {
    "_executor_comment": [
      "Execute plugin. Possible choices: mock, ssh",
      "mock: This setting is used for testing and development.",
      "  It will not send commands to any node.",
      "ssh: This setting will notify Heketi to ssh to the nodes.",
      "  It will need the values in sshexec to be configured.",
      "kubernetes: Communicate with GlusterFS containers over",
      "  Kubernetes exec api."
    ],
    "executor": "ssh",
    "_sshexec_comment": "SSH username and private key file information",
    "sshexec": {
      "keyfile": "/root/.ssh/id_rsa",
      "user": "root"
    },
    "_kubeexec_comment": "Kubernetes configuration",
    "kubeexec": {
      "host": "https://kubernetes.host:8443",
      "cert": "/path/to/crt.file",
      "insecure": false,
      "user": "kubernetes username",
      "password": "password for kubernetes user",
      "namespace": "Kubernetes namespace",
      "fstab": "Optional: Specify fstab file on node. Default is /etc/fstab"
    },
    "_db_comment": "Database file name",
    "db": "/var/lib/heketi/heketi.db",
    "brick_max_size_gb": 1024,
    "brick_min_size_gb": 1,
    "max_bricks_per_volume": 33,
    "_loglevel_comment": [
      "Set log level. Choices are:",
      "  none, critical, error, warning, info, debug",
      "Default is warning"
    ],
    "loglevel": "debug"
  }
}
When installing GlusterFS as the storage type for a KubeSphere cluster, you must provide the `admin` account and its `Secret` value.
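For reference, the heketi admin key configured above ("123456") can be held in a Kubernetes Secret like the following sketch. The name heketi-secret and the kube-system namespace are illustrative, not from this guide; apply the file with kubectl apply -f heketi-secret.yaml.

```shell
# Write a Secret manifest holding the heketi admin key.
# "MTIzNDU2" is the base64 encoding of "123456".
cat > heketi-secret.yaml <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: heketi-secret
  namespace: kube-system
type: kubernetes.io/glusterfs
data:
  key: MTIzNDU2
EOF
```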
6. Start Heketi
[root@k8s-node1 heketi]# systemctl start heketi
[root@k8s-node1 heketi]# systemctl status heketi
● heketi.service - Heketi Server
Loaded: loaded (/usr/lib/systemd/system/heketi.service; disabled; vendor preset: disabled)
Active: active (running) since Mon 2022-04-18 14:45:45 CST; 4s ago
Main PID: 109493 (heketi)
Tasks: 8
Memory: 8.9M
CGroup: /system.slice/heketi.service
└─109493 /usr/bin/heketi --config=/etc/heketi/heketi.json
Apr 18 14:45:45 k8s-node1 heketi[109493]: 2022/04/18 14:45:45 no SSH_KNOWN_HOSTS specified, skipping ssh host verification
Apr 18 14:45:45 k8s-node1 heketi[109493]: [heketi] INFO 2022/04/18 14:45:45 Loaded ssh executor
Apr 18 14:45:45 k8s-node1 heketi[109493]: [heketi] INFO 2022/04/18 14:45:45 Adv: Max bricks per volume set to 33
Apr 18 14:45:45 k8s-node1 heketi[109493]: [heketi] INFO 2022/04/18 14:45:45 Adv: Max brick size 1024 GB
Apr 18 14:45:45 k8s-node1 heketi[109493]: [heketi] INFO 2022/04/18 14:45:45 Adv: Min brick size 1 GB
Apr 18 14:45:45 k8s-node1 heketi[109493]: [heketi] INFO 2022/04/18 14:45:45 Volumes per cluster limit is set to default value of 1000
Apr 18 14:45:45 k8s-node1 heketi[109493]: [heketi] INFO 2022/04/18 14:45:45 GlusterFS Application Loaded
Apr 18 14:45:45 k8s-node1 heketi[109493]: [heketi] INFO 2022/04/18 14:45:45 Started Node Health Cache Monitor
Apr 18 14:45:45 k8s-node1 heketi[109493]: [heketi] INFO 2022/04/18 14:45:45 Started background pending operations cleaner
Apr 18 14:45:45 k8s-node1 heketi[109493]: Listening on port 8080
7. Enable Heketi at boot
systemctl enable heketi
8. Create the topology file for Heketi, which describes the clusters, nodes, and disks to be added to Heketi
vi /etc/heketi/topology.json
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": [
                "k8s-node1"
              ],
              "storage": [
                "192.168.11.101"
              ]
            },
            "zone": 1
          },
          "devices": [
            {
              "name": "/dev/sdb",
              "destroydata": true
            }
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "k8s-node2"
              ],
              "storage": [
                "192.168.11.102"
              ]
            },
            "zone": 1
          },
          "devices": [
            {
              "name": "/dev/sdb",
              "destroydata": true
            }
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "k8s-node3"
              ],
              "storage": [
                "192.168.11.103"
              ]
            },
            "zone": 1
          },
          "devices": [
            {
              "name": "/dev/sdb",
              "destroydata": true
            }
          ]
        }
      ]
    }
  ]
}
Replace the IP addresses above with your own, and put your own disk names in the `devices` field.
9. Load the Heketi topology JSON file
Set the environment variable; 192.168.11.101 is the IP of the k8s-node1 node:
[root@k8s-node1 heketi]# export HEKETI_CLI_SERVER=http://192.168.11.101:8080
[root@k8s-node1 heketi]# echo $HEKETI_CLI_SERVER
http://192.168.11.101:8080
Load the JSON file; `admin` is the admin user configured in heketi.json above:
[root@k8s-node1 heketi]# heketi-cli -s $HEKETI_CLI_SERVER --user admin --secret '123456' topology load --json=/etc/heketi/topology.json
Creating cluster ... ID: b689b88cf1d243d580e0b91af21aa543
Allowing file volumes on cluster.
Allowing block volumes on cluster.
Creating node k8s-node1 ... ID: e1c64f9e1d0b48d0b35cbed842271f18
Adding device /dev/sdb ... OK
Creating node k8s-node2 ... ID: d85251fb83641f3e5d755b8e7794663a
Adding device /dev/sdb ... OK
Creating node k8s-node3 ... ID: a8aeafbd6136faf1f9b7f54429ac9560
Adding device /dev/sdb ... OK
10. Inspect the cluster with heketi-cli
[root@k8s-node1 heketi]# heketi-cli -s $HEKETI_CLI_SERVER --user admin --secret '123456' topology info
Cluster Id: b689b88cf1d243d580e0b91af21aa543
File: true
Block: true
Volumes:
Nodes:
Node Id: a8aeafbd6136faf1f9b7f54429ac9560
State: online
Cluster Id: b689b88cf1d243d580e0b91af21aa543
Zone: 1
Management Hostnames: k8s-node3
Storage Hostnames: 192.168.11.103
Devices:
Id:411e4b0927f72b85d7c9472a644d4494 State:online Size (GiB):99 Used (GiB):0 Free (GiB):99
Known Paths: /dev/disk/by-path/pci-0000:00:10.0-scsi-0:0:1:0 /dev/sdb
Bricks:
Node Id: d85251fb83641f3e5d755b8e7794663a
State: online
Cluster Id: b689b88cf1d243d580e0b91af21aa543
Zone: 1
Management Hostnames: k8s-node2
Storage Hostnames: 192.168.11.102
Devices:
Id:fdce711e3c5f17236709d06ef03a81e7 State:online Size (GiB):99 Used (GiB):0 Free (GiB):99
Known Paths: /dev/disk/by-path/pci-0000:00:10.0-scsi-0:0:1:0 /dev/sdb
Bricks:
Node Id: e1c64f9e1d0b48d0b35cbed842271f18
State: online
Cluster Id: b689b88cf1d243d580e0b91af21aa543
Zone: 1
Management Hostnames: k8s-node1
Storage Hostnames: 192.168.11.101
Devices:
Id:adeca0813972e01b532495245857486f State:online Size (GiB):99 Used (GiB):0 Free (GiB):99
Known Paths: /dev/disk/by-path/pci-0000:00:10.0-scsi-0:0:1:0 /dev/sdb
Bricks:
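As an optional smoke test, you can create and then delete a small replica-3 volume to confirm Heketi can drive all three nodes. The size and replica count here are arbitrary examples.

```shell
# Create a 1 GiB replica-3 test volume:
heketi-cli -s $HEKETI_CLI_SERVER --user admin --secret '123456' volume create --size=1 --replica=3

# List volumes, note the Id of the test volume, then delete it:
heketi-cli -s $HEKETI_CLI_SERVER --user admin --secret '123456' volume list
heketi-cli -s $HEKETI_CLI_SERVER --user admin --secret '123456' volume delete <VOLUME_ID>
```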
This completes the binary installation of the three-node GlusterFS cluster. You can also refer to my earlier article, "kubernetes部署glusterfs持久化文件存储" (North-java's blog on CSDN), which installs the GlusterFS cluster the Kubernetes way and, at the end, creates a StorageClass, PVC, PV, and Pod that use GlusterFS as persistent storage for Kubernetes.
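For reference, a StorageClass pointing Kubernetes at the Heketi endpoint above might look like this sketch. The resturl matches this guide's k8s-node1 address; secretName and secretNamespace refer to a Secret holding the heketi admin key, and those names are illustrative. Apply the file with kubectl apply -f glusterfs-sc.yaml.

```shell
# Write a StorageClass manifest for the in-tree GlusterFS provisioner.
cat > glusterfs-sc.yaml <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-storage
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://192.168.11.101:8080"
  restauthenabled: "true"
  restuser: "admin"
  secretNamespace: "kube-system"
  secretName: "heketi-secret"
  volumetype: "replicate:3"
EOF
```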
IV. Install KubeSphere
KubeSphere's vision is to build a cloud-native distributed operating system with Kubernetes at its core. Its architecture lets third-party applications integrate with cloud-native ecosystem components in a plug-and-play fashion, and it supports unified distribution and management of cloud-native applications across multiple clouds and clusters. This guide installs the latest KubeSphere release, v3.2.1.
Reference: the official documentation on a minimal KubeSphere installation on Kubernetes.
Official requirements:
- To install KubeSphere 3.2.1 on Kubernetes, your Kubernetes version must be 1.19.x, 1.20.x, 1.21.x, or 1.22.x (experimental support).
- Make sure your machines meet the minimum hardware requirements: CPU > 1 core, memory > 2 GB.
- A default storage class must be configured in the Kubernetes cluster before installation.
Preparation:
Step 1: Have the Kubernetes cluster ready before installing KubeSphere, as shown at the beginning of this article. The newest officially supported version is 1.22; I tested against a 1.23 cluster and have not run into problems so far.
[root@k8s-master1 kubesphere]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master1 Ready control-plane,master 102d v1.23.1
k8s-node1 Ready worker 102d v1.23.1
k8s-node2 Ready worker 18d v1.23.1
k8s-node3 Ready worker 5d5h v1.23.1
Step 2: Install metrics-server. KubeSphere would install it as well, but that is slow, so install it yourself ahead of time.
1. Download the official YAML files
wget https://github.com/kubesphere/ks-installer/releases/download/v3.2.1/kubesphere-installer.yaml --no-check-certificate
wget https://github.com/kubesphere/ks-installer/releases/download/v3.2.1/cluster-configuration.yaml
2. Apply the official YAML files
cluster-configuration.yaml contains KubeSphere's pluggable components, which you can adjust to your needs; see the official documentation on enabling pluggable components.
kubectl apply -f kubesphere-installer.yaml
kubectl apply -f cluster-configuration.yaml
3. Check the installation log
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
4. Check that the components are running properly:
Run
kubectl get pod --all-namespaces
to confirm that all Pods in the KubeSphere namespaces are running normally. If they are, check the console port (30880 by default) with the following command:
kubectl get svc/ks-console -n kubesphere-system
5. Access the admin console:
Make sure port 30880 is open in your security group, then access the web console through the NodePort (IP:30880) using the default account and password (admin/P@88w0rd).